Innovative Assessment in Higher Education
Contextualising why assessment is still the single most important factor affecting student learning in higher education, this second edition of Innovative Assessment in Higher Education: A Handbook for Academic Practitioners offers a critical discourse about the value of assessment for learning alongside practical suggestions about how to enhance the student experience of assessment and feedback. With 17 new chapters, this edition:
• contextualises assessment within the current higher education landscape;
• explores how student, parent and government expectations impact on assessment design;
• presents case studies on how to develop, incorporate and assess employability skills;
• reviews how technology and social media can be used to enhance assessment and feedback;
• provides examples and critical review of the use and development of feedback practices and how to assess professional, creative and performance-based subjects;
• offers guidance on how to develop assessment that is inclusive and enables all students to advance their potential.
Bridging the gap between theory and the practical elements of assessment, Innovative Assessment in Higher Education: A Handbook for Academic Practitioners is an essential resource for busy academics looking to make a tangible difference to their academic practice and their students’ learning. This practical and accessible guide will aid both new and more experienced practitioners looking to learn more about how and why assessment in higher education can make such a difference to student learning.

Cordelia Bryan is a Principal Fellow of the Higher Education Academy (HEA) and leads the HEA Recognition Scheme at University of Hertfordshire, UK. She is also Programme Leader for an international Post Graduate Certificate in Learning and Teaching in Higher Education at Rose Bruford College of Theatre and Performance, UK.

Karen Clegg is Head of Research Excellence Training at the University of York, UK, where she provides strategic direction and delivery of transferable skills, leadership and support interventions for doctoral students, researchers and senior staff. Karen is a qualified coach and Senior Fellow of the HEA.
A thoroughly revised second edition of Innovative Assessment in Higher Education is a welcome addition to the literature addressing the ‘wicked problem’ of assessment. A talented group of mainly UK-based authors provide a range of contributions to tackle perennial and fresh challenges for assessment and feedback. Well-marshalled by Cordelia Bryan and Karen Clegg, the collection offers plenty of food for thought for would-be innovators. Highly recommended.
David Carless, Professor, University of Hong Kong, Hong Kong

It is a positive delight to endorse this new edition of Innovative Assessment in Higher Education – a wonderful complement to A Handbook for Teaching and Learning in Higher Education (fifth edition, 2019). As the demands to ensure assessment and feedback are ‘fit for purpose’ increase so do our needs for innovative solutions. Fortunately, ‘fit for purpose’ is a recurring theme in this second edition, with authors covering a range of perspectives and concerns. Looking through the lens of employers, policy makers, wellbeing experts and, most importantly, individual students, the collection highlights the need for absolute clarity with respect to the purpose and role of assessment. The range of global offerings, different approaches (which include dialogic and those drawing on technological enhancements) and clear overriding concerns for practical approaches to drive up student learning, all make this an essential text for colleagues committed to professionalising their own approaches. This book is a great addition to the reading list of many an accredited programme, I commend it to you!
Stephanie Marshall, Vice-Principal (Education), Queen Mary University London, UK
Innovative Assessment in Higher Education
A Handbook for Academic Practitioners
Second Edition
Edited by Cordelia Bryan and Karen Clegg
Second edition published 2019 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN and by Routledge, 52 Vanderbilt Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 selection and editorial matter, Cordelia Bryan and Karen Clegg; individual chapters, the contributors

The right of Cordelia Bryan and Karen Clegg to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by Routledge 2006

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Bryan, Cordelia, editor. | Clegg, Karen, editor.
Title: Innovative assessment in higher education: a handbook for academic practitioners / edited by Cordelia Bryan and Karen Clegg.
Description: Second Edition. | New York : Routledge, 2019. | “[First edition published by Routledge 2006]”—T.p. verso.
Identifiers: LCCN 2018056353 | ISBN 9781138581180 (Hardback) | ISBN 9781138581197 (Paperback) | ISBN 9780429506857 (Ebook)
Subjects: LCSH: Universities and colleges—Great Britain—Examinations. | Education, Higher—Great Britain—Evaluation. | Educational evaluation—Great Britain.
Classification: LCC LB2367.G7 I56 2019 | DDC 378.1/662—dc23
LC record available at https://lccn.loc.gov/2018056353

ISBN: 978-1-138-58118-0 (hbk)
ISBN: 978-1-138-58119-7 (pbk)
ISBN: 978-0-429-50685-7 (ebk)

Typeset in Times New Roman by Newgen Publishing UK
This book is dedicated to all those who are committed to enhancing learning through assessment practice and especially to our sadly missed colleagues, Professor Liz McDowell and Professor Lewis Elton, who transformed the way in which we think about assessment, learning and teaching in higher education. We hope that in some small way their legacy lives through this collection.
Contents

List of contributors  x
Foreword (Sally Brown)  xvi
Acknowledgements  xix
Introduction: how innovative are we? (Karen Clegg and Cordelia Bryan)  1

PART I: Assessment in context  7
1 Stepping back to move forward: the wider context of assessment in higher education (Helen King)  9
2 How assessment frames student learning (Graham Gibbs)  22
3 Changing the narrative: a programme approach to assessment through TESTA (Tansy Jessop)  36
4 Using assessment and feedback to empower students and enhance their learning (Sally Brown)  50
5 The transformative role of self- and peer-assessment in developing critical thinkers (Joanna Tai and Chie Adachi)  64

PART II: Implementing feedback  75
6 Evaluating written feedback (Evelyn Brown and Chris Glover)  77
7 Assessing oral presentations: style, substance and the ‘razzle dazzle’ trap (Steve Hutchinson)  88
8 Assessing and developing employability skills through triangulated feedback (Susan Kane and Tom Banham)  101
9 Innovative assessment: the academic’s perspective (Lin Norton)  111
10 Developing emotional literacy in assessment and feedback (Edd Pitt)  121
11 Developing students’ proactive engagement with feedback (Naomi E. Winstone and Robert A. Nash)  129

PART III: Stimulating learning  139
12 Certainty-based marking: stimulating thinking and improving objective tests (Tony Gardner-Medwin)  141
13 Developing and assessing inclusivity in group learning (Theo Gilbert and Cordelia Bryan)  151
14 Designing engaging assessment through the use of social media and collaborative technologies (Richard Walker and Martin Jenkins)  163
15 Developing autonomy via assessment for learning: students’ views of their involvement in self and peer review activities (Kay Sambell and Alistair Sambell)  173
16 Assessing simulated professional practice in the performing arts (Kathy Dacre)  190
17 Archimedean levers and assessment: disseminating digital innovation in higher education (Paul Maharg)  198

PART IV: Assessing professional development  207
18 Developing the next generation of academics: the graduate teacher assistant experience (Karen Clegg and Giles Martin)  209
19 Practitioner perspectives: using the UK Professional Standards Framework to design assessment (Cordelia Bryan, Thomas Baker, Adam Crymble, Fumi Giles and Darja Reznikova)  220
20 Measure for measure: wider applications of practices in professional assessment (Chris Maguire, Angela Devereux, Lynne Gell and Dimitra Pachi)  231

Conclusion: resilience, resourcefulness and reflections (Cordelia Bryan and Karen Clegg)  241
Index  250
Contributors
Chie Adachi is a Senior Lecturer in Digital Learning Innovation at Deakin University. She works on innovation projects around digital learning from the central learning and teaching unit, Deakin Learning Futures. Her research interests lie within the areas of digital learning and self and peer assessment.

Tom Banham joined the University of York as Director of Employability and Careers in May 2016 having worked at Nestlé heading up the team that delivers the entry level recruitment and development programmes for the organisation. His career has stretched across consulting, banking, legal and fast-moving consumer goods, with his recent roles focused on leading early careers recruitment and development programmes for Nestlé and Irwin Mitchell LLP. Tom has previously held a role as honorary Vice-President at the Institute of Student Employers and is currently part of the Higher Education Academy Global Network Group. These roles are crucial for him to maintain employer links and for him to understand the changing landscape within higher education.

Thomas Baker is Associate Dean for Learning, Teaching and Student Experience at the School of Engineering and Technology at the University of Hertfordshire. His pedagogic interest is particularly in ‘design, build and test’ engineering learning, including CDIO®. His other academic area is in manufacturing, operations and supply chain management and he spent several years in industry before joining academia.

Sally Brown has written extensively on learning, teaching and assessment and enjoys life as an independent consultant and Emerita Professor at Leeds Beckett University where she was, until July 2010, Pro Vice Chancellor (Academic). She is Visiting Professor at the Universities of Plymouth, South Wales, Edge Hill and Liverpool John Moores. She is chair of the Association of National Teaching Fellows, a Principal Fellow of the Higher Education Academy, is a Staff and Educational Development Association Senior Fellow and a UK National Teaching Fellow. She holds honorary doctorates from Plymouth University, Kingston University and Edinburgh Napier University.
Cordelia Bryan leads the HEA Recognition Scheme at University of Hertfordshire. She is also Programme Leader for an international Post Graduate Certificate in Learning and Teaching in Higher Education at Rose Bruford College of Theatre and Performance. The programme is the only one of its kind specifically designed for practitioners in the performing arts and allied industries. Cordelia’s broad pedagogical research interests and educational development over 25 years are driven by a commitment to inclusivity and student engagement. Her early experience of and continued involvement with Steiner Waldorf education has instilled and sustained in her a belief in the transformative power of education which transcends all disciplines. She is a Principal Fellow of the Higher Education Academy.

Karen Clegg is Head of Research Excellence Training provision at the University of York, where she coordinates and delivers training for all those actively engaged in research and those who support research. Karen’s research into reflective practice and self and peer assessment underpins her pedagogic design and her commitment to developing communities of practice. Karen is a trained coach and a Senior Fellow of the Higher Education Academy.

Kathy Dacre has taught performing arts in the USA and the UK and since 2002 has been Director of Learning, Teaching and Curriculum Development at Rose Bruford College. Kathy’s expertise is in curriculum design and she has devised and developed over 25 drama-related degree programmes. Her research interests include Stanislavski’s teaching legacy, assessment and reflective practice and Shakespearean performance. She is a Fellow of the Royal Society of Arts, a Fellow of the Higher Education Academy and Chair of the Development Board for Shakespeare North.

Graham Gibbs retired as Professor and Director of the Oxford Learning Institute, University of Oxford, in 2007. He has since supported the application of his theoretical analysis of how assessment supports learning, and the research tools he developed to evaluate the effects of assessment on learning, through the TESTA programme (Transforming the Experience of Students Through Assessment). TESTA is used in scores of universities worldwide and in an increasing number of cases is the cornerstone of universities’ quality assessment and enhancement. He is the author of Dimensions of Quality, which identifies the variables that most affect learning in university.

Tony Gardner-Medwin is an emeritus professor of physiology at UCL, with an interest in neural mechanisms of memory, inference and decision making. He has pioneered many sophisticated innovations in computer-based teaching since 1983, with an emphasis on the exploitation of simulations, practicals and exercises in critical thinking. His work on certainty-based marking started in the context of teaching quantitative methods to biomedical students. Email: [email protected]; http://tmedwin.net.
Theo Gilbert won the Times Higher Education Award for ‘Most Innovative Teacher of the Year’, 2018. He teaches cross-discipline academic communications and group work at the University of Hertfordshire. His research is driven by a passion for the development of micro skills of compassion which, he argues, can and should be taught and should also become credit bearing. The work aims to advance interculturalism (not simply multiculturalism), reduce the black and minority ethnic attainment gap and enhance the student social experience across all categories of students. Three UK universities have worked on the data analysis so far, with controversial findings that need public debate. Theo is a Senior Fellow of the Higher Education Academy.

Steve Hutchinson is a coach, consultant, trainer and author. Originally a biologist, he has been at the forefront of academic and researcher development for fifteen years and now works internationally with a range of clients. His company specialises in leadership, communication and personal impact. He has sat through a lot of talks, given feedback to many presenters and just sometimes wishes he got a percentage of prize monies and earnings won by these presenters following this feedback (www.hutchinsontraining.com).

Martin Jenkins is Head of Academic Development at Coventry University (UK). Martin has over 20 years of higher education experience in the UK and New Zealand. He has published on digital storytelling, flipped learning and institutional adoption of learning technologies. He is a National Teaching Fellow, Senior Fellow of the HEA, and member of Universities and Colleges Information Systems Association (UCISA)’s Digital Education Group (www.ucisa.ac.uk/groups/deg). Email: [email protected]; ORCiD: orcid.org/0000-0002-1209-8282.

Tansy Jessop is Professor of Research Informed Teaching at Solent University. She has led the TESTA project since 2009, which has been used by more than 50 universities in the UK, India, Australia and Ireland. Tansy began her career as a teacher in South Africa, completing a PhD on teacher development in rural KwaZulu-Natal in 1997. She has published on social justice in education, narrative inquiry, learning spaces, and assessment and feedback. Tansy is a National Teaching Fellow.

Susan Kane has over 25 years’ experience within learning and development across the public and private sector including higher education, policing, retail and tourism. With an MA in Leadership Innovation and Change she joined the University of York in 2008 and has led the development of a number of Times Higher Award winning leadership initiatives. Working collaboratively with the Director of Employability and Careers, the experience and expertise of both teams has enabled this innovative approach.
Helen King is Associate Director of Academic Practice at the University of the West of England, Bristol, UK. Her educational development career has included roles in UK-wide organisations (including the Higher Education Academy and the Higher Education Funding Council for England), as an independent consultant collaborating with colleagues in the UK, USA and Australia. Helen’s current research explores the characteristics of expertise in higher education teachers. She holds a Senior Fellowship of the Staff and Educational Development Association, a National Teaching Fellowship and is a Principal Fellow of the Higher Education Academy.

Paul Maharg focuses on interdisciplinary educational innovation, the design of regulation in legal education and the use of technology-enhanced learning at all levels of legal learning. He has published widely in legal education and legal critique. He is Distinguished Professor of Practice – Legal Education at Osgoode Hall Law School, York University, Canada. He holds visiting professorships at Hong Kong University Faculty of Law and the Chinese University of Hong Kong Faculty of Law and is an Honorary Professor at the Australian National University College of Law. He blogs at http://paulmaharg.com. Email: [email protected].

Giles Martin is Programme Leader, Higher Education Practice, in the Institute for Education at Bath Spa University, UK. Giles is responsible for the Postgraduate Certificate in Higher Education, the MA Professional Practice in Higher Education and the university’s HEA fellowship recognition scheme. He has also run initial teaching programmes for academics and graduate teaching assistants at the University of Bath and Queen Mary, University of London. Giles was originally educated as a mathematical physicist, starting to teach in Mathematics at the University of York in 2003 before moving into educational development. He was the external examiner for the York Learning and Teaching Award from 2013 until 2017.

Robert A. Nash is a senior lecturer in Psychology at Aston University in Birmingham, UK. He studied psychology at the University of Warwick, where he completed both his undergraduate degree and his PhD. He is an experimental psychologist, with a main research specialism in human memory and applications of behavioural research to educational and legal contexts. Since 2013, he has conducted research on the topic of feedback in education; in particular, the issue of students’ engagement with and memory for the feedback they receive. This research has been funded by the Higher Education Academy and the Leverhulme Trust. Email: [email protected].
Lin Norton is Professor Emerita of Pedagogical Research at Liverpool Hope University and a National Teaching Fellow. A psychologist by background, she has had throughout her career a strong interest in assessment, marking and feedback practices. She has written numerous publications, given conference papers and run workshops in this area (www.linnorton.co.uk). Her current research interests are on academics’ views of assessment, action research issues and other aspects of higher education pedagogy.

Edd Pitt is a Senior Lecturer in Higher Education and Academic Practice and the Programme Director for the Post Graduate Certificate in Higher Education at the University of Kent. His principal research field is Assessment and Feedback with a particular focus upon students’ emotional processing during feedback situations.

Darja Reznikova is a practitioner in dance and theatre performance and has worked in Ukraine, Germany, the United States and the UK. Alongside her career as a freelance dance and theatre artist, Darja is a dance lecturer. She currently teaches release-based contemporary classes at Dance Professional Mannheim while further pursuing her choreographic and ongoing research interests in interdisciplinarity.

Kay Sambell is Professor of Higher Education Pedagogy at Edinburgh Napier University. She has a long track record of innovation and scholarly outputs in relation to university learning and teaching, with an emphasis on using assessment to promote and foster learning.

Alistair Sambell is Senior Vice Principal and Deputy Vice Chancellor at Edinburgh Napier University. He has been active in teaching and research throughout his career, with interests including microwave antenna design and pedagogy in higher education.

Joanna Tai is a Research Fellow at the Centre for Research in Assessment and Digital Learning at Deakin University, Australia. Her research interests include peer-assisted learning, developing capacity for evaluative judgement, student perspectives on learning and assessment, and research synthesis.

Richard Walker is Head of E-Learning Development at the University of York (UK), leading on the strategic development of e-learning services at the institution. He has over 20 years’ higher education experience with online education in the UK, Netherlands and Spain. He has published on instructional design frameworks for blended learning in a variety of journals, as well as approaches to the institutional adoption of learning technologies. He is a Senior Fellow of the HEA and an active member of the UK Universities and Colleges Information Systems Association’s Digital Education Group (www.ucisa.ac.uk/groups/deg). Email: [email protected]; ORCiD: orcid.org/0000-0002-4665-3790.
Naomi E. Winstone is a senior lecturer in higher education at the University of Surrey, UK. Naomi is a cognitive psychologist, specialising in learning behaviour and engagement with education. Naomi’s research focuses on the processing and implementation of feedback, educational transitions and educational identities. Her work has been funded by the Higher Education Academy, the Medical Research Council, the Leverhulme Trust, the Higher Education Funding Council for England, and the Society for Research into Higher Education. Naomi is a Senior Fellow of the Higher Education Academy and a National Teaching Fellow. Email: [email protected].
Foreword
Sally Brown
Assessment that is authentic and integral to learning and supports student engagement through formative feedback and productive dialogue is at the heart of this stimulating volume, for which I was delighted to be invited to write a foreword. My dear friend and colleague Liz McDowell (1954–2018), to whom this volume is dedicated, was one of the first proponents of assessment for learning (AfL) in UK higher education, and was in the vanguard of thinking about how to operationalise the approach into workable and authentic practices that enhance student learning. The Northumbria AfL model is characterised by:
• a feedback-rich learning environment that has formative assessment at its core with the intention of enabling all students to enhance their achievements;
• the notion of feedback expanded to include not only the ‘normal’ tutor feedback on student work but also tutor–student dialogic feedback and peer feedback from a range of formal and informal collaborative learning activities;
• rich interactions enabling students to identify the strengths and weaknesses of their own work, rather than simply expecting tutors to do that job for them;
• engaging students as active participants in learning activities and feedback, and hence inducting them to understand and subsequently, interrogate and challenge the standards, outcomes, and criteria used for the evaluation of high-quality work.
(After McDowell et al, 2009)
Produced at a time when all stakeholders in the higher education process are striving for high-quality assessment practices, this edited collection builds on the AfL legacy and should become a standard text for new and experienced academic practitioners. Karen Clegg and Cordelia Bryan have brought together in this collection authors with considerable expertise who share a commitment to making assessment and feedback work dynamically for students, their funders, invested parents/sponsors, their subsequent employers, the professional subject and regulatory bodies that shape the professional requirements for programmes and, of course, the accrediting institutions aiming for excellence in a highly competitive higher education market.
Central therefore to good assessment practice is effective feedback. As Carless and Boud propose, feedback literacy can be improved through curriculum design, guidance and coaching: all themes explored in depth in this edition:

For feedback processes to be enhanced, students need both appreciation of how feedback can operate effectively and opportunities to use feedback within the curriculum. (Carless and Boud, 2018)

Carless and Boud further argue that to gain feedback literacy, students need to be made aware of the imperative to take action in response to feedback information so that they can draw inferences from a range of feedback experiences for the purpose of continuous improvement and thereby develop a repertoire of strategies for acting on feedback. This edition offers both philosophical discourse and practical case studies which support a tutor/student dialogic approach to assessment – not just in relation to feedback but also in the design of assessment criteria.

Approaches that emphasise feedback as telling are insufficient because students are often not equipped to decode or act on statements satisfactorily, so key messages remain invisible. (Sadler, 2010)

Throughout this volume, we see examples of how this can work in practice to the benefit both of the students, who become more effective learners, and those who teach them, since it is more satisfying to provide formative commentaries to students who are clearly taking note of our advice and suggestions. Various metrics are used globally to evaluate the success of assessment and feedback but, in many nations, measures of student satisfaction (such as the Australian Course Experience Questionnaire as developed by Paul Ramsden, and subsequent incarnations, notably the UK National Student Survey) demonstrate significant dissatisfaction in both areas. Students, it seems, don’t value our efforts to be reliable, valid and fair, don’t trust us to be consistent, don’t understand how marks are achieved or align with learning outcomes and, most frustratingly, often don’t even read the feedback we work so hard to provide them. It is against this challenging backdrop that this practical and provocative book is set, aiming to show how assessment can be a force for good, benefiting students and enabling them to achieve highly. Cordelia Bryan and Karen Clegg have done an excellent job in marrying high-level concepts of assessment for learning with micro details of implementation. The authors in this book are some of the leaders of change in assessment in higher education, pioneering new ways of thinking about and implementing assessment. Several emphasise the importance of good assessment design from the outset to ensure that assessment works well.
Formerly assessment decisions were often taken from a position of habituation (‘we’ve always done it like this in the past’) or inertia (‘changing the way we assess would be nice, but we haven’t got time to do that’) or indeed a lack of knowledge of alternatives. Several case study authors draw on Biggs and Tang (2011) to encourage assessment designers to work from a position of constructive alignment as well as common sense, to ensure that assessment is fit for purpose. Such assessment tends to foreground authenticity of both task design and assessment environments, building inevitably towards enhanced student employability on graduation. The editors and contributors share my view that authenticity in assessment is best achieved by course or programme teams working together to develop shared understandings of expectations as communities of practice and, even more importantly, standards, which can only be achieved collectively. Many of us involved in UK higher education assessment enhancement activities have argued the case for professionalising the practice of assessors through activities that mentor, guide and train novice assessors. This has subsequently been advocated by the Quality Assurance Agency (2013) in their assertion that all who assess should be deemed by their higher education institutions competent to assess, and supported to become so by training, mentoring and other support. This important theme is well represented in this text. Indeed, this book is a valuable primer for all those who want to systematically prepare themselves to assess professionally as well as for those who want to refresh their thinking in the field.
References

Biggs, J. and Tang, C. (2011) Teaching for Quality Learning at University: What the Student Does (4th edn). Maidenhead: Open University Press/SRHE.
Carless, D. and Boud, D. (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment and Evaluation in Higher Education 43 (8): 1315–1325.
Higher Education Academy (2012) A Marked Improvement: Transforming Assessment in Higher Education. York: HEA.
McDowell, L., Sambell, K. and Davison, G. (2009) Assessment for learning: a brief history and review of terminology. In: Rust, C. (ed.) Improving Student Learning Through the Curriculum. Oxford: Oxford Centre for Staff and Learning Development, pp. 56–64.
Quality Assurance Agency for Higher Education (2013) UK Quality Code for Higher Education. Part B: Assuring and Enhancing Academic Quality. Chapter B6: Assessment of Students and Recognition of Prior Learning. Gloucester: QAA.
Sadler, D. R. (2010) Beyond feedback: developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35 (5): 535–550.
Acknowledgements
We thank all the contributing authors for sharing their innovations and for being willing to hold up the mirror to their own practice. Thanks also to David Boud for his feedback and huge gratitude to Sally Brown for her guidance and for writing the Foreword. To Sarah at Taylor and Francis, thanks for persuading us to complete a second edition – which has turned out to be practically a new book – we have enjoyed it immensely. Finally, thanks to you the practitioner reader; we hope you find some of this useful. If this collection impacts positively on your academic practice we will consider our job done.
Introduction
How innovative are we?
Karen Clegg and Cordelia Bryan
When the publishers approached us to produce a second edition of Innovative Assessment in Higher Education, our first response was reluctance. We asked ourselves, is there really sufficient demand for another collection of chapters about innovative assessment? Surely the issues raised in the original 2006 edition had been addressed and innovation was now prevalent throughout the sector? We very soon concluded, rather depressingly, that this was not the case. The context in which universities operate has changed enormously, yet the traditional architecture of assessment holds firm.

Back in 2006, students in the UK were paying significantly less for their education, there was no national audit of teaching and universities were less prominent in the media. Fast forward some years and the issues of how to engage students in feedback, how to assess equitably and how to ensure that assessment makes a difference to the learning experience remain as pertinent now as they were then. In the context of the Teaching Excellence Framework (TEF) and increased expectations from parents, students and government for higher education to equip our students for employment in a highly competitive market, assessment still has a huge role to play in the student experience and the outcome and future success of the individual. Assessment remains the game changer.

What we illustrate in this collection of chapters, 17 of which are new, is that there are honourable exceptions; innovation is taking place, at module, programme and institutional level. Creative, keen and enthusiastic practitioners are designing (and co-designing with stakeholders) new approaches and changing the assessment landscape. With case studies from the UK, Canada and Australia, what we offer fellow practitioners here is an opportunity to consider the context in which they work (Part I), to review how they use feedback to enhance student learning and their own teaching (Part II), to explore, modify and adapt some of the assessment strategies described to advance student learning (Part III) and finally, to consider how they can be the very best university teacher by taking a professionally integrated and reflective approach to academic practice (Part IV).

The title for this second edition has been modified to Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, reflecting our desire for it to be of practical use and theoretical challenge to all those seeking to gain formal recognition and accreditation and for those wishing to refresh, renew and reinvigorate their practice because they care, as we do, about the student experience.
When assembling the collection, our overarching aim was to showcase the very best of current innovative assessment practice. We selected experienced and well-known practitioners with a track record in assessment and additionally used our networks to identify relatively new authors who are pushing the boundaries of conventional assessment to better serve students. The mix of contributors was intentional. We want to help those who are new to teaching to identify, design and deploy assessment practices that best resonate with their own professional values and to gently disrupt the thinking of more experienced practitioners and programme leaders by offering provocative chapters on the complex nature of implementing change and enabling long-term sustainability. We hope that readers will agree that the result is a collection of chapters that provides rich pedagogic discourse that challenges the way we think about the purpose and role of assessment in creating astute, employable students. Threaded throughout the collection are several key themes designed to promote individual reflection and group discussion.

Institutional and individual accountability: we ask to what extent assessment innovation is limited not by ambition but by the fear of external scrutiny in the form of the national TEF rankings (in the UK) and the need to meet student and parental expectations. In this context, are we, as practitioners, ready to be bold and authentic in our creation and design of assessment that genuinely supports student learning or will we play safe and employ assessment practices that provide us with a neat set of scores that validate and evidence our own expertise as facilitators of learning? We explore collectively how we can learn from institutions like Alverno College in the United States (Chapter 4), Deakin University in Australia (Chapter 5) and those who took part in the Transforming the Experience of Students Through Assessment (TESTA) project (Chapter 3), where we see evidence of institutional approaches to innovative assessment design.

Assessment for learning: Kay Sambell and Liz McDowell were among the first proponents of assessment for learning (AfL) in UK higher education, promulgating the concept through the Northumbria conference of the same name and latterly as a special interest group network for the European Association for Research into Learning and Instruction. AfL turned ideas about assessment upside down such that we began as a sector to see assessment not simply as a way of differentiating between student abilities but as a tool for promoting learning. The legacy of the AfL approach and of the work of the late Liz McDowell is evidenced in the number of authors who make reference to it (see Chapters 1, 4, 6, 7, 12, 13 and 18).

Assessment design: we encourage all practitioners to consider the needs of students who, however you describe them (Gen Y, Gen X or Millennials, depending on their date of birth), are digitally native and who expect a level of online, social media and sophisticated digitisation about their learning and assessment experience.
How we can use technology to positively impact on assessment and how it can support learning are questions on which we continue collectively to ruminate. These issues are considered in Chapters 14 and 17. In designing assessment, we might think also about how to enable students to develop emotional intelligence, to develop learner confidence and to consider wellbeing (ours and theirs) when setting and marking assessment. An assessment task can be demanding without being demoralising. Carol Dweck and Guy Claxton have been strong proponents of the need to provide a positive mindset for our students; they need traits such as perseverance, resilience and to be able to receive, reflect, respond and revisit their learning in response to feedback. These issues are considered in Chapters 4, 10, 11, 18 and 20.

Communities of practice: providing opportunities for students to learn from each other and to give and receive feedback from other stakeholders such as mentors, alumni, business partners and academic collaborators enables them to learn and transform their learning and professional practice. As academic practitioners, we use the process of peer review, drafting and redrafting as second nature in our writing, but rarely do we encourage our students to do the same. One explanation for this focus on individual assessment is the fear of collusion and plagiarism. Acknowledging this, our authors explore how as practitioners we can design and co-create with students and other stakeholders assessment in ways that are useful and which genuinely equip students to learn, do things differently and progress (Chapters 1, 7, 18 and 20 in particular).

Employability: students need more than ever to demonstrate to employers what they can do and how they do it rather than just what they know. They need to be confident in describing the skills, attributes, competences and values they hold and to communicate this in a way which demonstrates to employers that they can add value. How students use feedback to enhance the narrative they create about themselves is explored by a number of our authors (see Chapters 5, 7, 8 and 20).

Professional practice: lastly, we consider how professional and academic practice is assessed. Accredited programmes of academic practice require evidence of competence in teaching and assessment, of professional values and ability to support students in ways that reflect equity, diversity and inclusivity. This portfolio of evidence usually requires a personal, first person narrative. There is often an expectation that we, like our students, will be willing and able in our reflections to acknowledge where our teaching practice has not gone well and to disclose our emotional responses. The public versus private nature of assessment is explored throughout Chapters 18, 19 and 20.
Part I: assessment in context

This introductory section provides the contextual and theoretical framework for the collection. It starts with a chapter by Helen King (Chapter 1), who provides a critical overview of the higher education policy context, focused on the UK and including a wider international perspective.
King identifies current and emerging themes and the implications (for teachers and learners) of these for assessment. She explores the changes to higher education regulation over the last decade and the growth of students as consumers and co-creators of knowledge. She also offers a context for the themes of employability (noted also by Kane and Banham in Chapter 8, by Clegg and Martin in Chapter 18 and by Maguire in relation to professional education in Chapter 20). Jessop (Chapter 3), drawing on Gibbs’ work from the original edition (Chapter 2), brings to life a number of case studies conducted as part of the nine-year TESTA project, which explores the extent to which assessment practices need to be embedded in curriculum design to make a difference, a theoretical consideration that Maharg (Chapter 17) challenges.

In Chapter 4, Sally Brown offers an overview of the progress that has been made in the first decade of the twenty-first century using assessment and feedback to improve student learning. She argues, ‘Assessment can and should be a significant means through which learning can happen, a motivator, an energiser as well as a means of engendering active partnerships between students and those who teach and assess them’. Using material from Alverno College, Milwaukee (Mentowski, Chapter 3 in the original edition) Brown explores how far we have come and offers us five propositions to ensure that assessment and feedback genuinely support learning. Assessment must:

1. serve student learning.
2. be fit for purpose.
3. be a deliberative and sequenced series of activities demonstrating progressive achievement.
4. be dialogic.
5. be authentic.

Part I concludes with a consideration by Tai and Adachi (Chapter 5) of the purpose and potential of self- and peer-assessment to transform the way in which students learn.
Part II: implementing feedback

In Part II, authors offer practical ideas for improving the quality of the feedback that we provide (and receive), with a view to enhancing learning and opening up dialogue about strengths and areas for development. The chapters consider in detail how feedback plays a vital role in developing the skills valued by employers and, in doing so, explore the value of developing student confidence, resilience and flexibility. We have retained Chapter 6 by Glover from the original edition as it encourages us to reflect on the quality and effectiveness of the feedback we are giving our students.
Chapter 7 (Hutchinson) and Chapter 8 (Kane and Banham) provide provocation to the way in which assessment in higher education supports the development of skills that are respected by employers. Presentations are used commonly as a form of assessment and yet the literature offers very little in the way of guidance or critique for doing so. Hutchinson reflects on the way in which we help students to develop their presentation skills, offering both guidance, in the form of criteria by which to assess presentations, and critical reflection on the role of feedback in helping students to become better presenters. Critiquing the criteria is essential and this is where peer-assessment is vital (discussed in more detail by Tai and Adachi in Chapter 5). Kane and Banham describe the systematic, institution-wide approach they have taken to developing a framework for undergraduate employability built on a series of training interventions that enable students to identify, reflect and articulate their skills in a language that has been co-created by a number of well-known employers (Aviva, PWC, Teach First, Glaxo, the National Health Service and Clifford Chance). They have drawn on practice in 30 higher education institutions across the UK, Australia and America and offer a thought-provoking argument for turning the curriculum on its head. Chapter 9 (Norton) encourages educators to take a critical and reflective standpoint about what they want to put in place and how ‘pedagogically desirable’ some of the outcomes might be. Pitt (Chapter 10) explores students’ ability to process especially negative emotions when receiving grades that may mismatch their expectations, while Winstone and Nash (Chapter 11) seek to develop students’ feedback literacy and their ‘proactive recipience’.
Part III: stimulating learning

Building on Part II, the chapters in this section offer empirically based case studies looking at different aspects of assessment which we hope educators will review in the light of their own institutional and cultural context. To some extent, all the chapters in this section consider the use of assessment as a way of developing skills, over and above content knowledge, confirming the growing trend in higher education to serve students and support their employability. In Chapter 12, Gardner-Medwin looks at how ‘certainty-based marking’ rewards not just what has been achieved but the ability of learners to be resilient and comfortable with uncertainty, capabilities that support mental wellbeing and which are increasingly valued by employers. Gilbert and Bryan (Chapter 13) review from their own research and relevant literature how group work can be enhanced through positive reinforcement of the use and value of compassion and empathy. They offer a flexible model for students to identify and develop compassionate micro skills to address common and potentially negative archetypes of group work such as the ‘monopoliser’, the ‘colluder’ and the ‘quiet student’. This theme of enhancing learner autonomy is reinforced in Chapter 15 (Sambell and Sambell), which explores their seminal concept of ‘assessment for learning’ (as developed at Northumbria with McDowell and Brown) and illustrates how this can be used to encourage student autonomy in the context of truly diverse learning contexts.
This review of what we can do to enhance our educative practices would not be complete without consideration of how to use technology positively to support assessment, and this is what Walker and Jenkins provide (Chapter 14). Completing this section, Dacre considers whether and to what extent educators working in performance-based subjects can provide parity of experience for students who are each engaged in something unique, requiring a level of subjective assessment. She critically examines the extent to which subjectivity is inherent and required in disciplines where performance or practice is assessed through simulation. This topic is picked up again in Chapter 20.
Part IV: assessing professional development

Recognising the exponential growth in the number of academics engaged in accredited learning and teaching programmes and continuing professional development in the UK and beyond, this section explores the use of portfolios to assess academic practice and how to use the UK Professional Standards Framework (developed by AdvanceHE and its antecedents and now widely used globally) to design assessment, and provides a glimpse into the next generation of assessment practice using cutting-edge online learning. Maharg (Chapter 17) explores why so many digital assessment innovations, despite good design and thorough testing, fail to go beyond the pilot phase. Using Pasteur as a case study, he argues for a transformation of practice that requires us as teachers to consider critically our curriculum and what we really value. Drawing on a decade of experience, Clegg and Martin (Chapter 18) offer a critical examination of their experience of leading (and external examining) on two programmes aimed at developing the academic practice of graduate teaching assistants. The chapter provides a number of recommendations and advice for those who are developing similar programmes and who wish to use portfolio-based assessment. This review of accredited programmes is complemented by Bryan et al (Chapter 19), who review a number of case studies of programmes that are designed using the UK Professional Standards Framework. Last, but by no means least, Maguire et al offer a practical and critical review, based on their own experience, of the ways in which professional practice is assessed in law, nursing and accountancy.

We conclude with a review of what has changed in assessment and a look into what the future of assessment may hold. Thanks to the good practice described by our contributing authors, the future of innovative assessment in higher education globally looks bright. The scope and scale of innovation taking place is considerable, as is the capability and willingness of the next generation of practitioners to implement change that positively impacts on student learning.
Part I
Assessment in context
1 Stepping back to move forward
The wider context of assessment in higher education
Helen King
Introduction

Assessment, as considered further in this book, is an integral part of effective curriculum design and development within modules and across programmes. However, it is also influenced by the wider local, national and, to an extent, international education policy context. This chapter explores some aspects of this wider context and considers the implications for you and your learning, teaching and assessment practices.

When I began teaching in higher education in England in the mid-1990s, assessment was almost entirely in the control of the individual lecturer. For universities, there was little scrutiny from elsewhere in the institution and certainly not from any external agency. The benefit of this was that lecturers had the opportunity to be more agile in designing and changing assessments. The major disadvantages included potential inconsistency in standards and little sharing of good practice or understanding of the pedagogic value of good assessment practice. In my experience as a student, a few years prior to this, assessment only had one purpose: to test your knowledge. There was no formative assessment and very little, if any, feedback beyond the mark. The picture was a little different for polytechnics and other similar institutions, where the Council for National Academic Awards was the national degree-awarding authority until they were granted university status post-1992, and there were a range of innovative formative and summative assessments in use.

Now the higher education landscape is vastly different and we are in an era of massification, marketisation and regulation. The opportunities for individual lecturers to teach and assess in isolation are few and far between, and increasingly undesirable as pedagogical research and good practice demonstrate the value of coherent, consistent and collaborative approaches to assessment for learning. Furthermore, international travel, global communications and transnational education have transformed the opportunities for learning from other higher education systems, as exemplified in the biennial Assessment in Higher Education conference which, in 2018, attracted delegates from 26 countries.
Table 1.1 The UK Quality Code (March 2018)

Expectations for standards:
• The academic standards of courses meet the requirements of the relevant national qualifications framework.
• The value of qualifications awarded to students at the point of qualification and over time is in line with sector-recognised standards.

Expectations for quality:
• Courses are well-designed, provide a high-quality academic experience for all students and enable a student’s achievement to be reliably assessed.
• From admission through to completion, all students are provided with the support that they need to succeed in and benefit from higher education.
Quality, standards and regulation

Over the past 20 years or so, higher education has had an increasing policy and public profile in relation to funding, expansion and academic standards. Changes in socioeconomic factors and the implementation of outcomes from the 1963 Robbins Report on Higher Education (Robbins, 1963) led to an increase in overall participation in higher education from 1.8% in 1940 to 32% in 1995 (Dearing, 1997, Report 6 Table 1.1). In 1920, 4357 students obtained a first degree; by 1960 this had increased to 22,426 and to 77,163 by 1990 (Bolton, 2012). Graduates in the 2016/17 academic year numbered 491,170 (HESA, 2018).

The Quality Assurance Agency and academic standards

In 1997, the National Committee of Inquiry into Higher Education published its report (known as the Dearing Report after the Committee’s Chair), including 93 recommendations, commissioned in response to this increasing expansion and underfunding of the system (Dearing, 1997). This report gave the Quality Assurance Agency (QAA) the remit of providing assurance on standards and quality. From this it developed a higher education qualifications framework (aligned to the Framework for Qualifications of the European Higher Education Area; Bologna Working Group, 2005), a code of practice and subject benchmark statements, a pool of external examiners and a process of institutional audit. Over time, these policies and processes have evolved to the current UK Quality Code for Higher Education and systems of review and enhancement for higher education providers. These systems vary across the devolved administrations of the UK, with the Annual Provider Review in England, the Quality Enhancement Review in Wales and the Quality Enhancement Framework in Scotland. The 2018 version of the Quality Code is a succinct summary of the standards and quality that higher education providers are expected to meet, and these standards underpin the associated conditions for registration with the regulator of English higher education providers, the Office for Students.
The Code identifies core and common practices which represent effective ways of working and underpin the means to meet these expectations. These expectations and practices are set at the level of the higher education provider, rather than being the direct responsibility of individual members of staff. In order to support and enable its staff to contribute to the maintenance of standards and quality, each higher education provider will have its own internal processes that ensure alignment to and compliance with the Quality Code. Many countries globally now have a formal system of quality assurance. For example, all European countries have introduced or are introducing quality assurance approaches that align to the agreed Standards and Guidelines for Quality Assurance in the European Higher Education Area (ENQA, 2009); Australia has its Tertiary Education Quality and Standards Agency; and, in the USA, the federal Department of Education recognises a number of regional and national accreditation agencies.

Given that these responsibilities are held at provider level, what are the implications for you as a teacher (and assessor) in higher education? Many institutions will have a dedicated quality team to manage and support processes which ensure adherence to the Quality Code. These processes are likely to include annual monitoring and periodic review of modules and programmes, and a system to approve minor and major changes to modules or programmes. With so many individual members of staff often involved in the design, teaching and assessment, it is important to have a clear system for keeping track across programmes to ensure internal coherence and consistency, as well as maintenance of standards in line with national expectations. In addition, many practice-oriented programmes are accredited by professional, statutory and regulatory bodies and your local quality processes will also support meeting their expectations.

The use of external expertise (external examiners and reviewers) is an essential component of quality assurance. As well as offering perspectives on the expectation of standards in a particular subject area, these external colleagues can also bring and share examples of good practice on learning, teaching and assessment. Being an external examiner (of existing programmes) or reviewer (for new programme development) is also an excellent professional development opportunity as it exposes you to different types of higher education provider and different ways of working. Those new to external examining may find it useful to engage in training and, in 2017/18, the Higher Education Academy (now AdvanceHE) was commissioned by the Higher Education Funding Council for England (HEFCE) to develop a general course for new external examiners to support consistency across the sector (AdvanceHE, 2018a).

Local quality systems (including your responsibility for the implementation of good practice in assessment and feedback) will also need to provide support to address the issue of grade inflation: as higher education has expanded there has been a corresponding increase in the relative proportion of upper-second and first-class degrees awarded.
and first-class degrees awarded. This issue has been researched extensively in the USA, where the literature identifies factors such as student module evaluation and the need to improve recruitment on certain programmes as potential drivers (Bachan, 2015). In its first letter of strategic guidance to the Office for Students (Gyimah, 2018), the new regulator of English higher education, the government highlights grade inflation as a priority for monitoring, and related data form a supplementary metric in the Teaching Excellence and Student Outcomes Framework (Office for Students, 2019b).

Scholarship and good practice

A key feature of the post-Dearing policy landscape was considerable government investment in learning and teaching enhancement, and raising the profile of professional development, reward and recognition. In England, the Teaching Quality Enhancement Fund invested £181 million over the period 1999–2000 to 2004–05 to support three strands of work: institutional (through the development and implementation of learning and teaching strategies); academic subjects/disciplines (including through the Learning and Teaching Support Network (LTSN) Subject Centres and the Fund for the Development of Teaching and Learning (FDTL)); and individual (the National Teaching Fellowship Scheme; HEFCE, 2005). Subsequent investment included the emergence of the Higher Education Academy from the Institute of Learning and Teaching in Higher Education and the LTSN, and the £315 million made available from 2005–06 to 2009–10 for Centres for Excellence in Teaching and Learning (HEFCE, 2011). The impact of this investment is still clearly in evidence today, including the ubiquity of learning and teaching strategies, development programmes for new lecturers and support for ongoing curriculum and professional development; the continuation of the Higher Education Academy’s work (including over 100,000 Fellows); sector-led initiatives such as the development of the UK Professional Standards Framework (AdvanceHE, 2018b); institutional funding for learning and teaching projects; collaborative activities across the sector; and the ongoing presence of assessment-related projects hosting a wealth of resources such as:
• Assessment Standards Knowledge Exchange (Oxford Brookes University);
• TESTA;
• resources maintained and developed by AdvanceHE.
Internationally, there has also been interest in enhancing learning and teaching. For example, over the first decade of this century the Australian government invested in large-scale projects and individual awards for teaching excellence (Department of Education and Training, 2017); since 1986 the Society for Teaching and Learning in Higher Education in Canada has partnered with 3M to award a total of over 300 National Teaching Fellowships (STLHE, 2019). Most recently, the European University Association (2019) Learning and Teaching initiative has brought together a large number of institutions across Europe to support the exchange of good practice. Assessment-related resources from these initiatives include:
• Feedback for Learning: Closing the Assessment Loop (Monash University, 2018);
• Assessment Design Decisions Framework (Bearman et al, 2014).
Through an enormous variety of local, national and international initiatives, as well as the endeavours of individuals and groups, research, scholarship and the development and sharing of good practice and resources in relation to assessment are widespread, and there is even a well-established (since 1975) peer-reviewed journal dedicated to the topic: Assessment and Evaluation in Higher Education.

Tuition fees, widening participation and the higher education marketplace

Another key response to the Dearing (1997) report was the introduction of tuition fees for all UK students through the Teaching and Higher Education Act 1998. Following devolution in 1999, Scotland and Wales brought in their own legislation on tuition fees. Introducing fees meant that the government could reduce the block grant provided to higher education providers, thereby reducing the cost to the public purse. Changing the funding policy enabled opportunities for further expansion and widening participation to allow greater numbers of young people to access higher education and to increase the proportion from under-represented groups (such as those from lower income families, people with disabilities and ethnic minorities). A number of initiatives were instigated to support the outreach, information and guidance, induction, retention and employability issues associated with widening participation (Moore et al, 2013). These have led to a wealth of inclusivity-themed resources, guidance and literature to support institutions and staff to provide all students with the opportunity to reach their potential (e.g. Jisc, 2018). This increasing expansion of higher education has gone hand-in-hand with moves to marketise the sector, whereby the demand and supply of education and research are managed through a price mechanism (Brown, 2015). In 2015/16, the removal of controls on the number of students that institutions could recruit further pushed the UK sector towards the concept of a fully competitive market, and in April 2018, following enactment of the Higher Education and Research Act 2017, the Office for Students began operation as the regulator for the English higher education sector, replacing HEFCE and the Office for Fair Access. Government funding, student fees and the expansion of higher education vary considerably internationally. For example, Germany has abolished
tuition fees, but this is only affordable because fewer students access higher education (27% of people aged 25–34 years have a degree, compared with 48% in the UK; Hillman, 2015). However, the issues exposed in relation to assessment are relevant to all student learning whatever the context. Due consideration of effective assessment and feedback can play a key role in supporting retention and success (Miller et al, 2015), as explored elsewhere in this book. Particular considerations include:
• The transition to higher education and assessment literacy. Assessment practices in higher education are likely to be different to those that students have previously encountered. Opportunities for formative assessment and feedback, together with a transparent approach to criteria and standards, early on in their studies can support transition to higher education (Cotton et al, 2013; Smith et al, 2013).
• A programmatic view – progression and practice in approaches to assessment. Assessment strategies should be considered at programme level, not just within individual modules, to ensure a consistent and coherent learning experience (Hartley & Whitfield, 2011; Jessop et al, 2014; see also Chapter 3).
• Inclusive, co-created and authentic assessment. Inclusive assessment is designed to provide all students with fair and equitable opportunities for demonstrating their skills and knowledge (Hockings, 2010). Higher education should encourage students to take responsibility for their own learning as a transition to the more self-regulated learning expectations in the workplace. Traditional assessment practices (e.g. essays, time-limited unseen examinations) often have little relevance to the modern world of work; however, authentic assessments can provide opportunities for students to critique their own work and prepare them better for employability (Boud & Falchikov, 2007). The increasing interest in student partnership and co-creation can also better engage students in their learning (Bloxham & Boyd, 2007).
Student partnership and co-creation

Despite, or perhaps because of, the increasing sense of students as customers or consumers of higher education, there is growing interest in developing partnerships with students for enhancing learning and teaching, including working with students to co-create curricula (Bovill, 2014). There are many ways in which students and staff can work in partnership, and a literature base is developing internationally that describes and evaluates these approaches and evidences the benefits to students (while acknowledging the challenges of this change in relationships and balance of power; Mercer-Mapstone et al, 2017). The potential benefits of engaging students as partners are relevant to an assessment for learning approach and include more active engagement and the opportunity for developing students’ ability to self-judge their work and generally improve their assessment literacy (Deeley & Bovill,
2015). In Australia, one of the seven propositions for assessment reform in higher education specified ‘students and teachers becom[ing] responsible partners in learning and assessment’ (Boud & Associates, 2010). Co-creation activities can include staff and students working together to agree essay titles, developing assessment criteria together (Meer & Chapman, 2015), enhancing formative feedback (Fluckiger et al, 2010) and engaging in self- and peer-assessment (Healey et al, 2014 – also a useful reference as a broader overview of students as partners).

Plagiarism and contract cheating

A particular problem relating to assessment that appears to be highlighted in a mass participation, marketised system is plagiarism. Ensuring academic integrity and discouraging plagiarism are not new issues, however. The rapid development of digital technologies has enabled students to have access to increasingly inventive means of cheating, including YouTube-based and other online cheat sites. The use of ‘essay mills’ or ‘contract cheating’, in particular, has been in the spotlight, with the QAA (2017) offering guidance to higher education providers on how to address the problem. Such guidance emphasises the need to design assessments more effectively to reduce or remove the opportunities for such cheating. Suggestions include the use of authentic assessment, mixed methods and, where possible, face-to-face approaches. While it is important to use good practice in assessment design, to educate students on the perils of plagiarism and to be vigilant for breaches in academic integrity, it is perhaps also worth being mindful of the true and limited scale of the problem. According to a 2016 report based on freedom of information requests, ‘Almost 50,000 students at British universities have been caught cheating in the past three years’ (Mostrous & Kenber, 2016). This averages to around 17,000 students per year, which is approximately 0.7% of the student population: hardly the ‘student cheating crisis’ that the media headline purported (Mostrous & Kenber, 2016).

Learning from feedback

Later chapters in this book explore the role that giving feedback has in supporting students’ learning. Feedback is a critical part of any learning and development process and, indeed, forms a substantial element in the acquisition and maintenance of expertise in any field or profession. Extensive research into the nature of expertise over the past three decades reveals a number of consistent characteristics of expert performance, including a self-determined and focused approach to continuous development and improvement known as deliberate practice (Ericsson et al, 1993) or progressive problem solving (Bereiter & Scardamalia, 1993). Hence, receiving feedback on one’s practice through formal and informal processes such as self-reflection, peer review, student module evaluations, discussion in networks and communities of practice,
is important to inform and enhance your approach to learning, teaching and assessment (King, 2018). Gathering and responding to student feedback has become increasingly common. Processes for this include module evaluation forms, student representation systems, quick quizzes and activities in class (e.g. asking students to note down things for you to ‘stop’, ‘start’ and ‘continue’ doing), informal conversations and formal focus groups. Arguably, the most influential source of feedback in the UK is the National Student Survey (NSS), in which every university takes part, as do many colleges and alternative providers (Office for Students, 2019a). The NSS was launched in 2005 and is made available to all final year degree students. The questions focus on students’ satisfaction with various aspects of their learning, teaching and assessment experience, and in 2017 the survey was updated to include questions about their engagement with their studies. This consideration of how students engage with learning, teaching and assessment is a core feature of the National Survey of Student Engagement (NSSE), which has been running in the USA and Canada since 2000, and the related Australasian Survey of Student Engagement (AUSSE; ACER Research, 2019). As illustrated on their host websites, both NSSE and AUSSE have been used extensively to provide an evidence base for informing learning and teaching enhancement, and in the UK the NSS has been particularly influential in informing the development of assessment and feedback. Since its beginnings, responses to questions on this theme have been consistently less positive than for any other aspect of the student experience (although they have improved over time; Medland, 2016). Many institutions have taken steps to try to improve these results, including implementing policies for the timely provision of effective feedback on assessment; and a large number of research papers and good practice guides have also been produced (e.g. Elkington & Evans, 2017; Gartland et al, 2016; Hill & West, 2017; Smith & Williams, 2017, to name but a few). As well as an opportunity to survey students’ experiences of assessment and feedback, the NSS questions also offer a useful prompt for you when considering your own practice: what can you do to ensure that students have a positive experience?

8. The criteria used in marking have been clear in advance.
9. Marking and assessment has been fair.
10. Feedback on my work has been timely.
11. I have received helpful comments on my work.
(NSS 2017 Core Questionnaire)

While the NSS and other national policy developments (such as the TEF, which uses NSS data within its metrics profile) have focused on undergraduate education, it is important to remember that the principles of good practice in assessment and feedback also apply to taught and research postgraduates. Again, as well as gathering your own evidence about the effectiveness of your
practices, there are (voluntary) national surveys in the UK which can offer a complementary view: the Postgraduate Taught Experience Survey (Leman, 2018) and the Postgraduate Research Experience Survey (Neves, 2018).

Insights from learning gain

Good practice in assessment in higher education is often associated with the use of a constructive alignment approach to curriculum design (Biggs & Tang, 2007), in which the assessment is carefully designed to align with the intended learning outcomes of the module or programme. In general, therefore, assessment will provide useful information on how well the student has met the learning outcomes at that stage but does not shed light on how far that student has travelled in terms of their skills, competencies and knowledge development over a period of time. The concept of learning gain has recently taken hold as a means of exploring this ‘distance travelled’ between two points in time and to what extent the student’s participation in higher education has enabled this development (McGrath et al, 2015). A number of initiatives in the USA, Europe and the UK have explored a wide range of methodologies for measuring learning gain (Randles & Cotgrave, 2017). This wide range is a consequence of there not being a single definition of learning gain and of projects considering it from many different generic and subject-specific perspectives, including affective, behavioural, cognitive, metacognitive, sociocommunicative and civic (Kandiko Howson, 2017). Whether or not you have the opportunity to be involved in considering and measuring learning gain, the concept itself is a useful starting point for discussions around widening participation and employability, student transition to higher education and assessment literacy, and criterion-referenced assessment. At present, in the UK at least, students graduate with a criterion-referenced degree; in other words, their outcome (first class, upper second etc.) relates to their performance against specific assessment criteria. However, this outcome does not say anything about how they have developed through their higher education experience. From an employability perspective, the student who has travelled the farthest might have more to offer an employer than someone who has been less challenged. Similarly, when assessing students at the module level, one student’s 50% mark might be an excellent achievement, whereas for another it would be very disappointing. In large classes, and with anonymous assessment, it might not be possible to know individual students well enough to distinguish their personal achievement level, but having an awareness that the same mark does not mean the same to all students is a good starting point for providing helpful feedback.

So what does this mean for your practice?

As briefly sketched above, the higher education policy environment in the UK and internationally is highly dynamic, with providers having to respond
relatively rapidly to changes in monitoring and procedural requirements. From the individual academic’s perspective there are three main points to bear in mind that will enable you to stay current and ensure your students are participating in an effective learning, teaching and assessment experience:

1. Good practice: an effective response to many, if not all, issues and challenges associated with changes to the higher education landscape, culture and student population is to develop and maintain assessment and feedback practices that adhere to the principles of good practice as outlined elsewhere in this book, and that you may have been exposed to through professional development activities within your institution.
2. Awareness of local policies and processes: unless you have a particular role that requires you to connect with the external policy environment, you do not need to be directly concerned with changes to national policies. However, you do need to be mindful of local systems and processes (particularly in relation to quality assurance and the NSS) that are aligned to external expectations. Actively seek out your local quality team or equivalent and discuss their role in ensuring your institution meets national quality expectations and standards, and the processes that you need to engage with.
3. Collaboration and innovation: despite the decreasing availability of funding for the development of learning, teaching and assessment, and the more competitive environment that the marketisation agenda has encouraged, there is still a strong appetite in the sector for collaboration, innovation and sharing of good practice. Engage with local or national networks and communities of practice in order to discuss challenges and share solutions; or, at least, have conversations with colleagues in your department. While external and internal processes might seem to place constraints on the development of assessment practices, in reality there is plenty of scope for innovation, particularly where it involves co-creation with students as partners and builds on a strong evidence base.
References

ACER Research (2019) Australasian Survey of Student Engagement (AUSSE). www.acer.org/gb/ausse (accessed 2 January 2019).
AdvanceHE (2018a) Degree Standards: Professional Development and Calibration for the External Examining System in the UK. York: AdvanceHE.
AdvanceHE (2018b) UK Professional Standards Framework (UKPSF). www.heacademy.ac.uk/ukpsf (accessed 2 January 2019).
Bachan, R. (2015) Grade inflation in UK higher education. Studies in Higher Education 42 (8): 1580–1600.
Bearman, M., Dawson, P., Boud, D., Hall, M., Bennett, S., Molloy, E., Joughin, G. (2014) Assessment Design Decisions Framework. Canberra: Department of Education and Training. www.assessmentdecisions.org/framework (accessed 2 January 2019).
Biggs, J. and Tang, C. (2007) Teaching for Quality Learning at University (3rd edn). Maidenhead: Society for Research into Higher Education and Open University Press.
Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.
Bologna Working Group (2005) A Framework for Qualifications of the European Higher Education Area. Bologna Working Group Report on Qualifications Frameworks. Copenhagen: Danish Ministry of Science, Technology and Innovation.
Bolton, P. (2012) Education: Historical Statistics. Standard Note: SN/SG/4252. London: House of Commons Library.
Boud, D. and Associates (2010) Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney: Australian Learning and Teaching Council.
Boud, D. and Falchikov, N. (eds) (2007) Rethinking Assessment in Higher Education: Learning for the Longer Term. New York, NY: Routledge.
Bovill, C. (2014) An investigation of co-created curricula within higher education in the UK, Ireland and the USA. Innovations in Education and Teaching International 51 (1): 15–25.
Brown, R. (2015) The marketisation of higher education: issues and ironies. New Vistas 1 (1): 1–9.
Cotton, D., Kneale, P., Nash, T. (2013) Widening Participation: PedRIO Horizon Scanning Report. PedRIO Paper 1. Plymouth: University of Plymouth Pedagogic Research Institute and Observatory.
Dearing, R.; National Committee of Inquiry into Higher Education (1997) Higher Education in the Learning Society: Main Report. London: Her Majesty’s Stationery Office.
Deeley, S. J. and Bovill, C. (2015) Staff student partnership in assessment: enhancing assessment literacy through democratic practices. Assessment and Evaluation in Higher Education 42 (3): 463–477.
Department of Education and Training (2017) Learning and teaching. www.education.gov.au/learning-and-teaching (accessed 2 January 2019).
Elkington, S. and Evans, C. (2017) Transforming Assessment in Higher Education: A Case Study Series. York: AdvanceHE.
ENQA (2009) Standards and Guidelines for Quality Assurance in the European Higher Education Area (3rd edn). Helsinki: European Association for Quality Assurance in Higher Education.
Ericsson, K. A., Krampe, R. T., Tesch-Romer, C. (1993) The role of deliberate practice in the acquisition of expert performance. Psychological Review 100 (3): 363–406.
European University Association (2019) Learning and Teaching. https://eua.eu/issues/20:learning-teaching.html (accessed 2 January 2019).
Fluckiger, J., Tixier y Virgil, Y., Pasco, R., Danielson, K. (2010) Formative feedback: involving students as partners in assessment to enhance learning. College Teaching 58: 136–140.
Gartland, K. M. A., Shapiro, A., McAleavy, L., McDermott, J., Nimmo, A., Armstrong, M. (2016) Feedback for future learning: delivering enhancements and evidencing impacts on the student learning experience. New Directions in the Teaching of Physical Sciences 11 (1). DOI: https://doi.org/10.29311/ndtps.v0i11.548.
Gyimah, S. (2018) Strategic Guidance to the Office for Students – Priorities for Financial Year 2018/19. London: Department for Education. www.officeforstudents.org.uk/media/1111/strategicguidancetotheofs.pdf (accessed 2 January 2019).
Hartley, P. and Whitfield, R. (2011) The case for programme-focused assessment: developing the university in turbulent times. Educational Developments 12 (4): 8–10.
Healey, M., Flint, A., Harrington, K. (2014) Engagement Through Partnership: Students as Partners in Learning and Teaching in Higher Education. York: Higher Education Academy.
HEFCE (2005) Summative Evaluation of the Teaching Quality Enhancement Fund (TQEF): A Report to HEFCE by the Higher Education Consultancy Group and CHEMS Consulting. Bristol: Higher Education Funding Council for England.
HEFCE (2011) Summative Evaluation of the CETL Programme: Final Report by SQW to HEFCE and DEL. Bristol: Higher Education Funding Council for England.
HESA (2018) Higher Education Student Statistics: UK, 2016/17 – Summary. www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics (accessed 2 January 2019).
Hill, J. and West, H. (2017) Achieving 100% pass rate and NSS feedback for a module: how we did it. Paper presented at the UWE Teaching and Learning Conference, UWE, Bristol, June 2017. http://eprints.uwe.ac.uk/32356 (accessed 2 January 2019).
Hillman, N. (2015) Keeping Up With the Germans? A Comparison of Student Funding, Internationalisation and Research in UK and German Universities. Oxford: Higher Education Policy Institute.
Jessop, T., El Hakim, Y., Gibbs, G. (2014) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different programme assessment patterns. Assessment and Evaluation in Higher Education 39 (1): 73–88.
Jisc (2018) Supporting an Inclusive Learner Experience in Higher Education. www.jisc.ac.uk/guides/supporting-an-inclusive-learner-experience-in-higher-education (accessed 2 January 2019).
Kandiko Howson, C. B. (2017) Evaluation of HEFCE’s Learning Gain Pilot Projects: Year 1 Report. Bristol: HEFCE.
King, H. (2018) Expert learning as a model for teachers’ professional development in higher education. SEDA Webinar. https://drhelenking.files.wordpress.com/2018/03/seda-webinar-slides-15-03-18.pdf
Leman, J. (2018) 2018 Postgraduate Taught Experience Survey. York: AdvanceHE.
McGrath, C. H., Guerin, B., Harte, E., Frearson, M., Manville, C. (2015) Learning Gain in Higher Education. Santa Monica, CA and Cambridge, UK: RAND Corporation.
Medland, E. (2016) Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment and Evaluation in Higher Education 41 (1): 81–96.
Meer, N. and Chapman, A. (2015) Co-creation of marking criteria: students as partners in the assessment process. Business and Management Education in HE. DOI: https://doi.org/10.11120/bmhe.2014.00008.
Mercer-Mapstone, L., Lucie Dvorakova, S., Matthews, K. E., Abbot, S., Cheng, B., Felten, P., Knorr, K., Marquis, E., Shammas, R., Swaim, K. (2017) A systematic literature review of students as partners in higher education. International Journal for Students as Partners 1 (1). DOI: https://doi.org/10.15173/ijsap.v1i1.3119.
Miller, W., Collings, J., Kneale, P. (2015) Inclusive Assessment. PedRIO Paper 7. Plymouth: University of Plymouth Pedagogic Research Institute and Observatory.
Monash University (2018) Feedback for Learning: Closing the Assessment Loop. Canberra: Department of Education and Training. www.learningfeedback.org (accessed 2 January 2018).
Moore, J., Sanders, J., Higham, L. (2013) Literature Review of Research into Widening Participation to Higher Education. Report to HEFCE and OFFA by Arc Network. Bristol: Higher Education Funding Council for England.
Mostrous, A. and Kenber, B. (2016) Universities face student cheating crisis. The Times, 2 January. www.thetimes.co.uk/article/universities-face-student-cheating-crisis-9jt6ncd9vz7 (accessed 2 January 2019).
Neves, J. (2018) 2018 Postgraduate Research Experience Survey. York: AdvanceHE.
Office for Students (2019a) National Student Survey – NSS. www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss (accessed 2 January 2019).
Office for Students (2019b) Teaching: the TEF. www.officeforstudents.org.uk/advice-and-guidance/teaching (accessed 2 January 2019).
QAA (2017) Contracting to Cheat in Higher Education: How to Address Contract Cheating, the Use of Third-Party Services and Essay Mills. Gloucester: Quality Assurance Agency for Higher Education.
Randles, R. and Cotgrave, A. (2017) Measuring student learning gain: a review of transatlantic measurements of assessments in higher education. Innovations in Practice 11 (1): 50–59.
Robbins, Lord (1963) Higher Education: Report of the Committee appointed by the Prime Minister under the Chairmanship of Lord Robbins. Cmnd. 2154. London: HMSO.
Smith, C. D., Worsfold, K., Davies, L., Fisher, R., McPhail, R. (2013) Assessment literacy and student learning: the case for explicitly developing students’ ‘assessment literacy’. Assessment and Evaluation in Higher Education 38 (1): 44–60.
Smith, N. and Williams, H. (2017) Introduction: rethinking assessment and feedback. European Political Science 16 (2): 127–134.
STLHE (2019) 3M National Teaching Fellowship. Society for Teaching and Learning in Higher Education. www.stlhe.ca/awards/3m-national-teaching-fellowships (accessed 2 January 2019).
2 How assessment frames student learning Graham Gibbs
Introduction

Students are strategic as never before, and they allocate their time and focus their attention on what they believe will be assessed and what they believe will gain good grades. Assessment frames learning, creates learning activity and orients all aspects of learning behaviour. In many courses it has more impact on learning than does teaching. Testing can be reliable, and even valid, and yet measure only the trivial and distorted learning which is an inevitable consequence of the nature of the testing. This chapter, and many others in this edition, is not about testing but about how assessment leads to effective study activity and worthwhile learning outcomes. It starts by quoting students describing how they respond to perceived assessment demands. It then outlines 11 ‘conditions under which assessment supports learning’. These conditions are based on a review of theoretical literature on formative assessment and on a review of published accounts of successful innovations in assessment, across all discipline areas, undertaken to identify why they were successful. Economical assessment methods are described that meet these conditions, each based on published evidence of worthwhile impact on learning and student performance. Associated diagnostic tools have been developed to help faculty to identify how their students respond to their assessment regimen, and some uses of these tools are described. The chapter is intended to provide a conceptual underpinning to the innovations in assessment described elsewhere in this volume.
Students’ experience of assessment

The two most influential books I read at the start of my teaching career came from parallel studies on opposite sides of the Atlantic, focusing on very similar phenomena. In the USA, Benson Snyder was undertaking an ethnographic study of the experience of students at the Massachusetts Institute of Technology (MIT). He had not intended to focus on assessment, but he discovered that assessment completely dominated the student experience, and so that is what he wrote most about. The Hidden Curriculum (Snyder, 1971) described the way that students strategically negotiated their
way through impossibly large curricula, trying to work out what faculty were really after and what they could safely ignore.

I just don’t bother doing the homework now. I approach the courses so I can get an ‘A’ in the easiest manner, and it’s amazing how little work you have to do if you really don’t like the course.

From the beginning I found the whole thing to be a kind of exercise in time budgeting…. You had to filter out what was really important in each course … you couldn’t physically do it all. I found out that if you did a good job of filtering out what was important you could do well enough to do well in every course.

The central idea in Snyder’s work was the gap between the course as presented publicly in course documentation and by faculty, and the narrower and rather different course students experienced and actually studied. The shape and size of this narrower curriculum was determined by students’ perceptions of assessment demands. Studying was an exercise in selective negligence. In Sweden, Fransson (1977) reported how students who were unable to understand or work out what to study, and attempted to study everything, quickly became depressed by the impossibility of the task. After initially working diligently, the number of hours they studied each week declined and they eventually performed badly or dropped out. There are few rewards for students who are not strategic. At the same time as Benson Snyder was being astonished by students at MIT, studies at the University of Edinburgh, an ancient research-intensive university in Scotland, found exactly the same phenomenon when they interviewed students:

I am positive there is an examination game. You don’t learn certain facts, for instance, you don’t take the whole course, you go and look at the examination papers and you say ‘looks as though there have been four questions on a certain theme this year, last year the professor said that the examination would be much the same as before’, so you excise a good bit of the course immediately.
(Miller and Parlett, 1974)

This study, Up to the Mark: A Study of the Examination Game, described some students as ‘cue conscious’. These students were aware of cues about what to study and what to neglect. Others were described as ‘cue seekers’ and took their professors for a beer in the hope of finding out what questions would be on the exam paper. The remainder were described as ‘cue deaf’: no matter how often they were advised what to focus on, this information passed over their heads. It proved easy to predict students’ grades simply by categorising the extent to which they were tuned in to cues about assessment and neglected the right things.
Subsequently, in my own research, I have often glimpsed the world of the student in relation to perceived assessment demands. The following student on a masters course in oceanography was only too aware of the gap between his learning and what got him good grades:

If you are under a lot of pressure then you will just concentrate on passing the course. I know that from bitter experience. One subject I wasn’t very good at I tried to understand the subject and I failed the exam. When I re-took the exam I just concentrated on passing the exam. I got 96% and the guy couldn’t understand why I failed the first time. I told him this time I just concentrated on passing the exam rather than understanding the subject. I still don’t understand the subject so it defeated the object, in a way.
(Gibbs, 1992a, p. 101)

At Oxford University, assessment for grades is almost entirely separated from learning and from ‘assessment for learning’. Most assessment for learning takes place orally in tutorials, which are often weekly one-to-one (or very small group) meetings between a tutor (an academic or a graduate student) and an undergraduate student. The work the student has been doing (on average 10–14 hours of reading and writing to produce an essay) is discussed. Metaphorically, or in some cases actually, the tutor and student sit on the same side of the table and explore the subject matter presented in the essay together as a joint scholarly exercise. They ‘do anthropology’ or ‘do history’ together. Assessment here is all ‘formative’ and is designed to support learning. Students may even complain after a tutorial that they still do not know how they are getting on, as essays are not usually graded. Exams happen mainly at the end of three years. The tutor has little or no input into the design of the exam questions and is not supposed to be preparing the student for exams. This gives students considerable freedom to explore what confounds them in the subject matter, and tutors considerable freedom to explore students’ misconceptions. I am not trying to sell the tutorial method: in any case, tutorials are cripplingly expensive even for Oxford! But there is a phenomenon that occurs often in other universities but which happens less commonly at Oxford: that of ‘faking good’. Faking good is an attempt by a student to present themselves and their work as if they know and understand more than they actually do, for the purpose of maximising grades. Students in most institutions normally choose those essay questions that they know most about and for which they will need to do the least learning, not those that will result in most learning. I remember, to my shame, that in my undergraduate essays I sometimes cited more references than I had actually read. In the example below an engineering student (not from Oxford) describes the way he presents his ‘problem sheets’ to his tutor for marking, not in a way which reveals his difficulties of understanding or the blind alleys he went down as he tackled the problems, but in a way which is designed to trick the tutor into
giving a good grade. Here assessment is a hurdle to be negotiated, a game to be played, at the expense of learning.

The average lecturer likes to see the right result squared in red at the bottom of the test sheet, if possible with as few lines of calculation as possible – above all else don’t put any comments. He hates that. He thinks that you are trying to fill the page with words to make the work look bigger. Don’t leave your mistakes, either, even corrected. If you’ve done it wrong, bin the lot. He likes to believe that you’ve found the right solution at the first time. If you’re still making mistakes, that means you didn’t study enough. There’s no way you can re-do an exercise a few months after because you’ve only got the plain results without comments. If you have a go, you may well make the same mistakes you’ve done before because you’ve got no record of your previous errors.
(Gibbs, 1992a)

This is the opposite of an Oxford student choosing to spend most time on what they do not yet understand or a tutor deliberately choosing to discuss what the student does not yet understand fully. ‘Faking good’ is a direct consequence of the form of assessment.
Students’ experience of feedback

It is a truism that learning requires feedback. The importance of feedback is enshrined in the ‘seven principles of good practice in undergraduate education’ (Chickering & Gamson, 1991). But how do students experience feedback? A number of studies have found that students can find feedback incomprehensible, that they glance at the mark and then throw their work away, or even that they do not bother to collect their work from the Departmental office (e.g. Higgins et al, 2001; Hounsell, 1987). In interviews, I encountered the following statement that was representative of common student perceptions. It concerns another of the ‘seven principles’: that feedback has to be provided promptly if it is to be attended to and be useful.

The feedback on my assignments comes back so slowly that we are already on the topic after next and I’ve already submitted the next assignment. It’s water under the bridge, really. I just look at the mark and bin it.

The crucial variable appears not to be the quality of the feedback (which is what teachers tend to focus on) but the quality of student engagement with that feedback. For example, Forbes and Spence (1991) report a study of innovation in assessment in an engineering course where peer feedback and marks, of very mixed quality and uncertain marking standards, provided instantly during lecture classes, produced a truly dramatic increase in student performance (in subsequent exams) compared with the previously high-quality
teacher feedback and reliable marking, which came back slowly and which students as a consequence had not attended to. This second example of a student statement concerns a general problem with feedback associated with objective testing, including computer-based multiple-choice question testing and open-entry forms of computerised feedback. The following student was studying a ‘maths for science’ course where the assessment was online. Students could tackle maths assignments in their own time and then type in their maths solutions. A very sophisticated computer programme then generated instant and appropriate qualitative feedback.

I do not like the online assessment method … it was too easy to only study to answer the questions and still get a good mark … the wrong reasoning can still result in the right answer so the student can be misled into thinking she understands something … I think there should have been a tutor-marked assessment part way through the course so someone could comment on methods of working, layout etc.

This problem with a focus of assessment on outcome rather than process is echoed in reviews of the impact of different kinds of feedback on pupil behaviour in schools (Black & Wiliam, 1998). It is now clear that feedback without marks leads to better learning than marks only, or even than marks with feedback. Any feedback that focuses on an individual’s overall performance (in the form of a mark or grade) rather than on their learning detracts from learning.
Who makes the judgements?

Thirty years ago, I was struck by Carl Rogers’ principle of learning that stated that learning is maximised when judgements by the learner (in the form of self-assessment) are emphasised and judgements by the teacher are minimised (Rogers, 1969). At the time it seemed a noble but hopelessly idealistic and impractical notion. I now know better. Much research on self- and peer-assessment appears to be obsessed with the reliability of student marking, in the hope that student-generated grades can substitute for teachers’ grades and save the teacher a whole lot of work. If you go to enough trouble, students are indeed capable of reliable marking (or, rather, marking as reliable as the rather low level teachers usually achieve). But this completely misses the point. What is required is not more grades but more learning. The value of self- and peer-assessment is that students internalise academic standards and are subsequently able to supervise themselves as they study and write and solve problems, in relation to these standards. It is the act of students making judgements against standards that brings educational benefits, not the act of receiving a grade from a peer. This issue is explored in much greater depth in the chapters in Part II. There are now many studies of the positive impact of self- and peer-assessment on student performance. In the USA, this has been
associated with the ‘classroom assessment’ initiative. In Europe and Australia, this has been associated with less organised, but no less voluminous, attempts by teachers to support learning better through changing assessment. My favourite example comes from a psychology department where the teachers were exhausted by spending every weekend marking experimental and laboratory reports. They would provide feedback such as ‘You have not labelled the axes of your graphs’, week in, week out, despite abundant guidance to students on laboratory report writing and repeated feedback of an identical kind. The teachers suspected that their diligence in providing feedback was to little purpose. They devised a feedback sheet which contained about 50 of the most frequent comments they wrote on students’ reports (such as ‘Have not labelled axes of graphs’). Next to each was a ‘tick box’ and they provided feedback in the form of ticks next to comments. While this saved their wrists from repetitive strain injury from writing the same feedback endlessly, it did not improve students’ laboratory reports. They then had a brainwave and gave the students the feedback sheet and required them to attach a copy to the front of each laboratory report they submitted, but with a tick next to all the things they had done wrong. Students were then able to submit technically perfect laboratory reports because they could undertake useful self-assessment before submission, and the teachers had to develop new, tougher criteria to avoid everyone getting perfect grades. It is not until students apply criteria and standards to judge their own work, as part of self-supervision while working (just as I am doing while writing this chapter), that their work will improve. And this is at no cost to the teacher (or, in my case, the book’s editors).
Conditions under which assessment supports learning

I have written a number of books over the years about assessment methods, with the intention of increasing teachers’ repertoire of alternatives to suit different contexts, and because variety is the spice of life (cf. Gibbs, 1992b; Gibbs, 1995; Habeshaw et al, 1993). What I had not done was to provide a coherent rationale for deciding which kind of method suited which kind of context or educational problem. I have recently set out to turn observations such as those in the previous sections into a coherent rationale (Gibbs, 1999). This involved reading theoretical literature (mostly schools-based) about formative assessment. But most importantly I read large numbers of ‘case study’ accounts of changed assessment set in higher education where claims were made about improved student performance – but where there was usually no explanation of what this improvement was due to. For example, in Forbes and Spence (1991) there is a full description of the assessment innovation and full data about the improvement in grades but no articulation of the underlying pedagogic principles involved. I was interested in what ‘pedagogic work’ was being done by various assessment tactics that resulted in them being effective. In the case studies it was also rare to find a rationale for selecting
the particular innovation the authors chose to implement. I was interested in how you could diagnose a problem so as to guide the choice of an appropriate assessment solution. This literature review and consultation with the National Assessment Network led to the articulation of 11 ‘conditions under which assessment supports student learning’ (Gibbs & Simpson, 2004). A student questionnaire was then developed, the Assessment Experience Questionnaire (AEQ; Gibbs & Simpson, 2003; updated and referred to in Chapter 3 of this edition), which has been used widely to diagnose which of these conditions is being met and which are not. In the UK, South Africa and Hong Kong there is currently quite widespread use of the AEQ as part of action research projects undertaken by science faculty to find ways to support student learning better through innovation in assessment. Scores from the AEQ help to diagnose problems and select appropriate assessment solutions, and the AEQ is then administered again after the innovation has been implemented, to monitor changes to student learning behaviour. These 11 conditions are summarised here, clustered under the headings used to structure the questionnaire.

Quantity and distribution of student effort

1. Assessed tasks capture sufficient study time and effort. This condition concerns whether your students study sufficiently out of class or whether the assessment system allows them to get away with not studying very much at all. This is the ‘time on task’ principle (Chickering & Gamson, 1991), linked to the insight that it is assessment, and not teaching, that captures student effort.
2. These tasks distribute student effort evenly across topics and weeks. This condition is concerned with whether students can ‘question spot’ and avoid much of the curriculum, or stop turning up to class after the last assignment is due in. It is about evenness of effort week by week across a course and also across topics. I once saw data on the distribution of students’ answers for an examination in which students had to answer 3 of 15 questions. Almost everyone answered the same three questions and the topics addressed by the other 12 questions were presumably hardly studied at all.

Quality and level of student effort

3. These tasks engage students in productive learning activity. This condition is partly about whether the assessment results in students taking a deep approach (attempting to make sense) or a surface approach (trying to reproduce; Marton, 2005), and also about quality of engagement in general. Do the things students have to do to meet assessment
requirements engender appropriate, engaged and productive learning activity? Examinations may induce integration of previously unconnected knowledge during revision, or memorisation of unprocessed information. Which approach to revision will be induced depends not so much on the examination demands as on students’ perceptions of these demands.
4. Assessment communicates clear and high expectations to students. This condition is again drawn from Chickering and Gamson (1991): ‘Good practice communicates high expectations’. This is partly about articulating explicit goals that students understand and can orient themselves towards, and partly about the level of perceived challenge. Can students spot, within 10 minutes of the first class of a course or within the first 30 seconds of reading a course description, that this is going to be an easy course and that assessment demands will be able to be met without much effort or difficulty? Where do students pick up these clues from? Without internalising the standards of a course, students cannot monitor their own level of performance or know when they have not yet done enough to be able safely to move on to the next task or topic, or to reallocate their scarce time to another course they are studying in parallel. On the Course Experience Questionnaire, scores on the ‘Clear Goals and Standards’ scale correlate with the extent to which students take a deep approach to learning (Ramsden, 1991).

The remaining conditions concern feedback. They are not elaborated here as feedback is addressed in depth in Part II of this book.

Quantity and timing of feedback

5. Sufficient feedback is provided, both often enough and in enough detail.
6. The feedback is provided quickly enough to be useful to students.

Quality of feedback

7. Feedback focuses on learning rather than on marks or students themselves.
8. Feedback is linked to the purpose of the assignment and to criteria.
9. Feedback is understandable to students, given their sophistication.

Student response to feedback

10. Feedback is received by students and attended to.
11. Feedback is acted upon by students to improve their work or their learning.

Outline ideas for meeting these conditions are summarised later in this chapter and addressed in more detail in subsequent case studies.
Use of the Assessment Experience Questionnaire to diagnose where to innovate

Evidence from the use of the Assessment Experience Questionnaire (Gibbs et al, 2003) is cited here to illustrate the way it can be used to diagnose problems with the way assessment supports student learning and, in particular, the extent to which the 11 conditions outlined above are met. These data come from 776 students on 15 science courses at two UK universities. The students at the two universities were revealed to have very different perceptions of their assessment systems. In fact, there was more variation between the universities than between courses, suggesting that there are institutional assessment system cultures or norms. In response to data such as those in Table 2.1, Institution B has focused its efforts on improving feedback to students. Institution A has, in contrast, focused its efforts on students making more use of the high volume of feedback that they are given. These data come from a national-scale project (Swithenby, 2006) that supported action research into the way assessment supports learning.

Table 2.1 Comparison of 15 science courses at two universities in terms of the reported volume and distribution of student effort and students’ perception of the quantity and promptness of feedback

Scale (scale score)                                     University A     University B     t                P-value (a)
Time demands and distribution of student effort        20.3 (SD 3.16)   18.6 (SD 2.91)   7.387 (df 772)   P < 0.001
Quantity and timing of feedback                         22.0 (SD 4.40)   15.6 (SD 4.48)   19.28 (df 766)   P < 0.001

Sample items (agree or strongly agree, %)
I only study things that are going to be covered in the assignments       8     27
On this course it is possible to do quite well without studying much      64    33

Sample items (agree, %)
On this course I get plenty of feedback on how I am doing                 68    26
Whatever feedback I get comes too late to be useful                       11    42

(a) Two-tailed t-test. df, degrees of freedom; SD, standard deviation from the mean.

The ‘scale scores’ in Table 2.1 are out of a maximum score of 30 and are derived from five-point rating scales on each of the six questionnaire items making up each scale. The differences between these institutions in terms of the ‘Quantity and timing of feedback’ are very marked. The data also showed marked differences between different courses in the extent to which, for example, students found feedback helpful, or acted upon feedback. Table 2.2 examines differences between courses within Institution A and displays a selection of data from the ‘best’ and ‘worst’ course (in terms of scores on the AEQ) in the sample.

Table 2.2 Comparison of science courses within University A in terms of students’ use of feedback

AEQ items (strongly agree, %)                                                        ‘Best’ course   ‘Worst’ course
The feedback helps me to understand things better                                    36              6
The feedback shows me how to do better next time                                     31              4
The feedback prompts me to go back over material covered earlier in the course      13              1

The data show that the AEQ is capable of distinguishing between courses even within a single institution and a single subject. Note just how unlikely it is for students to be prompted by feedback to go back over material. What is clear from such data is that there are major differences in how effectively assessment systems work to support student learning and to foster student behaviour that is likely to lead to learning. There is clearly plenty of scope for using methods that improve matters.
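As a rough guide to reading these scale scores (a back-of-the-envelope sketch, assuming each of the six items is rated on a 1–5 scale, so that scale totals can range from 6 to 30), the institutional means on the ‘Quantity and timing of feedback’ scale correspond to average per-item ratings of approximately

22.0 / 6 ≈ 3.7 (University A)   and   15.6 / 6 = 2.6 (University B),

which makes the size of the gap between the two institutions easier to picture.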
Assessment tactics that solve learning problems

This section summarises assessment tactics that, from accounts in the literature, have the capacity to address particular conditions well. There is obviously no one-to-one relationship between their use and changed student learning behaviour – that will depend on an interaction of many variables in each unique context.
Addressing problems with the quantity and distribution of student effort

It is possible to capture student time and effort simply by using more assignments or assignments distributed more evenly across the course and across topics. The Open University, for example, traditionally employs eight evenly spaced assignments on each ‘full credit’ course, to ensure that distance learning students work steadily throughout the year and on all course units.
To cope with the consequent marking load, it is possible to make the completion of assignments a course requirement, or a condition to be met before a summative assessment is tackled at a later date, without marking any of these assignments. It is also possible to sample assignments for marking (e.g. from a portfolio) such that students have to pay serious attention to every assignment in case it is selected for marking. Mechanised and computer-based assessment can obviously achieve similar ends (of high levels of assessment without tutor marking), although often without meeting the other conditions very fully, and sometimes at the cost of quality of learning and a misorienting of learning effort. The use of self- and/or peer-assessment (provided that it is required) can also generate student time on task without generating teacher time on marking. It is also possible to design examinations that make demands that are unpredictable, or which sample almost everything, so that students have to study everything just in case, though this too can result in other conditions not being met, such as failing to generate a high quality and level of learning effort because students take a surface approach as a result of anxiety and excessive workload.
Problems with the quality and level of student effort

Assignments that are larger, more complex and open-ended, requiring ‘performances of understanding’, are more likely to induce a deep approach to study than are short-answer tests or multiple-choice questions. Assignments involving interaction and collaboration with other students, in or out of class, are usually more engaging. Social pressures to deliver, for example through making the products of learning public (in posters or through peer-assessment), may induce more care and pride in work than assignments submitted ‘secretly’ to the teacher. Clear specification of goals, criteria and standards, and especially the ‘modelling’ of the desired products, for example through discussion of exemplars, will make it less likely that students will set themselves low or inappropriate standards. If students internalise these goals, criteria and standards, for example through student marking exercises and public critique of work, they are likely to be able to use these standards to supervise their own study in future.
Problems with the quantity and timing of feedback
Regular feedback requires regular assignments, ideally starting early in a course so that students are oriented to the standard required as early as possible. Some institutions or departments have quality standards for the volume and turn-around time of tutor feedback, and also have the means to monitor the achievement of these standards. Mechanised feedback can be used to increase its volume at low cost, and much contemporary innovation in assessment is concerned with computer-generated marking and feedback, where mechanised tests are used.
The challenge then is to meet the other conditions at the same time. The quality of feedback can be traded off against speed of return: for example, by using peer feedback or model answers, or by the tutor sampling students’ assignments to produce generic feedback based on the first five assignments assessed, without reading the rest. The balance of gains and losses from such practices is a matter for empirical study to explore. Ultimately, the fastest and most frequent feedback available is that which students provide to themselves, from moment to moment, as they study or write assignments in ‘learning conversations’. Investing effort in developing such self-supervision may be by far the most cost-effective use of tutors’ time.
Problems with the quality of feedback
If students receive feedback without marks or grades, they are more likely to read the feedback, as it is the only indication they have of how they are getting on. This has been demonstrated to have a significant positive impact on learning outcomes (Black & Wiliam, 1998). If feedback is structured around the goals of the assignment and relates clearly to criteria and standards, this is more likely to result in clarity and impact than unstructured, arbitrary feedback that focuses on student characteristics. The quality and impact of feedback can be improved through clear briefing of teachers. The Open University in the UK trains all its tutors in how to give thorough and motivating feedback, and also periodically monitors the quality of this feedback (rather than monitoring the quality of their teaching) and provides individualised coaching where there are perceived to be quality problems. Students’ ability to make sense of and use feedback can be improved through classroom discussion of what specific examples of feedback mean, and through discussion of improvements students intend to make to subsequent assignments in response to the feedback.
Problems with students’ response to feedback
If feedback is provided faster, there is more likelihood that students will read and respond to it. If students tell the teacher what they would like feedback on, they are more likely to pay attention to this feedback when they receive it (Habeshaw et al, 1993). If students discuss feedback on their assignments in class, they are more likely to think about it and take it seriously (Rust et al, 2003). If an assignment allows drafting, with feedback on the draft, students are likely to use this feedback to improve the final version. Students are usually capable of giving each other useful feedback on drafts. Tutors can also ask students to attach a cover sheet to their assignment explaining how the peer feedback (or the feedback from the tutor on the previous assignment) was used to improve this assignment.
If testing is ‘two-stage’, with an opportunity between a ‘mock’ test and the ‘real’ test to undertake more studying on those aspects which were not tackled well in the ‘mock’ test, then students are likely to use this opportunity (Cooper, 2000). If assignments are ‘multi-stage’, with each stage building up towards a larger and more complete final assignment, then students will almost inevitably use feedback as they put the whole thing together. If assignments have multiple components, tackled in sequence (e.g. stages of a project, elements of a portfolio of evidence), in which each component contributes to a larger whole, feedback on early sections is very likely to be used by students to improve the whole. If students are asked to demonstrate how the current assignment benefits from feedback on the previous assignment, and are allocated marks for how well they have done this, they are likely to take feedback seriously. If at least some of the feedback is generic in nature, there is more likelihood that it will also apply to subsequent assignments on different topics.
Conclusions
This chapter has provided a conceptual framework for diagnosing the extent to which assessment regimes are likely to create effective learning environments. The ‘conditions under which assessment supports student learning’ are based on educational theory and empirical evidence concerning either weak student performance where these conditions are not met, or improved student performance where innovations in assessment have been introduced specifically to address one or more of these conditions. It is clear both that student learning can be poor largely because the assessment system does not work well, and that changes just to the assessment, leaving the teaching unchanged, can bring marked improvements. The chapter has provided a glimpse of the wide variations that exist between courses and between institutions, even within the same discipline area, in terms of how well assessment regimes support learning. Finally, the chapter has outlined some of the ways in which these assessment conditions can be met – the tactics that can be adopted to address specific weaknesses. Later sections of this book contain case studies that illustrate some of these tactics in action.
References
Black, P. and Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice 5 (1): 7–74.
Chickering, A. W. and Gamson, Z. F. (1991) Applying the Seven Principles for Good Practice in Undergraduate Education. San Francisco, CA: Jossey-Bass.
Cooper, N. J. (2000) Facilitating learning from formative feedback in level 3 assessment. Assessment and Evaluation in Higher Education 25 (3): 279–291.
Forbes, D. and Spence, J. (1991) An experiment in assessment for a large class. In: Smith, R. (ed.) Innovations in Engineering Education. London: Ellis Horwood, pp. 97–101.
Fransson, A. (1977) On qualitative differences in learning: IV – effects of motivation and test anxiety on process and outcome. British Journal of Educational Psychology 47: 244–257.
Gibbs, G. (1999) Using assessment strategically to change the way students learn. In: Brown, S. and Glasner, A. (eds) Assessment Matters in Higher Education. Buckingham: Society for Research into Higher Education and Open University Press, pp. 41–53.
Gibbs, G. (1995) Assessing Student Centred Courses. Oxford: Oxford Centre for Staff Development.
Gibbs, G. (1992a) Improving the Quality of Student Learning. Bristol: Technical and Educational Services.
Gibbs, G. (1992b) Assessing More Students. Oxford: Oxford Centre for Staff Development.
Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education 1: 3–31.
Gibbs, G. and Simpson, C. (2003) Measuring the response of students to assessment: the Assessment Experience Questionnaire. Paper presented at the 11th International Improving Student Learning Symposium, Hinckley.
Gibbs, G., Simpson, C. and Macdonald, R. (2003) Improving student learning through changing assessment – a conceptual and practical framework. Paper presented at the European Association for Research into Learning and Instruction Conference, Padova.
Habeshaw, S., Gibbs, G. and Habeshaw, T. (1993) 53 Interesting Ways to Assess Your Students (3rd edn). Bristol: Technical and Educational Services.
Higgins, R., Hartley, P. and Skelton, A. (2001) Getting the message across: the problem of communicating assessment feedback. Teaching in Higher Education 6 (2): 269–274.
Hounsell, D. (1987) Essay writing and the quality of feedback. In: Richardson, J. T. E., Eysenck, M. W. and Warren-Piper, D. (eds) Student Learning: Research in Education and Cognitive Psychology. Milton Keynes: Open University Press and Society for Research into Higher Education, pp. 109–119.
Marton, F. (2005) Approaches to learning. In: Marton, F., Hounsell, D. and Entwistle, N. (eds) The Experience of Learning (3rd edn). Edinburgh: University of Edinburgh, pp. 39–58. www.ed.ac.uk/institute-academic-development/learning-teaching/research/experience-of-learning (accessed 25 January 2019).
Miller, C. M. L. and Parlett, M. (1974) Up to the Mark: A Study of the Examination Game. Guildford: Society for Research into Higher Education.
Ramsden, P. (1991) A performance indicator of teaching quality in higher education: the course experience questionnaire. Studies in Higher Education 16: 129–150.
Rogers, C. (1969) Freedom to Learn. Columbus, OH: Merrill.
Rust, C., Price, M. and O’Donovan, B. (2003) Improving students’ learning by developing their understanding of assessment criteria and processes. Assessment and Evaluation in Higher Education 28 (2): 147–164.
Snyder, B. R. (1971) The Hidden Curriculum. Cambridge, MA: MIT Press.
Swithenby, S. J. (2006) Formative assessment in science teaching. New Directions (2): 73–74.
3 Changing the narrative: A programme approach to assessment through TESTA
Tansy Jessop
Introduction
In the first edition of this book, Graham Gibbs described the dominant culture in which assessment operates as ‘conservative and defensive, rather than bold’ (Gibbs, 2006, p. 20). His comment was made in the halcyon days of UK higher education, when students paid minimal fees, public funds supported higher education, widening participation was becoming a reality and the metric tide was but a trickle. Given the relentless tide of metrics and the market since 2006, it is hard to imagine progressive shifts from this ‘conservative and defensive’ culture. Being bold is all the more challenging when students want value for money, teaching is under scrutiny, and risks may prove costly in measures like the Teaching Excellence Framework. Yet in many universities across the UK, that narrative is changing. In countless degree programmes across some 50 UK universities, and in Australia and India, teams have been emboldened to take systematic steps to improve assessment and feedback using the Transforming the Experience of Students through Assessment (TESTA) research and change process. TESTA is a systemic, evidence-informed and collegiate process designed to help academics step back from a narrow modular gaze and take a programme perspective of assessment and feedback as teams. It draws on the lived experience of students to engender change, at the same time as valuing collaboration with academics and educational developers. TESTA was originally funded by the Higher Education Academy in 2009, but has been sustained beyond its funding period (2009–12) through community, reinvention and the rigour of its approach (Jessop, 2016). A programme approach addresses the unintended consequences of modular degrees, where students learn in small units, often 10 or 20 credits in weight, equating to 100 or 200 hours of study respectively. Modules have afforded students choice; they have also increased summative assessment loads, arguably doubling them (Gibbs, 2006, p. 16). Modules have driven content-focused and disconnected approaches to curriculum design, squeezed out formative assessment and made it easier for students to compartmentalise their learning.
Large-scale data from TESTA have identified common patterns of assessment and feedback which arise mainly as a result of modular silos. In summary, the common patterns and problems are:
• high summative assessment loads;
• low ratios of formative assessment, often weakly implemented;
• disconnected, one-off feedback which often does not feed forward;
• weak conceptions of standards by students;
• a transmission model of assessment and feedback.
The theory which originally underpinned TESTA also underpins this book (Chapter 2), building on ten conditions of assessment which support learning, including the quantity and distribution of student effort; clear and high expectations of assessment tasks; the quality, quantity and timing of feedback; and students’ capacity to internalise standards (Gibbs & Simpson, 2005). The TESTA process has continually reinvented itself through fresh literature and new theories, for example on dialogic feedback (Nicol, 2010); personalising feedback (Pitt & Winstone, 2018); authentic assessment (Lombardi, 2007; Ashford-Rowe et al., 2013); slow scholarship (Berg & Seeber, 2016; Harland et al, 2015); and understanding marking as social practice, with implications for how students internalise standards (O’Donovan et al, 2008). One of the key shifts in the TESTA process has been from theorising assessment and feedback through a psychological lens to a sociological one, in line with the structural emphasis of a programme view. While the dominant psychological theories of deep and surface approaches to learning remain pertinent in an analysis of assessment and feedback (Marton & Saljo, 1976), sociological theories about student alienation and engagement have increasing resonance in TESTA data (Mann, 2001; Case, 2008).

This chapter highlights evidence of alienation and demonstrates strategies which encourage students to move from alienation to engagement. Pedagogical tactics alone are insufficient to accomplish this shift; more sophisticated team, programme-wide and institutional approaches are needed to bring about deeper change (Gibbs, 2013). Educational change requires a systemic approach, bringing together and rationalising individual innovation and modular initiatives.

It would be naïve to discuss assessment in isolation from the wider context of higher education. Helen King’s chapter draws attention to the context of marketisation and its impact on practice in higher education. In an age of neoliberalism, assessment and feedback are subject to market forces, metrics and mass higher education. The corporate university is primarily interested in success, performance and competition, and this plays out in assessment environments. The UK National Student Survey (NSS) places emphasis on the discourse of satisfaction, distancing students from having agency and influence over their own learning by foregrounding provision, satisfaction and what students ‘receive’.
The Teaching Excellence Framework constructs student learning mainly from the perspective of outcomes, rather than investigating complex process measures. Mass higher education reinforces student anonymity and adds weight to the impression, and the reality, that many students experience a fundamentally dehumanising transaction at university. Arguably, markets, metrics and massification conspire together to drive assessment and feedback towards more homogenising practices which minimise pedagogic relationships, avoid risk, and dampen creativity. These practices feed alienation rather than the human dimensions of an engaging learning experience. While the wider context is not the focus of my chapter, it does have a bearing on students’ lived experience of assessment.
Research sample, data and ethics
Data on which this chapter is based are drawn from 15 universities in the UK,1 with an almost equal split between research and teaching intensives. A more granular analysis of university types in the sample shows that there are teaching-focused GuildHE Cathedrals Group universities (n = 4), modern universities (n = 4), and research-intensive universities (n = 7). Altogether, the wider sample consists of 103 programmes, from which the author collected and analysed programme data, 58 of which she led through the TESTA change process. Access to the remaining data has come through collaboration on projects and co-authorship.

Ethics and approach to TESTA
Universities which participate in TESTA adhere to the normal ethical protocols of confidentiality, anonymity and a trustworthy process. One of the key tenets of TESTA is that only the researchers and programme teams have oversight of the analysed data; it is not shared upwards with managers, to prevent it from being used as a performance management tool. The right to withdraw is enshrined in the ethics of any research process, but this may be somewhat glossed over in instances where TESTA is used with the intention of improving NSS scores, and at the direction of senior managers. Universities vary in who carries out the research and change process. In some universities, TESTA is the core responsibility of academic developers. In others, postgraduate researchers, freelance consultants or graduate interns carry out the research, and the change process is negotiated by an academic developer. In yet other universities, academic staff themselves carry out the research and change process, either on their own programmes or on other programmes in a quid pro quo arrangement. The freedom to bring about the changes discussed in the team debriefing varies according to how tightly the academic development and/or quality assurance teams are able to support changes or wish to monitor effects. Increasingly, as the cost of conducting TESTA is calculated, and as metrics like the NSS and the Teaching Excellence Framework bear down on universities, there is more of a call to monitor the effects and impact of TESTA.
Nonetheless, there is always a delicate balance to be struck between disciplinary academics’ autonomy and academic developers’ advice in negotiating TESTA evidence. The most significant contribution that TESTA makes within this complex negotiation is to bring student evidence, a collaborative approach to change, and a team conversation to the process.
How TESTA works: the method
TESTA consists of three research methods: the programme audit, the Assessment Experience Questionnaire (AEQ) and focus groups with final-year students. The programme audit is essentially a conversation with the programme leader over the validated documents. It represents the planned curriculum, and the more ‘official’ view of the student experience, seen from the perspective of an academic leading the team. The AEQ is a validated survey instrument, newly refined from the 2003 version (AEQ 3.3) mentioned in the original edition of this book. The updated AEQ is Version 4.0. The AEQ is usually administered to final-year undergraduate students in the programme under study. Focus groups with students, facilitated by someone external to the programme, complete the TESTA data set. These three methods are the heart of TESTA’s research process. The data are analysed, triangulated and crafted into a case study which is discussed at a live event with the programme team. Academics corroborate or challenge the evidence and troubleshoot problems using educationally principled strategies as a way of improving the whole programme’s assessment. In some universities (for example, Dundee, Greenwich and Winchester), TESTA is integral to the quality assurance process of periodic review. This is TESTA at its most systematic. In the three universities mentioned as examples, nearly 200 programmes have undergone programme review using TESTA. Impact studies are beginning to reflect on the changes wrought as a result. In other universities, TESTA has been used to address problems manifested through low assessment and feedback scores on the NSS. This approach is less warmly received by academics, who feel under scrutiny and in danger of being ‘fixed’ by academic developers responding to central drivers and metrics (Manathunga, 2006, 2007; Leibowitz, 2014).

Definitions
TESTA defines summative assessment as work which counts towards the degree, whether this be pass/fail work in the first year or graded work towards the degree award. Contrastingly, formative assessment is defined as a task which all students are required to do, which elicits feedback from the tutor, peers or self-reflection, and which does not count towards the degree. This definition contrasts with learning-oriented assessment, which maintains that summative assessment performs the functions of formative assessment under certain conditions (Carless, 2007).
The counter-argument is that grades distract students from using feedback meaningfully (Black & Wiliam, 1998), and that summative tasks often drive a narrower and more instrumental approach among students to maximise achievement against highly specified criteria (Sadler, 2007). TESTA maintains that ‘pure’ formative assessment offers students the chance to be creative, take risks, make mistakes and learn from them (Jessop, 2018).
The typical student experience: an overview
TESTA has demonstrated some common assessment patterns across the sector, for example the paucity of formative assessment (Wu & Jessop, 2018) and differences between research and teaching-intensive universities in relation to assessment loads (Tomas & Jessop, 2018). The data show that students can expect to experience an assessment diet which has eight times as much summative as formative assessment, with almost a third of tasks assessed by examination, one task in four of a different type (not necessarily in any clear sequence) and about 150 words of feedback on each task. Unsurprisingly, our findings show that research-intensive universities have higher summative assessment loads and more examinations than modern teaching-focused universities. In contrast, teaching-focused universities have a greater variety of assessment than their research-intensive counterparts. There are no statistically significant differences in how much formative assessment students experience across the two types of university. In both parts of the sector, formative assessment presents a common challenge (Wu & Jessop, 2018). In the following sections, the chapter explores the significance of common patterns of assessment, while being cognisant of variations across parts of the sector.
Common patterns of assessment: problems and strategies
In this section, I discuss two dominant themes arising from large-scale data (Jessop et al, 2014; Jessop & Tomas, 2017): first, high summative and low formative patterns of assessment; second, disconnected feedback.

Theme 1: High summative, low formative assessment patterns
Modular degrees have given students the benefit of making choices. Their shadow side is that they have multiplied summative assessment, fostering an ‘assessment arms race’ between parallel modules competing for student time and effort (Harland et al, 2015). Simultaneously, modular degrees have squeezed out formative assessment opportunities. Students describe the competition between summative and formative tasks on parallel modules as a disincentive to their engagement in formative assessment. TESTA evidence has demonstrated that the ratio of one formative task to every eight summative tasks has led to problematic consequences (Jessop & Tomas, 2017).
In focus groups, students describe high summative assessment diets as preventing them from having a rich learning experience, from reading widely, as lowering attendance, and as preventing ‘slow learning’ from occurring (Berg & Seeber, 2016). Content-heavy modules and concurrent deadlines drive a surface approach to learning, as evidenced here:

The scope of information that you need to know for that module is huge … so you’re having to revise everything at the same time you want to write an in-depth answer. (Jamil, 2013)

It’s been non-stop assignments, and I’m now free of assignments until the exams – I’ve had to rush every piece of work I’ve done. (Melody, 2015)

Students describe instrumental reading habits:

A lot of people don’t do wider reading. You just focus on your essay question. (James, 2014)

I always find myself going to the library and going ‘These are the books related to this essay’, and that’s it. (Eryn, 2015)

Assessment deadlines dictate attendance patterns:

In Weeks 9 to 12 there is hardly anyone in our lectures. I’d rather use those two hours of lectures to get the assignment done. (Chloe, 2013)

In parallel, students experience few engaging formative tasks. On the newly developed AEQ 4.0, 291 students gave low scores on the formative assessment scale, indicating that it is not integral to their learning and that they assign it minimal value. Grade orientation is not a new phenomenon (Becker et al, 1968; Miller & Parlett, 1974), so, in one sense, it is not surprising that students assign low value to work that does not count. What is surprising is that fee-paying students are content to dismiss an aspect of learning which promises them the most learning gain (Sadler, 1989; Boud, 2000; Nicol & Macfarlane-Dick, 2006). In focus groups, students indicate that many do not see the purpose of undertaking formative tasks:

I would probably work for tasks, but for a lot of people, if it’s not going to count towards your degree, why bother? (Gemma, 2014)

If there are no actual consequences of not doing it, most students are going to sit in the bar. (Tom, 2013)
Students describe it as a disincentive that formative assessment does not seem to add tangible value to their learning or degree outcomes. They do not recognise it as contributing to better learning outcomes:

If you didn’t do it, it didn’t matter. (Matt, 2014)

You wouldn’t necessarily be in a bad position if you didn’t do them. (Chelsea, 2017)

If you haven’t done it, you just don’t participate. (Aadil, 2017)

It frustrates students that formative work often competes with summative, in the midst of colliding deadlines:

It is incredibly frustrating when you have assessed modules running along the same time as people demanding a lot of work which are un-assessed. You just have to weigh it up: what is the point of putting that much effort in when I have this much time to do an assessment that counts towards my degree? I find it really frustrating when people ask for 10 page reports and presentations which don’t count and I am thinking, ‘Why am I doing this?! It’s brilliant practice but… (Jenny, 2014)

They see little value in completing formative tasks if lecturers make no investment in giving feedback on them:

If lecturers do want to do that then it might be worthwhile to give better feedback on the formative stuff to improve the summative assessments… (Jo, 2014)

Lecturers do formative assessment but we don’t get any feedback on it. (Priti, 2014)

We had unmarked presentations, and they were such a waste of time. (Ellie, 2017)

Strategies to rebalance formative and summative
TESTA has prompted strategies to engage students in meaningful formative assessment and to rebalance the ratio of formative to summative assessment. The three main strategies teams have adopted often intersect and can broadly be characterised as programmatic design, authentic assessment and learning-oriented strategies (Jessop, 2018).
Programmatic design
Programmatic design involves team decisions about assessment design: addressing the balance of formative and summative assessment and committing to a shared and universal climb-down from the assessment arms race (Harland et al, 2015). Academics are often reluctant to reduce summative assessment because it acts as a pedagogy of control, triggering student effort. For many, it is hard to imagine how students will invest any intellectual effort without the incentive of a grade which counts towards the degree. Moreover, some lecturers find it difficult to sacrifice well-worked plans on their modules for the greater good of the programme. In spite of these disincentives, many teams have embarked on programmatic design strategies which have helped students to learn, such as:
• reducing the number of summative assessments to one per module across the whole programme;
• simultaneously increasing formative to two or three required tasks per module;
• linking formative and summative tasks so that students can use the feedback to improve their graded assessment;
• sequencing assessment tasks and varieties of assessment across the whole programme;
• building more integration between assessment tasks, so that students see the connections between knowledge and learning on one module and another.
Authentic assessment
Authentic assessment mirrors practice in the profession or discipline so that students see how knowledge is generated and disseminated within their field. Authentic assessment reduces the number of artificial assessment types. More than that, it casts students as inquirers and problem solvers, by constructing scenarios and ill-defined problems, sometimes working with students to co-create scenarios which become the scene of learning – formatively or summatively. Authentic assessment is usually undertaken for an audience rather than the marker, challenging students to perform to the best of their ability in public rather than in privatised tasks. Authentic assessment is neither neat, closed, nor tightly specified, thereby preventing students from being instrumental as learners. TESTA programmes have adopted some of these authentic strategies:
• blogging formatively on weekly academic texts on public domain sites;
• crafting assessment in line with a discipline’s signature pedagogy, for example, writing ministerial briefs instead of essays on politics degrees;
• introducing and integrating research tasks from the first year, for example, interviewing fellow students or postgraduates about specific topics;
• formative poster exhibitions incorporating peer review.
Learning-oriented assessment
Learning-oriented assessment focuses on the process of learning, on learning from each other in collaboration; for example, learning through discussion and active learning strategies. It is the opposite of content-focused and topic-based learning. It calls on students to master difficult concepts, to link disciplinary knowledge to wider knowledge fields and complex world problems, and to be agents of learning. Examples of TESTA programmes using learning-oriented strategies include:
• formative use of the two-stage examination, where students write an individual examination, hand it in and then collaborate on the same question paper;
• exploratory in-class writing in the curriculum to build empathy and develop knowledge about leading figures in a field;
• scenario- and problem-based learning which call on students to deploy skills and knowledge to solve real-life problems.
Many of these authentic and learning-oriented strategies may be used by some academics as one-off assessment events. TESTA enables teams to discuss how to design a creative and coherent environment which challenges and engages students in deep learning across the piece rather than randomly.

Theme 2: Disconnected feedback
Modular degrees are adept at enabling students to compartmentalise learning and discard feedback. The degree structure sets up feedback to fail, if you accept that the principle underlying good feedback is that it occurs in iterative cycles, prompting students to reflect, recalibrate and use the feedback to improve the next piece of work. While TESTA audit data paint a rosy picture of students generally receiving high volumes of written feedback, the quality and timeliness of feedback are inconsistent. More significantly, feedback is locked in a transmission model which positions students as recipients of correction and advice, rather than partners in a dialogue about how to improve their writing and thinking about a discipline area. The concept of self-regulation is lost if students are both considered and consider themselves to be the recipients of feedback as the ‘final word’ on their work (Nicol & Macfarlane-Dick, 2006). Student agency is compromised when feedback is about ‘telling’ rather than posing questions which prompt students to come up with solutions. TESTA data highlight the structural and relational disconnections students experience in their feedback. Students describe feedback not feeding forward:

It’s difficult because your assignments are so detached from the next one you do for that subject. They don’t relate to each other. (Steve, 2013)
The feedback is generally focused on the module. (Amy, 2012)

Because it’s at the end of the module, it doesn’t feed into our future work. (Mei-Ling, 2014)

Students express anger and disappointment with their feedback. In mass higher education, they rely on feedback to engender a relationship with their tutors, but instead they find ‘an impoverished and fractured dialogue’ (Nicol, 2010, p. 503). Their frustration with the broken discourse of feedback is evident:

You know that twenty other people have got the same sort of comment. A lot of the comments have been copied and pasted onto the feedback sheet. (Angie, 2012)

It was just so generic. Nothing really to do with my essay. (Martha, 2012)

Because they have to mark so many that our essay becomes lost in the sea that they have to mark. (Richard, 2013)

It was like ‘Who’s Holly?’ It’s that relationship where you’re just a student. (Holly, 2012)

Alienation is writ large in focus group comments about feedback and is evidenced by students not collecting, reading or using their feedback. Yet we know that feedback is the single most important factor in student learning (Hattie & Timperley, 2007). The value of dialogic, conversational and personal feedback has come into focus as research underlines the significance of feedback’s relational and emotional dimensions (Molloy et al, 2013; Nicol, 2010). Pitt and Winstone (2018) make a strong case that anonymous marking runs the risk of depersonalising feedback in the name of protecting students from unfairness and bias. In this context, formative feedback has a particular role to play in assigning value to the personal, developmental and relational aspects of feedback. TESTA has helped programmes to refashion the feedback relationship as a complex, human endeavour involving emotions, as well as its rational alter ego characterised by validity, reliability and transparency.

Strategies to engage students in using feedback
Two main strategies to engage students in using their feedback have arisen through undertaking TESTA.
The first is about making feedback more dialogic, personal and relational, and the second about connecting feedback across modules, levels and the programme. In both approaches, the purpose is to encourage students to reflect on and use their feedback.

Making it more dialogic, personal and relational
TESTA focus group data bring a realisation that human beings, with emotions and a fear of criticism, are at the sharp end of feedback. This awakens programme teams to fresh approaches to giving feedback. Strategies for personalising feedback and entering a dialogue with students include:
• using audio and screencast feedback for its richer tone and humanity compared with written feedback;
• inviting students to initiate feedback at submission by identifying areas where feedback would help them;
• listening to student feedback about teaching in medias res, through using the Critical Incident Questionnaire (Brookfield, 1995) and responding to it quickly and openly;
• experimenting with using formative peer review in comment form.
Connecting feedback
One of the stumbling blocks in modular degrees is that feedback does not easily connect with learning on the next module. Helping students to use their feedback involves demonstrating how assessment and feedback connect between tasks, across modules and over the whole programme. TESTA teams have used the following strategies to build connections:
• students’ first assessment entails writing a short formative essay, using quick feedback to rewrite the essay and writing a reflection on how they have addressed the feedback, with the reflection being the graded element;
• requiring students to reflect on ‘ways to improve’ feedback from past assessments when submitting new ones;
• creative approaches to helping students synthesise feedback from multiple assessment tasks across modules, including creating word clouds which highlight strong messages;
• responding to feedback from peers and tutors, with structured exercises which combine (a) taking some feedback on board; (b) questions for clarification or understanding; and (c) rebutting feedback with reasoned arguments.
Conclusion
TESTA has given the sector new ways of thinking about assessment and feedback.
First, it has prompted universities and programme teams to design assessment outside of the module box and has spurred on programmatic design, prompting a step change in curriculum design. As one assessment expert described it, TESTA is the only thing which is ‘making systematic headway in assessment, beyond the module’ (Bloxham, personal communication). Second, TESTA has prompted a more critical approach to the technical-rational quality assurance machinery, so that the paper trail of audit and accountability contained in templates and validation documents is increasingly seen as a second-order issue, with more ownership of curriculum design vested in disciplinary academic teams. TESTA has brought about a change in the narrative in favour of academics working together to construct an assessment environment based on evidence and educational principles. Sceptics may argue that continuities in assessment problems are stronger than the changes that have occurred since the publication of the first edition of this book in 2006. I am more hopeful. Gibbs (2006) gave TESTA its firm theoretical foundation; yet the process has grown theoretically and developed into a change process with huge reach. TESTA has had a real impact, and it is the subject of several impact studies, one based at the University of Greenwich and another at the University of Strathclyde. Its longevity and growth suggest its impact as an evidence-led change process. Qualitative evidence from those who have engaged with TESTA amplifies its power:

TESTA became a cornerstone of a major assessment transformation in the Faculty of Arts and Social Sciences at UNSW. (Professor Sean Brawley, formerly University of New South Wales, Australia)

The third vital shift which TESTA has prompted is to provide a fresh student perspective on the whole assessment environment. Students experience multiple concurrent modules and are constantly juggling assessment deadlines while wrestling with how concepts and knowledge on one module relate to another. TESTA helps programme teams to understand what it feels like from a student perspective, and as a result to make helpful connections across the curriculum. At its best, TESTA shifts the paradigm for student learning from alienation to engagement, while enriching academics’ experience of teaching.
Note
1 Birmingham; Exeter; Kent; Nottingham; Queen Mary University of London; Southampton; UCL; Bath Spa; Canterbury Christ Church; Chichester; Edge Hill; Sheffield Hallam; Solent; Winchester; Worcester.
References
Ashford-Rowe, K., Herrington, J. and Brown, C. (2013) Establishing the critical elements that determine authentic assessment. Assessment and Evaluation in Higher Education 39 (2): 205–222.
Becker, H. S., Greer, B. and Hughes, E. C. (1968) Making the Grade: The Academic Side of College Life. New York, NY: Wiley.
Berg, M. and Seeber, B. (2016) The Slow Professor: Challenging the Culture of Speed in the Academy. Toronto: University of Toronto Press.
Black, P. and Wiliam, D. (1998) Inside the Black Box: Raising Standards through Classroom Assessment. London: Granada Learning.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education 22 (2): 151–167.
Brookfield, S. D. (1995) Becoming a Critically Reflective Teacher. San Francisco, CA: Jossey-Bass.
Carless, D. (2007) Learning-oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International 44 (1): 57–66.
Case, J. (2008) Alienation and engagement: development of an alternative theoretical framework for understanding student learning. Higher Education 55: 321–332.
Gibbs, G. (2013) Reflections on the changing nature of educational development. International Journal for Academic Development 18 (1): 4–14.
Gibbs, G. (2006) Why assessment is changing. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. London: Routledge, pp. 11–22.
Gibbs, G. and Simpson, C. (2005) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education 1 (1): 3–31.
Harland, T., McLean, A., Wass, R., Miller, E. and Sim, K. N. (2015) An assessment arms race and its fallout: high-stakes grading and the case for slow scholarship. Assessment and Evaluation in Higher Education 40 (4): 528–541.
Hattie, J. and Timperley, H. (2007) The power of feedback. Review of Educational Research 77 (1): 81–112.
Jessop, T. (2018) Firing the silver bullet of formative assessment: a mandate for good education. Dialogue Journal 2017/18: 50–63.
Jessop, T. (2016) Seven years and still no itch – why TESTA keeps going. Educational Developments 17 (3): 5–8.
Jessop, T. and Maleckar, B. (2016) The influence of disciplinary assessment patterns on student learning: a comparative study. Studies in Higher Education 41 (4): 696–711.
Jessop, T. and Tomas, C. (2017) The implications of programme assessment patterns for student learning. Assessment and Evaluation in Higher Education 42 (6): 990–999.
Jessop, T., El Hakim, Y. and Gibbs, G. (2014) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different programme assessment patterns. Assessment and Evaluation in Higher Education 39 (1): 73–88.
Leibowitz, B. (2014) Reflections on academic development: what is in a name? International Journal for Academic Development 19 (4): 357–360.
Lombardi, M. (2007) Authentic Learning for the 21st Century: An Overview. Louisville, CO: Educause Learning Initiative.
Manathunga, C. (2007) ‘Unhomely’ academic developer identities: more post-colonial explorations. International Journal for Academic Development 12: 25–34.
Manathunga, C. (2006) Doing educational development ambivalently: applying post-colonial metaphors to educational development? International Journal for Academic Development 11: 19–29.
Mann, S. (2001) Alternative perspectives on student experience: alienation and engagement. Studies in Higher Education 26 (1): 7–19.
Marton, F. and Saljo, R. (1976) On qualitative differences in learning I: outcome and process. British Journal of Educational Psychology 46: 4–11.
Miller, C. M. L. and Parlett, M. (1974) Up to the Mark: A Study of the Examination Game. London: Society for Research into Higher Education.
Molloy, E., Borrell-Carrio, F. and Epstein, R. (2013) The impact of emotions on feedback. In: Boud, D. and Molloy, E. (eds) Feedback in Higher and Professional Education. Abingdon: Routledge, pp. 50–71.
Nicol, D. J. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment and Evaluation in Higher Education 35 (5): 501–517.
Nicol, D. J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
O’Donovan, B., Price, M. and Rust, C. (2008) Developing student understanding of assessment standards: a nested hierarchy of approaches. Teaching in Higher Education 13 (2): 205–217.
Pitt, E. and Winstone, N. (2018) The impact of anonymous marking on students’ perceptions of fairness, feedback and relationships with lecturers. Assessment and Evaluation in Higher Education 43 (7): 1183–1193.
Sadler, D. R. (2007) Perils in the meticulous specification of goals and assessment criteria. Assessment in Education: Principles, Policy and Practice 14 (3): 387–392.
Sadler, D. R. (1989) Formative assessment and the design of instructional systems. Instructional Science 18: 119–144.
Tomas, C. and Jessop, T. (2018) Struggling and juggling: a comparison of student assessment loads across research and teaching-intensive universities. Assessment and Evaluation in Higher Education 44 (1): 1–10.
Wu, Q. and Jessop, T. (2018) Formative assessment: missing in action in both research-intensive and teaching-focused universities? Assessment and Evaluation in Higher Education 43 (7): 1019–1031.
4 Using assessment and feedback to empower students and enhance their learning
Sally Brown
Introduction
Assessment matters more today than ever before. If we want to enhance students’ engagement with learning, a key locus of enhancement must be around assessment, and in particular exploring how it can be for rather than just of learning, since it is a crucial, nuanced and highly complex process. Graham Gibbs somewhat controversially (but still pertinently) argued nearly a decade ago that:

Assessment makes more difference to the way that students spend their time, focus their effort, and perform, than any other aspect of the courses they study, including the teaching. If teachers want to make their course work better, then there is more leverage through changing aspects of the assessment than anywhere else, and it is often easier and cheaper to change assessment than to change anything else. (Gibbs, 2010)

Having worked for more than four decades in higher education teaching, learning and assessment, it is these kinds of approaches to enhancing courses/programmes of study that I care deeply about, and so I propose five propositions to ensure that assessment and feedback are integral to learning and genuinely contribute to it, rather than being simply a means of capturing evidence of achievement of stated outcomes.

1. Assessment must serve student learning.
2. Assessment must be fit for purpose.
3. Assessment must be a deliberative and sequenced series of activities demonstrating progressive achievement.
4. Assessment must be dialogic.
5. Assessment must be authentic.
Assessment must serve student learning
In former times, assessment was (and in some locations still is) seen as a process detached from everyday student life, something that happens once learning has finished. But many today recognise the importance of assessment being a means through which learning happens, particularly those who champion the assessment for learning movement in higher education, including Bloxham and Boyd (2007) and Sambell, McDowell and Montgomery (2012), who led a multi-million pound initiative, the Assessment for Learning Centre for Excellence in Teaching and Learning, exploring the extent to which reviewing and revising the assessment and feedback elements of curriculum design, and changing students’ orientation towards assessment by fostering enhanced assessment literacy, can have a high impact on their engagement, retention and ultimate achievement. Assessment literacy implies helping students develop a personal toolkit through which they can better appreciate the standards, goals and criteria required to evaluate outputs within a specific disciplinary context (Sambell et al, 2012). This is likely to involve enabling students to:
• make sense of the complex and specialist terminology associated with assessment (such as weightings, rubrics, submission/resubmission, deferrals, referrals and condonements);
• encounter a variety of potentially new-to-them assessment methods such as vivas, portfolios, posters, pitches, critiques and assessed web participation, as well as to get practice in using them; and
• be strategic in their behaviours, putting more work into aspects of an assignment with high weightings, interrogating criteria to find out what is really required, and so on.
While an appropriate balance of formative and summative assessment needs to be achieved, it is formative assessment that has the potential to shape, advance and transform students’ learning, while summative assessment is necessary to demonstrate fitness for practice and competence. The developmental formative guidance students receive through using exemplars, getting peer feedback and undertaking incremental tasks can lead to improved performance in final assessments, while end-point summative assessment is principally designed to evaluate students’ final performances against the set criteria, resulting in numbers, marks or grades. Most assessors, of course, use a mixture of formative and summative assessment to achieve both ends, but to make a difference to students’ lives we must be clear about what function is predominant at any particular time. Bloxham and Boyd (2007) proposed a number of crucial assessment for learning principles:

1. Tasks should be challenging, demanding higher order learning and integration of knowledge learned in both the university and other contexts. This means in practice that assignments should not just focus on recall and regurgitation under exam conditions but instead should be designed to showcase how students can use and apply what they have learned.
2. Learning and assessment should be integrated; assessment should not come at the end of learning but should be part of the learning process: we need to ensure that every assessed task is productive of learning throughout a course of study. One of the particular problems we face currently is managerial drives to reduce the amount of assessment to make it ‘more efficient’. While we do not want to grind staff or students down by over-assessing, if you agree with me that assessment has a core role in engendering learning, we must regularly review programme assessment strategies to ensure that there is sufficient assessment to engage students without being overwhelming.
3. Students are involved in self-assessment and reflection on their learning; they are involved in judging performance: this is a lifelong learning skill, since experts and professionals all need to become adept at gauging personal performance and continuously enhancing it. Rehearsal and guidance are essential to ensure that disadvantaged students, who typically tend to underestimate their abilities, are supported to make accurate and meaningful judgements matching their performance against criteria.
4. Assessment should encourage metacognition, promoting thinking about the learning process not just the learning outcomes: research clearly shows that students tend ultimately to be more successful if they interrogate not just the performance criteria but also how they themselves are learning.
5. Assessment should have a formative function, providing ‘feedforward’ for future learning which can be acted upon. There should be opportunity and a safe context for students to expose problems with their study and get help, and there should be an opportunity for dialogue about students’ work: a principal focus by markers on correcting errors and identifying faults is less helpful to learning than the offering of detailed and developmental advice on how to improve future performances in similar assignments and in future professional life.
6. Assessment expectations should be made visible to students as far as possible: old-fashioned (and mean) assessment approaches sometimes sought to catch out unwary students and keep the workings of assessment in a metaphorical black box, but this is counterproductive. All elements of judgement need to be transparent, not in order to make it easy for students but to help students understand what is required of them to succeed. If we play fair with students, there is a lower chance of them adopting unsuitable or unacceptable academic practices because they will trust our systems and judgements.
7. Tasks should involve the active engagement of students, developing the capacity to find things out for themselves and learn independently: it is important to prepare twenty-first-century students for careers and wider lives where they are not dependent on limited information sources which have been provided by others, but instead seek out knowledge and advice and, even more importantly, systematically and appropriately evaluate their information sources.
8. Tasks should be authentic, worthwhile and relevant, offering students some level of control over their work: when students can see the relevance and value of what they are being asked to do, they are less likely to adopt superficial approaches which focus just on getting the required mark, and more likely to engage deeply in learning for its own sake.
9. Tasks are fit for purpose and align with important learning outcomes: too often we ask students to undertake tasks that are proxies for assessments that genuinely demonstrate the achievement of the actions we delineate in our learning outcomes, because they seem to be easier to assess, or because we have always done it that way!
10. Assessment should be used to evaluate teaching as well as student learning: how well students perform the tasks we set them can give us some indication of how effectively we have taught them, how well we have designed the assessments and how effective we have been in supporting students, although of course it is not a perfect measure.
(After Bloxham & Boyd, 2007, with my gloss.)

Summing up these factors underpinning effective assessment, Sambell et al (2017) subsequently proposed a cyclical model of assessment for learning that:
• emphasises authentic and complex assessment tasks;
• uses ‘high-stakes’ summative assessment rigorously but sparingly;
• offers extensive ‘low-stakes’ confidence-building opportunities and practice;
• is rich in formal feedback (e.g. tutor comment, self-review logs);
• is rich in informal feedback (e.g. peer review of draft writing, collaborative project work);
• develops students’ abilities to evaluate their own progress and direct their own learning.
I have worked over the years with many course teams who have used this model as a framework by which they can periodically review their modules and programmes to maximise their effectiveness, and this approach can readily be adopted by others wishing to spring clean their assessment design and delivery.
Assessment must be fit for purpose
Fitness for purpose implies a deliberative and measured approach to decision making around assessment design, so that choices of methodologies and approaches, orientation, agency and timing are all made in alignment with the purposes of the task for that particular cohort of students, at that level, in that context and subject discipline, and at that stage in their academic careers. It can be counterproductive, for example, to offer too many early, feedback-free summative tasks in a programme, which can simply serve to undermine students’ confidence and self-efficacy.
Equally, extensive opportunities offered to students at the end of a programme to discuss potential improvements can be pointless if they come after the student has completed all the work necessary to graduate. Some kinds of formal exams can be quite helpful for final assignments, but learning outcomes claiming to test interpersonal, problem-solving and communication skills may well be better demonstrated through practical skills, which need to be tested practically rather than by asking students to write about them. Portfolios are likely to demonstrate evidence of achievement of a range of skills and capabilities far better than multiple-choice questions ever could (Brown and Race, 2012, includes a table of the pros and cons of different forms of assessment for different purposes). As practitioners, we are making crucial design decisions about assessment, based on the level of study, the disciplinary area and the professional discourses in which our students (and subsequently our graduates) are working. Where we are in control of such decisions (and this is not the case in all nations), it is incumbent upon us to adopt purposeful strategies to enhance the formative impact of assessment. This could include, for example:
• thinking through the overall diet of assessment for a programme, to avoid stress-inducing bunching of hand-in dates and to ensure that any unfamiliar types of assignments are introduced with rehearsal opportunities and the chance to practise in advance of summative assignments;
• considering making the first assignment of the first semester of the first year one on which every student making a reasonable attempt would achieve at least a pass mark, as a means of fostering confidence while allowing high flyers to score higher marks: failing the very first assignment on a course is a very dispiriting experience and one to be avoided;
• avoiding excessive numbers of very demanding summative tasks too early, which might undermine confidence, while at the same time ensuring that plenty of opportunities are included early on for students to gauge how they are doing through informal activities;
• helping students become confident in their own capabilities by giving them opportunities to develop and demonstrate interpersonal skills, through, for example, small-scale group tasks which rely on everyone contributing;
• enabling students to build up portfolios of evidence that demonstrate achievement of learning outcomes in a variety of media, for example, testimonials from work-placement supervisors, reflective commentaries, videos of them demonstrating skills in the lab, annotated photos of studio outputs showing how these have been achieved, and so on.
Some assignments similarly are oriented towards assessing theory and underpinning knowledge, while others are focused primarily on practical application of what has been learned, or indeed both. Many would be reassured to know that airline pilots are assessed for safety through simulations before being declared fit to fly professionally. Practical skills are central to assessment of physiotherapists and sports scientists, and most of us would be more trusting of our engineers, doctors and architects if we were confident they understand both underlying technical or scientific principles and their implementation.

Who is best placed to assess?

In terms of who should assess, in most nations less use is made of assessment agents other than tutors than could be beneficial. Students' meta-learning is enhanced by being involved in judging their own and each other's performance in assessed tasks (Boud, 1995; Falchikov, 2004; Sadler, 2013), helping them not just to become more effective during their studies, but also building lifelong skills valuable after graduation, through the capacity to monitor their own work rather than relying on someone else to do it. Furthermore, students nowadays undertake study in a variety of practice contexts, so employers, placement managers, internship supervisors and service users can all play a valuable part in assessing students' competences and interpersonal skills. Much conventional assessment is end-point, and while summative assessment is necessarily final, there can be substantial benefits to integrating incremental and staged tasks culminating in the capstone task at the end, rather than giving students a single shot at the target, leading to high stress levels and risk of failure. With an increasing focus on student wellbeing and good mental health, we have a duty of care as practitioners to ensure that the assignments we set, while remaining testing and challenging, should not disadvantage any groups of students disproportionately (in the UK particularly this means making reasonable adjustments for existing disabilities or special needs, for example, to make sure each student has a fair chance of demonstrating their capabilities). Dweck (2000) argues that students who have static views of their capabilities should be encouraged to adopt more flexible perspectives that enable them to move beyond thinking they are simply stupid or dreadful at a particular subject, seeking instead to use strategies to help improve their competence and thereby build self-efficacy:

Students with more of an entity theory of intelligence see intellectual ability as something of which people have a fixed, unchangeable amount. On the other end of the spectrum, those with more of an incremental theory of intelligence see intellectual ability as something that can be grown or developed over time.
(Yeager & Dweck, 2012, p. 303)

What works best on substantial tasks like final projects and dissertations, for example, is to have some element of continuous assessment of work in progress, with feedback and advice along the way, avoiding ultimate 'sudden death'. Again, avoiding assessment entirely in the early stages can be problematic, since students, particularly the tentative and under-confident, can find their anxiety building if they have no measure of how they are doing in the early parts of a programme, so modest self- or computer-marked assignments such as multiple-choice questions can be efficient and effective at helping them gauge their own progress. The five elements that collectively make up the fit-for-purpose model can work together coherently to help assessment achieve its first aim: to contribute to, rather than distract from, student learning.
Assessment must be a deliberative and sequenced series of activities demonstrating progressive achievement

If assessment is indeed to be the fulcrum around which change management of student learning can be levered, then nothing can be left to chance. Alverno College in the USA is one of the most cited examples of a progressive and highly developed approach to curriculum design, delivery and assessment, and is probably the most influential series of active interventions demonstrating what can be achieved with a very systematic and articulated college-wide strategy for advancing student learning, capability and confidence, as Vignette One demonstrates:
Vignette One: The Alverno model

Alverno College is a small, women's higher education institution in Milwaukee, Wisconsin in the USA, where the curriculum is ability based and focused on student outcomes integrated in a liberal arts approach, with a heritage of educational values. It is rooted in Catholic tradition and designed to foster leadership and service in the community and, over the years, it has established a global reputation as a unique provider of integrated learning opportunities within a tightly focused learning community. Marcia Mentkowski (2006), in the first edition of this book, defined the Alverno approach to assessment as learning as: 'a process … integral to learning, that involves observation, analysis/interpretation, and judgement of each student's performance on the basis of explicit, public criteria, with self-assessment and resulting feedback to the student' (p. 48) 'where … education goes beyond knowing, to being able to do what one knows' (p. 49). Mentkowski says that Alverno's assessment elements, such as making learning outcomes explicit through criteria or rubrics, giving appropriate feedback, and building in instructor, peer- and self-assessment, may seem directly adaptable at first. Yet the adaptability of Alverno's integration of these conceptual frameworks shapes whether and how any one institution's faculty can apply or elaborate – not adopt or replicate – elements of a curriculum developed by another faculty (pp. 49–50).
Alverno's success is built on a systematic approach to integrating deeper cultural qualities, where the curriculum is a source of both individual challenge and support to the development of both autonomy and orientation to interdependence in professional and community service, and where education is seen as developing the whole person – mind, heart and spirit – as well as intellectual capability (p. 52). This accords well with Dweck's approach and with current thinking about techniques that, through feedback, can help students find strategies and believe that they can improve:

What students need the most is not self-esteem boosting or trait labelling; instead, they need mindsets that represent challenges as things that they can take on and overcome over time with effort, new strategies, learning, help from others, and patience.
(Yeager & Dweck, 2012, p. 312)

The following learning principles have shaped the elements of Alverno's assessment as learning:

• 'If learning that lasts is active and independent, integrative and experiential, assessment must judge performance in contexts related to life roles.
• If learning that lasts is self-aware, reflective, self-assessed and self-regarding, assessment must include explicitness of expected outcomes, public criteria and student self-assessment.
• If learning that lasts is developmental and individual, assessment must include multiplicity and be cumulative and expansive.
• If learning that lasts is interactive and collaborative, assessment must include feedback and external perspectives as well as performance.
• If learning that lasts is situated and transferable, assessment must be multiple in mode and context.' (p. 54)
In a highly effective cycle, Mentkowski describes how students learn through using abilities as metacognitive strategies to recognise patterns, to think about what they are performing, to think about frameworks and to engage in knowledge restructuring as they demonstrate their learning in performance assessments (p. 56). A particular feature of the Alverno approach to assessment is their commitment to integration of assessment across programmes, curricula and the institution as a whole, rather than allowing separate, piecemeal assessment design, and they argue that this approach enables it to:

• be integral to learning about student learning;
• create processes that assist faculty, staff and administrators to improve student learning;
• involve inquiry to judge programme value and effectiveness for fostering student learning;
• generate multiple sources of feedback to faculty, staff and administrators about patterns of student and alumni performance in relation to learning outcomes that are linked to curriculum;
• make comparisons of student and alumni performance to standards, criteria or indicators (faculty, disciplinary, professional, accrediting, certifying, legislative) to create public dialogue;
• yield evidence-based judgements of how students and alumni benefit from the curriculum, co-curriculum, and other learning contexts;
• guide curricular, co-curricular and institution-wide improvements.
(Adapted from the Alverno Student Learning Initiative; Mentkowski, 2006)
The Alverno approach has been widely admired and emulated internationally: it influenced my own work in the late 1990s and had a profound impact on much other work in the UK, particularly the programme-level assessment approaches of the kinds adopted in the Bradford University-hosted Programme Assessment Strategies (PASS) project, led by Peter Hartley (McDowell, 2012).
Assessment must be dialogic

All too often, students perceive assessment as something that is done to them, rather than a process in which they engage actively. As Sadler argues:

Students need to be exposed to, and gain experience in making judgements about, a variety of works of different quality … They need to create verbalised rationales and accounts of how various works could have been done better. Finally, they need to engage in evaluative conversations with teachers and other students [and thus] develop a concept of quality that is similar in essence to that which the teacher possesses, and in particular to understand what makes for high quality.
(Sadler, 2010)

What we can do here is move away from monologic approaches so that students truly become partners in learning. Vignette Two illustrates a very practical and impressive means by which art and design students can engage fully in the ongoing critique process during the production of assessed work:
Vignette Two: Encouraging dialogic approaches to assessment by using a simple feedback stamp to provide incremental feedback on work in progress (Richard Firth and Ruth Cochrane, Edinburgh Napier University)

Within the context of the creative industries, students do not always recognise when feedback is being given, because it is embedded in ongoing conversations about their work in progress. This is often reflected in students' responses to student satisfaction surveys, since they do not always perceive advice and comments given in class as being feedback. In this field, students often work for extended periods on tasks and briefs, often characterised by creative and 'messy' processes, and assessment times can be lengthy. Orr and Shreeve (2018) describe the challenge of anchoring and negotiating shared understandings of sometimes abstract, conceptual and subjective feedback, so Richard Firth and Ruth Cochrane set out to find a very practical way to open constructive dialogues with students that would capture ephemeral conversations in the studio, since they were keen to help students recognise that the feedback they received was deliberately and purposefully designed to focus on their own individual improvement. They recognised the importance of engaging students proactively in meaningful feedback dialogues (Carless et al, 2011) to help them get to grips with the tacit assumptions underpinning the discipline and better understand fruitful next steps. They also recognised that opening up dialogues about the quality of work and current skills can help to strengthen students' capacities to self-regulate their own work (Nicol & Macfarlane-Dick, 2006), which are key to their future professional practice in the creative industries. The originators' approach involved the use of an actual rubber stamp that can be used to 'anchor' a diverse range of tutor formative feedback on, for example, sketching, note making and the use of visual diagrams. The stamp is used frequently and iteratively as the module unfolds, with tutors sitting beside students and commenting purposefully on their sketch books, presentation boards and three-dimensional prototype models, printing directly onto student sketchbooks or similar, with marks on five axes indicating progress to date on each, from low (novice) at the centre to high (expert) at the periphery. These axes, covering the essential cyclical, indicative elements of the design process, in the example of product design comprise:

• research: the background work the student has undertaken in preparation; this can include primary and secondary research drawing on theoretical frameworks from other modules;
• initial ideas: this covers the cogency and coherence of the students' first stab at achieving solutions, and is likely to include an evaluation of the quantity, diversity and innovative nature of those ideas;
• proto(typing) and testing: the endeavours the student has made to try out provisional solutions to see if they work; this could include user testing, development, infrastructure and route to market;
• presentation: an evaluation of how effective the student has been in putting across ideas or solutions in a variety of formats, including two- and three-dimensional, virtual and moving images;
• pride: a reflective review of the student's professional identity as exemplified in the outcome in progress; indicators might include punctuality, organisation, care and engagement.
Ensuing discussions enable tutors' tacit understandings to become explicit for the student, so they can be translated into improved performance and outcomes (Sadler, 2010). The visual build-up of the stamp's presence during a student's documented workflow helps everyone to see the links between formative and summative assessments, so that feedforward from the former to the latter is clear. This is deemed by tutors and students to be much more productive than providing extensive written feedback after the work has been submitted. This regular dialogic review enables rapid feedback at each stage of the design process in a format that is familiar to both design tutors and students working in the creative field. It thereby avoids awkward and problematic misapprehensions about desired outcomes and the level of work required, meaning no shocks or nasty surprises when it comes to the summative assessment! It also builds over time and across programme levels, helping to develop a coherent, integrated and incremental feedback strategy that builds developmentally throughout the programme (for an extended account, see Firth et al, 2018).
Assessment must be authentic

The very act of being assessed can help students make sense of their learning, since for many it is the articulation of what they have learned and the application of this to live contexts that brings learning to life. Authentic assessment implies using assessment for learning (Sambell et al, 2017) and is meaningful to students in ways that can provide them with a framework for activity, since assessment is fully part of the learning process and integrated within it. The benefits of authentic assessment can be significant for all stakeholders, because students undertaking authentic assessments tend to be more fully engaged in learning and hence tend to achieve more highly, because they see the sense of what they are doing (Sadler, 2005).
University teachers adopting authentic approaches can use realistic and live contexts within which to frame assessment tasks, which help to make theoretical elements of the course come to life, and employers value students who can quickly engage in real-life tasks immediately on employment, having practised and developed relevant skills and competences through their assignments. In addition, such assignments can foster deeper intellectual skills, as Worthen argues:

Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It's about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.
(Worthen, 2018)

A useful lens through which to view authenticity in assessment is to consider what kinds of questions employers might ask at interview, and how these might help us to frame authentic assignments. For example, graduating students could be asked to describe an occasion when they had:

• worked together with colleagues in a group to produce a collective outcome;
• worked autonomously with incomplete information and self-derived data sources;
• taken a leadership role in a team ('could you tell us your strategies to influence and persuade your colleagues to achieve a collective task?');
• communicated outcomes from project work orally, in writing, through social media and/or through a visual medium.
In this way, learning outcomes around graduate capabilities could be embodied in assignments that truly test them, with concomitant opportunities for students to develop self-knowledge and confidence about their professionally related competences.
Conclusions

Too often, student enthusiasm, commitment and engagement are jeopardised by assessment that forces students to jump through hoops, the value of which they fail to perceive. Assessment can and should be a significant means through which learning can happen: a motivator, an energiser and a means of engendering active partnerships between students and those who teach and assess them. Students can be empowered by being supported in their learning through the assignments we set. What we do as practitioners and designers of assessment can have positive or negative impacts on students' experiences of higher education, so I argue strongly that we have a duty to ensure the former by adopting purposeful, evidence-informed and, where necessary, radical approaches to assessment alignment, design and enactment.
References

Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead: Open University Press.
Boud, D. (1995) Enhancing Learning Through Self-Assessment. London: Routledge Falmer.
Brown, S. (2015) Learning, Teaching and Assessment in Higher Education: Global Perspectives. London: Palgrave Macmillan.
Brown, S. and Race, P. (2012) Using effective assessment to promote learning. In: Hunt, L. and Chambers, D. (eds) University Teaching in Focus. Abingdon: Routledge, pp. 74–91.
Carless, D., Salter, D., Yang, M., Lam, J. (2011) Developing sustainable feedback practices. Studies in Higher Education 36 (4): 395–407.
Dweck, C. S. (2000) Self Theories: Their Role in Motivation, Personality and Development. Lillington, NC: Taylor & Francis.
Falchikov, N. (2004) Improving Assessment through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education. London: Routledge.
Firth, R., Cochrane, R., Sambell, K., Brown, S. (2018) Using a Simple Feedback Stamp to Provide Incremental Feedback on Work-in-Progress in the Art Design Context. Enhance Guide 11. Edinburgh: Edinburgh Napier University.
Gibbs, G. (2010) Using Assessment to Support Student Learning. Leeds: Leeds Met Press.
McDowell, L. (2012) Programme Focussed Assessment: A Short Guide. Bradford: University of Bradford.
Mentkowski, M. (2006) Accessible and adaptable elements of Alverno student assessment-as-learning: strategies and challenges for peer review. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 48–63.
Mentkowski, M. and associates (2000) Learning That Lasts: Integrating Learning Development and Performance in College and Beyond. San Francisco, CA: Jossey-Bass.
Nicol, D. J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
Orr, S. and Shreeve, A. (2018) Art and Design Pedagogy in Higher Education. London: Routledge.
Sadler, D. R. (2013) Making competent judgments of competence. In: Blömeke, S., Zlatkin-Troitschanskaia, O., Kuhn, C. and Fege, J. (eds) Modeling and Measuring Competencies in Higher Education: Tasks and Challenges. Rotterdam: Sense Publishers, pp. 13–27.
Sadler, D. R. (2010) Beyond feedback: developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35 (5): 535–550.
Sadler, D. R. (2005) Interpretations of criteria-based assessment and grading in higher education. Assessment and Evaluation in Higher Education 30: 175–194.
Sambell, K., McDowell, L., Montgomery, C. (2012) Assessment for Learning in Higher Education. Abingdon: Routledge.
Sambell, K., Brown, S. and Graham, L. (2017) Professionalism in Practice: Key Directions in Higher Education Learning, Teaching and Assessment. London and New York, NY: Palgrave Macmillan.
Worthen, M. (2018) The misguided drive to measure 'learning outcomes'. New York Times, 23 February. www.nytimes.com/2018/02/23/opinion/sunday/colleges-measure-learning-outcomes.html?smid=tw-share (accessed 4 January 2019).
Yeager, D. S. and Dweck, C. S. (2012) Mindsets that promote resilience: when students believe that personal characteristics can be developed. Educational Psychologist 47 (4): 302–314.
5 The transformative role of self- and peer-assessment in developing critical thinkers

Joanna Tai and Chie Adachi
Introduction

Self- and peer-assessment (SAP) has existed since the early conception of assessment itself (Boud & Tyree, 1980), and yet discussions around what practices it constitutes and why and how it should be conducted are still very much contested. As opposed to the traditional notion of assessment performed by educators and experts in a given field, SAP positions students as active assessors of their own work and that of others, for mutual learning opportunities. In this way, SAP is considered to be a non-traditional and innovative approach to assessment within higher education. SAP implementation and research have occurred worldwide. Seminal researchers in this area, including Nancy Falchikov, Keith Topping and David Nicol, hail from the UK, while more recent work has also been undertaken in European contexts. We write from an Australian perspective, which has also been influenced by the work of David Boud and D. Royce Sadler. Within Australia, higher education is a key contributor to the economy, with growing local and international student numbers. Employability is both an attractive outcome of higher education and a driver for curriculum improvement. Coupled with pressures to scale delivery, many educators have looked for practical solutions to fulfil implicit and explicit requirements, which are also pedagogically sound.

Definitions of self- and peer-assessment

While there are quite a few variations of definitions for SAP, with various synonyms for assessment (self-/peer-evaluation, review, grading, marking, critique, just to name a few), we take a broader definition offered by Adachi, Tai and Dawson:

Self-assessment: students judge and make decisions about their own work against particular criteria. Peer-assessment: students judge and make decisions about the work of their peers against particular criteria.
(Adachi et al, 2018a, p. 454)
In our use of the term assessment, we include both summative and formative use of assessment tasks, with a recognition that they pose different opportunities and challenges.

Learning opportunities of self- and peer-assessment

There are shared learning and developmental opportunities for students within SAP. First, students are in an active learning mode where they are assessors rather than in the usual passive role of the assessed (Nulty, 2011). In assessing their own and peers' work, they need to be actively engaged with the learning process of assessment and feedback. This also cultivates lifelong learners who actively seek and engage with sustainable feedback and assessment (Carless, 2013; Boud & Soler, 2016). The crafting of feedback for self and peers requires skill to be effective for learning. Second, in successfully evaluating work, students must understand standards or criteria and have a good idea of what good (or bad) work looks like in order to provide fruitful feedback. Therefore, students are able to develop critical thinking capabilities, or evaluative judgement, through SAP tasks, as explored later in this chapter. Complex possibilities exist for the design of peer-assessment (Topping, 1998; Gielen et al, 2011; Adachi et al, 2018a). However, there are two main types of peer-assessment: the evaluation of the products created by peers, for example peer review (Nicol et al, 2014), and the evaluation of the process of team/group work performed with peers (van Gennip et al, 2010). The former allows students to review each other's work, for example an essay, group report or presentation. This could be done on a one-to-one basis, or on an intra-/inter-group basis on the products or artefacts of group work. This approach offers another layer of feedback to improve student work. Kulkarni et al (2015) argue that peer review and feedback in Massive Open Online Courses (MOOCs) is timely, scalable and relevant, and found that, with calibration and feedback on peer-assessment, students improved their accuracy in grading over time. Peer assessment of teamwork processes enables students to apply and assess teamwork, communication and critical thinking skills more widely. The opportunity to evaluate and give feedback on each other's teamwork skills opens a forum for dialogic and ongoing feedback as the task progresses. Students are often invited to reflect on their communication skills in preparing constructive feedback that will be fruitfully received, and on their critical thinking skills in evaluating whether the feedback they are given is constructive for future improvements. Self-assessment is best used in conjunction with external sources (Falchikov & Boud, 1989) and also as part of peer or educator assessments. The comparison of assessments can help students gain feedback on their judgement abilities. Sequencing a self-assessment prior to other assessments may also guide assessors to focus on particular aspects of the work, with the potential to promote feedback dialogue.
Criticism of self- and peer-assessment

Despite its affordances, SAP also attracts criticism for its pitfalls. The biggest concern may be regarding the inaccuracy of self and peer evaluation and feedback (Brown et al, 2015). Given that students are considered 'novice' in their fields, SAP engenders strong concern regarding non-critical, inaccurate and unhelpful feedback detrimental to students' learning. Complex dynamics, management of power relations and potential time constraints (in equipping students to be able to effectively undertake SAP) are also noted as challenges for successful SAP (Liu & Carless, 2006). Crucially, if SAP contributes to a final mark, the learning element may be lost: students might recognise an opportunity to compete, collude or inaccurately self-assess to maximise their marks. It has also been noted that the design of SAP (including the topic and criteria used) influences outcomes (Falchikov & Goldfinch, 2000). Perspectives critical of SAP may make the assumption that SAP is a substitute for educator assessment and feedback: we suggest that the value of SAP is in the requirement to critically engage with information, standards and feedback, abilities which are only developed over time, with repeated SAP opportunities. We explore these ideas in the following section.

Developing learners' critical abilities through self- and peer-assessment

SAP is frequently instituted in relation to a present learning activity or current learning objective for the unit/module of study. Occasionally, it may be used as an efficiency measure to reduce staff workload for providing feedback. However, when we consider the purposes of higher education (to develop higher knowledge and prepare learners for future careers and roles, developing dimensions of professional identity), other potential benefits and motivating concepts surface. SAP represents a sustainable assessment practice and, when viewed through an assessment for learning lens, SAP also parallels workplace activity around performance management and interaction with professional standards. Engaging in SAP is likely to develop learners' assessment literacy (Carless & Boud, 2018). SAP can also develop students' evaluative judgement (Tai et al, 2018). Sustainable assessment, assessment literacy and evaluative judgement are three interlinked areas where learners' critical thinking abilities are vital.

Sustainable assessment

Boud (2000) argued that 'sustainable' assessment must serve a key additional purpose, beyond its already acknowledged functions of certification and aiding learning: to prepare students for lifelong learning, as part of becoming effective members of a learning society. The term sustainable was chosen to indicate that present assessments should not compromise the future needs of students. In line with the purpose of evaluative judgement, when students graduate, educators will no longer give them corrective feedback, and so pedagogical practices must develop students' own abilities to accomplish this. When Boud and Soler (2016) revisited how sustainable assessment had been taken up in the literature, they found evidence of self-assessment as part of a toolkit promoting sustainable assessment. Peer-assessment was also noted as a potential approach. However, they concluded that large-scale realisation of sustainable assessment had not yet occurred. Ensuring SAP promotes sustainable assessment is therefore still necessary, given its minimal traction to date. SAP affords a handover of assessment responsibility to students, with reduced scaffolding as students progress through a course of study. Students need to be prepared to assess each other and themselves in future work situations. This reflects the situation in many professional disciplines, where standards are agreed upon by members of that profession through regulation, review and continuous professional development. By keeping sustainable assessment in mind, we are also preparing learners to become capable professionals.

Assessment literacy

Assessment literacy is important for both educators and students to achieve good educational outcomes (Price & O'Donovan, 2006; Popham, 2009; Smith et al, 2013). The development of students' assessment literacy focuses on the quality of work and the types of assessments they are required to complete. The common goal is to ensure that assessment supports and guides learning. Assessment literacy is also context specific, with the need for an insider's translation for that particular setting. Like other literacies, it can be serendipitously developed through learning experiences; however, formal interventions may advance its progress (Smith et al, 2013). SAP can contribute to the development of students' assessment literacy (Carless & Boud, 2018). SAP on individual work is a way to gain formative feedback to improve work prior to final submission. Through engaging in SAP at this point, students gain familiarity with the assessment process and, in a peer-assessment situation, can see how others approached the assessment task. Receiving a range of peer-assessments on an assignment can also help learners to understand complexity and subjectivity in assessment (McConlogue, 2012). The process of understanding assessment processes may be aided through the co-creation of rubrics and discussion of exemplars (e.g. To & Carless, 2015), while the act of having to make a decision and return an assessment to a peer adds additional lived experience. This develops students' understanding of how marks are determined, and may help in decisions around requesting additional feedback or re-marking.
Evaluative judgement

Evaluative judgement is the capability of an individual to judge the quality of work, both their own and others' (Tai et al, 2018). The term was coined by Sadler (1989), while similar concepts were discussed by Boud (2000) in relation to sustainable assessment and Boud and Falchikov (2007) in relation to informed judgement. Conceptual development has occurred more recently (Tai et al, 2017). Evaluative judgement is important for complex tasks and instances of work where quality is more difficult to articulate. The rationale that we should be enabling our students to develop an understanding of what quality is (within their profession, discipline or other area), in order for them to become independent operators, is a key underlying argument for education itself. Students' understandings of quality may also aid in effective feedback conversations. SAP is one of several pedagogies which can aid development of evaluative judgement (Tai et al, 2018). First, as an assessor (of a peer or self), the learner has to develop an understanding of the standards of performance. Second, the learner has to apply that standard to the instance of work/performance, making a comparison between the two. Through interacting with the work/performance and its merits and drawbacks, a richer understanding of quality is developed. As a recipient of a peer-assessment, the learner also gains additional information about their work in relation to a standard, again contributing to a richer understanding of quality (Tai and Sevenhuysen, 2018).

Necessary elements of self- and peer-assessment to develop learners' critical thinking

All three concepts discussed require and promote learners' critical thinking abilities: with sufficient development in these areas, learners may be transformed into independent practitioners, which could contribute to their employability (Chapter 8). SAP is frequently criticised (and students are also not confident about its value) due to the relative lack of expertise that learners possess. However, it is unlikely that learners will develop their expertise without learning and opportunities to practise their skills: a vicious cycle occurs if we take the view that we cannot allow students to partake in SAP without expertise in SAP. This expertise lies both in the process of SAP (i.e. making evaluative judgements) and in the topic of the SAP (i.e. what knowledge of content or skills is required). We must therefore provide opportunities for students to learn about SAP, what it can be used for, its benefits and pitfalls and how to participate in SAP successfully. The topic of SAP must also be appropriate (i.e. something that learners can assess with their current level of understanding). For novice students, this might be more generic aspects of performance around essay structure, oral communication, teamwork and elements of content. As students learn more, they can assess themselves and peers on more specific aspects of performance – for example, elegance of the design or nuances within an argument. Opportunities to practise SAP and to receive feedback on the accuracy of their assessment are also crucial. This also has the beneficial effect of reducing collusion and inaccurate assessment aimed at grade inflation.
What makes self- and peer-assessment successful?

The question then arises: in designing, implementing and evaluating SAP, what enables success? We draw on Biggs' (1993) 3P model to discuss enablers, considerations and recommendations for SAP. The 3P model contains three stages – presage, process and product – for analysing learning processes. The presage stage considers all the teaching and learner contexts prior to the undertaking of the learning tasks; students' previous knowledge, skills, values and expectations, as well as the teaching context of predesigned curriculum and preferred teaching methods, contribute to this stage. In the second stage, process, students' processing of the learning tasks can be observed. In the last stage, product, the outcomes of that learning process can be evaluated. Taking this model, we discuss enablers, considerations and recommendations that contribute to the success of SAP tasks.

Presage: purpose, consultation and policy

Before the learning process of SAP can occur, several factors need to be considered for a successful learning opportunity. As educators we need to be clear and explicit about the purpose of SAP and its associated benefits for learning, both within the design of the SAP tasks and in direct engagement with students. Students usually see the evaluation of student work as something that educators do. Articulating the purposes and advantages of participating in something that is normally 'a teacher's job' clarifies that educators are not saving time and reducing workload, but instead are providing additional developmental opportunities for learners. Benefits for critical thinking skills, evaluative judgement and teamwork skills can be incorporated in the design and in the explicit conversation with students. Gathering contextual information before designing SAP tasks is critical. This might include the alignment of the SAP task with other assessment tasks within the unit/subject and across the degree. Constructive alignment (Biggs & Tang, 2007) will ensure a holistic approach to curriculum design, ensuring that students are developing specific skills and knowledge through SAP at appropriate times. Strategic placement and scaffolding of multiple SAP tasks is vital for students to build their capacity over time. Consulting with colleagues and academic developers working in the same degree can be useful. Contextual and technological constraints also need to be carefully considered (Adachi et al, 2018b), especially where a large-scale implementation of SAP is intended. While educational technology and digital platforms have the potential for scalability and efficiency, complex manipulation of the tools often demands high-level digital literacy skills from both educators and students. Educators need to select an appropriate tool so that the focus remains on the learning opportunity of SAP itself, rather than on learning how to troubleshoot and master the use of a given educational technology. As with any assessment, consultation of existing assessment policy and procedures regarding SAP is required. Could SAP be used for summative as well as formative purposes? If yes, how do educators ensure that students' input (self and peer grades) contributes to the overall marks fairly and accurately? If not, then how do educators design the SAP task to be efficient, without unduly increasing workloads for both educators and students? Further, policy documents often underpin the best practices of certain assessment approaches – how do the policy and procedures enable or inhibit the SAP task design that educators have in mind?

Process: scaffolding, resources and pilot

There are also considerations during SAP implementation. We must consider how SAP tasks can be scaffolded to develop student confidence. For example, in peer-assessment of teamwork, students need to comprehend the skillsets required for effective teamwork prior to assessing others on this skillset. Educators and students could set expectations for teams to collaborate effectively through discussing and creating a 'contract' for teamwork, which might include roles and responsibilities, task allocations and timelines. Reimagining and reallocating the available resources is crucial when they are limited. Extra tutors might be negotiated to facilitate SAP tasks and/or to meet as an assessment panel to discuss where critical feedback on students' performance should be provided. Where resources are limited, tutors may ensure that the class-based SAP results in useful formative feedback, and therefore not spend so much time on developing feedback information to be returned on end-of-term tasks where it is potentially less useful. An initial, smaller-scale design and implementation of SAP tasks might facilitate success, especially if educators have had little prior experience with implementing SAP. A pilot with a small cohort of students, or a group of more understanding and experienced students (senior or postgraduate), over a quieter teaching period (e.g. summer term) might be considered.

Product: evaluation and multiple iterations

Results of the learning processes associated with the SAP tasks should be evaluated for improvement in future iterations. Critical review and evaluation of the design and implementation of the SAP task, based on feedback from students and other colleagues, is essential. Did the task help students achieve the intended learning outcomes set out for the task, unit and beyond?
Was the task scaffolded sufficiently that students tackled it confidently throughout? What lessons were learned that could be used for future iterations of SAP tasks? The idea of long-term refinement and planning is vital, especially when beginning with a pilot. Multiple SAP iterations which look at long-term benefits provide an opportunity for staff to refine the SAP tasks over time, but also for students to build their capacity with more complex skills. Educators may use this argument in performance management conversations where less favourable student evaluations of teaching are anticipated. Biggs' (1993) 3P model provides a framework for analysing the enablers and considerations for the design, implementation and evaluation of SAP, and also suggests a way forward with recommendations for future iterations of SAP practices.
Conclusion

For SAP to function well, it must be instituted with appropriate purposes (i.e. for developing students' abilities), with sufficient forethought, planning and resources, and with a view to ongoing improvement. Learners also require sufficient content knowledge and SAP skills to participate successfully in SAP. Through participating in SAP, a form of sustainable assessment, students are likely to develop assessment literacy and their evaluative judgement. This is key for learners' present and future roles, both within the university and beyond. Focusing on SAP as a means to develop learners' critical thinking abilities with a goal of independence implies that learners will come to better understand standards alongside educators, industries and professions. This speaks to employability and the broader learning and assessment context; thus how feedback is done (Part II), how learning is stimulated (Part III) and, indeed, how professional development is assessed (Part IV) are all elements which underpin successful SAP. SAP is mentioned in some chapters (i.e. Chapters 7, 8 and 13); however, we also recommend that readers consider how SAP might complement or be incorporated into other aspects of learning and assessment. Though SAP is not the only way to develop critical thinking abilities, it holds significant promise as a pedagogy for developing students' skills and capabilities for present and future work.
References

Adachi, C., Tai, J., Dawson, P. (2018a) A framework for designing, implementing, communicating and researching peer assessment. Higher Education Research and Development 37 (3): 453–467.
Adachi, C., Tai, J., Dawson, P. (2018b) Academics' perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment and Evaluation in Higher Education 43 (2): 294–306.
Biggs, J. J. (1993) From theory to practice: a cognitive systems approach. Higher Education Research and Development 12 (1): 73–85.
Biggs, J. and Tang, C. (2007) Teaching for Quality Learning at University (3rd ed.). Maidenhead: Open University Press.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education 22 (2): 151–167.
Boud, D. and Falchikov, N. (2007) Developing assessment for informing judgement. In: Boud, D. and Falchikov, N. (eds) Rethinking Assessment in Higher Education: Learning for the Longer Term. London: Routledge, pp. 181–197.
Boud, D. and Soler, R. (2016) Sustainable assessment revisited. Assessment and Evaluation in Higher Education 41 (3): 400–413.
Boud, D. and Tyree, A. L. (1980) Self and peer assessment in professional education: a preliminary study in law. Journal of the Society of Public Teachers of Law 15: 65.
Brown, G. T. L., Andrade, H. L., Chen, F. (2015) Accuracy in student self-assessment: directions and cautions for research. Assessment in Education: Principles, Policy and Practice 22 (4): 444–457.
Carless, D. (2013) Sustainable feedback and the development of student self-evaluative capacities. In: Merry, S., Price, M., Carless, D., Taras, M. (eds) Reconceptualising Feedback in Higher Education. Hoboken, NJ: Taylor and Francis, pp. 113–122.
Carless, D. and Boud, D. (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment and Evaluation in Higher Education 43 (8): 1315–1325.
Falchikov, N. and Boud, D. (1989) Student self-assessment in higher education: a meta-analysis. Review of Educational Research 59 (4): 395–430.
Falchikov, N. and Goldfinch, J. (2000) Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research 70 (3): 287–322.
Gielen, S., Dochy, F., Onghena, P. (2011) An inventory of peer assessment diversity. Assessment and Evaluation in Higher Education 36 (2): 137–155.
Kulkarni, C., Wei, K. P., Le, H., Chia, D., Papadopoulos, K., Cheng, J., Koller, D., Klemmer, S. R. (2015) Peer and self assessment in massive online classes. In: Plattner, H., Meinel, C., Leifer, L. (eds) Design Thinking Research. Springer, pp. 131–168.
Liu, N.-F. and Carless, D. (2006) Peer feedback: the learning element of peer assessment. Teaching in Higher Education 11 (3): 279–290.
McConlogue, T. (2012) But is it fair? Developing students' understanding of grading complex written work through peer assessment. Assessment and Evaluation in Higher Education 37 (1): 113–123.
Nicol, D., Thomson, A., Breslin, C. (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment and Evaluation in Higher Education 39 (1): 102–122.
Nulty, D. D. (2011) Peer and self-assessment in the first year of university. Assessment and Evaluation in Higher Education 36 (5): 493–507.
Popham, W. J. (2009) Assessment literacy for teachers: faddish or fundamental? Theory Into Practice 48 (1): 4–11.
Price, M. and O'Donovan, B. (2006) Improving performance through enhancing student understanding of criteria and feedback. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. London: Routledge, pp. 100–109.
Sadler, D. R. (1989) Formative assessment and the design of instructional systems. Instructional Science 18 (2): 119–144.
Smith, C. D., Worsfold, K., Davies, L., Fisher, R., McPhail, R. (2013) Assessment literacy and student learning: the case for explicitly developing students' assessment literacy. Assessment and Evaluation in Higher Education 38 (1): 44–60.
Tai, J. and Sevenhuysen, S. (2018) The role of peers in developing evaluative judgement. In: Boud, D., Ajjawi, R., Dawson, P., Tai, J. (eds) Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work. Abingdon: Routledge, pp. 156–165.
Tai, J., Ajjawi, R., Boud, D., Dawson, P., Panadero, E. (2018) Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education 76 (3): 467–481.
To, J. and Carless, D. (2015) Making productive use of exemplars: peer discussion and teacher guidance for positive transfer of strategies. Journal of Further and Higher Education 40 (6): 746–764.
Topping, K. J. (1998) Peer assessment between students in colleges and universities. Review of Educational Research 68 (3): 249–276.
van Gennip, N. A. E., Segers, M. S. R., Tillema, H. H. (2010) Peer assessment as a collaborative learning activity: the role of interpersonal variables and conceptions. Learning and Instruction 20 (4): 280–290.
Part II
Implementing feedback
6 Evaluating written feedback

Evelyn Brown and Chris Glover
Introduction

There has been a proliferation of studies over the past two decades looking at aspects of feedback to students and identifying how feedback can best encourage students' learning. Based on an extensive literature search, Gibbs and Simpson (2004) identified seven conditions under which feedback is believed to influence students' learning. These have been used to form part of a conceptual framework for improving students' learning through changing assessment (Gibbs et al, 2003). These conditions concern the quantity, timing and quality of the feedback and the student's response to it. Nicol and Macfarlane-Dick (2006) identified seven broad principles of good feedback practice from the literature, discussed by David Nicol and Colin Milligan in Chapter 5 of the first edition in the context of technology-supported assessment practices. Six of these are to do with the learning process, students' understanding of good performance or the effects of feedback on students' motivation and self-esteem. The feedback conditions and principles of good feedback practice are stated in Table 6.1. Students receive tutor-authored feedback, generic or individualised, in a variety of forms: as marks/grades, orally, in writing and computer generated. This chapter focuses on the written feedback that students receive on assignments and shows how tutors can evaluate its strengths and weaknesses empirically within the conceptual frameworks described above.
Which sort of feedback do students find most useful and helpful?

A study involving 147 students at Sheffield Hallam University (Glover, 2004) showed that although the students perceived marks or grades to be the primary vehicle for measuring their progress, they perceived written feedback as the most useful form of feedback. Feedback that helped them to understand where they had gone wrong was the most helpful, presumably, in part, because it aided their understanding of their marks (see, e.g., Jackson, 1995, cited in Cooper, 2000).

Table 6.1 The seven conditions under which feedback is believed to influence students' learning (Gibbs et al, 2003) and the seven principles of good feedback practice (Nicol & Macfarlane-Dick, 2006)

Feedback conditions:
• Sufficient feedback is provided often enough and in enough detail
• The feedback is provided quickly enough to be useful to students
• Feedback focuses on learning rather than on marks or students
• Feedback is linked to the purpose of the assignment and to criteria
• Feedback is understandable to students, given their sophistication
• Feedback is received by students and attended to
• Feedback is acted upon by students to improve their work or their learning

Principles of good feedback practice:
• Helps clarify what good performance is (goals, criteria, expected standards)
• Facilitates the development of self-assessment (reflection) in learning
• Delivers high-quality information to students about their learning
• Encourages teacher and peer dialogue around learning
• Encourages positive motivational beliefs and self-esteem
• Provides opportunities to close the gap between current and desired performance
• Provides information to teachers that can be used to help shape teaching

Questionnaire responses from 22 Open University distance-learning geology students (Roberts, 1996) suggested that they viewed 'good feedback' as feedback that was encouraging and positive. They valued tutor comments that clearly indicated where they were incorrect and the provision of specimen answers that included how or why these were the desired answers. They also valued highly the individualised attention that the written feedback provided and having their problems and difficulties explained in detail.
Classifying written feedback

Given the high value that students place on individualised written feedback, the role that good quality feedback may play in aiding student learning (Black & Wiliam, 1998) and the very significant time costs to teachers in its delivery, it is surprising that very few attempts have been made to classify systematically the different types of teacher comments that constitute feedback so that the quality of feedback can be analysed. Bales (1950) devised a set of categories for the analysis of face-to-face small group interactions that distinguishes between task-oriented contributions and socio-emotional contributions, both positive and negative. The feedback categories of Hyland (2001) were designed specifically for language distance learners. The two broad categories are feedback that focuses on the product (i.e. the student's work: content, organisation, accuracy and presentation) and feedback that focuses on the learning process (praise and encouragement, and the strategies and actions students should take to improve their learning). Whitelock et al (2004) explored the applicability of Bales' framework to the analysis of tutor comments on Open University students' assignments. They felt that all comments could be included within Bales' categories and concluded that his system could, therefore, form the basis for a model of tutor written feedback. Our view is that neither Bales' nor Hyland's system is universally applicable because they do not distinguish sufficiently between the different facets of content and skills-oriented feedback that help to guide students' learning. Neither do they shed light on the extent to which closure of the performance–feedback–reflection–performance feedback loop is enabled. We have devised a feedback classification system which both addresses these criticisms and allows written feedback to be analysed within the conceptual frameworks of Gibbs et al (2003) and Nicol and Macfarlane-Dick (2006). Although our system of classification codes has been constructed primarily for feedback on science assignments, it can be adapted easily to suit the needs of other disciplines. Five main categories of feedback comments are recognised, based on current feedback practice on science assignments at the Open University:

• Comments about the content of a student's response, i.e. the student's knowledge and understanding of the topics being assessed (coded 'C').
• Comments that help a student to develop appropriate skills (coded 'S').
• Comments that actively encourage further learning (coded 'F').
• Comments providing a qualitative assessment of a student's performance that are motivational (coded 'M').
• Comments providing a qualitative assessment of a student's performance that may de-motivate (coded 'DM').
The first four of these categories have the potential to help students to improve their work or learning. The fifth includes what Rorty (1989) termed 'final vocabulary'; that is, value-laden, judgemental words that may inhibit further learning by damaging students' self-esteem. Each category is subdivided to enable a finer analysis of the types of feedback within each category (Table 6.2). The lowercase letter codes ascribed to each have been chosen to reflect directly the type of feedback comment that has been made, by adopting the same first letter as the corresponding descriptor in Table 6.2. The extent to which feedback comments may help students to improve their performance or learning is determined by analysing their depth. Different levels of feedback are assigned number codes to reflect the depth, with the exception of the 'de-motivational' feedback. With respect to content and skills, a tutor may:
• acknowledge a weakness (i.e. acknowledge that a performance gap exists; level 1);
• provide correction (i.e. give the student the information needed to close the gap; level 2);
• explain why the student's response is inappropriate/why the correction is a preferred response (i.e. enable the student to use the information to close the gap; level 3) by making connections between the feedback and the students' work (Sadler, 1989).

The action taken by the student to close the gap is the core activity of formative assessment, the closure of the feedback loop (Sadler, 1989).

Table 6.2 The coding system used for the analysis of written feedback to students. For each category the codes are listed, followed by the relation to the feedback conditions (a) and to the principles of good feedback practice (b).

Comments on content of student's response (coded 'C')
Ce – error/misconception; Co – omission of relevant material; Ci – irrelevant material included; Ctc – tutor clarification of a point; Csc – student's clarification of a point requested by tutor.
Feedback conditions: feedback is linked to the purpose of the assignment and may be acted upon by students to improve their work or learning.
Principles of good feedback practice: helps clarify what good performance is (expected standards); provides opportunities to close the gap between current and desired performance; delivers high-quality information to students about their learning.

Comments designed to develop student's skills (coded 'S')
Sc – communication; Se – English usage; Sd – diagrams or graphs; Sm – mathematical; Sp – presentation.
Feedback conditions: feedback may be acted upon by students to improve their work.
Principles of good feedback practice: helps clarify what good performance is (expected standards); provides opportunities to close the gap between current and desired performance.

Comments that encourage further learning (coded 'F')
Fd – dialogue with student encouraged; Ff – future study/assessment tasks referred to; Fr – resource materials referred to.
Feedback conditions: feedback is acted upon by students to improve their learning.
Principles of good feedback practice: encourages teacher dialogue around learning; facilitates development of reflection in learning.

Qualitative assessment of student's performance – motivational comments (coded 'M')
Mp – praise for achievement; Me – encouragement about performance.
Feedback conditions: feedback is acted upon by students to improve their work or learning.
Principles of good feedback practice: encourages positive motivational beliefs and self-esteem.

Qualitative assessment of student's performance – de-motivational comments (coded 'DM')
DMn – negative words/phrases (e.g. 'you should not/never') used; DMj – judgement of student's performance is personal and negative (e.g. 'careless').
Feedback conditions: feedback focuses on the student rather than the student's work.
Principles of good feedback practice: discourages positive motivational beliefs and self-esteem.

Modified from Brown et al (2003). a Gibbs et al (2003). b Nicol and Macfarlane-Dick (2006).
Praise and encouragement are often basic (e.g. 'well done', 'keep up the good work'; level 1). The extent to which the basis for praise and encouragement is explained determines whether it is coded level 2 or 3. Level of detail determines the coding level for the further learning categories. Note that any one feedback comment from a tutor may be assigned more than one code. For example, a tutor may acknowledge and correct a factual error (Ce2) using negative words or phrases (DMn). Similarly, a tutor may acknowledge the presence of irrelevant material (Ci1) and also correct it because it is erroneous (Ce2). There is also a high degree of subjectivity involved in assigning codes to comments, and so any analysis using the codes provides pointers to strengths and weaknesses in feedback practice, not precise diagnoses.
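For readers who wish to automate the tallying stage of such an analysis, a minimal sketch is given below (in Python). The category prefixes follow Table 6.2, but the function names and the sample codes are illustrative assumptions rather than material from the original study; the sketch simply maps each assigned code to its main category and reports the proportion of codes falling in each category, the quantity reported in the analysis that follows.

```python
from collections import Counter

# Main category prefixes from Table 6.2: De-motivational, Content, Skills,
# Further learning and Motivational comments.
CATEGORIES = ("DM", "C", "S", "F", "M")

def category_of(code: str) -> str:
    """Map a feedback code such as 'Ce2' or 'DMn' to its main category."""
    for prefix in CATEGORIES:
        if code.startswith(prefix):
            return prefix
    raise ValueError(f"Unrecognised feedback code: {code}")

def category_percentages(codes):
    """Return the percentage of all assigned codes falling in each category."""
    counts = Counter(category_of(c) for c in codes)
    total = sum(counts.values())
    return {cat: 100 * counts[cat] / total for cat in CATEGORIES}

# Hypothetical codes assigned to one assignment (not data from the study):
example_codes = ["Ce2", "Co1", "Sp1", "Mp1", "Fd3", "Ce3", "DMn", "Mp2"]
print(category_percentages(example_codes))
# {'DM': 12.5, 'C': 37.5, 'S': 12.5, 'F': 12.5, 'M': 25.0}
```

The same tally, run over a whole set of assignments rather than a single script, would yield category proportions of the kind compared across modules below.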
Applying the classification system to analyse written feedback to students

The classification system was used in 2003 to analyse the tutor feedback on 112 student assignments at the Open University, covering six different biological and physical sciences modules. The assignments were authored by module team full-time staff but the students' work was assessed by teams of part-time tutors (associate lecturers), guided by a common mark scheme and accompanying tutor notes. The work of 83 tutors was involved. This section explores how the outcomes for the analysis as a whole, and for two of the modules specifically, were used to diagnose strengths and weaknesses in the feedback provided. This enabled the module team to assess how well some of the feedback conditions were being met and how well the principles of good feedback were being applied (Tables 6.1 and 6.2). Two modules have been singled out because the analyses also demonstrate how the nature of assessment tasks may influence the feedback given. S204: Biology: Uniformity and Diversity and S207: The Physical World are both 60 Credit Accumulation and Transfer Scheme credit-point modules at higher education (England, Wales and Northern Ireland) level 6. S204 involved the analysis of 20 assignments, representing feedback from 20 different tutors. S207 involved 17 assignments with feedback from 10 tutors.

The mean score for the 112 assignments, marked using a linear scale of 0–100, was 71 (just above the class 2(i)/2(ii) borderline) and the total number of codes assigned was 3580, an average of about 32 per assignment. The mean score for the S204 assignments was 60 (class 2(ii)), and the total number of codes assigned was 805, an average of about 40 per assignment. The mean score for the S207 assignments was 76 (class 2(i)) and the total number of codes assigned was 410, an average of about 24 per assignment. In Figure 6.1 the proportions of the different categories of feedback for S204 and S207 are compared with the data for all the modules analysed and a high degree of uniformity can be seen.

Figure 6.1 Analyses of the different categories of feedback for S204 and S207 compared with all modules (vertical axis: % of codes recorded; all modules n = 3580, S204 n = 805, S207 n = 410)

Strengths identified across all modules

The students received a reasonably large amount of motivational feedback (the vast majority in the form of praise) and very little of the feedback was de-motivating. This feedback was likely to encourage positive motivational beliefs and self-esteem, thereby increasing the likelihood that students might act upon it.

Weaknesses identified across all modules

The majority of the feedback was content focused, relevant to the topics that were assessed. Interviews with the 112 students whose assignments were analysed revealed that they did not act on this feedback to improve their work, although they valued it most, because the topics studied had moved on and the students felt that they were unlikely to be revisited. The feedback, therefore, was not provided quickly enough to be useful to students, despite a relatively short turn-round period of only three weeks. Skills feedback, which students felt did feed forward to future work, and feedback designed to encourage them to engage in further learning, were poorly represented. These weaknesses suggest that most of the feedback served to reinforce the summative nature of the assignments. It was marks focused, justifying the students' marks by telling them 'what counted' (Boud, 2000); that is, it fed back far more than it fed forward and so did not fulfil the formative function of helping them to improve their work and learning.

A comparison of S204 and S207
A more detailed analysis of the types of content feedback (Figure 6.2) showed that the ratio of omissions to errors was an order of magnitude higher for S204 (5.0:1) than for S207 (0.4:1). The S204 assignment was largely discursive and interpretive whereas the S207 assignment was dominated by calculations. Most marks for S204 were awarded for specific points students were expected to make rather than for students' achievement of broader criteria, and so students found it difficult to map their own answers to those indicated by the tutors' mark schemes: 'I was not sure what was expected by the questions' (S204 student interview). Consequently, the tutors' comments were largely to do with material omitted from answers (average 14 codes/script) and little to do with factual errors or misconceptions (average 2 codes/script). Thus, while the feedback was linked to the purpose of the assignment, there was no link to criteria. By contrast, the S207 students had little difficulty in understanding what their assignment questions expected (omissions averaged 3.5/script) but were prone to make more mistakes, largely due to the high mathematical content (errors/misconceptions averaged 8/script).

Figure 6.2 Analyses of the sub-categories of content feedback for S204 and S207 compared with all modules (vertical axis: % of content codes recorded)

An analysis of the depth of feedback given revealed a further difference between S204 and S207. In common with all other modules except S204, most of the feedback on errors/misconceptions and skills for the S207 assignment was at the level of simply the acknowledgement of a weakness or the correction of the weakness (levels 1 and 2; Figures 6.3a and b). Less than one-third involved explanation (level 3). By contrast, around three-quarters of the feedback for S204 involved explanation. Similarly, where praise was given it was mostly unexplained ('good', 'excellent', 'well done'), except for S204 where more than half of the instances explained the basis for the praise. The S204 assignment tasks had a large number of questions requiring analysis and evaluation, asking students to explain 'how' or 'why' in drawing conclusions. The marking notes supplied to tutors also encouraged the provision of explanations to students (level 3 feedback), enabling students to make the connections between the feedback and their own work, so they were given ample opportunities to close the gap between their current and desired performances.
Figure 6.3 Analyses of the levels of feedback (acknowledge, correct, explain) on (a) errors and omissions and (b) skills for S204 and S207 compared with all modules
The problems of levels 1 and 2 feedback

The predominance of levels 1 and 2 feedback used in the other modules suggests that students studying these may not have been able to make the connections between their work and the feedback, especially the less able students, and so they may not have been able to understand how to interpret it or what to do with it. In other words, they may not have been able to act upon it.
Improving the quality of feedback and the student response

The feedback analysis enabled the module teams to identify weaknesses in the type and quality of feedback that inhibited students' engagement with their feedback to improve their later work or learning. These are the lack of:
• feedback that fed forward, providing students with the incentive to engage with it, thereby enabling them to improve future work;
• feedback designed to encourage further learning, especially the lack of the sort of dialogue that facilitates the development of reflection in learning;
• clear assessment criteria shared by tutors and students;
• feedback that enabled students to make connections with their own work and so close the gap between their current and desired performances.
As a result, various changes have been made by different module teams which address some of these deficiencies, for example:
• permitting students to receive formative-only feedback on their work before submitting it for summative assessment; this eliminates the focus on marks and encourages the students to engage with the feedback to improve their work and learning;
• providing exemplars (specimen answers) for students with explanatory notes that stress skills development and the relevance to future work (S204);
• encouraging tutors to highlight aspects of the student's strengths and weaknesses that have relevance for future work (S207);
• generating assessment tasks in which the feedback from one assignment is relevant to subsequent tasks.
Even small changes may lead to worthwhile improvements in the quality of feedback. In 2004, we analysed the feedback provided on the equivalent S207 assignment to that analysed in 2003 (Figure 6.4). This analysis involved 6 tutors and 15 student assignments. The mean assignment score was 83 (a high class 2(i)) and the total number of codes assigned was 413, an average of 28 per assignment. The proportion of feedback encouraging further learning had more than trebled.
Figure 6.4 Analyses of the different categories of feedback for S207 in 2004 (n = 413) compared with 2003 (n = 410) (vertical axis: % of codes recorded)
Conclusion

The feedback coding system provides a potentially valuable tool to help teachers at all levels to reflect on the quality and effectiveness of the feedback they are giving their students. It enables the strengths and weaknesses of the feedback to be identified in relation to both the principles of good feedback practice (Nicol & Macfarlane-Dick, 2006) and the conceptual framework for improving students' learning of Gibbs et al (2003). Once weaknesses have been identified it is possible for teachers to search for the causes and to put into effect changes to the type of feedback given and/or assessment tasks set that will lead to improvements in the quality of the feedback and increase its potential to help students' learning.
References

Bales, R.F. (1950) A set of categories for the analysis of small group interactions. American Sociological Review 15 (2): 257–263.
Black, P. and Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education 5 (1): 7–74.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education 22 (2): 152–167.
Brown, E., Gibbs, G. and Glover, C. (2003) Evaluating tools for investigating the impact of assessment regimes on student learning. Bioscience Education 2 (1): 1–7.
Brown, E., Glover, C., Freake, S. and Stevens, V. A. M. (2005) Evaluating the effectiveness of written feedback as an element of formative assessment in science. In: Rust, C. (ed.) Improving Student Learning: Diversity and Inclusivity. Proceedings of the 12th Improving Student Learning Conference. Oxford: OCSLD.
Cooper, N. J. (2000) Facilitating learning from formative feedback in level 3 assessment. Assessment and Evaluation in Higher Education 25 (3): 279–291.
Gibbs, G. and Simpson, C. (2004) Does your assessment support your students' learning? Journal of Teaching and Learning in Higher Education 1 (1): 3–31.
Gibbs, G., Simpson, C. and Macdonald, R. (2003) Improving student learning through changing assessment – a conceptual and practical framework. Paper presented at the European Association for Research into Learning and Instruction, 2003, Padova, Italy.
Glover, C. (2004) Report on an analysis of student responses to a questionnaire given to Year 2 and Year 4 science students at Sheffield Hallam University. Unpublished.
Hyland, F. (2001) Providing effective support: investigating feedback to distance learners. Open Learning 16 (3): 233–247.
Nicol, D. J. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
Nicol, D. and Milligan, C. (2006) Rethinking technology-supported assessment practices in relation to the seven principles of good feedback practice. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 64–77.
Roberts, D. (1996) Feedback on assignments. Distance Education 17 (1): 95–116.
Rorty, R. (1989) Contingency, Irony and Solidarity. Cambridge: Cambridge University Press.
Sadler, D. R. (1989) Formative assessment and the design of instructional systems. Instructional Science 18: 145–165.
Whitelock, D., Raw, S. and Moreale, E. (2004) Analysing tutor feedback to students: first steps towards constructing an electronic monitoring system. Association for Learning Technology Journal 11 (3): 31–42.
7 Assessing oral presentations
Style, substance and the 'razzle dazzle' trap
Steve Hutchinson
Introduction – why assess presentations?

There is nothing new in the notion of assessing presentations; for centuries scholars have given oral presentations and audiences have given feedback. Yet, in the information age, we have, due to mass communication technology, become exposed to a far greater number of presentations and as a result have become more discerning and demanding of our speakers. In short, we – as modern audiences – recognise good practice when we see it and are less tolerant of poor performance. As someone who trains people to give presentations, it strikes me that how we assess those presentations and whether that assessment is fair, reliable, equitable and criterion led is worthy of attention and a phenomenon that is rarely discussed in the literature.

Presenting is a core component of most professions; and while it is a more vital element in some (e.g. law, education, academia) than in others, many job interviews involve the candidate giving a presentation. What is more, the type of talks that professionals are asked to give is becoming more varied and so we need to ensure that our students can meet the needs of differing professional audiences and stakeholders. As such, many degree programmes require students to present and we then formatively or summatively assess their capabilities. This chapter poses a number of questions to the academic practitioner wishing to set up or refine such an assessment scheme and also presents a model of how such a scheme can be made more effective.
The problem with presentations …

Audiences are humans, and human beings – especially in real time – are susceptible to spectacle, showy rhetoric, special effects and showmanship; elements sometimes labelled as 'razzle dazzle'. Think briefly about the things that you believe make for a 'good' presentation. Don't choose things that you feel an academic book wants you to choose but elements that you genuinely respond to as an audience member.
Were your answers curriculum based (i.e. appropriate content covered)? Were they teachable (i.e. pacing, narrative etc.)? Or were they subjective human traits that are innate (such as enthusiasm or humour)? Ask yourself honestly how susceptible you are – even as a discerning intellectual – to 'razzle dazzle'. And then ask how many of the things you listed as effective devices your programme explicitly teaches.

Now imagine you have just sat through several dull, information-heavy student presentations that have been poorly delivered (because we do not teach presentation skills), from students not all fully fluent in English. You have had no respite or opportunity to recalibrate your critical faculties after a particularly good or bad performance. Then a student presents with great innate style and stage presence. Of course you will like this presentation. How could you not? But assessment is not about liking something. It is about making an objective, fair and valid judgement to help guide student improvement. So ask yourself again: in this situation, is your judgement objective? Of course it is not.

Presentations are highly subjective entities – perhaps more than any other form of assessed student work. Judging an academic presentation shares many challenges with assessing performance disciplines such as drama and dance (e.g. Jacobs, 2016) in that an essential component of success for the presenter is how well they connect with an audience. Yes, grading an artistic portfolio can evoke strong subjective reactions in the assessor, but there we can pause, think, reflect and re-examine the work. In presentations, the 'performance' occurs once, and we must process much data very quickly. These data are rateable objectively (i.e. facts delivered accurately), semi-objectively (i.e. appropriate narrative) and subjectively (i.e. the elusive 'X factor'). Even an audience's enquiries are skewed by individual specialities – which, unless each student is asked identical questions, raises issues about assessor bias.

Moreover, there is nowadays much emphasis placed on presenters behaving 'authentically' (i.e. natural on stage). As such, drawing up a checklist of criteria and rewarding conformity may actually serve to be counterproductive to authentic individual performance. It may be that an essay and a well-rehearsed talk are not dissimilar; but an effective presenter does not just robotically plough through preprepared slides. Even if the 'paper' is read from a script, the differentiator between a strong and weak performance is the presenter's ability to bring their paper to life. As such, grading elements like 'flow' is difficult, as much of what we are subconsciously noticing is the student's ability to perform and respond to an audience's reaction.

Another issue that presentations raise is that they are impossible to render anonymous. An essay can have the writer's details redacted, but a talk is a very personal entity and clearly owned by the presenter – and so the possibility of conscious and unconscious bias (even at the level of cultural gestures, accents and pronunciations) may come into play.
As many previous authors point out (e.g. Moon, n.d.; Joughin, 1998), there are myriad other concerns with the subjectivity of assessing presentations: the lack of a permanent record of the performance, the difficulties of dual marking, the fact that the assessment must occur during contact teaching time and, above all, that the assessment criteria must be very simple so that the assessor(s) can use them while viewing a live performance. Furthermore, reliable assessment should award the same grade to the same performance regardless of circumstance. To assess a live performance with so many variables in play – and with no opportunity for the assessor to revisit the work and pause to think – is, it could be argued, an impossible task. It also requires that all assessors be aligned as to what 'effective' performance is, which is perhaps the biggest hurdle of all.

Finally, assessment should be fair, and there is certainly a case to be debated that assessing students in a way that requires that they think and talk in 'live time' (especially if they are required to take questions at the end) is unfair to students who are non-native English speakers. Presentations can also be a huge cause of anxiety, and it is often the students who are unafflicted by such anxiety that seem to be the more effective presenters. So, if dull subjects can be leavened by dazzling performers, and great material can be ruined by lack of energy and engagement, our challenge is how to tease these elements of substance and style apart and help our students reach effective levels of competency in each. We should proceed with caution.
To help or to judge? – formative or summative assessment

If we do decide to assess student presentations, then our students need to know why this is being chosen as part of the raft of assessment measures – especially if presenting is not a core facet of any related profession. (To this end, much thought on oral presentation assessment has come from disciplines like law, in which presenting and mooting are core professional skills; e.g. Varnava & Webb, 2009.) We must also consider to what degree we are using presentations as a way of 'checking' whether our students have gained the appropriate information from our teaching and can repackage it back in a vocal form. If this element is vital it may be worth requiring that students accompany their oral assessment with a more conventional piece of written work.

Fundamentally, we must then decide whether our intent is to award a grade or to provide the student with useful feedback on their transferable skills. And if the purpose of assessment is to guide student improvement (which it surely is), then a one-off assessment which sits outside an experiential skill development framework makes little sense. The 'assessment' itself can come from several sources – namely staff, peer and self – which do not necessarily need to scrutinise the same element of the presentation. For instance, it may be appropriate for the tutor(s) to focus on factual content, while the student's peers focus on delivery.
Summative assessment

If assessment is to be purely summative, I would not be the first author to urge care due to the difficulties of disentangling style and substance. What is more, it is difficult (even with detailed learning objectives and comprehensive feedback forms) to provide an accurate weighting of grade for the different elements of the presentation. Also, summative assessment for presentations begs the question of how much of what we assess has been explicitly taught as part of the curriculum. Entering 'oral presentation assessment criteria' into a search engine produces many feedback proformas. As such, this chapter does not seek to add to this haul. Some existing mark-sheets offer full specifications of different levels of capability in discrete areas of subject-specific performance (see, for instance, Cooper, 2005) and some are more vague (one simply itemises four presentation facets and requires we tick either 'effective' or 'needs work'). A good starting point is the resource created by Kate Ippolito at the LearnHigher CETL at Brunel University. But ultimately our feedback devices must reflect the purpose of the assessment – to grade or guide. If we opt for a summative grading approach, then the following are 'must-have' ingredients:
• a clear explanation as to why this element of assessment has been chosen;
• explicit learning objectives and itemised levels of attainment for each objective (the areas for these objectives are clarified later);
• clear proformas that allow a grader to simply assess different performance areas;
• a cross-faculty standard for what is worth credit and what is not in each area;
• a robust moderation process (probably requiring filmed presentations).
Formative assessment

If assessment is simply to provide constructive feedback, then many of the above ingredients also apply (i.e. feedback proformas), though absolute grade criteria are less vital. We must however give students multiple opportunities to receive feedback and then implement that feedback. It should also be remembered that presenting is potentially traumatic for the inexperienced, and feedback should build confidence and not just criticise shortcomings.

Of course, many teaching practitioners will argue that a presentation could be assessed with both a grade and constructive feedback – but such a decision should be carefully considered and based on whether such a measure will actually help a student to develop. And, if they are to improve, what is their model of improvement?
Articulating and modelling best practice – helping students to recognise what is required

As teaching academics we know that assessment tends to focus student attention on certain elements of the syllabus – and indeed it may form their curriculum altogether (e.g. Ramsden, 1992). So if presentations are to be assessed we need to decide which criteria will be measured, what performance levels are expected and then convey this to the student body. And vitally, we must as academics model best practice at all times in our lectures, tutorials and seminars.1 This is particularly important if we are not explicitly teaching what we will be assessing (i.e. presentation skills). We must start by addressing these simple questions:
• What criteria do we believe make for effective presentations?
• Are our faculty views as to what constitutes effective presentation aligned with the views of other professional stakeholders or representative audiences?
• Are these criteria sufficiently simple so as to be useful to students and workable by academics when viewing a live performance?
• What actually is the required standard of competency?
• Do the criteria used sufficiently disentangle the substance of the presentation (i.e. the factual content) from the style (i.e. can the student communicate a message effectively in a form appropriate for the audience/assessor)?
Encouraging reflection

Having spent many years teaching presentation skills, I always make the first element of my tuition reflective. We must help our students to reflect on their experiences of effective and ineffective presentation and we must ensure that they own the criteria for formative judgement. This can be done in many ways:
• Individual reflection – a short (assessable) written piece about what constitutes effective oral performance.
• Group reflection – groups are asked to discuss, prepare and present their reflective thoughts on effective or ineffective talks. Groups could be asked to design (assessable) feedback proformas.
• Group observation – asking a seminar group to observe 'real' (academic) presentations and then asking them to capture good and bad practice. These observed presentations could be live or pre-recorded. For instance, there are many Three Minute Thesis (3MT®) competition winners' films online,2 which start and conclude a fully cogent narrative very quickly indeed. This allows a group to watch three full, high-quality presentations in 10 minutes. Other potential observation opportunities abound. For instance, asking first-year students to observe third-year presentations can benefit all parties. Using academics as criticisable models may prove too challenging, but workshops (see Lee et al, 2017) using actors giving extremely poor academic presentations and then asking students to deconstruct and discuss their performance have had impressive results.

The findings of the above reflections can be captured into student-owned feedback proformas and tip sheets for future cohorts.
What should we actually assess?

Whether our criteria come from self-reflection, peer discussion or from the faculty, feedback proformas and tip lists should at least contain the broad elements in Table 7.1, which can form the basis of peer feedback, student reflection and tutor-centred discussion. It is, as mentioned previously, perfectly feasible for the summative assessment element of the content of the presentation to be carried out by an academic. In this case, the summative assessment, clear criteria of performance level and grade should be awarded based on elements 2 (Verbal) and 6 (Questions) of Table 7.1. Peer feedback about how a student could improve their transferable skills would come from the other sections.
The role of skills training – should we assess what we do not teach?

If assessment is summative then it must be aligned constructively (see Biggs, 1996) with clear learning outcomes, valid assessment tasks and appropriate explicit guidance in the skill to be assessed – so that all the students know what is expected and at what level. This does not mean that each module should contain a designated skills workshop, though these can help if well delivered. Even if formal skills training is not feasible, it may be worth providing a guide to 'good presentation practice' (see Lawrence, 2008) or showing feedback forms and assessment criteria in advance. While informal training can take the form of reflection, observation and peer discussion, presenting itself is a practical skill that is only improved by actually being on stage. As such, if we are to see improvement in our students' performance we should:
• require that our students present a number of times throughout their candidature before any formal assessment is undertaken (especially if this assessment carries weighting towards their final degree mark);
• involve our students in the design of their feedback mechanisms so they obtain formative information that is actually useful to them;
• aid our students to reflect on their own performance;
• help them to understand, process and judge any performance and feedback objectively.

Table 7.1 Broad elements for assessment criteria

1 Preparation
• Was the presenter ready, organised and prepared?
• Had they arrived early to check the logistics and did all their visual aids etc. work appropriately? (a)
• Was their timekeeping appropriate?

2 Verbal (what they say)
• Was the content of the presentation appropriate and at the right audience level?
• Was the breadth and depth of the material appropriate?
• Was there a clear opening and conclusion?
• Did the structure of the presentation form a cogent narrative?
• Was the material made engaging to the audience?
• Was any cited material appropriately referenced?
• Where appropriate, were there links to material covered throughout the course and beyond the required reading?

3 Vocal (how they say it)
• Was their pace, pitch, tone and diction appropriate?
• Did they vocally stress key elements to add emphasis?
• Did they use 'filler' words (er, um, ah, etc.) in a noticeably detrimental manner?

4 Visual (how they look)
• Was there appropriate eye contact?
• Did their physical movement, gestures and body language seem authentic and helpful to the message?

5 Visual aids (if appropriate)
• Was their use of visual aids appropriate?
• Were slides visual rather than text-based?
• Were slides legible/visible/clear/on screen for long enough?

6 Questions
• To what degree did they handle questions in an appropriate manner?

a This category does not usually appear on feedback proformas, but if you have seen a flustered conference presenter try to load an incompatible slideshow, you will recognise its importance.
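Where a department wishes to collate such proformas electronically (for example, alongside the filmed presentations discussed later in the chapter), the elements of Table 7.1 translate naturally into a simple structured record. The sketch below, in Python, is purely illustrative: the field names and the five-point rating scale are assumptions, not part of any published scheme.

```python
from dataclasses import dataclass, field

# The six broad elements of Table 7.1.
ELEMENTS = ("Preparation", "Verbal", "Vocal", "Visual", "Visual aids", "Questions")

@dataclass
class ElementFeedback:
    rating: int        # assumed scale: 1 (needs work) to 5 (highly effective)
    comment: str = ""  # free-text observation from the assessor or peer

@dataclass
class PresentationProforma:
    presenter: str
    assessor: str
    feedback: dict = field(default_factory=dict)  # element name -> ElementFeedback

    def add(self, element: str, rating: int, comment: str = "") -> None:
        if element not in ELEMENTS:
            raise ValueError(f"Unknown element: {element}")
        self.feedback[element] = ElementFeedback(rating, comment)

# Illustrative use by a peer assessor (names and comments are invented):
form = PresentationProforma(presenter="Student A", assessor="Peer B")
form.add("Vocal", 4, "Clear pace; stressed the key findings well.")
form.add("Visual aids", 2, "Slides were text-heavy and hard to read.")
```

Collecting the same structured fields from every peer and tutor makes it easier to hand each presenter a single collated record at the end of a session.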
In short, presentations are an element of the curriculum that, regardless of student learning style or preference, is best learned experientially (see Kolb, 1984; Hutchinson and Lawrence, 2011), and it is essential that, if we are to provide formative or summative assessment on the skill area, we provide multiple structured opportunities for students to plan, act and reflect in the performance area.
Audience focus – ensuring a transferable skill is actually transferable

Most students, once they leave university, will probably never present to an academic again. They will, however, present to many other audience types and differing stakeholders. As such, it is important to ask students, in preparation, to consider an audience's needs (language, level, interests, etc.). Academia is perhaps not great at doing this (see Hutchinson, 2017) – for instance, we talk about 'public engagement' as if the 'public' were a homogenous grouping, which it clearly is not. We therefore may wish to ask our students to present as if to different audiences, which in turn can develop their empathy, emotional intelligence and creativity. This simple action can ensure that presentation skills amount to more than a student's ability to rehash the curriculum in a way their lecturer finds acceptable.
Duration of presentation – what is long enough?

Given busy workloads and the constricting timetabling demands placed on us, we need to hear enough of a presentation to give feedback without taking an undue length of time. Here again, we must revisit the purpose of the entire exercise. If we are examining the information contained within the talk then the only limiter may be logistical. If we are looking for whether the student is able to convey a clear and effective message verbally, vocally and visually then this can be achieved in five minutes. It is however important to ask to see a clear beginning, narrative structure and end – not simply a short excerpt from another, longer, talk. It is also important to manage the time allocations so as to ensure each student is given time to answer questions at the end. This is not simply an academic consideration, since the skill of handling questions is itself transferable.
Group presentations – caution advised

Often, students are asked to present as a group and are then collectively assessed. There are intellectual advantages in doing so (developing cooperation and teamwork), but the underlying rationale here is often mainly logistical (the whole cohort can present within an hour if divided into five teams). While group presentations are manageable for timetabling, they raise problems for students (not least in this form of presentation's validity outside of a classroom) and for the assessor. Group presentations, in my experience, show unequal distributions of input and presentation talent – and serve to highlight the poor practice from less capable students – which is challenging if the students are to be awarded a joint grade. Even if the workload is evenly distributed and the presentation is rehearsed, the key elements that make the performance effective are the segues (style) between sub-presenters. Moreover, unless they are artificially selected, presenting teams often form from friendship groups, resulting in collectives of (for example) non-native speakers, thus amplifying many of the style versus substance problems already outlined. In short, extreme caution is advised here and decisions about group presentation assessment should be made on educational and not simply timetabling grounds.
Smartphone feedback – the objective advantages of technology

The transience of presentations means that it is difficult for staff to moderate any assessment and also difficult for the student to reconcile their own performance with any feedback given. However, students now have a means of capturing their own performance by filming with their phones. They can then revisit this film for their own reflection and learning and even upload the footage to a specific site for group peer review. In presentation feedback sessions I insist on filming the participants' presentations and then watching at least some of it back with them. I often notice things on the second pass that I do not consciously notice live. I change my pen colour so that the students can see what I noticed live and what I noticed from the film. If running a group feedback session, I ask my students to do the same. A further potential advantage here is that students may wish to upload their presentations (to file-sharing platforms or YouTube) so they can solicit feedback from a wider audience. At this stage, though, the ownership of the film, and what the student chooses to do with it, rests with the student.
Peer-assessment of presentations – is it better to give than to receive?

Many academics are reluctant to include peer-assessment and feedback in their modules, but it can offer triangulation of data and real richness for all parties. If we are to avoid a peer group telling each other that everything is 'fine' with their performance, then students must be shown how to give effective feedback and be given appropriate tools to help them. Supportive peer-assessment of presentations can be an excellent way of building strong student cohorts and thriving departmental research environments. And of course, the ultimate advantage is that a cohort taught to identify best practice, reflect, observe each other, provide constructive peer feedback and consciously implement the learning can repeat the process without the need for tutor intervention.
Providing helpful feedback – 'can't you just be better?'

As mentioned already, assessment is primarily about guiding student improvement, and so it would be sensible for an assessor to consider exactly how their actions and words are adding value. Nicol and Macfarlane-Dick (2006) detail principles of good feedback practice which may steer thinking in this regard, but fundamentally feedback should promote self-assessment and provide motivating, high-quality information to students. This should come from peer and tutor verbal feedback and well-completed proformas, as well as video capture of the presentation, and can be achieved as follows.
• At the start of the presentation session, remind the group of the session purpose – namely to help each other to improve. Emphasise the need for supportive behaviour and stress that if students want quality feedback they should also give it. Distribute feedback proformas and insist on the importance of quality, legible notes since these collated forms will be handed to each individual presenter at the end.
• Before their presentation, ask each student what feedback they need. They may have been told previously that they are too fast/slow/manic etc., and contracting with the student helps sharpen the group's observation and the value of any subsequent feedback.
• After each presentation,3 ensure that all group members (including the presenter) are given a minute or so to finesse their feedback notes. This small detail makes a big difference.
• Before examining the films, ask the student again what the audience should focus on in light of how well they feel they presented. Again, this reminds the peer group of what to focus on. (Ask them to change pen colour for the second pass.)
It is not necessary to watch all of the presentation a second time; three minutes is entirely adequate. Do not necessarily watch from the start of the film. Sometimes watching the end and the questions is more enlightening – since presenters are often more natural here.
• Start any formative feedback session by asking the student to lead. Questions like 'What are you expecting us to say to you?', 'What do you notice when you see the film of yourself?' and 'What strengths can you observe in your performance?' can start the discussion in a productive way and ensure that the student takes ownership of the information.
Presenters, when seeing themselves on film or in the aftermath of the talk, are likely to judge themselves harshly and fixate on (sometimes insignificant) elements of the presentation. For example, students presenting in a non-native language are often concerned that their slightly imperfect fluency inhibits their message. Of course, sometimes the audience does notice – but frequently such 'flaws' are not detrimental and are largely forgiven.
• Spend a short amount of time with each student asking them what they will focus on specifically the next time they present and how they will set about remedying any weaknesses in their performance.

Of course, the pre- and post-reflective elements, if required, could form assessable written pieces in their own right.
At the end of discussions, ensure that feedback forms are collected and handed to each individual student and make the films available to individuals so they can re-watch them if they require.
So where do I come in? The role of the tutor

It is important for the academic 'tutor' to consider exactly what role to take in any presentation feedback session: facilitator, coach, tutor, consultant or trainer. It is also vital that this role is made clear to the students. Shifting our role from a traditional grade-giving role to that of a facilitator who oversees peer feedback can help to iron out the potential variability of feedback between tutors. Observation of presentations by two tutors simultaneously may serve a similar function, but it is resource intensive and there's still no guarantee that assessors' views will be aligned – even if their final 'scores' are. If the tutor acts as a facilitator then the students' peers can provide the bulk of the feedback and leave the 'content' scoring to the academic.
Conclusion – presenting for the next generation

An advantage of paying close attention to audience feedback criteria and keeping filmed evidence of presentations from one year group to the next is that we can observe whether performance levels increase, standards change and audience tastes evolve. My observation over the last 20 years is that, while there is still a huge variation in capability (and there are still razzle dazzlers), performance levels seem to rise year on year. Moreover, in an ever-connected, bite-sized and cross-cut world it would seem a safe prediction that audience tastes and needs will change too. So, as we prepare students to communicate in a modern information-led age, it is vital that we actually think about how to do so in a way that helps them to formally learn – as opposed to our being content that they have had an experience and that we have told them what we think of their efforts.

Providing opportunities for reflection and observation of modelled best practice, asking students to reflect, providing clear criteria of performance and designing these criteria alongside the students, gathering objective data from a variety of sources and technologies and, vitally, providing multiple occasions to learn experientially are all vital in ensuring that our students can actually present effectively. By doing this, perhaps we too will become better presenters and less susceptible to the 'razzle dazzle' that can affect our critical judgements.
Notes

1 As a callow undergraduate, I don't think I ever read an essay or research paper by one of my tutors. As such, I was prepared to accept their criticism of my written work, since to me they were mental titans whose prose was probably dazzling. I did, however, sit in many barely satisfactory lectures delivered by people whose substance was valid but whose style was at times pitiful.
2 Three Minute Thesis (3MT®) is a global academic competition founded at the University of Queensland where researchers concisely present their thesis to a general audience.
3 Depending on how the class or workshop is to operate I may run five or six presentations concurrently, then pause for a short interlude, and then look at the films.
References

Biggs, J. (1996) Enhancing teaching through constructive alignment. Higher Education 32: 1–18.
Cooper, D. (2005) Assessing what we have taught: the challenges faced with the assessment of oral presentation skills. HERDSA 2005 Conference – Higher Education in a Changing World – Proceedings, pp. 124–132. http://conference.herdsa.org.au/2005/pdf/refereed/paper_283.pdf (accessed 4 January 2019).
Hutchinson, S. (2017) Engagement beyond the ivory tower. In: Daley, R., Guccione, K. and Hutchinson, S. (eds) 53 Ways to Enhance Researcher Development. Newmarket: Professional and Higher Partnership, pp. 95–98.
Hutchinson, S. and Lawrence, H. (2011) Playing with Purpose – How Experiential Learning Can Be More Than a Game. London: Routledge.
Ippolito, K. Assessing oral presentations. LearnHigher CETL, Brunel University. www.thegeographeronline.net/uploads/2/6/6/2/26629356/assessing_oral_presentations.pdf (accessed 25 January 2019).
Jacobs, R. (2016) Challenges of drama performance assessment. Drama Research: International Journal of Drama in Education 7 (1): 1–18.
Joughin, G. (1998) Dimensions of oral assessment. Assessment and Evaluation in Higher Education 23 (4): 367–378.
Kolb, D. (1984) Experiential Learning: Experience as the Source of Learning and Development. Upper Saddle River, NJ: Prentice Hall.
Lawrence, H. (2008) Presentation skills in the postgraduate companion. In: Hall, G. and Longman, J. (eds) The Postgraduate's Companion. London: Sage, pp. 248–264.
Lee, L., Myles, R. and Guccione, K. (2017) Partnering with actors to enhance researcher development workshops. In: Daley, R., Guccione, K. and Hutchinson, S. (eds) 53 Ways to Enhance Researcher Development. Newmarket: Professional and Higher Partnership, pp. 80–82.
Moon, J. (n.d.) Assessing oral presentations. Learning and Teaching Support Centre. pcwww.liv.ac.uk/~cll/cepd_files/Assessing_Oral_Presentations.doc (accessed 4 January 2019).
Nicol, D. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
Ramsden, P. (1992) Learning to Teach in Higher Education. London: Routledge.
Varnava, T. and Webb, J. (2009) Enhancing learning in legal education. In: Fry, H., Ketteridge, S. and Marshall, S. (eds) A Handbook for Teaching and Learning in Higher Education (3rd edn). London: Routledge, pp. 363–381.
8 Assessing and developing employability skills through triangulated feedback
Susan Kane and Tom Banham
Background

Over the past 20 years, the term employability has become ubiquitous within higher education discourse. Indeed, it has become even more prominent with the increased importance of graduate outcomes as part of the new Office for Students Teaching Excellence and Student Outcomes Framework and Access and Participation Plan. When reviewing the literature, there is certainly no agreed understanding of what employability means across the higher education sector, never mind across employers. When discussing with employers the skills and attributes they look for to bring to life what a 'great' graduate looks like, you often hear reference to 'strong communication skills', that they 'can work with others' or that they 'have the edge'. We understand where employers are coming from when referencing these idealised terms, but what does this actually mean in practical terms? If there is no clear agreement across the sector and a lack of clarity from employers, how do we expect students to understand what they need to do to become employable?

Research by the Chartered Management Institute has helped to define this further through a recent publication, 21st Century Leaders (CMI, 2018). The report concluded that the majority of employers (70%) believe that management, enterprise and leadership modules should be integrated into all degree subjects to boost employability. This report is useful to support the development of employability skills through the curriculum, but a combination of different approaches is needed to help support broader student employability.

At York, our interpretation of employability is 'the skills, attributes and behaviours an individual builds over time that can be applied and transferred in different contexts. Critically, these experiences need to be articulated in a way that employers can understand'. Feedback is an essential component of our understanding of employability as it enables students to develop a deeper level of understanding about themselves and their strengths to ensure continual improvement.
There is a significant challenge for employers when assessing graduates on capability and fit for the organisation. This is mainly due to the coaching and support graduates receive prior to employment assessment, wherein candidates are encouraged to pre-rehearse answers and present themselves in a certain way. This makes it extremely difficult for employers to differentiate and find the best candidates and can mask potential. In order to combat this issue, employers have increasingly adopted a new approach to graduate assessment and selection through the use of strengths-based approaches to selecting individuals (Linley, 2008). This approach is different to a traditional competency assessment as it focuses on future potential and assesses levels of ability and engagement. This results in a more authentic response, enabling passion and potential to be identified. The premise is that if you are good at something and love to do it, then is that not the perfect job match? Many of the large-scale graduate recruiters have adopted the strengths-based approach, with compelling evidence for fairness, prediction and positive candidate experience (Linley & Garcea, 2013).
Building a framework for employability

Many universities use graduate attribute frameworks to support students when building employability skills. These frameworks detail the qualities and skills the institution believes graduates should develop through the course of their study and engagement in student life. The graduate attribute frameworks used by many institutions are more aligned to the traditional competency methods of assessment. With the rise of strengths-based recruitment and development, there could be challenges for graduates when reflecting on their experiences gained while at university if they are conditioned under a traditional mode of assessment, most notably because the feedback mechanism and the language used will be different during a strengths-based assessment process.

Much of the established careers support at universities is focused on how to be successful in the assessment process, through the use of mock interviews and assessment simulators. While this is helpful, it can be limited to only supporting students to prepare for 'passing' assessments, rather than helping them to understand their strengths and how best to develop these for their future careers. Focusing on passing an assessment rather than providing the employer with an authentic appreciation of an individual's true self can lead to future challenges for both graduates and employers if the cultural fit and organisational values do not align. Strengths-based assessment does not allow for rehearsed answers as it is much more dynamic than the competency approach, making preparation very difficult. This also builds on recent research carried out by the Institute of Student Employers, which found that only 39% of employers think that graduates are sufficiently self-aware (ISE, 2018).

With advancements in technology through automation and artificial intelligence, an individual's behaviours and emotional intelligence will become even more critical to graduate success, as the skills required in the future will be very different. Our focus is on building self-awareness very early on and enabling students to be self-directed based on their strengths, helping individuals to build social and emotional capital. If their strengths and preferences match and align with employers, we think this will lead to a much stronger hire for the student, employer and the university.

The approach taken was to develop a consistent framework whereby students are able to reflect on activity and experiences at university and beyond; this led to the creation of the York Strengths Framework (YSF). In creating the YSF, we wanted to gain a full appreciation of approaches already embedded, while seeking to answer the question posed earlier in the chapter: 'What differentiates a good graduate from a great graduate?' We also wanted to ensure that any approach complemented the York Pedagogy to maximise the opportunity for students to identify and nurture individual capabilities and strengths, enriching their understanding of themselves and their abilities. The York Pedagogy is an evidence-based approach to designing teaching and learning interventions and is fundamental to our learning culture. Part of the design framework focuses on the ability of students to articulate their skills and knowledge to employers, by providing a concise, concrete set of learning outcomes for each programme. These are defined and developed by the programme teams themselves in consultation with students, and address employability skills specifically for the discipline.

The development of the YSF involved several stages. We initially carried out desk-based research into what 30 other higher education institutions across the UK, Australia and America were offering to students in relation to enhancing employability skills and developing leadership capability. This involved reviewing the skills, behaviours and/or attributes they were seeking to develop and the manner in which they sought to do this. It was clear that institutions in America and Australia are more advanced in their thinking, so we also visited both Monash University and the Australian National University, Canberra, to observe and participate in student leadership activities and learn from their best practice. Monash University offered a comprehensive approach to student leadership development, and like some American universities, they use the Social Change Model (Astin & Astin, 1996) to empower students to distinguish themselves as capable and competent leaders that can create positive change. This is a value-based model where leadership is viewed as a process rather than a position and requires a high level of self-awareness of personal values, beliefs, attitudes and emotions that motivate individuals to take positive action. Additionally, we reviewed a number of current research papers to gain a greater understanding of the future of work and compared the skill areas identified with the themes identified from the desk-based research.

We also conducted qualitative interviews with employers, alumni and graduate recruiters through which the difference between a good and a great
graduate became clear. Students who were able to tell a clear and compelling personal narrative in an authentic way, and to illustrate how they had worked collaboratively to bring about positive change, stood out. In addition, those who then went on to talk about how their values and strengths aligned to the purpose and values of the business had the edge over others. These discussions also highlighted an increased focus on collaboration and how a good citizen works collaboratively to bring about positive change on behalf of others and for others, within the wider community. This approach reflected the Social Change Model (Astin & Astin, 1996) referenced by some institutions and informed our decision to embed the Social Change Model within our framework. By triangulating the different data sources we found that 16 core themes emerged. The final stage of our design was to look at a small percentage of programme-level outcomes within the pedagogy to provide a snapshot of which of the 16 core themes featured more prominently than others. From this we selected our final nine strengths. Working in partnership with leading occupational psychologists Capp, we then created three levels of narrative to illustrate each of the nine strengths through the individual, group and society levels of the Social Change Model, thus increasing the opportunity for reflection, feedback and assessment (Table 8.1). As a final validation of the framework, at this stage Capp created a graduate strengths survey and circulated this to 39 graduate employers to rate the importance and application of these levels within their own organisations. This unique framework reflects behaviours actually desired by a representative range of graduate employers, as well as providing a common language and consistent approach for all students, which enables them to reflect on their entire university life and create an employability story which is personal to them.
Assessing and developing employability

York was keen to embed a development programme that utilised strengths methodology to enable students to identify what they are good at, what they enjoy doing and, most importantly, how they can use these in the transition from education to work. In 2017, York introduced the 'York Strengths Programme'. This programme offers every one of its first-year students a unique opportunity to enhance their self-awareness, careers confidence and ability to focus their subsequent job searches on their most personally relevant sectors. The Programme was designed in partnership with Capp and with input from six major graduate recruiting organisations: Aviva, PwC, Teach First, Glaxo, the NHS and Clifford Chance. All 4300 first-year undergraduate students at York are expected to take up this intensive development opportunity and leave with an action plan. The Programme consists of three components. First is an immersive online situational strengths test, assessing both behavioural and cognitive aspects of the YSF. Students are invited to respond to a series of scenarios by ranking
Table 8.1 York Strengths Framework

Strength – Relationship Builder: Builds honest relationships based on trust and credibility, feeling energised by collaborating with others and working towards shared goals.

Individual: Students strong in Relationship Builder know what individual qualities they bring to a working relationship and demonstrate a natural warmth and engagement when acting in congruence with these qualities. They build trust and credibility by proactively seeking to understand other people, and use this understanding to develop authentic and honest relationships.

Group: Students strong in Relationship Builder are energised when they are collaborating with others and working towards shared goals. They keep everyone engaged with the group's goals by building relationships and understanding how everyone's needs link to the overall aim of the group. Recognising that differences in viewpoint are inevitable, they manage diverse opinions in a respectful manner.

Community: Students strong in Relationship Builder view working with the community as a two-way process. Through engaging and honest communication, they leverage the expertise of the community to create positive change. This also helps them to build relationships so they can further understand the community's needs and find a common purpose.
the available response options from most to least desirable. Depending on the strength being measured, students receive a score for that strength which reflects how low or high they are in that behaviour. The report shares back with students their three highest strengths; these scores are relative to the person and are not compared with those of other students. The report provides students with self-driven feedback on their strength areas and provides the foundation for the immersive development day, where feedback will be primarily observer-led. This first online assessment is also an opportunity for students to experience the type of upfront assessment used widely by graduate recruiters, a well-documented, valid, fair and innovative methodology (McDaniel & Nguyen, 2001; Deakin & Dovey, 2017). The second component is the Strengths Day: a highly participative series of short exercises, each designed to focus on one or two of the nine strengths in York's framework. The day takes the approach of focused micro-exercises, following research by Lievens and Sackett (2016), which found that exercises lasting only a few minutes provided higher validity in terms of the
behaviours they assess and increased candidate engagement. Students are initially reminded of the strengths through a Strength Gallery at the start of the day and invited to share with each other examples of where they have applied these strengths to date. The strength matrix is shared with them and they are encouraged to self-reflect after each activity and score themselves based on their level of competence and engagement with the task. They are placed in groups of six and allocated an observer who works with them throughout the day. At the end of the day each student has a feedback session with the observer to understand which two strengths they displayed most strongly throughout the day and to discuss their personal reflections as documented in a learning journal, a key element of experiential learning (Kolb, 2015). The final component is a new iteration of the York Award, the university's certificate of personal development. Originally the first of its type in the UK, this version of the Award enables students to reflect on their own experiences and consider how accurately the feedback of the first two components resonates with them and what they might do with this new level of self-awareness. From the pilot of the Strengths Programme we know that an early grasp of strengths can shape and focus a student's consideration of potential careers and the acquisition of relevant work experience and competencies. Employers also recognise that applicants with enhanced self-awareness are likely to have made more informed choices in their applications and thus progress more successfully in their careers.
Impact and results

In the evaluation of the Strengths Pilot Programme, in which 400 students took part, students provided positive qualitative feedback about the event on the day. In addition, we found that students who attended the event were more confident in their career prospects and 20% more likely to report having gained recent work experience when surveyed in the following academic year. Additional feedback on the programme from students who participated in the following academic year indicates that 81% would recommend it to others.

You think you know yourself – so it's a way to open yourself and make you realise there are some bits you can improve and some bits that you are not aware of. It was useful.
(Student participant, 2017)

Recruiters also regard the programme as highly attractive for future talent.

The York Strengths Programme is a great concept. I was impressed with the consultation process which included a number of employers from a range of sectors and the feedback that we provided was taken on board and very much helped to shape York Strengths. I have also been fortunate
enough to see one of the initial trial days and there was a real focus on students finding out what their strengths and development areas were but with appropriate levels of support in place too.
(Andrew Bargary, PwC)

By taking part in the York Strengths event, the students are able to understand more about what their strengths are by taking a self-assessment. They then come along to an amazing day full of exercises and activities, where they get to understand what their strengths mean for them in practice. There are no other universities doing things in this way or to this extent, so it's an amazing opportunity for York students.
(Emma Trenier, Capp)
Final steps

Second-year undergraduate students are then invited to apply for an intensive three-day leadership programme designed around the YSF, where they explore the strengths through the group lens. These applications are assessed by third-year undergraduate students, giving the third years another opportunity to develop their own assessment and feedback skills and to offer qualitative peer feedback. Day 1 focuses on gaining a greater understanding of the strengths through employer talks, panel debates and short activities that demonstrate these strengths in action. At the end of the initial day, students are put into groups and presented with work-based challenges that they have to address and design a solution to. Day 2 is spent exploring this challenge and co-creating a solution that they all agree on, which they present to a panel of employers at the end of the day. They have access to employers for guidance on marketing, finance, communication, inclusion and engagement. They also have a number of team challenges to participate in during the day, and they are observed throughout by a member of the careers and placement team. Day 3 is focused purely on assessment and feedback, drawing together all the learning from across the three days and enabling students to create their own personal narrative and reflections around the YSF. Students are required to provide feedback to each other on the contribution they made and what had the greatest and least impact on others. This is managed through the goldfish bowl technique, whereby each student leads a discussion on the feedback the group wishes to give to a peer and then delivers that feedback to the peer on behalf of the group, in a constructive and supportive way. Each student also receives one-to-one feedback against the YSF from the member of the careers and placement team; feedback from the employer panel on their solution and presentation; and individual feedback on their personal impact from the employers they contacted for guidance.
The most useful thing in the entire course. It was all really positive, and there was a tangible atmosphere in my group of, not just self-improvement, but team improvement. It was great to acknowledge how far everybody had come in such a short space of time, and also to identify ways in which we could continue to improve.
(Student participant, 2018)

The key benefits of the group feedback discussions included team learning, facilitator support and multi-perspective critical thinking.

I welcomed the inclusion of reflective group discussions into the programme, not as a substitute of, but rather, complementary to my own reflection of my learning and development throughout the programme. For me, the non-judgemental reflective group discussions and feedback session were beneficial in different ways. The interactive, supportive and multi-perspective nature of the reflective group feedback discussions was particularly appealing to me and confirmed my positive contribution to my team as well as areas I needed to improve to for personal development.
(Student participant, 2018)

Really interesting way to give feedback on your peers and the best feedback session I have ever been a part of. Really great to hear what the facilitator had to say as they were observing all day in silence so was nice to hear their feedback after all of Day 2.
(Student participant, 2018)

Students finally engage in mini strengths-based interviews with employers. This interview is also recorded so that they can review their performance and reflect on the personal impact it has. It is also a unique opportunity to observe other students and the responses they give to the questions they are asked. They then receive feedback on their responses from students, facilitators and employers.

Really useful, both to undergo the experience and get comprehensive feedback, as well as to watch others and learn from them. Lots of very good examples to draw upon.
(Student participant, 2018)

The structure of the programme – theoretical approaches in workshops, practical applications through a self-led project followed by feedback sessions and reflective interviews – resulted in a steep learning curve. Throughout the course, I myself, and everyone involved, quickly acquired new skills, refined existing ones, identified strengths and weaknesses.
(Johannes, student participant, 2018)
Key strengths

1. Co-creation of the approach with alumni, employers and students. This ensures that the framework is informed by current research and practice and is mindful of the diverse perspectives of all stakeholders. It has also resulted in high levels of commitment from employers and alumni in the roll-out of the approach. In addition, the Student Union has actively advertised and endorsed the approach.
2. Executive Board support. Due to the sponsorship of the approach by the University Executive Board, finances were made available to ensure that all first-year undergraduates had the opportunity to engage with the York Strengths Programme.
3. Peer review and feedback. Providing a framework which describes the strengths in detail enables a shared narrative and understanding. This has facilitated qualitative peer feedback, both through students assessing application forms and through students providing feedback to each other as a result of participating in the three-day leadership programme.

Recommendations

1. Maximise the engagement of all students by designing a flexible intervention that is desirable to all student groups, e.g. distance learners and mature students. This would also include building the programme into the curriculum.
2. Maximise academic support by engaging academics in the design process so they can see how it supports the pedagogy, identifying champions to help promote the approach and involving them in the delivery of the programme.
3. Enhance the quality assurance and consistency of the assessment process not only by providing initial training and support in the framework and assessment methodology, but also by dip-sampling feedback reports, observing some one-to-one feedback sessions and facilitating peer support to share best practice.
References

Astin, A. and Astin, H. (1996) A Social Change Model of Leadership Development Guidebook. Los Angeles, CA: Higher Education Research Institute.
CMI (2018) 21st Century Leaders: Building Practice into the Curriculum. London: Chartered Management Institute.
Deakin, P. and Dovey, H. (2017) Can combining assessments improve test fairness and enhance user experience? Assessment and Development Matters 9 (1): 25–29.
ISE (2018) ISE Development Survey 2018. London: Institute of Student Employers.
Kolb, D. (2015) Experiential Learning. Upper Saddle River, NJ: Pearson Education.
Lievens, F. and Sackett, P. R. (2016) The effects of predictor method factors on selection outcomes: a modular approach to personnel selection procedures. Journal of Applied Psychology 102: 43–66.
Linley, A. (2008) Average to A+: Realising Strength in Yourself and Others. Coventry: CAPP Press.
Linley, P. A. and Garcea, N. (2013) Engaging graduates to recruit the best. Strategic HR Review 12 (6): 297–301.
McDaniel, M. A. and Nguyen, N. T. (2001) Situational judgment tests: a review of practice and constructs assessment. International Journal of Selection and Assessment 9: 103–113.
9 Innovative assessment
The academic's perspective
Lin Norton
Introduction

In this chapter, I suggest that the context of assessment in higher education needs to be taken into account when considering innovations. This includes the 'view from the ground'. By this, I mean academics' perceptions of the affordances and constraints to assessment and feedback practices that are generally accepted as pedagogically advisable. The academic's view is particularly important in the current higher education climate, where the pressures include satisfying fee-paying students, competing in the league tables and demonstrating teaching excellence. In such a context, to introduce assessment that is innovative carries with it certain challenges and potential risks. It is not sufficient to have a well-thought-out and pedagogically sound idea for new assessment, be it designing a new task or implementing new marking and feedback processes. To be successful, we have to carefully negotiate a series of hurdles, which I explore here by drawing from the literature and from some of my recent research with both new and experienced academic staff from a wide range of institutions in the UK. Bearman et al (2016) argue that much of the assessment literature focuses on the learner and their experience of assessment rather than on the staff who design, implement and judge assessment. The perspective of the academic on assessment has been the thrust of many research studies I have been involved with over the past 20 years or so. Much of this later work was enabled by the Write Now Centre for Excellence in Teaching and Learning (CETL). Examining the views of academics enables us to consider some of the complex elements involved in assessment practice. This may give us a better understanding of why there can often be a mismatch between espoused and actual assessment practice at the macro level of the institution, the meso level of the subject discipline and the micro level of the individual academic.1
Assessment in higher education: a wicked problem?

Ramley (2014) used the concept of the 'wicked problem' developed by Rittel and Webber (1973) in her discussion of the changing role of higher education.
Wicked problems are those that cannot be easily defined, as they change as we study them, so a clear-cut solution is never easy. Every response creates new problems or challenges of its own. These are unique and interrelated within larger, more complex problems, so they are constantly changing and need flexible and collaborative responses. I would argue that assessment and feedback practice in higher education is a 'wicked problem'. Assessment and feedback are interlinked processes comprising assessment design, marking and feedback, all of which are fundamentally contextualised in what Malcolm and Zukas (2009) describe as the messy experience of academic work. This in itself is embedded within broader contexts of international drivers and institutional policies.
The macro level: the national context

In the UK, we have to consider the bigger picture of the honours degree system. Medland (2016) cites the work of the Burgess Group (Universities UK, 2007), which reported that the UK honours system was no longer fit for purpose since 'it cannot describe, and therefore does not do full justice to the range of knowledge, skills, experience and attributes of a graduate in the twenty first century' (p. 5). This report led to two key drivers for changing the assessment process: i) the external independent quality assurance processes and ii) the student perspective. Quality assurance procedures now dominate assessment policy and practices, but the pervasive influence of the National Student Survey (NSS) in the UK creates tensions between the two. From the academics' viewpoint, this leads to a real dilemma when trying to satisfy two competing requirements, such as students' expectations of prompt return of feedback and institutional quality assurance procedures such as second marking and exam boards, which all take time. This can lead to a conflict of priorities, especially given increasing student numbers and increased workloads. In a questionnaire study with 586 new university teachers from 66 institutions in the UK, we looked at the affordances and constraints in assessment design considered to be 'pedagogically desirable' (Norton et al, 2013). We found evidence of desirable assessment design practice, such as 'designing assessments to encourage students to take responsibility for their own learning progress'. This was mentioned by 86% of our respondents. The second most desirable assessment practice they agreed with was the 'emphasis on assessment for rather than of learning in their own practice' (75%). Interestingly, there was evidence to suggest that taking a university teaching programme had 'changed my views on assessment practice', agreed by 75%, and an awareness that 'assessment methods are needed to improve current practice' (69%). These are encouraging findings and support the value of formal university teaching programmes. However, we also found that what academics were learning on these courses was not always put into practice for a number of reasons that we termed 'constraints', many of which are addressed elsewhere in this book (e.g. practical, little incentive to innovate,
students going through the motions and focusing on grades). The top two constraints to desirable assessment practices were what we called external factors (i.e. high cost, time and student numbers), mentioned by 75%, followed by a perception that there was little incentive to innovate in assessment practice (61%). In a further study using the same questionnaire with 356 experienced lecturers from two institutions, very similar percentage agreements were found (Maguire et al, 2013; Norton et al, unpublished). Both these studies support Bearman et al's (2016) claim that there is a gap between what staff conceptualise as good assessment practice ('work as imagined') and their actual practice ('work as done') (Bearman et al, 2016, p. 546). In our own research, we have attempted to look at some of the reasons for this gap by analysing some of the open-ended comments in the second questionnaire study. The quotes presented here are samples, some of which have been presented at a conference and some of which come from a content analysis in our current paper (Norton et al, unpublished). In all cases the quotes are presented as illustrations rather than objective evidence, so there has been some subjectivity in the quotes I have selected. In all cases I have tried to give a fair representation of how academics experience the challenges of innovating assessment in both of the participating institutions. One of the perceived issues was to do not only with the wider national assessment culture played out in the institution, which puts heavy demands on staff, but with how our colleagues react; this can lead to feelings of isolation.

An issue can be class size and the attitude of other staff in the same school, breaking rank on feedback does not win favour with mature staff who have their own entrenched views on teaching.
(Engineering, Institution A)

On any specific course if feedback is to be effective there needs to be group buy in from the academics and support staff for that course so that the processes are followed and the confidence is given that everyone is working together and the resulting feedback looks the same to all students.
(Marketing, Institution B)

Another frequently mentioned issue, related to the national driver of the NSS, was to do with students themselves, particularly when it came to feedback processes:

Students are also responsible for engaging, reflecting and acting on the feedback given. Lecturers can only do so much to encourage students to engage with feedback for future development, but students need to take the responsibility also.
(Management, Institution A)
My observation is that students, as a body, do not fully appreciate the feedback they are given. Whilst no (sic) all, there are students who cannot accept that points made a (sic) valid, and they do not then go on to benefit from it.
(Health Science and Practice, Institution B)

However, despite these challenges, many of these lecturers had clear beliefs about how to deal with them.

I use a two-stage approach to feedback, a general feedback to the class outlining the main merits and failings of the course work. This is discussed in class with time for questions and reflection. Thereafter an offer is made for individual feedback, normally the takeup is small, but this approach always gets improving course work marks with the progress of time.
(Engineering, Institution A)

Informal discussions between colleagues is very effective in sharing practice.
(Sport, Institution A)

If marking should be to encourage not penalise, directly feeding forwards and backwards in a friendly, confident yet slightly authoritarian way [i.e. displaying one's own enthusiasm for one's chosen subject matter] should be encouraged.
(Music, Institution B)

I am in the process of designing an innovative approach to assessing research methods, based on a credit bearing research assistance programme I was involved with in a US University.
(Education, Institution B)

Examples like these show that individuals can make space to reflect on assessment and to innovate.
The meso level: the subject discipline

Innovations are often the result of individual staff initiatives where the support of colleagues is important, but so too is the institutional response. Part of the challenge is to do with the influence of the subject discipline. External examiners are the gatekeepers of academic standards in a subject, but they are usually appointed for their subject expertise rather than for their assessment expertise (Medland, 2017). This can have the unfortunate consequence of constraining some more innovative assessments in favour of maintaining the assessment status quo in the discipline. Our survey findings with new lecturers showed broad discipline differences in both desirable assessment practice and
constraints. We used Biglan's (1973a, b) taxonomy, which allocates subjects on the basis of whether they are hard or soft and pure or applied. Our statistical findings suggested something of a continuum, with hard-pure lecturers at one end (less likely to agree with desirable practice and more likely to agree with constraints) and soft-applied lecturers at the opposite end (higher on desirable practice and lower on constraints). The hard-applied and soft-pure lecturers could be seen as somewhere in between these two extremes (Norton et al, 2013, pp. 245–246). This was partially replicated in our second survey with more experienced colleagues, which showed that lecturers from soft-applied subjects scored highest on desirable assessment practices (Norton et al, unpublished), but, looking at the open-text comments on the questionnaire, there are other factors at play.

… not all lecturers have control over the assessment process employed in courses and modules, this is often controlled by more experienced colleagues. This leads to a variety of potential barriers to assessment practice change which can include (1) custom and practice (2) reluctance to use new methods and technology.
(Family studies, Institution A)

The process of designing (and delivering) assessment is often isolated out from the crafting of good teaching and learning experiences because of bureaucratic processes. The bureaucratic systems I am referring to are mostly internal, so imposed weighting on assessment pieces, time allocated for marking and feedback and exam boards. I do believe that quality assurance and external benchmarks are seriously important and need to be applied to the processes of course (and assessment) design so that formative assessment adequately prepares students for summative assessment.
(Education, Institution B)
The micro level: the individual academic

Although the context is complex and multi-layered, it is ultimately the individual academic who enables or inhibits innovations in assessment and feedback (Adachi et al, 2018). Wei and Yanmei (2017), looking at university teachers in an elite Chinese university, found that innovations in feedback practice were unlikely to be adopted unless confirmed as effective in teachers' own evaluations. This is a little different to the situation in the UK, where student evaluations such as the NSS are powerful determinants of assessment practice, but there are other constraints. One of the early studies we carried out in the Write Now CETL project was an interview study with 29 academics from four universities, seeking their views on assessment, learning and teaching (Norton et al, 2009). Interviews were carried out by two of the research team, who conducted an in-depth interview with each lecturer. Using a semi-structured schedule, each interview had a phenomenological emphasis, trying to capture the individual practitioner's
lived experience and personal view. Interviews were transcribed verbatim and then analysed thematically using an 'ideal versus actual' dichotomy, after Murray and Macdonald (1997), as a conceptual framework. Academics in this study articulated a personal pedagogical philosophy of learning, teaching and assessment, but there were obstacles to their 'ideal' when talking about the 'actual' processes associated with assessment, marking and feedback. When thinking about assessment for learning, for example, they wondered if it was pedagogically appropriate and fit for purpose; they were also concerned about how motivational it might be for strategic students who focus on grades. One particular aspect that stood out for them was the negative side of power relations, in which assessment practice often turned out to be one-way rather than their desired 'ideal' two-way process. These concerns were highlighted and discussed in terms of the current situation where the academic is seen as a service provider and their students as customers. Raaper (2016), in writing about power relations, makes the point that 'assessment processes are underpinned by a fundamental element of domination between assessor and assessed' (p. 178). Related to this were our findings that some academics found it difficult to deal with students' negative emotions when given feedback they did not want to get (usually a lower grade than expected). They also felt stressed about marking, disheartened by students who did not appear to appreciate feedback or who could not comprehend the lecture material, or who were unprepared and handed in work of a poor standard. Feeling confident and competent is an important consideration when innovating assessment. Quesada-Sierra et al (2016) carried out a survey involving 427 lecturers from 18 Spanish universities from a range of disciplines, analysing lecturers' perceptions of their assessment practices. One of their main findings was that many academics did not feel sufficiently competent to introduce more innovative assessments. Reimann and Sadler (2017) looked at participants in a workshop on assessment and found a variation in their understanding of assessment which appeared to be a consequence of their varied contexts. Price et al (2012) have been a voice for the concept of assessment literacy for both staff and students, and Forsyth et al (2015) report on the need for staff development. The question of competence was addressed indirectly in our second survey questionnaire, which asked whether or not academics felt formal training in feedback and marking should be given. The majority of our respondents (75% and over) agreed.

Training in assessment and feedback for learning should be mandatory for all higher education teaching staff, and with continuing professional development in this area scheduled at intervals in one's teaching career.
(Architecture, Institution A)

…training sessions on marking and feedback for the HE beginners (juniors and/or international staff) would be a great asset.
(Psychology, Institution B)
Before concluding this chapter, I want to briefly mention two earlier studies which were also concerned with the academics' perspective. In a questionnaire study with 45 lecturers from a single institution, there were two main findings (Norton et al, 2012). First, there seemed to be little evidence of professional training in giving feedback, but there was a general consensus about making feedback as personal as possible. Second, there was an issue with time impacting negatively on feedback, shown both in the questionnaire analysis and in the free-text comments. In our results, the lecturers from the hard-applied subjects seemed to feel this more strongly, although the comments showed that academics from professionally oriented disciplines such as teacher education contextualised the whole notion of feedback differently. In the same year, I carried out another survey with over 70 staff from a different institution, asking them what their most pressing concerns were in relation to assessment and to feedback (Norton, 2012). The top three concerns for assessment, from a thematic analysis, were: 1) assessment design purpose; 2) accuracy in marking; and 3) perceived constraints to 'good' assessment practice. The most frequently mentioned feedback concerns were: 1) students' engagement with feedback; 2) making feedback effective; and, in equal third place, 3) feedback to feedforward, and time and workload issues. Both studies, although carried out in different institutions using different questionnaire approaches, show some remarkable similarities that, together with the bigger surveys discussed earlier in this chapter, suggest that staff are faced with many challenges that might constrain their willingness to innovate in assessment and/or feedback. And yet, there is a remarkable amount of interest in innovation, judging by the number of books published on the subject, including this one! There are also many journal articles written and conferences held that are specifically on the theme of assessment. Simply talking with colleagues on a daily basis indicates a widespread commitment not only to their subject but also a genuine wish to help their students grow and develop. Assessment, feedback and marking practices are recognised as powerful tools.
Conclusions

The academics' views reported in this chapter confirm that assessment and feedback in higher education can indeed be a wicked problem. However, there is much that the practitioner can do to bring about successful assessment innovations in a context that does not always appear to be sympathetic.
Introducing innovative assessment, whether in the design or in the marking and feedback processes, needs very careful planning. The process can be significantly enhanced when we collaborate with others to get 'buy-in', particularly from those who are in positions of influence. I am a great believer in the affordances of individual inspiration and enthusiasm, but to grow the influence and impact of our innovations so that they bring about real change, we need robust evidence, proactive dissemination and commitment at all levels, especially from the students themselves. To be both credible and persuasive, we may present evidence from the scholarship of teaching and learning literature. Perhaps more powerfully, we may prefer to present our own evidence from some form of practitioner research. Carrying out pedagogical research to evidence the impact of an innovation can take the form of action research, where the aim would be to improve some aspect of assessment and feedback practice and thereby our students' learning experience (Norton, 2019). Pedagogical action research not only enables us to respond to assessment issues in our own practice but also enhances our ability to act as reflective practitioners; we can disseminate our findings within and beyond our institution and thereby effect change, perhaps at the subject level; we can also use our reflections to support applications for HEA fellowship. Practical guidance on how to carry out a pedagogical action research study, together with some case studies from across the sector, can be found in Arnold and Norton (2018a, b). Another way to make assessment innovation more effective is to involve students. Working with students as partners is a growing movement across the sector globally (Cook-Sather et al, 2014) and in the area of assessment is increasingly seen as advantageous (Meer & Chapman, 2014), although it still has a way to go before being fully accepted (Deeley & Bovill, 2015). Nevertheless, this is a promising way forward as it is more specific than relying on large-scale generic student evaluations such as the NSS in the UK and the Course Experience Questionnaire in Australia. Students can also be involved as valuable members of a community of practice (see Wenger-Trayner & Wenger-Trayner, 2015 for an accessible definition and discussion). Networks such as these can be very influential levers in changing educational practice, and they also have the benefit of enabling us to work with like-minded individuals who can support and encourage us in our innovations. Working together can enable us to deal positively with the challenges of conceptualising assessment innovation as a wicked problem. With others, we can bring about innovations that aim for real transformative change at the micro level of the individual lecturer, the meso level of the subject discipline and ultimately at the macro level of the institution.
Note

1 See Fanghanel (2007) for an exposition of how this framework can be used in a higher education context.
References

Adachi, C., Tai, J. H. M., Dawson, P. (2018) Academics' perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment and Evaluation in Higher Education 43 (2): 294–306.
Arnold, L. and Norton, L. (2018a) HEA Action Research: Practice Guide. York: Higher Education Academy.
Arnold, L. and Norton, L. (2018b) HEA Action Research: Sector Case Studies. York: Higher Education Academy.
Bearman, M., Dawson, P., Boud, D., Bennett, S., Hall, M., Molloy, E. (2016) Support for assessment practice: developing the assessment design decisions framework. Teaching in Higher Education 21 (5): 545–556.
Biglan, A. (1973a) Relationships between subject matter characteristics and the structure and output of university departments. Journal of Applied Psychology 57 (3): 204–213.
Biglan, A. (1973b) The characteristics of subject matter in different academic areas. Journal of Applied Psychology 58: 195–203.
Cook-Sather, A., Bovill, C., Felten, P. (2014) Engaging Students as Partners in Learning and Teaching: A Guide for Faculty. San Francisco, CA: John Wiley & Sons.
Deeley, S. J. and Bovill, C. (2015) Staff student partnership in assessment: enhancing assessment literacy through democratic practices. Assessment and Evaluation in Higher Education 42 (3): 463–477.
Fanghanel, J. (2007) Investigating University Lecturers' Pedagogical Constructs in the Working Context. York: Higher Education Academy.
Forsyth, R., Cullen, R., Ringan, N., Stubbs, M. (2015) Supporting the development of assessment literacy of staff through institutional process change. London Review of Education 13 (3): 34–41.
Maguire, S., Norton, L., Norton, B. (2013) What do academic staff think about assessment and how can it help inform policy making and approaches to professional development? 4th International Assessment in HE Conference, Birmingham, 26–27 June 2013.
Malcolm, J. and Zukas, M. (2009) Making a mess of academic work: experience, purpose and identity. Teaching in Higher Education 14 (5): 495–506.
Medland, E. (2017) Examining the Examiner: Investigating the Assessment Literacy of External Examiners. Final Report. London: Society for Research into Higher Education.
Medland, E. (2016) Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment and Evaluation in Higher Education 41 (1): 81–96.
Murray, K. and Macdonald, R. (1997) The disjunction between lecturers' conceptions of teaching and their claimed educational practice. Higher Education 33: 331–349.
Norton, L. (2019) Action Research in Teaching and Learning. A Practical Guide to Conducting Pedagogical Research in Universities (2nd edn). Abingdon: Routledge.
Norton, L. (2012) Assessment and feedback: is there anything more to be said? Perspectives on Pedagogy and Practice 3: 1–32.
Norton, L., Norton, B., Floyd, S. Assessment, marking and feedback practice: lecturers' views of the 'reality on the ground'. Unpublished manuscript.
Norton, L., Norton, B., Sadler, I. (2012) Assessment, marking and feedback: understanding the lecturers' perspective. Practitioner Research in Higher Education 6 (2): 3–24.
Norton, L., Norton, B., Shannon, L. (2013) Revitalising assessment design: what is holding new lecturers back? Higher Education 66: 233–251.
Norton, L., Norton, B., Shannon, L., Phillips, F. (2009) Assessment design, pedagogy and practice: what do new lecturers think? Paper presented at the annual conference of the International Society for the Scholarship of Teaching and Learning (ISSOTL 2009), Indiana University, Bloomington, Indiana, USA, 22–25 October 2009.
Price, M., Rust, C., O'Donovan, B., Handley, K., Bryant, R. (2012) Assessment Literacy: The Foundation of Improving Student Learning. Oxford: Oxford Centre for Staff and Learning Development, Oxford Brookes University.
Quesada-Sierra, V., Rodríguez-Gómez, G., Ibarra-Sáiz, M. S. (2016) What are we missing? Spanish lecturers' perceptions of their assessment practices. Innovations in Education and Teaching International 53 (1): 48–59.
Raaper, R. (2016) Academic perceptions of higher education assessment processes in neoliberal academia. Critical Studies in Education 57 (2): 175–190.
Ramley, J. A. (2014) The changing role of higher education: learning to deal with wicked problems. Journal of Higher Education Outreach and Engagement 18 (3): 7–21.
Reimann, N. and Sadler, I. (2017) Personal understanding of assessment and the link to assessment practice: the perspectives of higher education staff. Assessment and Evaluation in Higher Education 42 (5): 724–736.
Rittel, H. W. J. and Webber, M. M. (1973) Dilemmas in a general theory of planning. Policy Sciences 4: 155–169.
Universities UK (2007) Beyond the Honours Degree Classification. The Burgess Group Final Report. London: Universities UK.
Wei, W. and Yanmei, X. (2017) University teachers' reflections on the reasons behind their changing feedback practice. Assessment and Evaluation in Higher Education 43 (6): 867–879.
Wenger-Trayner, E. and Wenger-Trayner, B. (2015) Introduction to communities of practice. A brief overview of the concept and its uses. http://wenger-trayner.com/introduction-to-communities-of-practice (accessed 4 January 2019).
10 Developing emotional literacy in assessment and feedback
Edd Pitt
Introduction

The sometimes undervalued and underexplored role of students' emotions within learning contexts has in recent times received attention within the literature. Such research attempts to explain the effect emotions have upon learning in general terms: within higher education (Schutz & DeCuir, 2002), in adult learning (Dirkx, 2001), in relation to goals (Turner et al, 2002) and in motivation research (Seifert, 2004). However, more recently Rowe, Fitness and Wood (2013) posit that the functionality of emotions within feedback situations has not been systematically examined. Emotions can be classified as mental states that arise spontaneously rather than through conscious effort (Pekrun & Stephens, 2010). When any of us, including students, receives feedback, it has an emotional impact, which can influence both processing and subsequent behaviour. Price, Handley and Millar (2011) argue that the likelihood of a student acting on the feedback they receive is dependent upon their previous experiences and pedagogical intelligence. In this chapter, I propose a conceptual model which offers a perspective whereby the principle of feedback is viewed in a more holistic sense, taking into consideration the student's previous learning experiences, their emotions at the time of receiving the feedback and their assessment, feedback and emotional literacy. It suggests how we as lecturers can design learning environments which enable students to develop their assessment, feedback and emotional literacy so as to positively enhance their academic resilience, comprehension, utilisation and behavioural response to feedback. Fundamental to this conceptual model is the existence of a learning environment that is open, honest and reflective, with ongoing dialogue surrounding emotions that all within the environment agree to uphold. Beard, Humberstone and Clayton (2014) suggest that emotions embrace both moods and feelings. On this basis, one can assert that an inner feeling is present, existing alongside a physiological response such as a raised heart rate or laughter. Further, an individual's predisposition and in situ decision making temper their emotional reaction (Falchikov & Boud, 2007). Central to an individual's ways of knowing are his/her emotions, which can
obstruct or stimulate learning by affecting attention deployment, memory and problem-solving performance (Dirkx, 2001; Falchikov & Boud, 2007). The effect of students' emotional engagement is of particular significance to us as lecturers, considering that the effect could potentially last for a sustained period. If a student receives what they perceive as negative feedback, the consequence could be that the learner is unreceptive to learning for a long time. Yorke (2003) adds that there is considerable variability in the way students respond to failure. From a practical perspective, a student's ability to regulate their emotional reaction may explain why, for some students, emotions are positive towards their future learning while for others they are counterproductive. It follows therefore that a student's resilience to the potential effects of their emotional reactions is an important consideration for us as mass higher education practitioners. However, we perhaps should not get too carried away with interpreting emotional reactions as relatively enduring experiences. Beard et al (2014) have suggested that positive emotional reactions of happiness, relief and pride are rather ephemeral for many students. Dirkx (2001) has argued that it is acceptable for us to demonstrate acknowledgement of emotional reactions by students, as this will allow the student the opportunity to express the emotion and then be more able to overcome it and return to concentrating upon the learning process. We can demonstrate positive responses to students showing their emotions by being empathetic (Falchikov & Boud, 2007). However, the degree to which we have opportunities to do this, even if we are able and willing to, is rather limited within mass higher education assessment and feedback situations (Pitt & Norton, 2017). Much of this literature does suggest a rather robotic response from the student and does not always highlight there being many opportunities for us to discuss emotions with students. Indeed, it could also be the case that some of us may not wish to engage in such practice. Despite the challenges this presents to our already busy teaching schedules, if the culture of the learning environment does not promote students expressing emotions in front of their peers and us, emotions could become subdued. So, how is our feedback to become more useable?
A conceptual model to develop emotional literacy

In the first year of university, students experience an emotional roller coaster, which transcends many aspects of their lives (Beard et al, 2007). More recently, Beard et al (2014) have suggested we view students as affective and embodied individuals, concluding that, in order to understand this phenomenon, clearer theorisation of students' emotional experiences is required. We therefore need to question the established mechanism of simply giving students feedback and expecting them to attend to it and adjust in subsequent assessments (Winstone & Pitt, 2017). The conceptual model in Figure 10.1 addresses the role emotions play within a practical teaching framework.
Figure 10.1 A conceptual model to develop emotional literacy (model elements: stressor, previous learning, emotional state, learning environment, student behaviour)
Stressor

The stressor in this model is the summative assessment. Within mass higher education, the power of summative assessment should not be underestimated. Students are extremely focused upon their summative assessments and the associated grade outcomes (Pitt & Norton, 2017). The stressor is a natural starting point as it defines the situation that the students find themselves in and affects how we design the learning environment.
Previous experiences

Students enter university having encountered a multitude of previous learning experiences, which are naturally not within our control. This learning baggage will undoubtedly be engrained and have a substantial influence upon their expectations about what constitutes learning in higher education. To attempt to change students' perceptions of, and induct them into, a new learning environment, we should enable them to experience:
• opportunities to reflect upon their previous assessment and feedback experiences from an emotional standpoint;
• opportunities to explore how their emotions underpin their approach to learning, assessment and feedback behaviours;
• situations in which we acknowledge their prior feelings by initiating dialogue right at the outset of their HE experience;
• a learning environment that is supported by an appreciation of the role that emotions play in their learning in order to mitigate misunderstanding;
• our commitment to develop their emotional literacy over time, in order to improve their subsequent feedback usage and assessment performance.
Emotional state

Emotional maturity, or the ability to control one's own emotions in times of disappointment, should be factored into any potential understanding of student assessment and feedback behaviour. I have previously reported that students are often at differing levels of emotional maturity and experience emotional backwash in feedback situations (Pitt & Norton, 2017; Pitt, 2017). Some students report adaptive skills but, in the main, many report maladaptive behaviour when things do not go well for them (Pitt, 2017). A person who has a history of failure and fails an assessment may make a different attribution (such as inability) from a student who has a history of success and fails a test (such as lack of study). Students' behaviours are guided by emotional responses to tasks and task conditions (Seifert & O'Keefe, 2001). Within a situation, a student given a task will produce an affective response; this in turn will be manifested in an associated behavioural outcome. Developing students' emotional literacy will help them to control and manage their emotional reactions so that they are adaptively able to use feedback. What I am proposing here is that students' emotional literacy interacts with their assessment and feedback literacy to initiate feedback usage in subsequent assessment tasks.
Learning environment – strategies to develop emotional literacy

Students enter the learning environment with previous learning experiences and an emotional state, and they expect to be assessed summatively. The learning environment we design will affect how students learn and use feedback in their assessments. It is suggested that:
• the learning environment needs to be open and honest and facilitate ongoing dialogue surrounding emotions;
• we need to create an environment where students can develop their understandings, capacities and dispositions towards learning and feedback;
• through exemplar sessions, we can help them to develop evaluative judgements of what quality looks like in their discipline over time;
• we should provide extended opportunities for students to experience their own and peer failure and to curate and give feedback to one another to improve their work;
• students should be afforded sustained opportunities to actively discuss with their peers and us how feedback affects their emotional wellbeing.
This approach gains momentum as students become used to the emotions that arise when established models in which they are required to give and receive feedback are employed (Nicol et al, 2014; Tai et al, 2016; Carless & Chan, 2017). This allows students to experience the emotions of success and failure within a safe formative learning space and to actively discuss how such emotions enabled or inhibited their processing of the feedback. Students are afforded the opportunity to reflect upon how their emotions and those of their peers have helped them to become more resilient students (Rowe et al, 2013), to mitigate negative emotions and threats to self-esteem (Juwah et al, 2004) and to improve in future learning situations (Fredrickson & Cohn, 2008). Sustained dialogue between students, peers and ourselves about their emotional experiences within the learning environment will help students to understand their current emotional literacy level. Subsequent sessions can build upon how the students' adaptive or maladaptive emotional responses affected their feedback use, in order to improve prior to the summative assessment later in the programme, course or module. The student is placed in a learning environment where there are repeated and sustained opportunities to develop their assessment literacy, feedback literacy and emotional literacy.
Student behaviour

As I have outlined in this chapter, sustained emotionally sensitive dialogue between lecturers and students, and between students and peers, can (over time) positively affect students' use of feedback. If students are exposed to learning situations which develop their assessment and feedback literacy, this can positively impact their assessment performance. The approach here takes this a step further, as the students' emotional literacy is also developed through the design of tasks within the learning environment and opportunities for meaningful collaborative dialogue surrounding their emotions. The interactions between students' assessment and feedback literacy and their emotional literacy result in a student who is progressively more emotionally mature, self-aware and resilient, and who has a greater propensity to use feedback in subsequent assessments. This approach is brought to life within the case study below.
Case study – emotionally sensitive feedback in comedy performance

Paul has been teaching for a number of years in higher education following a successful professional career as a stand-up comedy performer. I talked to Paul about the ways in which he incorporates the conceptual model I have outlined here to develop students' emotional literacy in his teaching. First, Paul talked about a student who openly shared their emotions.
I can remember a student a few years ago, who was a high achieving student, very, very bright, super bright, brilliant essays. He was quite cerebral, which meant that his comedy was quite absurd, and he hadn't got a way of connecting that with an audience at first. I remember when it came to feedback he always cried, so it was really upsetting, I did not want him to cry and I was supportive obviously. It was at this point I realised that emotions were a huge part of the feedback story.

Paul's approach to this critical incident was to modify the learning environment so that students were encouraged to openly show their emotions so they could work on understanding and overcoming them collaboratively. Over time, this cultural shift meant students were able to manage their disappointment and use the feedback. Paul also reflected upon a student who did not share their emotions in class.

Early on in the module, things hadn't clicked for her. We had four weeks or five weeks of workshops where we were presenting our work and peers had been giving and receiving feedback in an emotionally sensitive way. Everyone had been given many chances to talk about how the feedback they were receiving had made them feel and how they had been using it to improve, so I thought everything was OK. What I didn't realise was that she regularly went home after the classes and cried. I know that because her housemate told me and asked how they could help. I advised that they remind her that she has enormous potential and to remember all the great things people have been saying to her in class. I also advised them to encourage her to come and see me in my office.

It is often very difficult for us to know how students are emotionally reacting to and processing the feedback we give them if the learning environment is not set up for open and honest dialogue surrounding emotions. Students often interpret feedback as being about them, rather than about the work they have produced. They also struggle to process critical feedback due to their emotional reaction to it. To overcome this, Paul suggested the following.

She said her emotions often got the better of her and no matter what I or her peers told her in the feedback, she just didn't believe it. I recommended she film her next few performances and the feedback she received from the audience. After four weeks of this, I got her to watch the recordings back and to note down the feedback she was getting and to classify it as positive or negative. Several of her peers said I really like that tactic of gathering all the feedback from many weeks and then seeing how it's changed over time.

The students commented that because they were also more emotionally aware it made it easier to acknowledge the effect their emotions were having, to wait for these to pass and then to use the feedback in a more adaptive manner.
Conclusion The role that emotions play within complex assessment and feedback situations needs to be carefully considered in the design of formative teaching and learning environments. Successfully implementing this will better prepare students for the summative assessment and resultant feedback usage in subsequent assessments. As I have indicated here, applying the conceptual model will begin to address the overwhelming effect that students’ emotional literacy has upon their ability to process, comprehend and utilise feedback in assessment opportunities.
11 Developing students’ proactive engagement with feedback Naomi E. Winstone and Robert A. Nash
Introduction In this chapter, we make a case for the importance of an approach to assessment in which students are encouraged to engage proactively with feedback, and in which students are supported to use feedback both to guide their future study and to engage in meaningful learning. In doing so we outline our recent research programme, which involved close consultation with students, and which began from a recognition that if we wish to create more dialogic feedback environments, then we ourselves need to be open to hearing students’ perspectives, and to involving them in the development of feedback policy and practice. We end this chapter by inviting you to consider students’ engagement with feedback in the context of your own practice. But to begin, consider how you might answer the following questions:
• Why do students often fail to take on board the feedback comments we give them?
• Is it reasonable for us to expect that they will do so?
• Whose problem is this to solve?
It is commonly reported, based on the work of John Hattie and colleagues in particular (e.g. Hattie, 1999; Hattie & Timperley, 2007), that receiving feedback is one of the strongest influences on learners’ grade improvement, more influential than is their initial ability level or the quality of teaching they receive. Yet a key measure of the impacts of assessment and feedback on learning is the extent to which they lead not just to better grades, but also to behaviour change. In other words, if learners’ grades are to improve in a systematic way, then we need to assess whether assessment and feedback guide learners to adjust their learning strategies, for example, their skill practice, and their motivation to engage with other learning opportunities. As a consequence, although delivering feedback is important, we argue that defining feedback purely as the delivery of information is characteristic of an outdated and inadequate ‘old paradigm’ approach (Carless, 2015). To fully realise the learning potential of feedback, we must focus beyond the delivery of feedback, and towards learners’ engagement
130 Naomi E. Winstone and Robert A. Nash with and enactment of the crucial developmental guidance it contains (Ajjawi & Boud, 2017; Carless, 2015). Our focus aligns more closely with the latter of these approaches and builds upon the focus of the first edition of this book (Bryan & Clegg, 2006), and three of the conditions under which assessment is argued to support students’ learning: feedback is understandable to students; feedback is received by students and attended to; feedback is acted upon by the student to improve their work or their learning (Gibbs & Simpson, 2005). We believe that students have critical roles to play in the feedback process, because no matter how frequently they receive ‘best practice’ feedback information, it will never magically improve their performance unless they put it into practice (Nicol, 2010; Winstone et al, 2017a). To understand how to implement such an approach, we must first consider the behaviours that students commonly exhibit when they receive feedback on their work. Several studies document that students do recognise the importance of engaging with feedback (e.g. Higgins et al, 2002). Yet, in contrast, other education researchers and practitioners often report very poor engagement with feedback on their students’ part (Hyland, 1998; Sinclair & Cleland, 2007). Barriers that might impede students’ engagement with feedback include that students often find feedback insufficiently useful, in that they do not find it straightforward to act upon the comments they receive; feedback is not always as specific, detailed and individualised as students would hope; it can be too authoritative in tone, which may impede students’ willingness to engage; students do not necessarily know how to make use of feedback; and students find it difficult to understand the academic terminology used in feedback discourse (Jönsson, 2013). Gaining a deeper understanding of these kinds of barriers has been a central aim of our own research programme.
The ‘nurturing students’ engagement with feedback’ project In early 2014, we embarked on a research project funded by the UK’s Higher Education Academy, with the aim of establishing an evidence base for understanding students’ engagement with feedback. Essentially, we wanted to explore feedback from the perspective of the ‘receiver’ (i.e. the student) and to generate evidence-based recommendations for supporting students in how to use and assimilate feedback and implement it when working towards their future goals. There were three distinct phases to this project. We began with a consultation phase, which involved a systematic literature review and a series of surveys and focus groups with students (see Winstone et al, 2016, 2017a, 2017b). Next, we carried out some behavioural laboratory studies to explore cognitive processing of feedback, in an exploration phase (see Nash et al, 2018). Finally, in an implementation phase we used all that we had learnt through the project to develop tools and resources for supporting students’ engagement with feedback, (see Winstone & Nash, 2016; Winstone et al, 2019a). In the consultation phase, we built upon Jönsson’s (2013) work by carrying out a series of focus groups with psychology students (Winstone et al, 2017a).
This work uncovered four broad types of barrier that the students believed can limit or prevent their engagement with feedback, each representing a different psychological process.
• Awareness: Students sometimes have difficulty using feedback because they struggle to 'decode' the terminology used within feedback or they do not clearly understand what feedback actually is, and what it is for.
• Cognisance: Students sometimes have difficulty using feedback because they are not cognisant of suitable strategies and opportunities for engaging with and acting upon feedback.
• Agency: Students sometimes have difficulty using feedback because they do not believe that circumstances will enable them to put it into practice.
• Volition: Students sometimes have difficulty using feedback because they lack the necessary motivation and willingness to do so.
To a certain extent, these four kinds of barriers are cumulative; that is to say, for instance, in order to have volition to use feedback, to some extent a student first needs a sense of agency. Having agency to some extent also depends on being cognisant of appropriate strategies for making use of feedback, and this cognisance in turn requires awareness of what the feedback means and is for (Nash & Winstone, 2017). Identifying these barriers provided the first piece of the puzzle in developing strategies to support students’ engagement with feedback. But we also wanted to try and understand what kinds of skills students might need in order to be able to overcome these barriers and make effective use of feedback. As another part of the first consultation phase, then, we carried out a systematic review of the existing literature on students’ reception of assessment feedback (Winstone et al, 2017b). Of the 195 papers that we reviewed, 105 reported interventions to develop students’ engagement with feedback. In line with our skill development focus, for each of these interventions we sought information about the authors’ rationale: what skills were they trying to develop in students? We identified four broad categories of skills underpinning engagement with feedback, which we termed the ‘SAGE’ skills (Figure 11.1). Self-appraisal represents the ability to look critically at one’s own attributes, and recognise strengths and areas for development. If an individual is not open to this process of self-evaluation, then defensive reactions to feedback can result, which can hamper strong engagement (e.g. Smith & King, 2004). Assessment literacy requires knowledge of the standards and criteria used within the process of assessment. When students develop the capacity for evaluative judgement they become less reliant on external sources of feedback, being better able to generate feedback for themselves (e.g. Tai et al, 2018). This also requires students to take the perspective of a marker, supporting deeper understanding of feedback terminology (Winstone et al, 2017b). Engagement with feedback also requires the ability to set and monitor progress towards explicit learning goals. Feedback information typically contains
Figure 11.1 The SAGE taxonomy of feedback recipience skills: Self-Appraisal, Assessment Literacy, Goal-setting and self-regulation, and Engagement and Motivation
valuable information about how to improve future assignments; however, knowing what to improve and how to improve require different levels of engagement. Goal-setting and self-regulation are thus important dimensions of engagement with feedback, enabling the student to adopt the goal-directed behaviours needed to realise the impact of feedback information (Winstone et al, 2017b). Finally, we identified engagement and motivation as an important dimension of feedback recipience. This attribute is similar to what Handley, Price and Millar (2011) called ‘readiness to engage’; students have to be willing to scrutinise feedback and to engage in what can be hard work to develop and hone their skills. Developing recipience skills Having identified the skills required for effective recipience of feedback, the next important question to be addressed is how these important recipience skills can be developed. The interventions for supporting engagement with feedback, described in 105 papers in our review, could be classified into one or more of four kinds: 1) internalising and applying standards; 2) sustainable monitoring; 3) collective provision of training; and 4) manner of feedback delivery. 1. Interventions in the ‘internalising and applying standards’ category had the primary aim of enhancing students’ ability to understand and apply the standards and grading criteria that are used when assessing performance. For example, peer-assessment can be beneficial for both the feedback giver and the recipient. When giving feedback to a peer, students are often prompted to reflect upon their own work (e.g. Al-Barakat & Al-Hassan, 2009) and, by embodying the role of ‘assessor’, students become more adept at generating internal feedback, and also interpreting feedback from others from the stance of one with deeper inside knowledge of the assessment process (e.g. McDonnell & Curtis, 2014; Moore & Teather, 2013). However, in order for practices such as peer-assessment (and also self-assessment) to be effective, students may need support to develop their assessment literacy. Many students lack confidence in their
Developing engagement with feedback 133 ability to provide feedback for themselves or for a peer, feeling ‘out of their comfort zone’ (e.g. Bedford & Legg, 2007; Cartney, 2010). To benefit from the learning that these practices can promote, students need a sound understanding of the terminology used within grading criteria, especially if they are to apply their understanding of standards and criteria to new and different assessment situations. 2. Supporting students to engage in action planning and promoting the use of portfolios to support engagement with feedback, formed the ‘sustainable monitoring’ category of interventions. Here, we found that setting action points for implementing feedback could promote reflection and independence, as well as encouraging further feedback-seeking behaviour (e.g. Altahawi et al, 2012; Dahllöf, et al, 2004). Portfolios, on the other hand, can foster independence (e.g. Dahllöf et al., 2004) and intrinsic motivation to implement feedback (e.g. Embo et al, 2010). As is also the case for internalising and applying standards, in order to be able to reflect upon feedback, learners first need to understand the terminology used within mark schemes and written feedback (e.g. Quinton & Smallbone, 2010). 3. Whereas feedback given to learners is often personalised, much of the groundwork needed to equip students with feedback-receiving skills can be delivered to cohorts of students at scale. Such interventions formed the ‘collective provision of training’ category, which included initiatives such as feedback workshops, discussion of exemplars, and provision of feedback resources to whole cohorts of students simultaneously. For example, workshops have been used successfully to develop students’ assessment literacy, fostering the perspective-taking skills that can help students make maximum use of feedback (e.g. Rust et al, 2003). 4. The final category of interventions, ‘manner of feedback delivery’, included practices that focused on the way in which feedback was provided to students. For example, one common practice in this category was the provision of formative feedback on students’ drafts. It is argued that this practice encourages students to adopt behaviours that support future improvement of their work (e.g. Wingate, 2010) and can promote further feedback seeking (e.g. Cartney, 2010). One of the messages emerging from our review was that developing students’ assessment and feedback literacies is an important precursor to engaging students in activities that might promote their engagement with feedback. Our own approach to tackle this objective synthesised our focus on skill development with a more general grounding in feedback literacy. We worked with students to create the Developing Engagement with Feedback Toolkit (DEFT; Winstone & Nash, 2016), a freely-available, flexible set of resources comprising a student-authored feedback guide, a feedback workshop and a feedback portfolio (Winstone & Nash, 2016). Here, we focus on our implementation of the latter two components.
134 Naomi E. Winstone and Robert A. Nash The feedback workshop resource comprises nine individual activities, broadly structured around the three domains of feedback literacy described by Sutton (2012). Specifically, three activities were designed to support students’ ‘knowing’, that is, their understanding that feedback is a source of learning; three activities targeted students’ ‘being’, representing their appreciation that feedback generates emotions and that these emotions can be used to positive or negative ends. The final section addressed ‘acting’; that is, students’ knowledge of the importance and process of acting upon feedback. The resources can be used as stand-alone activities or can be combined to create a feedback seminar or tutorial. In Winstone et al (2019a) we reported an evaluation of the latter approach, whereby we delivered a feedback workshop to 103 first-year psychology students in small groups, as part of their academic skills tutorial programme. Before the workshop we measured students’ feedback literacy in the domains of knowing, being and acting, and repeated these measures one week after the workshop. Our analyses revealed statistically significant increases in all three domains (Winstone et al, 2019a). As part of the DEFT, we also developed a series of tools that could be used to create a feedback portfolio. The purpose of the portfolio is to enable students to synthesise feedback from multiple assessors and assessments, to identify common themes emerging in the feedback, and to set targets for improvement on the basis of the feedback. One of us (NW) has also developed a digital version of the portfolio, which can be embedded within a virtual learning environment (FEATS, 2017). An important purpose of the portfolio is to help students to take steps to implement feedback; to this end, the digital portfolio developed from the basic DEFT tools contains a resource bank incorporating links to papers, books, websites, videos and online tools under thematic headings. Formal evaluation work has demonstrated that after a semester of using the digital portfolio, students reported significantly higher cognisance of how to implement feedback effectively (Winstone et al, 2019b). In this chapter, we have made a case for the importance of supporting students to develop skills that might underpin their proactive engagement with assessment feedback. We have drawn upon findings from our ‘nurturing students’ engagement with feedback’ project, outlining common barriers to engagement with feedback, the skills needed to implement feedback, and frequently-used interventions for developing these skills. We also outlined our own approach to developing engagement with feedback, drawing specifically on the feedback workshop and portfolio elements of the Developing Engagement with Feedback Toolkit (Winstone & Nash, 2016; Winstone et al, 2019a). We conclude the chapter with some reflections and recommendations for further developing this approach to assessment and feedback in higher education.
Challenges and recommendations It might appear contradictory that our focus here is on developing students’ proactive and independent engagement with feedback, yet we have focused
Developing engagement with feedback 135 primarily on the actions that educators might take to foster this engagement. Is it, in fact, our responsibility as educators to facilitate students’ engagement? As we have argued elsewhere, the responsibility for ensuring that feedback benefits learning belongs neither solely to the student nor to the educator; the responsibility is shared between the two parties (Nash & Winstone, 2017). We also argue that laying the foundations for engagement, through developing students’ feedback literacy, is an important dimension of the educator’s responsibility in the feedback process. Many of the interventions we uncovered in our systematic review, and indeed our own tools and resources, aim to do exactly this. If we can equip students with the knowledge and strategies they need in order to become proactive recipients of feedback, students are then in principle ready to shoulder greater responsibility for taking opportunities to seek and implement feedback, and for being motivated to initiate this improvement (Nash & Winstone, 2017). An important observation from our systematic review was the limited number of studies that have explored the impact of interventions on students’ observable behaviour (Winstone et al, 2017b). Instead, the majority of evidence comes from students’ reported beliefs about what benefits their learning, their self-reported use of feedback and their satisfaction with feedback as measured by instruments such as the UK’s National Student Survey. If we are to successfully shift our focus from mere transmission of information, towards appraising the impact of feedback on students’ performance, then research studies employing behavioural outcome measures are likely to be of critical importance. At the end of the ‘Nurturing students’ engagement with feedback’ project, when reflecting on our progress, we shared some snippets of the key ‘lessons learned’ that struck us most clearly (Winstone & Nash, 2016). Crucially, through this process we learnt that engaging in critical discussion with students about the overall purpose and process of feedback is just as important as talking with them about the specific feedback they receive. We also came to believe that endeavours to promote engagement with feedback will be more effective if embedded within the curriculum, rather than as one- off discrete interventions. Finally, we have learnt that it can be a powerful experience for students to hear about the kinds of feedback that we ourselves receive within our professional lives as academics and educators, and to share our own learning journeys in the context of receiving and implementing feedback. With these reflections in mind, we invite you to return to the questions we posed at the start of this chapter and to think again about how you might answer them. Our hope is that in light of the arguments that we have made here, you might view both the problems and their possible solutions in new and productive ways, and that these might in turn offer valuable ideas for implementing into your teaching and learning practice. Building upon those questions and answers, we now finish with some further questions, intended to encourage your continued reflection on how to tackle these challenges.
• Consider a time when you received feedback (e.g. a journal paper review, or a teaching evaluation). How did you respond? Did you fully engage with the feedback? If not, what were the barriers? If you did, why were there no barriers or how did you avoid them?
• How might the barriers to your students' engagement with feedback differ according to their discipline and level of study?
• How could you embed the development of students' feedback literacy skills into your own teaching practice?
• What could you do to promote the sharing of responsibility between yourselves (and other educators) and students in the feedback process? How could you make this responsibility sharing sustainable so that its benefits are long-lasting?
References Ajjawi, R. and Boud, D. (2017) Researching feedback dialogue: an interactional analysis approach. Assessment and Evaluation in Higher Education 42 (2): 252–265. Al-Barakat, A. and Al-Hassan, O. (2009) Peer assessment as a learning tool for enhancing student teachers’ preparation. Asia-Pacific Journal of Teacher Education 37: 399–413. Altahawi, F., Sisk, B., Poloskey, S., Hicks, C., Dannefer, E. F. (2012) Student perspectives on assessment: Experience in a competency-based portfolio system. Medical Teacher 34: 221–225. Bedford, S. and Legg, S. (2007) Formative peer and self feedback as a catalyst for change within science teaching. Chemistry Education Research and Practice 8: 80–92. Bryan, K. and Clegg, C. (eds) (2006) Innovative Assessment in Higher Education. Abingdon: Routledge. Carless, D. (2015) Excellence in University Assessment. Abingdon: Routledge. Cartney, P. (2010) Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used. Assessment and Evaluation in Higher Education 35: 551–564. Dahllöf, G., Tsilingaridis, G., Hindbeck, H. (2004) A logbook for continuous self- assessment during 1 year in paediatric dentistry. European Journal of Paediatric Dentistry 5: 163–169. Embo, M. P. C., Driessen, E. W., Valcke, M., Van der Vleuten, C. P. M. (2010) Assessment and feedback to facilitate self-directed learning in clinical practice of Midwifery students. Medical Teacher 32: e263–e269. FEATS (2017) FEATS tutorials. Feedback, Engagement and Tracking. tinyurl.com/ FEATSportfolio (accessed 4 January 2019). Gibbs, G. and Simpson, C. (2005) Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education 1: 3–31. Handley, K., Price, M., Millar, J. (2011) Beyond ‘doing time’: investigating the concept of student engagement with feedback. Oxford Review of Education 37: 543–560. Hattie, J. (1999) Influences on student learning. Inaugural lecture, Professor of Education, given on August 2, 1999. University of Auckland https://cdn.auckland. ac.nz/assets/education/hattie/docs/influences-on-student-learning.pdf (accessed 4 January 2019).
Developing engagement with feedback 137 Hattie, J. and Timperley, H. (2007) The power of feedback. Review of Educational Research 77: 81–112. Higgins, R., Hartley, P., Skelton, A. (2002) The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education 27: 53–64. Hyland, F. (1998) The impact of teacher written feedback on individual writers. Journal of Second Language Writing 7: 255–286. Jönsson, A. (2013) Facilitating productive use of feedback in higher education. Active Learning in Higher Education 14: 63–76. McDonnell, J. and Curtis, W. (2014) Making space for democracy through assessment and feedback in higher education: thoughts from an action research project in education studies. Assessment and Evaluation in Higher Education 39: 932–948. Moore, C. and Teather, S. (2013) Engaging students in peer review: Feedback as learning. Issues in Educational Research 23 (Suppl): 196–211. Nash, R. A., Winstone, N. E. (2017) Responsibility sharing in the giving and receiving of assessment feedback. Frontiers in Psychology 8: 1519. Nash, R. A., Winstone, N. E., Gregory, S. E. A., Papps, E. (2018) A memory advantage for past- oriented over future- oriented performance feedback. Journal of Experimental Psychology: Learning, Memory, and Cognition 44 (12): 1864–1879. Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment and Evaluation in Higher Education 35: 501–517. Quinton, S. and Smallbone, T. (2010) Feeding forward: using feedback to promote student reflection and learning –a teaching model. Innovations in Education and Teaching International 47: 125–135. Rust, C., Price, M., O’Donovan, B. (2003) Improving students’ learning by developing their understanding of assessment criteria and processes. Assessment and Evaluation in Higher Education 28: 147–164. Sinclair, H. K., Cleland, J. A. (2007). Undergraduate medical students: who seeks formative feedback? Medical Education 41: 580–582. Smith, C. D. and King, P. E. (2004) Student feedback sensitivity and the efficacy of feedback interventions in public speaking performance improvement. Communication Education 53 (3): 203–216. Sutton, P. (2012) Conceptualizing feedback literacy: knowing, being and acting. Innovations in Education and Teaching International 49 (1): 31–40. Tai, J., Ajjawi, R., Boud, D., Dawson, P., Panadero, E. (2018) Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education 76 (3): 467–481. Wingate, U. (2010) The impact of formative feedback on the development of academic writing. Assessment and Evaluation in Higher Education 35: 519–533. Winstone, N. E. and Nash, R. A. (2016) The Developing Engagement with Feedback Toolkit. York: Higher Education Academy. Winstone, N. E., Nash, R. A., Rowntree, J., Menezes, R. (2016) What do students want most from written feedback information? Distinguishing necessities from luxuries using a budgeting methodology. Assessment and Evaluation in Higher Education 41 (8): 1237–1253. Winstone, N. E., Nash, R. A., Rowntree, J., Parker, M. (2017a). ‘It’d be useful, but I wouldn’t use it’: barriers to university students’ feedback seeking and recipience. Studies in Higher Education 42 (11): 2026–2041.
138 Naomi E. Winstone and Robert A. Nash Winstone, N. E., Nash, R. A., Parker, M., Rowntree, J. (2017b) Supporting learners’ agentic engagement with feedback: a systematic review and a taxonomy of recipience processes. Educational Psychologist 52: 17–37. Winstone, N. E., Mathlin, G., Nash, R. A. (2019a). Building feedback literacy: the developing engagement with feedback toolkit. Manuscript in preparation. Winstone, N. E., Medland, E., Papps, E., Rees, R., Niculescu, I. (2019b) Feedback footprints: supporting students to act upon and track the impact of their assessment feedback using an e-portfolio tool. Manuscript in preparation.
Part III
Stimulating learning
12 Certainty-based marking Stimulating thinking and improving objective tests Tony Gardner-Medwin
Introduction ‘Certainty- based marking’ (CBM) was originally described at University College and Imperial College in London and, for the first edition of this book (Gardner-Medwin, 2006), as ‘confidence-based marking’. Its aims and principles have not changed, but the change of name tells a story. Too often, people misunderstood CBM as something designed to boost or reward self- confidence. Its aim is much more nuanced –to reward the identification of uncertainty as well as genuinely reliable conclusions. It may even diminish unwarranted self-confidence, though knowledge of what one does and does not understand is ultimately the basis for confident decision making. Overconfidence or its opposite, excessive hesitation, can vary with personality, upbringing and gender. A properly designed CBM strategy gives feedback that, with practice, can serve to correct such biases, but the principal aim is to enhance learning by stimulating thinking about how different aspects of a student’s knowledge are usefully related. Typically, the immediate response to a question –especially the sort of question good teachers ask to promote understanding –deserves extra scrutiny. CBM always asks ‘How sure are you?’, challenging the student to look for justifications and reservations – aspects of knowledge seldom explored in objective testing. Such thinking promotes deeper understanding and is rewarded as we shall see by enhanced CBM scores, even if the answer remains unchanged or starts to seem less reliable. Strong students can often do well by relying only on superficial associations, with little incentive (on conventional right/wrong mark schemes) to reflect on the reliability of their thinking unless really taxed. Weaker students try to emulate this with diligent rote learning, rejecting deeper learning as unnecessarily challenging. This may get them through a test, but can be disastrous as a basis for future learning. It becomes stressful, because if facts are learned independently there are many more to learn than if they can be deduced and checked one against another. Knowledge, especially together with understanding, is more like a network of relationships than a set of facts, and is much more efficiently stored that way.
142 Tony Gardner-Medwin The value of thinking about certainty, for enhancing learning and consolidation, has been researched extensively, but mostly before computer- aided assessment was practical on much of a scale (see, for example, Hevner 1932; Ahlgren 1969; Good 1979). The development of CBM in London has helped stimulate wider application.1 CBM is now included in the open-source learning management system ‘Moodle’,2 and the use of CBM variants has been reported from several institutions worldwide (e.g. Hassmen & Hunt, 1994; Davies 2002; Rosewell 2011; Schoendorfer & Emmett 2012; Yuen-Reed & Reed 2015; Foster 2016). This chapter focuses on the London medical school experience since 1995 (ca. 1.8 million self-test sessions and 1.4 million exam answers). Students seem to pick up the logic of CBM instinctively through use –more readily than through exposition and discussion. After all, as a successful animal species we have evolved to learn to handle situations with uncertainties, risks and rewards: in childhood we call them games, while as adults they can determine our survival (Gardner-Medwin, 2018). Readers are encouraged to try CBM exercises themselves.3
How does certainty-based marking work? The scheme instigated at UCL in 1994 (Gardner-Medwin, 1995), which still seems a good choice, is simpler than most that had been used in previous research. It uses just three certainty levels, identified by numbers (C = 1, 2, 3) and by neutral terms (low, mid, high) rather than by descriptors (‘guess’, ‘hunch’, ‘sure’, ‘definite’, etc.) that can have idiosyncratic interpretations. Table 12.1 shows the marks (or ‘points’) awarded at each C level for correct and incorrect answers. It is these credits and penalties for answers marked right or wrong that determine how best to use the levels. The table is easily remembered and is enough to guide students in use of CBM. The concept of a double penalty (–6) sets the criterion fairly high for choosing C = 3, and with minimal knowledge it is clearly worth entering an answer with C = 1 rather than ‘no reply’. Students do not normally report thinking quantitatively about CBM, but nevertheless manage with practice to use C levels in a near optimal way (Gardner-Medwin 2006, 2013). They often say they think about C = 1 and C = 3, then if undecided they opt for C = 2, which is simple and rational. The implications emerge in Figure 12.1.
Table 12.1 The certainty-based marking scheme

Certainty level    Mark if correct    Penalty if wrong
C = 3 (high)       3                  –6
C = 2 (mid)        2                  –2
C = 1 (low)        1                  0
No reply           0                  0
Figure 12.1 Rationale for choosing certainty levels. Depending on how likely you think your answer is to be correct, you expect to gain by choosing whichever level (C = 1, 2 or 3) gives the highest average expected mark: C = 3 for greater than 80%, C = 1 for less than 67% or C = 2 in between. It is always better to enter a reply with C = 1 than to omit a reply
The figure shows that for any probability of being correct, there is a C level for which the graph is highest, meaning you expect (on average with such questions) the best reward. Above a threshold of 80% C = 3 is best, while below 67% C = 1 is best. Whatever your certainty, you cannot expect to gain by misrepresenting it –in other words by trying to ‘game the system’. The system motivates the reporting of an honest judgement, using what statisticians call a ‘proper’ reward scheme for estimating probabilities (Good 1979; Dawid 2006). It is always worth sketching the equivalent of Figure 12.1 to check whether a particular marking scheme properly motivates what is intended (Gardner-Medwin & Gahan, 2003). Contrast CBM with fixed negative marking schemes, which offer the option ‘no reply’ (mark = 0) to avoid risk of a fixed penalty for wrong answers. The intention is to encourage students to omit uncertain answers, especially guesses, that increase variability in final scores. However, the consequence for the student of such omissions can be illusory and iniquitous. Penalties are commonly set to the minimum that ensures guesses do not on average improve one’s score. For example, with five-option multiple-choice questions (MCQs), the penalty would be –0.25 times the credit for a correct answer. Even slight partial knowledge then means the student should expect to gain on average from answering, while guesses would on average be neutral. So students are encouraged to disadvantage themselves by omitting answers, based on advice and risk aversion rather than rationality. For example, limited knowledge can often narrow MCQ options down from five to around two, with a 50% chance of a final guess being correct, which would lead on average to 40% of full credit. A system that encourages students to disadvantage themselves
by omitting answers in such circumstances should be illegal. CBM, by contrast, straightforwardly steers the student to enter their uncertain response with C = 1, yielding on average in the example 17% of the full credit for a confident correct answer based on thorough knowledge.
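To make the arithmetic behind Table 12.1 and Figure 12.1 concrete, the short sketch below (my own illustration, not part of the published CBM software; the function and variable names are invented for this example) computes the expected mark at each certainty level for a given judged probability of being correct. Its output reproduces the 67% and 80% crossovers of Figure 12.1 and the roughly 17% of full credit expected for an uncertain two-way guess entered at C = 1.

```python
# Minimal sketch (my own, not the author's software) of the arithmetic behind
# Table 12.1 and Figure 12.1: expected mark at each certainty level for a
# judged probability p that the answer is correct.

SCHEME = {3: (3, -6), 2: (2, -2), 1: (1, 0)}  # level: (mark if right, mark if wrong)
MAX_MARK = 3

def expected_mark(p, level):
    right, wrong = SCHEME[level]
    return p * right + (1 - p) * wrong

def best_level(p):
    """Certainty level with the highest expected mark when P(correct) = p."""
    return max(SCHEME, key=lambda c: expected_mark(p, c))

if __name__ == "__main__":
    # Crossovers match Figure 12.1: C = 1 below 2/3, C = 2 up to 4/5, C = 3 above.
    for p in (0.50, 0.70, 0.85):
        print(f"p = {p:.2f}: best certainty level = C = {best_level(p)}")
    # An uncertain two-way guess (p = 0.5) entered at C = 1 earns 0.5 marks on
    # average, i.e. about 17% of the full credit of 3 for a sure correct answer.
    print(expected_mark(0.5, 1) / MAX_MARK)   # ~0.17
```

Plotting expected_mark over p for each level is also a quick way to apply the 'proper reward' check suggested above to any candidate marking scheme.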
The student’s perspective using certainty-based marking There are several ways in which a student’s perception of CBM embodies sound principles of good learning (Cornwell & Gardner- Medwin, 2008; Gardner-Medwin, 2018). 1. CBM rewards thinking about how to justify an answer, thereby developing relationships between different nuggets of knowledge far better than the rote learning of facts. 2. It rewards identification of uncertainties and inconsistencies in one’s thinking, leading to more effective study. 3. Lucky guesses are not the same as knowledge. Students recognise that they should not get the same credit. Teachers and examiners should recognise this too. 4. Confident misconceptions are serious, even dangerous. When studying, a penalty is a wake- up call, triggering reflection and attention to explanations. We learn through mistakes, especially bad ones. 5. Quoting student comments from an early evaluation study (Issroff & Gardner-Medwin, 1998): ‘It … stops you making rush answers.’ ‘You can assess how well you really understand a topic.’ ‘It makes one think ... it can be quite a shock to get a –6 ... you are forced to concentrate.’ These points encapsulate the initial reasons for introducing CBM. Unreliable knowledge of the basics in a subject, or (worse) lack of awareness of which parts of one’s knowledge are sound and which not, can be a huge handicap to further learning (Gardner-Medwin, 1995). Thinking critically and identifying points of weakness is an opportunity to consolidate connections between different elements of knowledge. It is distressing to see students with good GCSE grades struggling two years later to apply half-remembered rules to issues that should just be embedded as common-sense understanding –for example how to combine successive percentage changes in a quantity. There are different ways to solve problems and retrieve facts. Communicating the reliability of rival ideas is a key part of interpersonal communication in every walk of life, either explicitly or through body language, and it deserves emphasis in education. Such skills, however, can remain largely untaught and untested in assessments until final exams, when they are often expected in demanding forms of critical writing and in viva situations. CBM can be a constant stimulus and reminder of their importance. An interesting evaluation study (Foster, 2016) has used a form of CBM in school mathematics. Pupils aged 11–14 were given numerical exercises and
Certainty-based marking 145 asked to rate how confident they were in each answer on a scale 0–10. They were told their total score would be the sum of their ratings for correct answers minus the sum for incorrect answers. Completely new to the idea, their reaction was encouraging: a ratio of 106 to 28 positive to negative comments. They found it challenging, but constructive. The scheme might need revision in continued use since it is not a properly motivating scheme for reporting certainty: a pupil should expect to do best by rating each answer 10 if the chance of being right was judged to be greater than 50%. One negative comment from a pupil: ‘I don’t like this marking scheme as it is partly based on your confidence in yourself’ seems perceptive: students who were confident and not averse to risk would likely have performed in a more nearly optimal way, which may have contributed to a gender difference reported: despite the fact that girls in the trial achieved greater accuracy than boys, they had similar average confidence ratings.
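The reservation about the 0-10 rating scheme can be checked with a few lines of arithmetic. The sketch below (my own illustration, with invented function names and example probabilities) shows that the expected score for a rating r is r(2p - 1), which grows with r whenever the judged probability p of being correct exceeds 0.5, so a score-maximising pupil should always rate 10 once they think they are more likely right than wrong; unlike the scheme in Table 12.1, it does not motivate honest reporting of intermediate certainty.

```python
# My own illustration (invented names): expected score under a scheme that adds
# the rating r (0-10) if the answer is right and subtracts r if it is wrong.

def expected_rating_score(p, r):
    """Expected score for rating r when the judged P(correct) is p."""
    return r * p - r * (1 - p)        # = r * (2p - 1)

for p in (0.55, 0.70, 0.90):
    best_r = max(range(11), key=lambda r: expected_rating_score(p, r))
    print(f"p = {p:.2f}: score-maximising rating = {best_r}")   # always 10
```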
Benefits to teachers from using certainty-based marking Self-tests can help students learn effectively, whether or not they use CBM. There is a danger, however, that setting up such exercises can be seen by students as a form of assessment rather than as a challenge to assist their study. Several guideline principles should apply to self-tests, in my opinion. Students should at least optionally be able to keep self-test marks private (not visible to teachers) to avoid fear of humiliation. After all, the more mistakes they make the more they learn –especially if feedback is immediate and accompanied by explanations. They should be able to control the questions they choose to answer and be able to be marked out of the subset they choose; this enables them to focus on challenging themselves in areas of weakness or interest. They can be encouraged to work together to stimulate discussion, with anonymous comment facilities shared with other students and staff, to facilitate discussion of questions and improvement of content. While it is useful for teachers to know who does and does not use self-tests, submission of scores can be voluntary and anonymised without sacrificing much of its value as feedback for teaching. Online self-tests can be programmed to operate entirely within the student’s computer once initiated, so a central computer need not handle performance data at all unless the student chooses to submit it on completion. How does CBM help? First, of course, there are the benefits discussed earlier from a student’s perspective. CBM can prompt useful discussion, because questions like ‘Why are you so sure/unsure’ can be very constructive. A pleasant surprise to me, starting to write questions for self-tests (about physiology and maths), was that CBM makes life easier: you should not always worry about pitching questions at an appropriate level of difficulty. Students have a range of different strengths and weaknesses. CBM makes a diverse mixture of questions work well. When a student thinks a question is really easy (which without CBM they might even see as demeaning), they
Figure 12.2 Distribution of responses to a set of 40 questions (mostly single-option multiple-choice and numerical) written as self-test practice for an exam, plotting the average percentage correct on each question against the average certainty expressed (C = 1, 2, 3). Annotated regions distinguish: questions answered with accuracy and moderate to high certainty (OK); questions answered with reasonable accuracy but insecure knowledge; questions with certainty expressed for wrong answers (misconceptions, or poor questions?); and questions eliciting uncertainty and errors (poor knowledge or understanding, or poor questions?). Likely issues about individual questions are highlighted and can prompt consideration alongside question text and wrong answer choices. Thanks to N. Curtin (Imperial College) for anonymised data.
think ‘OK, definitely a C = 3 here’. Weak students are encouraged by identifying their strengths as well as weaknesses. Feedback to teachers can be striking: a surprising number of correct answers to ‘easy’ questions may be accompanied by low C ratings, indicating an insecurity in understanding. I have come to hesitate using terms like ‘easy’ and ‘difficult’ for questions, because these depend so much on how individual students have approached the subject. Teachers get useful feedback about question quality, helping the elimination of ambiguities and improvement of explanations. An unforeseen bonus arising from the use of –6 penalties with CBM has been the readiness with which students will try to justify in comments why their particular slant on a question was completely reasonable: not always true, but helpful in improving exercises and explanations! Teachers get a new dimension of feedback about their questions with CBM: average certainty ratings as well as accuracy. Figure 12.2 shows one plotted against the other for each question in an exercise. With self-tests, the data can help with improvements and course planning, while equivalent data in exams can flag questions that may have been widely misunderstood and may warrant exclusion from assessment.
Assessment with certainty-based marking CBM, at least in London, has been used more for self-tests than formal assessment. There are both benefits and obstacles to its use in assessment. The UCL medical school ran first-and second-year objective exam components (using true/false questions) with CBM for five years from 2001 to 2006. The result was an enhancement of reliability (equivalent to using over 50% more
Figure 12.3 Exam scores (320 medical students, 300 true/false questions). A: Average certainty-based mark (CBM) plotted against simple accuracy (% correct) for each student. Dashed line: unattainable equality between CBM (expressed as percentage of maximum) and accuracy. CBM scores are lower, but mostly above the full line showing CBM scores for a student who does not discriminate reliability but has the same accuracy (credited with a fixed C level appropriate for the accuracy). B: Certainty-based (CB) accuracy, expressed as conventional accuracy plus a bonus calculated from the benefits illustrated in A. Thanks to D. Bender (UCL) for anonymised data.
questions with conventional marking) and strong student support for its continued use (Gardner-Medwin 2006, 2013). However, a review discontinued its use in exams (with also a switch to multiple choice and extended matching question styles), apparently because CBM was out of line with exam practice elsewhere and it was thought it could lead to confusion over standard setting. Comparison of standards with and without CBM is an interesting challenge. Figure 12.3 A shows a typical distribution of CBM exam scores (average marks per question) plotted against conventional accuracy. A student with 80% accuracy typically gets around 50% of the maximum possible CBM score. This divergence is inevitable, since both percentages could only be equal if every correct answer was entered with C = 3 and all the others with C = 1 or ‘no reply’. Given that students will often have partial knowledge, raising the probability of being correct above chance but not to 100%, this cannot happen. However, CBM percentages substantially lower than conventional scores can seem a bit demoralising to a student and confusing to examiners. I have tried three approaches to this problem. Simplest, to boost student morale, is just to scale CBM scores so 100% corresponds to all correct at C = 2; the maximum is then 150% and the median (typically 70–75%) roughly comparable to median accuracy. For exam assessments, a more complex non- linear scaling of CBM grades (Gardner-Medwin & Curtin, 2007) can retain CBM grades within a 0–100% range and ensure approximate equivalence on average between CBM and accuracy at each level of ranking. The same
mark criteria for accuracy and CBM will then pass about the same number of students, although students with better identification of reliable and unreliable answers will rank higher with CBM. Although somewhat complex, this does simplify standard comparisons. Figure 12.3 illustrates the third, probably best, approach in which any benefit the student has derived from effective use of CBM is separated and added as a bonus to conventional accuracy, yielding 'certainty-based accuracy'. The benefit (Figure 12.3A) is the average CBM mark minus what it would have been if the student had not distinguished reliable and unreliable answers, using an identical C level (appropriate for the overall accuracy) throughout. Such 'benefits' can be negative, seldom in exams but more commonly in self-tests as students work with wrong ideas. The bonus added to accuracy (Figure 12.3B) is one-tenth of this calculated benefit, using a factor optimised empirically for improvement of statistical reliability with CB accuracy (Gardner-Medwin, 2013).
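A rough sketch of this third approach follows, based on my own reading of the description above (the empirically optimised procedure in Gardner-Medwin, 2013 may differ in detail, and the data structures and function names here are invented for illustration): the benefit is the student's average CBM mark minus the mark a non-discriminating student with the same accuracy would earn at a single appropriate C level, and one-tenth of that benefit is added to conventional accuracy.

```python
# Hedged illustration of 'certainty-based (CB) accuracy', not the published
# scoring code. Marking scheme as in Table 12.1.

SCHEME = {3: (3, -6), 2: (2, -2), 1: (1, 0)}  # level: (mark if right, mark if wrong)

def expected_mark(p, level):
    right, wrong = SCHEME[level]
    return p * right + (1 - p) * wrong

def fixed_level_for(accuracy):
    # C level that a student who does not discriminate reliable from
    # unreliable answers should use, given their overall accuracy.
    return max(SCHEME, key=lambda c: expected_mark(accuracy, c))

def cb_accuracy(answers):
    """answers: list of (was_correct, certainty_level) for one student."""
    n = len(answers)
    accuracy = sum(ok for ok, _ in answers) / n
    avg_cbm = sum(SCHEME[level][0 if ok else 1] for ok, level in answers) / n
    baseline = expected_mark(accuracy, fixed_level_for(accuracy))
    benefit = avg_cbm - baseline      # can be negative if certainty is misplaced
    return accuracy + 0.1 * benefit   # bonus is one-tenth of the benefit

# A student who reserves C = 3 for answers they actually get right earns a
# CB accuracy above their plain 80% accuracy.
student = [(True, 3)] * 6 + [(True, 1)] * 2 + [(False, 1)] * 2
print(cb_accuracy(student))   # ~0.88
```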
Conclusion CBM addresses some of the key elements of knowledge and understanding that we try to impart through education. These days it is easy to check facts online. Many professions have shifted towards a culture of collaboration, making it easier to seek help in addressing uncertainties. The important thing is always to be aware whether your ideas are reliable or need checking or consulting. This is the core concept of CBM: rewarding accurate judgement of reliability. The necessary thought processes are a bit like a self-consultation, linking an issue to the rest of your knowledge. By strengthening awareness of how facts and ideas relate, CBM assists learning and understanding and leads to more fair assessment. Knowledge is not a binary thing (‘you know it or you don’t’). CBM gives graded credit for partial knowledge; lucky guesses are not treated like knowledge. Negative knowledge is penalised (i.e. firm misconceptions, potentially hazardous, and worse than a baseline of acknowledged ignorance). Knowledge has many facets. You may struggle to retrieve facts, yet recognise them with certainty if presented (as in many MCQ tests). You may know something but not understand it. Training with CBM helps develop the connections and strategies on which retrieval and understanding depend. Always paramount are the key processes of education like explanation, inspiration, encouragement and involvement, but CBM sits well alongside them and can go some way to dispelling the negative image that often attaches to testing in education.
Notes 1 More information, examples and links are available at CBM selftests (online): Certainty Based Marking: https://tmedwin.net/cbm/selftests. The software can use existing sets of questions (true/false, multiple choice, extended matching, text,
numerical), with or without CBM, provided that answers can be categorically identified as right or wrong. It has sophisticated options for randomisation, alternative answers, tailored feedback, anonymised comments and so on, and is available free for use with exercises hosted on the site or elsewhere. Queries, suggestions and involvement in further developments are welcome. Contact: a.gardner-medwin@ucl.ac.uk. 2 CBM in Moodle: Using certainty-based marking. Moodle https://docs.moodle.org/en/Using_certainty-based_marking (accessed 4 January 2019). 3 See note 1 for details.
References Ahlgren, A. (1969) Reliability, predictive validity, and personality bias of confidence- weighted scores. Paper presented at the American Educational Research Association Convention, Los Angeles, California, February 5–8, 1969. https://eric. ed.gov/?id=ED033384 (accessed 5 January 2019). Cornwell, R. and Gardner- Medwin, T. (2008) Perspective on certainty- based marking: an interview with Tony Gardner-Medwin. Innovate: Journal of Online Education 4 (3): article 7. Davies, P. (2002) There’s no confidence in multiple-choice testing. Proceedings of the 6th International CAA Conference. Loughborough: Loughborough University, pp. 119–130. https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/1875/1/davies_ p1.pdf (accessed 5 January 2019). Dawid, A. P. (2006) Probability forecasting. In: Kotz, S., Read, C. B., Balakrishnan, N., Vidakovic, B., Johnson, N. L. (eds) Encyclopedia of Statistical Sciences. Chichester: John Wiley & Sons, vol. 7, pp. 210–211. Foster, C. (2016) Confidence and competence with mathematical procedures. Educational Studies in Mathematics 91: 271–288. Gardner- Medwin, A. R. (2018) The value of self- tests and the acknowledgement of uncertainty. In: Luckin, R. (ed.) Enhancing Learning and Teaching with Technology: What the Research Says. London: UCL IOE Press, ch. 1.2. Gardner-Medwin, A. R. (2013) Optimisation of certainty-based assessment scores. Proceedings of the Physiological Society 37th Congress of IUPS (Birmingham, UK). www.physoc.org/proceedings/abstract/Proc%2037th%20IUPSPCA167 (accessed 5 January 2019). Gardner-Medwin, A. R. (2006) Confidence-based marking –towards deeper learning and better exams. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. London and New York, NY: Routledge, pp. 141–149. Gardner-Medwin, A. R. (1995) Confidence assessment in the teaching of basic science. Research in Learning Technology 3: 80–85. Gardner-Medwin, A. R. and Curtin, N. (2007) Certainty-based marking (CBM) for reflective learning and proper knowledge assessment. In: REAP07 Assessment Design for Learner Responsibility, Online Conference (29– 31 May 2007). Re- Engineering Assessment Practices, Universities of Strathclyde, Glasgow, Glasgow Caledonian. www.tmedwin.net/~ucgbarg/tea/REAP/REAP_CBM.htm (accessed 5 January 2019). Gardner-Medwin, A. R. and Gahan, M. (2003) Formative and summative confidence- based assessment. Proceedings of the 7th International Computer-Aided Assessment
150 Tony Gardner-Medwin Conference. Loughborough: Loughborough University, pp. 147– 155. https:// dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/1910/1/gardner-medwin03.pdf (accessed 5 January 2019). Gigerenzer, G. (2003) Reckoning with Risk. Harmondsworth: Penguin. Good, I. J. (1979) ‘Proper fees’ in multiple choice examinations. Journal of Statistical and Computational Simulation 9: 164–165. Hassmen, P. and Hunt, D. P. (1994) Human self-assessment in multiple-choice testing. Journal of Educational Measurement 31: 149–160. Hevner, K. (1932) Method for correcting for guessing and empirical evidence to support. Journal of Social Psychology 3: 359–362. Issroff, K. and Gardner-Medwin, A. R. (1998) Evaluation of confidence assessment within optional coursework. In: Oliver, M. (ed.) Innovation in the Evaluation of Learning Technology. London: University of North London, pp. 169–179. Rosewell, J. P. (2011) Opening up multiple- choice: assessing with confidence. Presented at the 2011 International Computer Assisted Assessment (CAA) Conference: Research into e-Assessment, 5/6 July 2011, Southampton, UK. http:// oro.open.ac.uk/32150 (accessed 5 January 2019). Schoendorfer, N. and Emmett, D. (2012) Use of certainty-based marking in a second- year medical student cohort: a pilot study. Advances in Medical Education and Practice 3: 139–143. Yuen-Reed, G. and Reed, K. B. (2015) Engineering student self-assessment through confidence-based scoring. Advances in Engineering Education 4 (4): 8.
13 Developing and assessing inclusivity in group learning Theo Gilbert and Cordelia Bryan
Introduction

Higher education has an undisputed responsibility to produce graduates able to build cooperative collective intelligences during and after university. Institutions continue to rely heavily on group work for teaching and assessing students, and this is where that responsibility could be met in ways informed by a particular body of new multidisciplinary scholarship. In this regard, we track some innovations in developing and assessing group work which move well beyond the old Hofstedian and Belbin models. When these two models are used to direct group work, as is often the case in higher education, they can unwittingly license reductionist thinking about others among staff and students, and place constraints on members' ability to use the full range of their talents and abilities for the group's wellbeing and task achievement. These are the very issues that contribute to current low levels of student satisfaction with group work (National Union of Students, 2010; Turner, 2009). Some innovative work has been carried out in the performing arts, where collaboration is frequently the context of learning and where it is widely acknowledged among both students and tutors that the quality of performance stands or falls on the abilities of the group members to work 'effectively' together (Bryan, 2004a, 2004b). The emphasis at that time was on defining 'effective group behaviour' within an inclusive environment; on enabling students to develop such behaviours and then to demonstrate their value; and on learning how to self-, peer- (and tutor-) assess these behaviours. Evidence-based and practical approaches and materials which would address the areas of weakness the students (and staff) had identified included: how to apply basic group dynamic theory in differing contexts; how to give and receive feedback so that it will contribute to future learning; how to recognise, deal with and minimise the occurrence of negative group behaviours (as identified by both students and staff); and how to assess individual and group contributions to both task and process. A case study illustrating this approach is described in the first edition of this book (Bryan & Clegg, 2006). After participating in a problem-based learning workshop, group participants were asked to rate the following on a scale of 1–5: a) how well the group achieved its task as set out in the brief; b) how well you think you contributed to achieving the group task;
c) how well the group functioned as a group; and d) how well you think you performed as a group member. Group maintenance included relevant aspects of group dynamics, such as how well each individual listened to others; extrapolated salient points from muddled contributions; enabled quiet members to contribute; or applied a technique for dealing with unproductive disharmony within the group.1
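Purely as an illustration of how such ratings might be collated (the chapter prescribes no particular tooling, and the field and function names below are hypothetical), a short script could average the 1–5 responses for items a)–d) across a group:

```python
# Illustrative sketch only: collating the 1-5 workshop ratings for items
# a)-d) described above. Data layout and names are assumptions, not the
# authors' method.
from statistics import mean

ITEMS = (
    "a_group_task_achievement",
    "b_own_contribution_to_task",
    "c_group_functioning",
    "d_own_performance_as_member",
)

def summarise_ratings(responses):
    """responses: one dict per participant mapping each item to a 1-5 rating.
    Returns the group mean for each item, rounded to two decimal places."""
    return {item: round(mean(r[item] for r in responses), 2) for item in ITEMS}

example = [
    {"a_group_task_achievement": 4, "b_own_contribution_to_task": 3,
     "c_group_functioning": 5, "d_own_performance_as_member": 4},
    {"a_group_task_achievement": 5, "b_own_contribution_to_task": 4,
     "c_group_functioning": 4, "d_own_performance_as_member": 3},
]

print(summarise_ratings(example))
# {'a_group_task_achievement': 4.5, 'b_own_contribution_to_task': 3.5,
#  'c_group_functioning': 4.5, 'd_own_performance_as_member': 3.5}
```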
Developing group skills using compassion-focused pedagogy

This chapter, written more than a decade after the first edition, draws on developments in the science of compassion from neuroscience, and we point even more urgently to the vital place within the modern university curriculum of training and assessing students' emotionally intelligent actions, taken towards shaping positive group dynamics and interactions. We provide evidence that these are cognitive, deliberative behavioural skills that can be, and are being, trained and assessed in higher education, in very little time and at no financial cost. Furthermore, as with the examples from our earlier work, our approaches for developing emotionally intelligent behaviour can potentially be integrated into the group work of any subject discipline and can, therefore, be transferred and applied by students in other group settings. Because these behaviours are heard and seen as demonstrable group micro skills, they can be identified, observed and assessed. They may also be filmed for developmental purposes and for external verification. The behavioural skills we refer to are the micro skills of compassion that can be situated in unfolding group interactions, in real time, thereby preventing opportunities for their rehearsed, strategic, mechanistic enactment. Compassionate micro skills may easily be developed by first identifying some dysfunctional group archetypes. Table 13.1 addresses how to deal with three common and potentially negative archetypes of group work, namely the monopoliser, the colluder and the quiet student. It is worth noting that, to reduce distress in the group, students are best asked to use the techniques in column three of Table 13.1 (below) very early in a module. For example, suggesting that students use the third micro skill in column three only after one or two students have repeatedly been seen – perhaps from over-enthusiasm, or anxiety – cutting off the quieter students who try to speak, is too late. Delayed exposure to these techniques, after such interactional errors of judgement have been made by students, is unfair; it may appear to single out the 'wrongdoer' in his own and others' minds, thus introducing the avoidable and unnecessary risk of loss of face and humiliation, which is deeply damaging to group dynamics. Another concern among over-talkers is the belief that speaking less could damage their tutor- and peer-observed performance (Gilbert, 2016). But such students later reported that they had, instead, experienced advantages for the quality of their thinking processes, and therefore their performance in the group, when they were required to speak more concisely in order to share the floor with others.
Table 13.1 Three examples illustrating how to develop compassionate micro skills for effective group work

Example 1
• Signs of group communication dysfunction to notice: monopolisers. These over-talkers tend to fix eye contact with one person only in the group, often the person directly in front of them (Gilbert, 2016, 2017). Anxiety may be one reason (Yalom and Leszcz, 2005). The person to whom the monopoliser is directing all his/her attention/eye contact is now a colluder in an alpha pair (Bion, 1971), and he/she should notice this.
• Objective: to signal to the monopoliser that the group (including the monopoliser) needs other perspectives/input to optimise the quality of group problem solving, analysis and/or criticality on task.
• Compassionate micro skill used to counter the problem: 1. The colluder breaks eye contact with the monopoliser, directing/channelling the over-talker's eye contact to left and right – that is, to all other members in the group, as though the group is a single organism.

Example 2
• Signs of group communication dysfunction to notice: the colluder does not act, or acts and the monopoliser does not respond.
• Objective: other group members share responsibility to signal to the monopoliser that others also need eye contact, so that a more equal spread of participation can be facilitated.
• Compassionate micro skill used to counter the problem: 2. Other group members act non-verbally to break up the dyad. Slight body movements field-observed to be effective (Gilbert, 2012, 2016, 2017) include: slight hand waves, tipping the head sideways towards the colluder, a hand extended across the table, reaching and pulling gestures, and so on, as group members see fit, until the over-talker's eye contact becomes inclusive enough to facilitate participation (not simply verbal interruption) by others.

Example 3
• Signs of group communication dysfunction to notice: quiet students. Some students do not speak/contribute much to the discussion.
• Objective: for the group to notice, not normalise, this; and to create conditions in which the 'quiet' student has something to say and wants to say it.
• Compassionate micro skill used to counter the problem: 3. Group members invite the quiet student by name from time to time, e.g. 'What do you think, Sam?', also allowing Sam to say (something like) 'Nothing just now' so that the group moves on seamlessly, until Sam is invited again later, e.g. 'Sam, do you think that …?'

Source: Gilbert et al (2018). Note: students are supported in their ability to notice dysfunctional dynamics and guided as to what to watch out for.
This clearly aligns with Yalom and Leszcz's (2005) work in group psychotherapy, where they suggested of monopolisers (their term) that: 'you do not want to hear less; … you want to hear more' (Yalom's italics). Thus, describing herself as normally 'pretty dominant',2 this student in a study of the efficacy of compassion-focused pedagogy (Gilbert, 2016) was allowing the silences that others needed so that they could enter the discussion; she was listening more and yet working harder:

S1: … on subjects that I feel very confident about, I've taken a step back and not gone rushing in to make, you know, the first comment, the first argument against something – because other students are coming up with them themselves … I can then use their points to kind of continue. (S1 female, stage one, one-to-one interview)

This is an achievement in learning how to forbear from unproductive performance competition with other over-talkers in the group, which suggests the value of the micro skills of compassion:

I used to grit my teeth … But now … I mostly deal with it by waiting for them to finish their point – 'cause I don't want to cut anyone up. (S1 female, stage one, one-to-one interview)

And a result that surprised this student was that:

S1: I notice that they've got a reduction in how they're being as well. (S1 female, stage one, one-to-one interview)

Thus, all students are required to stay out of the silences that shyer students need to make their entry into the group discussion. The whole group is responsible for monitoring themselves and others when compassionate practice is required. What is particularly gratifying to note is that some students report an internalisation of the compassion-focused pedagogy when it is taken seriously enough by the institution to be assessed. That is, they report exercising their compassionate micro skills for group work on other modules where these skills are not assessed, and notice a resulting positive change in other people's responses to them in interactions. We, like others, strive to develop graduates who are motivated and equipped to build collaborative, problem-solving teams in the workplace and other communities, post-higher education. We argue that these skills are what determine how and why a group arrives at task achievement, or in some cases, whether it arrives at task achievement at all.
Assessing group skills
In group work, a powerful way of focusing students' attention on their skills at noticing and managing the psychosocial processes of the group productively is to award academic credit for proactively enhancing the social and learning experiences of others. Directing students' attention through this kind of institutional endorsement is, of course, using best practice in assessment for learning, and sends a clear message which is heard by students: 'these group skills are important as they can improve my grades'. Tutors can begin by setting the group two simple questions at the start of a module or workshop. Students then have time to practise watching and acting on behalf of all others in their groups. The questions are:

1. What can I do to enhance the social and learning experiences of my fellow students in this group work that they will most value in me?
2. What can my fellow students do in this group work to enhance my social and learning experiences that I will most value in them?

These two questions were originally devised by Elaine O'Connor, a former market research lecturer at the University of Hertfordshire, when some of her students objected to being put in teams with students they considered to be less able than themselves, and who might therefore need their support on a busy course. Post-group work, these objecting students were the most outspoken advocates of the exercise; the inclusion of these fellow students in the groups, they said, had developed their own subject-specific learning and social intelligence at key moments in the group work. If it is possible to change the membership of groups regularly, rather than continuing 'business as usual' in familiar cliques, this helps accelerate students' subject engagement with others, whose sometimes unexpected perspectives can extend the group's conceptual reach when it thinks together (Gilbert, 2016, 2017, 2018b). Students may also be required to respond to the above two questions, in writing, as part of a larger written assignment at the end of a module.
Closing the black and minority ethnic attainment gap

At the start of a module, 220 ethnically diverse computer science undergraduates at the University of Hertfordshire attended an interactive lecture (lectorial) specifically on the micro skills of compassion. They learned in the lectorial that the definition of compassion as agreed in neuroscience and biology (Sapolsky, 2017; Klimecki et al, 2014; Weng et al, 2013; Immordino-Yang et al, 2009), anthropology (Goetz et al, 2010) and psychology (Gilbert et al, 2017; Neff, 2003b) is: to notice distress and/or disadvantaging of others and commit to reducing it. Students were shown how to apply this scholarship by learning practical micro skills for disrupting the behaviours of some students who may monopolise discussions, perhaps because they are overly self-focused and/or very anxious (Yalom and Leszcz, 2005).
They also learned techniques for non-verbal interventions, such as purposefully inclusive eye contact, for helping normally quiet, non-contributing students to feel safe enough to offer more to the group, socially and intellectually (Gilbert, 2016, 2017; Gilbert et al, 2018). For the computer science students, the academic performance of the black and minority ethnic (BME) students in the intervention sample of ethnically diverse students (n = 220) showed no statistical evidence of an attainment gap. This was in contrast to the results obtained from the smaller control group (n = 27), which comprised the complete cohort on the module in the previous year. This data analysis – of the control and intervention groups – was conducted blind by one reader in statistics at the University of Edinburgh and another at the University of Hertfordshire. Across the range of statistical tests run in each university, no differences were found between the results obtained by each statistician (Gilbert, 2018). This is of note because the BME attainment gap is approximately 14% across the UK higher education sector, and this is reflected at the University of Hertfordshire, where the study was run. We believe that this additional positive outcome – a reduced BME attainment gap when compassion-focused pedagogy is adopted – has to do with how students' micro skills of signalling compassion to each other across the group during task-focused discussion reduce the need to pay attention to signs of potential social threat when making and responding to contributions. The compassion-focused pedagogy is likely to enhance levels of oxytocin in group members' brains and to reduce levels of adrenaline and cortisol (stress hormones). This is suggested by qualitative data on students' reports of anticipating each other's need to speak and feeling better able to 'read' each other's body language in this regard too. This may be the result of learning how to apply the compassionate micro skills, based on explicit in-class attention to and practice of the first component of compassion, namely, to notice signals of distress in others. The neuropeptide oxytocin is associated in human interactions with enhanced syncing of communicative behaviours and with an increased ability to read and interpret social signals accurately. It is also more conducive than adrenaline or cortisol to higher-order cognitive processes (Shahestani et al, 2013; Colonello et al, 2017; Page-Gould et al, 2008). In other words, it appears that the compassion-focused pedagogy enables group task-focused cognitive processing to take precedence over more psychobiologically determined requirements, such as personal social safety in the group. Students using the micro skills of compassion in their group work therefore expend less cognitive capacity on formulating and enacting defensive behavioural strategies; instead they are more likely to experience the group as a safe, resilient and productive place. In another study, applying compassion-focused pedagogy in small-group discussion seminars with business students (n = 38), the results were the same as those in the computer science study (Gilbert, 2016). The version of compassion-focused pedagogy that was used in the business school, and originally piloted in the humanities, is seen in Figure 13.1.
Figure 13.1 Compassion-focused pedagogy for higher education small-group discussion-based seminars/tutorials. The figure sets out three steps:

Step one – seminar one: 1. Speed meeting. 2. What is compassion? 3. Small-group, then whole-group, consensus on (a) noticing unhelpful seminar behaviours and (b) how to address these compassionately.

Step two – homework: after each weekly lecture, students carry out individual, independent research on the topic of the lecture.

Step three – in weekly seminars: in small groups, students share (present and join a discussion of) the research they have each done, and the tutor facilitates students in supporting each other to use the strategies they agreed during their work on (3a) and (3b) in step one. The final small-group discussions, at the end of the module, are filmed and each student is assessed according to the criteria in Table 13.2 (below).

Source: Gilbert (2016), reproduced by kind permission of the LINK journal, School of Education, University of Hertfordshire.
Essentially, though, compassion is pivotal again because of its property of centralising noticing and making meaning of behaviours around the group (empathy) and acting wisely on this (Gilbert et al, 2017). In effect, here, empathy (again as deliberative and not dependent for its existence on an emotion) is recruited from its own fMRI-identified neural circuitry into the (largely) differently located neural circuitry of compassion (also identified by fMRI; Klimecki et al, 2014; Weng et al, 2013). In relation to points 3a and 3b of step one in Figure 13.1 (above), see Table 13.1 (above) for examples of how anti-group behaviours (Nitsun, 1996) can be noticed and disrupted using compassion-focused pedagogy skills. Awarding academic credit for demonstrable positive group behaviour which enhances the quality of outcomes (Havergal, 2016) is not a new idea. It is a logical extension of the principle, applied throughout academia, that assessment drives learning, and is a recurring theme that runs throughout this book. We maintain that the main reason awarding credit for group skills is not yet common practice is its perceived complexity, which we dispute here. Attending to the psychosocial processes of group dynamics is not at all complex, inscrutable or inaccessible to the reasonably observant eye of either an educator or a student.
This is because there are key behavioural processes which play out again and again in group work where competitive individualism and/or disengagement and withdrawal are features of the group behavioural 'norm'. The ability to notice these, make sense of them and address them where they are harmful to the equal spread of participation and quality of thinking in the group is quickly and easily trainable and, as mentioned previously, comes at no financial cost. Table 13.2 (below) illustrates a marking scheme which has easily been adapted to suit assessed discussions in different disciplines.
Conclusion

Each of the strategies we have discussed in this chapter for optimising the student experience of group processes is rationalised and elaborated by a rich, intersecting, cross-corroborating, interdisciplinary theoretical base. Put differently, there is a bespoke scholarship supporting compassion as a secular, cognitive, deliberative (not necessarily affective; Sapolsky, 2017, p. 551) capacity for optimising social and learning outcomes in task-focused group work in the workplace and in education, including higher education. In this chapter, we have argued that the objections still raised to embedding and assessing compassion within the school and university curriculum – for example, that it is impossible to assess, that it will dumb down the subject, that it is outside the expertise of teachers, that it has nothing to do with the development of students as researchers, or that it is patronising to students – all fly in the face of current evidence as to what happens when a curriculum does acknowledge the role of student-led compassionate pedagogy. Not to put too fine a point on it, Google recently completed a US$5 million study to identify why a few of its teams – out of a sample of 180 teams the company studied from its workforce of 55,000 – were such outstanding performers (Duhigg, 2016). Because each outstanding team was so different from the other high-performing teams, it took five years to pin down beyond doubt the key defining trait of such teams. It was found not to be ambition or drive; competitiveness; a good 'leader'; experience; certain personality mixes in the group; or a particular skill set. It was kindness that was found to raise the collective intelligence of the group (ibid.); where kindness was absent from the group behavioural norm, group intelligence was diminished, even in teams composed of outstandingly bright individuals. Thus, it appears higher education is now lagging behind industry. However, if HE acts on applied research from compassion-focused pedagogy, it can still catch up.
Table 13.2 Marking criteria for small-group research-based discussion

Student: …… Candidate No: …… Date: …… Tutor: …… Module: …… Code: ……

Content (70–90%)

Research (30%). Grade: A B C D E F
• Upper-end descriptor: the research undertaken by the candidate for the examination topic is demonstrated to be extensive; it is appropriate in content, level and relevance.
• Lower-end descriptor: little or no evidence is offered of sufficient and/or appropriate research.

Critical perspectives (40%). Grade: A B C D E F
• Upper-end descriptor: critical perspectives – as in questions posed, arguments offered, analytical and/or evaluative insights into the student's own research and that contributed by others – are integrated relevantly and helpfully into the group discussion. The student helps keep the group focused on task.
• Lower-end descriptor: few or no critical perspectives – as in questions posed, arguments offered, analytical or evaluative insights into the student's own research and that contributed by others – are demonstrated during the discussion. The student may contribute little by remaining silent, or else may input in ways that lead the group off task.

Group management skills (10–30%)

Body language (10%). Grade: A B C D E F
• Upper-end descriptor: eye contact and other body language is appropriately inclusive.
• Lower-end descriptor: body language signals little interest in or engagement with what is being said by others, or may focus repeatedly on some students to the exclusion of others.

Language (10%). Grade: A B C D E F
• Upper-end descriptor: language is graded (it is international English and it is appropriately paced). It is also mindful in other respects when disagreeing and/or critiquing, when questioning, and when enacting inclusivity skills (see below).
• Lower-end descriptor: the student may speak too fast or too quietly; use excluding, localised English; or use inappropriately individualistic or disrespectful language when challenging or questioning others, or when enacting some group management strategies.

Group management strategies (10%). Grade: A B C D E F
• Upper-end descriptor: eliciting, encouraging, acknowledging; accommodating reasonable hesitations/silences while less confident speakers are engaging the group's attention; checking the group's understanding when speaking; intervening proactively and compassionately in the excluding behaviours of others, e.g. monopolising.
• Lower-end descriptor: the student may tend to monopolise discussion or speak over others, and may make little or no attempt to check the group's understanding (of his/her own research), e.g. when presenting an unfamiliar term or concept during presentations; to get clarification when it is needed; to listen and respond relevantly to others; or to proactively support the efforts of others to contribute effectively to group task achievement.

Comments:
First assessor: ……… Second assessor: ……… Grade: ………
Source: Gilbert (2016).
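To make the weighting arithmetic in Table 13.2 concrete (this is our own illustration; the table specifies only percentage weights and letter grades, so the numeric 0–100 scoring and all names below are assumptions), component scores could be combined into an overall mark as follows:

```python
# Hypothetical sketch of the weighting implied by Table 13.2: content 70%
# (research 30% + critical perspectives 40%) and group management skills 30%
# (body language, language and management strategies at 10% each). Assumes
# each component has first been converted to a score out of 100.

WEIGHTS = {
    "research": 0.30,
    "critical_perspectives": 0.40,
    "body_language": 0.10,
    "language": 0.10,
    "management_strategies": 0.10,
}

def overall_mark(scores: dict[str, float]) -> float:
    """Combine component scores (each 0-100) into a weighted overall mark."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

print(overall_mark({
    "research": 65,
    "critical_perspectives": 70,
    "body_language": 80,
    "language": 75,
    "management_strategies": 60,
}))  # -> 69.0
```

Tutors adapting the scheme to their own discipline would simply adjust the weights, provided they still sum to one.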
Notes
1 All materials developed over the three years' research are available in easily downloadable form from the project website (Assessing Group Practice) at www.lancaster.ac.uk/palatine/AGP/index.htm (accessed 5 January 2019).
2 S1 female, stage one, one-to-one interview.
References Bion, W. (1971) Experiences in Groups. London: Tavistock Publications. Bryan, C. (2006) Developing group learning through assessment. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 150–157. Bryan, C. (2004a) Assessing the creative work of groups. In: Miell, D. and Littleton, K. (eds) Collaborative Creativity. London: Free Association Books, pp. 52–64. Bryan, C. (2004b) The case for developing and assessing group practice. In: Bryan, C. (ed.) Assessing Group Practice. Seda Paper 117. Birmingham: Staff and Educational Development Association, pp. 7–17. Colonello, V., Petrocchi, N., Heinrichs, M. (2017) The psychobiological foundation of prosocial relationships: the role of oxytocin in daily social exchanges. In: Gilbert, P. (ed.) Compassion: Concepts, Research and Applications. New York, NY: Routledge, pp. 105–119.
Duhigg, C. (2016) What Google learned from its quest to build the perfect team. New York Times Magazine, 25 February. www.nytimes.com/2016/02/28/magazine/what-google-learned-from-its-quest-to-build-the-perfect-team.html (accessed 5 January 2019). Gilbert, P., Catarino, F., Duarte, C., Matos, M., Kolts, R., Stubbs, J., Ceresatto, L., Duarte, J., Pinto-Gouveia, J., Basran, J. (2017) The development of compassionate engagement and action scales for self and others. Journal of Compassionate Healthcare 4: 4. Gilbert, T. (2018) Embedding and assessing compassion on the university curriculum. University of Hertfordshire. YouTube 19 January. www.youtube.com/watch?v=3jFVTCuSCOg (accessed 5 January 2019). Gilbert, T. (2017) When looking is allowed: what compassionate group work looks like in a UK university. In: Gibbs, P. (ed.) The Pedagogy of Compassion at the Heart of Higher Education. London: Springer, pp. 189–202. Gilbert, T. (2016) Assess compassion in higher education? Why and how would we do that? LINK 2 (1). www.herts.ac.uk/link/volume-2,-issue-1/assess-compassion-in-higher-education-how-and-why-would-we-do-that (accessed 5 January 2019). Gilbert, T. (2012) Enhancing inclusivity in the higher education discussion group: strategies for employability, internationalisation and assessment in a UK university. In: Thornton, M. and Wankede, G. (eds) Widening Participation and Social Cohesion Amongst Diverse, Disadvantaged and Minority Groups in Higher Education. Mumbai: Tata Institute of Social Sciences, pp. 81–92. Gilbert, T., Doolan, M., Beka, S., Spencer, N., Crotta, M., Davari, S. (2018) Compassion on university degree programmes at a UK university: the neuroscience of effective group work. Journal of Research in Innovative Teaching and Learning 11 (1): 4–21. Goetz, L., Keltner, D., Simon-Thomas, E. (2010) Compassion: an evolutionary analysis and empirical review. Psychological Bulletin 136 (6): 351–374. Havergal, C. (2016) Students assessed on getting peers to contribute to seminars. Times Higher Education 13 July. www.timeshighereducation.com/news/students-assessed-on-getting-peers-to-contribute-to-seminars (accessed 5 January 2019). Immordino-Yang, M. H., McColl, A., Damasio, H., Damasio, A. (2009) Neural correlates of admiration and compassion. Proceedings of the National Academy of Sciences USA 106 (19): 8021–8026. Klimecki, O. M., Leiberg, S., Ricard, M., Singer, T. (2014) Differential pattern of functional brain plasticity after compassion and empathy training. Social Cognitive and Affective Neuroscience 9 (6): 873–879. National Union of Students (2010) Race for Equality: A Report on the Experiences of Black Students in Further and Higher Education. London: NUS. Neff, K. D. (2003b) The development and validation of a scale to measure self-compassion. Self and Identity 2: 223–250. Nitsun, M. (1996) The Anti-Group: Destructive Forces in the Group and their Creative Potential. London: Routledge. Page-Gould, E., Mendoza-Denton, R., Tropp, L. (2008) With a little help from my cross-group friend: reducing anxiety in intergroup contexts through cross-group friendship. Journal of Personality and Social Psychology 95 (5): 1080–1094. Sapolsky, R. (2017) Behave: The Biology of Humans at our Best and Worst. London: Bodley Head.
Shahestani, S., Kemp, A. H., Guastella, A. J. (2013) The impact of a single administration of intranasal oxytocin on the recognition of basic emotions in humans: a meta-analysis. Neuropsychopharmacology 38: 1929–1936. Turner, Y. (2009) 'Knowing me, knowing you', is there nothing we can do? Pedagogic challenges in positioning the HE classroom as an international learning space. Journal of Studies in International Education 13 (2): 240–255. Weng, H. Y., Fox, A. S., Shackman, A. J., Stodola, D. E., Caldwell, J. Z., Olson, M. C., Rogers, G. M., Davidson, R. J. (2013) Compassion training alters altruism and neural responses to suffering. Psychological Science 24 (7): 1171–1178. Yalom, I. and Leszcz, M. (2005) The Theory and Practice of Group Psychotherapy (5th edn). New York, NY: Basic Books.
14 Designing engaging assessment through the use of social media and collaborative technologies Richard Walker and Martin Jenkins
Key take-aways from this chapter
• an understanding of the opportunities and appropriate contexts for the use of social media and collaborative technologies in 'assessment as learning' activities to support conceptual learning, combined with transferable skills development;
• a presentation of models that may be used to reflect on the use of technology in assessment design, including the scope to incorporate social media and collaborative technologies in assessment activities;
• an introduction to digital storytelling and digital publication as assessment design methods.
Introduction

While the potential for learning technologies to support and enhance assessment processes has long been recognised (e.g. Jisc, 2007), there has been limited evidence to show the impact of technology-enabled assessment on learning outcomes within higher education (Bennett et al, 2017). The use of learning technologies is purported to add value to assessment practices in a variety of ways, supporting greater learner autonomy, agency and self-regulation through reliable activities which deliver immediate feedback on performance (Oldfield et al, 2012). However, the reality of how learning technologies are being used across the higher education sector commonly falls short of these claims, with tools often used to replicate traditional assessment practices rather than to establish new ways of assessing and supporting student learning (Sweeney et al, 2017). The use of technology has predominantly focused on the performance of formative and summative tasks as a way of measuring learner progression, specifically in relation to the mastery of knowledge through objective online tests (Nicol & Milligan, 2006). As Laurillard (2002) has observed, this usage has replicated traditional modes of assessment as a way of measuring knowledge acquisition (conceptual learning), as opposed to focusing on the demonstration of students' skills and competencies
through the performance of authentic tasks. The latter 'assessment as learning' approach places an emphasis instead on the development of students' problem-solving and self-regulation skills and capabilities for future learning as much as on the end product (i.e. the mark that is attained at the end of an assessment). McDowell (2012) identifies the key components of 'assessment as learning' as being:
• authenticity;
• enhancing formative assessment;
• active and participatory learning;
• feedback through dialogue and participation;
• student autonomy.
The distinction between technology-enabled traditional ('assessment of learning') and 'assessment as learning' approaches is captured in Figure 14.1. The figure presents different modes of technology-enabled assessment, which are mapped in relation to the level of collaboration/participation in the performance of the task and the degree to which the task encourages critical reflection on learning. All four quadrants in Figure 14.1 represent valid applications of technology to support assessment, although there is a clear distinction between the left-hand and right-hand quadrants in terms of their approach and targeted assessment objectives. The left-hand quadrants focus on the mastery of conceptual knowledge through the completion of tests and presentation of reports, with the final mark representing the key product of the assessment activity. The right-hand quadrants place a stronger emphasis on the learning process, with attention to the development of students' metacognitive skills through active learning and self-regulation of the assessed task. These quadrants also indicate how opportunities for students to demonstrate skills relevant to both academic and professional domains may be designed into the assessment process through digital communication and networking activities, based on the use of collaborative tools and public-facing social media tools such as Twitter. These activities specifically target students' ability to seek out and act on feedback received in the performance of a task. Such transferable skills are important, given that an estimated 90% of UK jobs now require some level of digital competency; developing them equips students for living, learning, working and presenting themselves in a digital society (Jisc, 2014). In the next section, we explore how 'assessment as learning' tasks may be designed into study programmes through the use of collaborative tools and social media technology, drawing on case study evidence from the Universities of Coventry and York. The case studies highlight a variety of ways in which assessment tasks may be introduced within the curriculum to support an ongoing learning process, with specific attention to the development of students' real-world digital skills and literacies as part of the assessment design.
Figure 14.1 Categorisation of technology-enabled assessment modes. The figure is a matrix whose vertical axis indicates increased collaboration and participation and whose horizontal axis indicates an increased focus on metacognitive skills, giving four quadrants:

• Assessment of learning – group tasks: using technology to facilitate the performance of group tasks (e.g. collaborative report-writing). Capabilities: presentation skills; organisation skills; collaboration; knowledge acquisition.
• Assessment as learning – collaborative tasks: tasks that require critical engagement with internal and external audiences as a means of developing learning (e.g. tweeting and blogging on group research, inviting peer review and discussion on key findings). Capabilities: self-regulation; collaboration; multimodal literacies; critical analysis.
• Assessment of learning – individual tasks: using technology to assess knowledge (e.g. by completion of online MCQ tests). Capabilities: knowledge acquisition.
• Assessment as learning – individual tasks: assessments that require individuals to use technology to demonstrate and reflect on their learning (e.g. reflective blogs and portfolios). Capabilities: self-regulation; organisation and presentation skills; critical analysis.
Case studies of technology-enabled 'assessment as learning' tasks

Table 14.1 presents a summary of the technology-enabled case study approaches.

Individual 'assessment as learning' tasks

Case study (i): digital storytelling

Digital storytelling as an assessment approach invites students to develop a narrative to demonstrate their knowledge of, and critical reflection on, a specific theme. The act of creating a digital story engages learning processes such as the evaluation, selection, rejection, structuring, ordering, presentation and synthesis of supporting evidence in relation to the message to be conveyed, and also requires students to demonstrate a keen appreciation of the target audience for the narrative. These skills are key to the development of the story narrative, but are also applied to other aspects of the production, such as the selection of images and sound to complement the narrative (Gravestock & Jenkins, 2009).
Table 14.1 Summary of technology-enabled case study approaches

(i) Digital storytelling (occupational therapy). Activity: individual presentation of a reflective career narrative. Targeted learning outcomes/skills development: self-reflection; critical reflection on career development.

(ii) Digital communication to professional audiences (music technology). Activity: individual portfolio presentation to professional audiences – networking engagement through tweeting/blogging. Targeted learning outcomes/skills development: critical thinking; effective presentation skills; critical engagement with professional audiences.

(iii) Digital communication to public and professional audiences (heritage practice). Activity: group-based broadcasting of research findings to a public audience. Targeted learning outcomes/skills development: effective presentation skills through audio/video media.

(iv) Peer review and critical engagement with professional networks (photography). Activity: cohort-based use of social media to support peer review of work. Targeted learning outcomes/skills development: critical thinking; effective presentation/networking skills; critical engagement with professional audiences.
As such, this approach provides a means of making tacit knowledge and reflection explicit, and it has been identified as multimodal, developing multiple literacies (Jenkins & Gravestock, 2012), so addressing many of the 'assessment as learning' components identified above. The occupational therapy undergraduate programme at Coventry University makes use of digital storytelling for students to develop and present their career narrative. Through this narrative they are required to reflect on their skills development and personal learning journey. The assessment task forms part of an employability and entrepreneurship module; the use of digital storytelling is seen to help develop students' employability skills both through the use of technology, which builds digital literacy skills, and through an assessment design whose focus on story and on the opportunity for formative feedback (as stories are told and responded to) develops self-reflection. The application of technology has opened up the possibility of using storytelling in a creative way to convey key conceptual knowledge in a structured and engaging narrative to a target audience. It is important to note, though, that while many students may have the requisite level of technical skill to tackle a digital storytelling task, assumptions should not be made regarding their readiness to do so, given that the task may be perceived as an unconventional assessment activity – far riskier than writing an essay.
Students should be well informed about the rationale for the activity and how it should be tackled. This is best done by showing them examples that you have created and by inviting them to engage with these resources in formative tasks first.

Case study (ii): individual blogging and portfolio presentation to an external audience

The personal and professional practitioner core module for the MSc Audio and Music Technology (Electronic Engineering) at the University of York incorporates aspects of authentic, real-world learning into its assessment design in a different way, requiring students to develop a public website with an active weekly blog and to create a LinkedIn profile, used for professional networking and job finding, enabling them to showcase their work and interests and preparing them for the world of work. The website is designed and launched in the autumn term and runs across the academic year; it is a vehicle for public engagement with a student's personal and professional development, and is intended to be portable, accompanying students after graduation as they develop their professional career pathways. The weekly blog is assessed, with students choosing four of their best six blog posts for review. Thirty per cent of the final mark is based on the degree to which they have successfully promoted their services and skills, with other assessment criteria touching on usability and presentation skills, as well as the overall quality of the content. From experience, it is advisable to help students with the planning and refinement of their website and blogging activities, building self-regulation into the module design so that they are encouraged to reflect on their progress as they develop their showcase. This requirement has been addressed in the personal and professional core module through the provision of a private reflective log using Google Sites, which is shared with a supervisor and used as a space for reflection on progress (Baruah et al, 2017).

Collaborative 'assessment as learning' tasks

Case study (iii): digital communication to public and professional audiences

Heritage practice (archaeology) at the University of York has taken the demonstration and dissemination of learning to a public audience a stage further, developing storytelling and creative media skills as part of an integrated group-based assessment designed for a first-year undergraduate archaeology module in heritage practice. This instructional approach is intended to help students develop authentic digital skills that will prepare them for future careers in archaeology and related disciplines (Perry, 2015).
Students are encouraged to work collaboratively to develop specific competencies in audio technology (through podcasting), video development and other digital publication modes, with the aim of developing meaningful interpretative resources for real-world heritage sites, contributing directly to their official communication strategies and public engagement goals. Students are supported in this work through technical workshops delivered by the central e-learning team on how to develop multimedia resources, as well as through assignment briefs, participatory design sessions and daily review and critique sessions, each aimed at refining their outputs for public audiences and anticipating obstacles that might arise with access to the media resources they publish online. Feedback from course participants tells us that these steps are important in guiding individuals with limited technical knowledge through the entire production process to create high-quality output via a highly practical, project-based approach. The module follows a co-design model, with the focus on students developing technical and project management skills as content creators and equal collaborators in the work alongside their external heritage partners (local museums and archaeological sites) and academic staff. This has resulted in a diverse range of media outputs being developed by students, ranging from short films about a Mesolithic excavation site (Star Carr) to prototype mobile apps (Breary Banks) and a video game about the Roman site of Malton.

Case study (iv): peer review and critical engagement with professional networks

Social media may also be used to invite critical engagement with professional networks. The undergraduate photography programme at Coventry University has developed a borderless course delivered using a range of social media. Blogs and tweets are used to encourage students to be reflective and to share ideas with a wider community that extends beyond the students enrolled on their programme. The open nature of the coursework, using web-based applications such as Flickr, Vimeo, SoundCloud and Facebook, encourages this sharing and presents an opportunity for participants to extend their network to professionals in their chosen field, who provide feedback on their work. For example, students at Coventry have used Twitter to share ideas and discuss videos related to their discipline. In doing so, the discussion has been extended to a wider community including the original authors of the videos, which has enhanced both the sharing of ideas and the overall learning experience. Students are also provided with closed spaces (online forums) in which staff provide one-to-one support and feedback (McGill & Gray, 2015). From our experience of using social media tools such as Twitter, they can be effective as a medium to stimulate commenting and interaction on a specific theme, but it is worth bearing in mind that the depth and breadth of contributions may vary greatly and can become quite granular in nature.
Providing space for more considered, reflective discussion through blogging or document-sharing applications may help to address this challenge, building on the more dynamic and interactive nature of social media tools and in this way enhancing the overall learning experience.
Challenges to designing and delivering 'assessment as learning' activities

Identifying opportunities for innovation in assessment design

Making the transition from traditional assessment activities to activities with a more explicit learning focus is not without its challenges. Bennett et al (2017) highlight the tension that academic staff may feel between the need to find efficiencies in the design and delivery of assessments and the scope for innovation in their practice. Teaching staff need to be able to articulate learning outcomes clearly to students and link these to appropriate designs and potential technologies in the assessment design. Figure 14.1 provides one particular model which may be used to conceptualise existing practice and stimulate discussion around the design of assessment and how it might be adapted to support the targeted learning outcomes. The model is intentionally focused on task design in the first instance, rather than on the choice of technology, in order to ensure that pedagogic objectives are being met. As Tay and Allen observe with reference to the inclusion of social media in assessed activities:

it is the particular pedagogic application of social media – not the technology itself – that will lead to a constructivist learning outcome. Whereas social media might afford us possibilities for collaboration, shared content creation, and participation in knowledge building, those possibilities need to be actualised through the effective integration of social media into learning environments. (Tay & Allen, 2011, p. 156)

Implementing change

While assessment design should be pedagogically led, there is a risk that the choice of technology may not fully meet the requirements of the task in hand, in this way creating a barrier to learning. As Sweeney et al (2017) conclude from their review of technology-enhanced assessment practice, the affordances of the technologies need to be clearly understood in relation to the stated pedagogic aims that they are supporting. Figure 14.2 is intended to help address this challenge. The model maps different technologies onto the matrix model produced in Figure 14.1, providing a steer on the range of technology choices for the desired use case. Figures 14.1 and 14.2 together act as tools that may be used to help review and plan assessment activities at a module or programme level.
Figure 14.2 Tools mapped to technology-enabled assessment modes. Using the same axes as Figure 14.1 (increased collaboration and participation; increased focus on metacognitive skills), the figure maps example tools onto each quadrant:

• Assessment of learning – group tasks: document-sharing applications; wikis; content-authoring tools for video and podcast artefacts.
• Assessment as learning – collaborative tasks: digital storytelling (audio, video, images); social media for peer review (i.e. blogs, Twitter).
• Assessment of learning – individual tasks: MCQs; learning units (e.g. SCORM packages).
• Assessment as learning – individual tasks: blogs; portfolios; simulations.
For example, the intended outcomes/capabilities for an assessment activity may be mapped to the appropriate quadrant in Figure 14.1. Using the figure in this way makes explicit the intended outcomes for the proposed activity, and the ensuing discussion will consequently help a course team to develop a shared understanding of these targeted outcomes. This initial mapping would then be followed up by a discussion of the most appropriate tools for the assessment task, using Figure 14.2 as a reference point. We should also think carefully about how technology-enabled 'assessment as learning' tasks are presented to students, ensuring that participants understand the rationale and targeted working methods for the activity that you are asking them to perform. This touches on a range of instructional responsibilities in preparing students for effective engagement with the assessed task. Drawing on Walker and Baets' (2009) five-stage blended delivery model, these responsibilities may be summarised as follows:
• socialising: induction – modelling of the assessed task and targeted learning behaviour; building confidence and addressing technical and learning competencies through a preliminary workshop or guided formative task;
• supporting: just-in-time instructions; modelling of targeted learning; ongoing provision of feedback/technical support (tips);
• sustaining: monitoring of work; ongoing evaluation and accountability – 'little and often';
• interlinking and summing up: acknowledge and summarise online contributions in class; invite class presentations on collaborative work (peer accountability); make explicit the learning outcomes from class-based and online activities.
Conclusion

The use of technology within UK higher education has up to now been mainly directed towards supporting traditional models of assessment, helping to create efficiencies in the delivery and management of these activities (Walker et al, 2017). Yet it is clear from the examples presented in this chapter that the potential exists for new models of assessment to be developed – drawing on the affordances of collaborative technologies and social media – which address both the mastery of conceptual knowledge and the development of metacognitive and digital skills through the performance of authentic tasks. These tasks can be designed to tap students' creativity and scaffold the development of authentic skills that graduates can take with them and apply in the workplace, as summed up in the following student testimony.

I really enjoyed the video project as it has allowed for creativity, freedom of thought and a learning experience that feels unique compared to other courses. I also feel it has prepared me at an early stage for the kind of skills that a future place of work in heritage may require. (Heritage practice student, University of York)

We observe, though, that the selection of collaborative technologies needs to be carefully thought through, and will not automatically lead to effective student engagement and the realisation of targeted outcomes unless the toolset is an integrated part of the assessment design and is presented to students in a coherent way, with the rationale for the assessed task and the targeted working methods clearly addressed. Decision making over the selection and embedding of collaborative technologies is best realised through dialogue within course teams, drawing on the expertise of learning technologists and academic developers. The matrix model in this chapter is intended to help facilitate these conversations and encourage new approaches to assessment, highlighting potential technologies that may be employed within an integrated 'assessment as learning' design.
References Baruah, B., Ward, T., Brereton, J. (2017) An e-learning tool for reflective practice and enhancing employability among engineering students. Presented at the 27th European Association for Education in Electrical and Information Engineering (EAEEIE) Annual Conference, Grenoble, France. Bennett, S., Dawson, P., Bearman, M., Molloy, E., Boud, D. (2017) How technology shapes assessment design: findings from a study of university teachers. British Journal of Educational Technology 48 (2): 672–682. Gravestock, P. and Jenkins, M. (2009) Digital storytelling and its pedagogical impact. In: Mayes, T., Morrison, D., Mellar, H., Bullen, P., Oliver, M. (eds) Transforming Higher Education through Technology-Enhanced Learning. York: Higher Education Academy, pp. 249–264.
Jenkins, M. and Gravestock, P. (2012) Digital storytelling as an alternative assessment. In: Clouder, L., Broughan, C., Jewell, S., Steventon, G. (eds) Improving Student Engagement and Development Through Assessment: Theory and Practice in Higher Education. London: Routledge, pp. 126–137. Jisc (2014) Developing Digital Literacies. Jisc Guide. Bristol: Jisc. www.jisc.ac.uk/guides/developing-digital-literacies (accessed 5 January 2019). Jisc (2007) Effective Practice with e-Assessment: An Overview of Technologies, Policies and Practice in Further and Higher Education. Bristol: Jisc. http://webarchive.nationalarchives.gov.uk/20140702223032/www.jisc.ac.uk/media/documents/themes/elearning/effpraceassess.pdf (accessed 5 January 2019). Laurillard, D. (2002) Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd edn). London: Routledge Falmer. McDowell, L. (2012) Assessment for learning. In: Clouder, L., Broughan, C., Jewell, S., Steventon, G. (eds) Improving Student Engagement and Development Through Assessment: Theory and Practice in Higher Education. London: Routledge, pp. 73–85. McGill, L. and Gray, T. (2015) Open Media Classes at Coventry University: Final Evaluation Report 2015. Bristol: Jisc. Nicol, D. and Milligan, C. (2006) Rethinking technology-supported assessment practices in relation to the seven principles of good feedback practice. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 64–77. Oldfield, A., Broadfoot, P., Sutherland, R., Timmis, S. (2012) Assessment in a Digital Age: A Research Review. Bristol: Graduate School of Education, University of Bristol. Perry, S. (2015) Changing the way archaeologists work: blogging and the development of expertise. Internet Archaeology 39. DOI: https://doi.org/10.11141/ia.39.9 (accessed 5 January 2019). Sweeney, T., West, D., Groessler, A., Haynie, A., Higgs, B. M., Macaulay, J., Mercer-Mapstone, L., Yeo, M. (2017) Where's the transformation? Unlocking the potential of technology-enhanced assessment. Teaching and Learning Inquiry 5 (1): 1–16. Tay, E. and Allen, M. (2011) Designing social media into university learning: technology of collaboration or collaboration for technology? Educational Media International 48 (3): 151–163. Walker, R. and Baets, W. (2009) Instructional design for class-based and computer-mediated learning: creating the right blend for student-centred learning. In: Donnelly, R. and McSweeney, F. (eds) Applied E-Learning and E-Teaching in Higher Education. New York, NY: Information Science Reference, pp. 241–261. Walker, R., Jenkins, M., Voce, J. (2017) The rhetoric and reality of technology-enhanced learning developments in UK higher education: reflections on recent UCISA research findings (2012–2016). Interactive Learning Environments 26 (7): 858–868.
173
15 Developing autonomy via assessment for learning
Students' views of their involvement in self and peer review activities
Kay Sambell and Alistair Sambell

Introduction

Assessment for learning (AfL) in higher education is now a burgeoning international movement which seeks to ensure that assessment is fully integrated into learning and teaching, such that it becomes a pedagogical tool for promoting students' learning, rather than solely a means of measuring their achievement and assuring standards (Sambell et al, 2013; Carless, 2017). A major concern of AfL is not only to support student success within the university, but also to promote approaches to learning and dispositions which, in the longer term, prioritise the development of autonomous individuals who are well-equipped to navigate the complexities of an ever-changing future world (Mclean, 2018). From this viewpoint, ensuring that students explicitly learn to develop the capacity to evaluate their own work and that of others is a key priority. Well-developed evaluative skills form the bedrock of students' abilities to comprehend and act upon feedback from various sources and help prepare them to learn independently of the teacher. These capabilities are at the heart of autonomous learning and of the graduate qualities valued by employers and in professional practice (Tai et al, 2017), so assessment experts increasingly argue that we should design university assessment processes to nurture and develop these graduate attributes. Building on our longstanding research interests in the impact of assessment on student learning (Sambell et al, 1997), we have helped to pioneer the AfL movement in higher education. The work reported here formed part of our ground-breaking effort to embed AfL in institutional policy as well as in individual practice (McDowell et al, 2011). This chapter focuses on two case studies in which lecturers employed our specific model of AfL (Sambell et al, 2013) to redesign assessment practices across a whole module. The holistic model we developed established a set of six evidence-informed design principles (outlined later in the chapter) which practitioners subsequently used as a basis for their pedagogical developments. The model foregrounds pedagogies that foster the active involvement of students in the assessment and feedback process, supported by learning environments that encourage active learning and the development of learning communities within which students are involved
in co-constructing meaning, rather than passively receiving knowledge from teachers. From this perspective, learning takes place in social settings, often associated with work on communities of practice (Lave & Wenger, 1991). As students become immersed in a continuous flow of discourse within their chosen field they learn to orientate themselves to the standards for work in the subject or disciplinary area and the criteria that are used to decide what counts as quality. They do this through dialogue with more experienced others as well as their peers. Explicit opportunities to activate teacher–student dialogue about questions of quality and the reasoning behind the making of judgements helpfully draw students' attention towards important principles and processes. Hence, in each case, self- and peer reviewing of formative tasks, accompanied by extensive student–teacher dialogue about corresponding teacher reviews, were embedded as pedagogical techniques to help students to evaluate their own work partway through the module. Analysis of the student viewpoint on these reviewing activities is used to highlight some of the issues that surround the use of assessment to develop autonomy as an outcome. Pedagogical practices that are designed to move assessment and feedback processes more firmly into students' own hands are becoming increasingly common (Tai et al, 2017). This is in response to the growing recognition that if students are to become independent learners who are well-prepared for the learning requirements of contemporary life and employability in the longer term, as well as being well-equipped to appreciate external feedback comments and improve their work in the here-and-now, they need explicit engagement in assessment activities which empower them to build the capability to develop and rely upon their own informed evaluative judgements (Boud & Falchikov, 2007) and take control of their own learning. This approach is less concerned with whether students can learn to grade their own work in line with teachers. Instead, it is much more concerned with approaches that, first, engage learners in identifying criteria and standards, then, second, engage them in applying them to make and justify judgements about work, so they gradually learn to refine their understandings of quality and calibrate them according to the disciplinary context they occupy (Tai et al, 2017). For instance, formative and summative self-evaluation activities, and the closely-related activity of peer review or reciprocal peer feedback situations, plus the dialogic analysis of exemplars, are becoming increasingly common practices which teachers use to help students hone their appraisal skills in the specific context of the subject domain or setting (Sadler, 2010; To & Carless, 2016; Nicol et al, 2014). Often, this is because they are believed to be valuable activities that develop students' abilities to become realistic judges of their own performance, enabling them to effectively monitor their own learning, rather than relying on their teachers to fulfil this role for them. As such, these responsibility-sharing assessment practices, which support students to construct, calibrate and hone their own internal feedback (Nicol & Macfarlane-Dick, 2006), are commonly regarded as important tools for learning, frequently linked to the notion of promoting, practising and developing autonomy (Nicol, 2014; Knight & Yorke, 2003).
Autonomy and learning

Explicitly involving students in the act of evaluating work has been frequently promoted on the grounds that it develops the capacity to self-monitor by raising consciousness of metacognitive processes and learning-to-learn skills. It is often seen as key to the ability to operate independently on future occasions, for example, in professional practice, where graduates will be expected to seek and take into account diverse forms of information without explicit direction from a teacher-figure (Tai et al, 2017). This means that developing learners' capacity for the high-order cognitive ability required to make evaluative judgements, and raising awareness of the importance of this skill, is frequently linked to the employability agenda (Tomlinson, 2012). In this general way self-assessment has long been viewed as 'a particularly important part of the thinking of those who are committed to such goals as student autonomy' (Boud, 1995, p. 14). When more specifically related to learning, however, autonomy and its development is clearly a complex construct. It is typically thought to incorporate several dimensions, including different domains of knowledge (Brew, 1995, p. 48), cognitive elements, the self and the subject matter being studied. Ecclestone (2002) suggests there are important distinctions to be made between different forms of autonomy, most significantly between procedural and critical autonomy. The first relates to the capacity to generally manage one's studies, referring to matters such as organising your time, meeting deadlines, paying attention to requirements and following guidance and instructions. The second relates to students engaging analytically with concepts, practices and debates in a subject, such that students are increasingly able to handle knowledge in ways which enable them to think for themselves. Examples might include being able to look for evidence to build an argument or consciously searching for meaning when researching a topic. At higher levels, though, critical autonomy might include seeing knowledge as provisional, seeing knowledge in its wider context, recognising different perspectives and the importance of social and personal responsibility. Such qualities are often seen as generic outcomes of higher education (Perry, 1999). These definitions can be usefully related to the concept a learner has of their own capacity for exercising control and assuming responsibility within a specific learning context, and are used to analyse and illuminate students' views of the two case studies under discussion.

Developing autonomy via assessment for learning

The key educational purpose of the assessment methods reported here was to develop student autonomy, with a view to encouraging the long-term academic and professional development of the undergraduate learners on each programme by involving them in, rather than subjecting them to, assessment
and feedback processes. This was particularly challenging, especially in the context of supporting diverse groups of learners, many of whom brought fairly entrenched views, based on their former educational experiences, of assessment as simply a perfunctory matter of acquiring marks and of assessment as being 'the teacher's job.' To address this, the lecturers in both cases redesigned their modules using the principles of AfL as part of a large-scale initiative to implement and research our evidence-informed model of AfL. Crucially, the model of AfL we developed involved more than simply incorporating a few techniques: instead, it meant redesigning the whole curriculum, in an attempt to infuse assessment in an integrated and productive way into the overall experience of learning, rather than it being seen as an adjunct or 'necessary evil'. Building on over a decade of research into the impact of assessment on students' approaches to learning, we developed our model based on six interrelated conditions which were known to characterise productive learning environments. These proved to be adaptable and valuable in a variety of subjects and contexts (see McDowell et al, 2011). In effect, the six principles acted as questions curriculum designers could ask themselves to create learning-oriented assessment cultures (Carless, 2015) when planning the curriculum. In summary, the model of AfL which we developed called for a learning environment that:
• emphasises authenticity and complexity in the content and methods of assessment rather than reproduction of knowledge and reductive measurement;
• uses high-stakes summative assessment rigorously but sparingly rather than as the main driver for learning;
• offers students extensive opportunities to engage in the kinds of tasks that develop and demonstrate their learning, thus building their confidence and capabilities before they are summatively assessed;
• is rich in feedback derived from formal mechanisms (e.g. tutor comments on assignments, clickers in class, student self-review logs, computer-based quizzes);
• is rich in informal feedback (e.g. peer review of work in progress, collaborative project work, which provides students with a continuous flow of feedback on 'how they are doing');
• develops students' abilities to direct their own learning, evaluate their own progress and attainments and support the learning of others.
In both cases reported here, self- and peer review activities were introduced mid-module to help students to:
• understand the nature of high-quality work;
• have the evaluative skill needed to compare their own work to the expected standard;
• develop tactics that help them modify their own work (Sadler, 1989: 119).
Developments were based on the assumption that, as Boud and Molloy (2013) argue, if assessment is to be formative for learners they must do more than simply receive feedback. They must actively consider and use the information in some way and so will, at some level, be involved in evaluating and managing their own learning. So, how might practitioners go about achieving that in their programmes? In the two case studies in question, key formative activities were incorporated. These were specifically designed to:
• involve students in making discoveries about their own work and learning strategies by engaging in the process of reviewing;
• inculcate the belief that the learner was in a position to exert significant control of their own learning to improve their own performance.
In addition, emphasis was placed on collaborative activities to draw upon the social learning potential of the student group and to limit both demands and dependence upon the lecturer to ‘deliver’ feedback, fostering autonomous learning within a community. This was supported by a strong move away from ‘traditional’ endpoint summative assessment formats (such as the essay and the time-constrained exam) to more authentic project-based assignments that allowed a degree of personalisation and choice within given parameters. Students worked on these assignments in a staged way during each module and the self and peer review activities were used to support students’ progress with developmental formative tasks.
Case studies

Case study 1

This took place with first-year undergraduates studying childhood on an interdisciplinary social science degree. Lecturers knew from experience that many students experienced difficulty making the transition to seeing the contested and provisional nature of knowledge in their subject, but unless they accomplished the necessary epistemological shift, they would struggle with the whole degree. To address this, they sought to involve students in regular formative activities and dialogic encounters in class time to help them become clear about academic expectations and to engage them with multiple perspectives. They set a series of highly structured interactive and dialogic activities, including individual and group short writing tasks, and oral presentations, which students displayed and reviewed on a regular basis. Starting with personal written responses and building up to fully theorised pieces as the module unfolded, these culminated in the self- and peer evaluation of a short conceptual paper written by the
students, in which they were asked to explain their understanding of a threshold concept underpinning the course. During class time, students evaluated their own and each other's work in relation to three sample answers, which exemplified a range of 'good' through to 'ineffective' work. This opened up extensive peer and tutor dialogue around notions of quality, epistemology and the concept in question. The aim was to enable students to better appreciate how their own work related to the standards expected, so they could, if necessary, adjust their approaches in the light of self-generated feedback. Ultimately, in the summative assignment, students were required to reflect on what they had learned from the formative activities, as well as demonstrate the ability to use key concepts to analyse diverse views of childhood as illustrated in data they'd gathered from local settings.

Case study 2

Here, students were in their second year of study on an engineering degree. The lecturer had been keen to offer his students the opportunity to undertake an authentic project, so he asked them to take 'a deliberately under-specified brief, as would be typical from a client, and then develop and propose appropriate solutions'. As they moved through the stages of the project, he required them to present their work, which was peer and tutor reviewed. Students and tutors used the same review sheets and their evaluations were discussed among the whole group. The lecturer involved students in various ways:

At the start of the module, I engaged the students in developing the assessment criteria to be used in the peer and tutor review. I asked them to think about what excellent, good and satisfactory work might look like. This really got them thinking – and discussing – what was actually important in achieving the learning outcomes of the module. I wanted to move them away from thinking about the end product as simply a nicely presented written report, but rather to focus on the engineering process with its inherent decision making, research, critical review and evaluation. I wanted them to explain, justify and defend their approaches to solving a problem, recognising strengths as well as compromises and limitations in the solutions that they developed. This approach to problem solving is inherent to engineering, where there is no single solution to a problem but the ability to synthesise information, reconcile different requirements and explain your reasoning is critical. Reflecting on the tutor and peer feedback received during the module itself, I asked the students to include a section within the final report identifying what they had learned from the presentation stage.
Research methods

Our research sought to identify students' perceptions of the impact of being involved in the formative reviewing activities. Their views were analysed in order to capture and illuminate shifts in student consciousness as a result of their involvement in such evaluative practices. We report these under key headings, which indicate the range of different views students held about the precise nature of the new levels of control and responsibility they felt they had learned to exercise. They represent emergent concepts of autonomy. These are presented below, together with illustrative quotations. Both studies gathered data chiefly from students, using semi-structured interview schedules to support reflective phenomenographic interviewing strategies (Marton & Booth, 1997). Interviews were conducted by independent researchers at key points throughout the semester. These focused on asking students to discuss what they had learned from the review activities, and whether they felt this had any bearing on how they now went about learning independently, outside the formal classroom environment. These data were complemented by observational data, documentary analysis and interviews with academic staff, as well as a questionnaire to explore the whole class response.
Findings: student experiences

Anxiety surrounding autonomy

Students in the first example, who were in their first semester, were initially interviewed before undertaking the reviewing activities. Most admitted to harbouring deep anxieties about the levels of autonomy they would be expected to display at university. Many of their earliest comments related to a conception of autonomy that did not appear in the model described at the outset of the chapter. In this dimension, independent learning was viewed as being thrown in the deep end, or being left to sink or swim, especially with regard to assessment matters.

[In college] for coursework, it was very 'teacher-help', if you know what I mean. We would write the essays, hand them in, they would mark it and they would, not make you change it all, but … it was basically the teacher had written it, if you know what I mean. Like, you would say your view and the teacher would turn it around to try and make it more suitable. Here [at university] you're just like 'Oh! I'm on my own!'. (1)

In one sense, this student's concerns about feeling painfully unfamiliar with expectations in a new context highlight how important it is to be mindful of the affective domain in our assessment and feedback practices (Carless and
Boud, 2018), especially given growing concerns about students' emotional wellbeing. It also underlines the value of explicitly but sensitively seeking to scaffold newcomers' learning; to foster resilience and develop the skills that gradually reduce dependence on tutors.

Development of procedural autonomy

In later interviews, after the reviewing activities, all claimed to be highly aware of consciously developing their approaches outside the classroom. They all said they had begun to learn how to manage their own time, usually prompted by seeing how others approach academic tasks. Learning from others, not so much tutors but people on your level, was key here.

Transformation in time spent

Many said the value of the activities was that they had learned by becoming more aware of the amount of time you needed to spend on academic tasks. They now said they did more, or at least knew they ought to. They saw the benefits of becoming involved in the reviewing activities as a matter of learning the rules about tutors' expectations.

I realised they were expecting loads of reading and references! I need to work much, much, much harder! (1)

Here, independent learning was viewed in quantitative terms, seen simply as the need to do more work. This was discussed in terms of time spent and also equated with procedural tasks: locating sources, downloading articles and so on. Student self-evaluative activity in this dimension related solely to the question – how much time have I spent?

I found out that I need to put much more reading in – I was trying to do it based on lecture notes, but that's not enough. (1)

This view was only evident in case study 1.

Transformation in presenting one's work

Many students claimed that the benefits of the formative evaluation activities lay in bringing a new awareness about how to approach the assignment in terms of its physical production. They talked of, say, gaining insight into the process of producing work, but this time on the level of technical presentation, with a focus on the generic task in hand, such as 'doing a presentation' or 'writing an essay.' In this dimension, students start to ask themselves questions
about their own work, as a consequence of comparing it with others' ways of working.

Mine looked most like the second one. And now I know I have to bring in a lot more reading, if I want to get a better mark – so I know what I have to do. (1)

Mechanisms for self-monitoring here revolve around the question 'How should I set it out?' This frequently led students to make comparisons between their own work and others', which in turn led to heightened curiosity about criteria.

Before, I only know what I would put in an essay, or what I think should go in. But when you read other peoples' you think, 'Oh well, I'll get to see what they would put in.' I suppose you wonder what is best then, so … I suppose it makes you think about what are the assessment criteria, really? (1)

Development of critical autonomy

Beginning to understand criteria

In this dimension students appeared to believe they could take steps to improve their own learning by becoming clearer about tutors' expectations, and thus get a 'feeling' for how they might apply criteria to their own work, in time to improve it.

I think it's helped quite a lot to do it just before we have to start doing the assignments. It's kind of made me see what I should be aiming for and what the lecturers are looking for in their marking. (1)

You learn to see what the module tutor is doing – how they mark. You are in that position where you are making decisions, making judgements: who is better at what, so you get a feeling for what the tutor is looking for when they are judging you. (2)

The following more experienced learner also talks of becoming 'more aware,' but this time that awareness refers to the attempt to judge his own work in relation to that of others, with a view to identifying areas for his own development.

Having to comment on somebody else's presentation makes you think a lot more, made you more aware. You started to compare your presentation
to other people, so by looking at other people you were seeing what you are good at and what you need to improve on, by comparing to the other person. You have developed from each other's experience. (2)

For many, this process of comparison was clearly helpful and underpinned a new sense of confidence and efficacy in terms of generic approaches to tasks. It led them to construct ideas about how they might improve their own work, even in the absence of traditional teacher feedback.

It's quite a new way of writing, to me and it's certainly new to think about it from the marker's point of view. It was really helpful, seeing how you could write about these things, and what looks good, or less good. So it helped me listen out for my own writing. (1)

The comment about 'listening out for my own writing' implies a level of self-evaluation that focuses on the question 'What can I do to improve the way I present my own learning?'

Transformation in ways of thinking

In this dimension students claimed to be conscious that they were beginning to see the nature of learning differently as a result of the reviewing activities. Although still very task focused (Hawe et al, 2019), here, conceptual awareness relating to the task is redefined as calling for a search for meaning and understanding. Instead of simply looking for how to set out their learning, the students here began to discuss their awareness that lecturers expected you to look for meaning – thus moving beyond a purely technical and quantifiable view of what constitutes quality work. Here, students talked of beginning to question their own thoughts and views and 'think about things' as a result of being involved in the evaluation activities.

I think I realised they want us to draw on a lot of different viewpoints and not just go, 'Right, this is what I think.' You have to draw from a lot of different places and talk about, 'Oh it might be this and it might be that.' … because there's not always one definition of things…they want you to think about that. (1)

One engineering student noted that reviewing peers' work and hearing others' reviews helped him to see the task in a more informed and nuanced way:

Until I got to that point I was like in the middle of the sea and I didn't know what direction to swim. [Then] a guy was asking what he has to do
[to improve his project] … and another student explained… before the student started to explain I didn't really catch the real meaning of that question. That set me off in a useful direction.

Needing to become analytical and critical

The following student talks of realising she should try to see other constructions of knowledge, and is conscious of moving from a fixed to a relative way of knowing, because she saw that 'good' answers presented different perspectives. As opposed to the earlier dimensions, which focused on procedural study habits, this transformation is based on monitoring how far she has personally made sense of subject matter.

The hardest thing is actually choosing what to put in an assignment, because I think you've got to think about it from a personal point of view. Here you have to read not just on the subject, but round it, which I find hard to do. You see, I think it's testing totally different skills. At college I didn't think of doing that, as it was a yes/no, it either is or it isn't answer. It can't be in between. You can't have a critical eye when you're looking at stuff like that because there is an answer to it. At college we just got given the chapters and stuff to read and once you knew that you knew everything to write. Here you need to be not methodical, but sort of analytical. To understand how you feel about it and how different everybody else feels. It's a different kind of learning, you know, to see everybody's point of view. (1)

This level of critical autonomy was rare in the first-year case study, but more pronounced with the engineers, who all talked of trying to monitor their own understanding of the subject by reviewing their peers' work.

It was very positive, as I really started to understand about communication systems. I was also able to relate to other communication systems; the technical detail, how it is used as opposed to other systems. (2)

All the engineers felt that they were better able to develop and assess the level of their own understanding of subject knowledge in a more sophisticated and integrated way by rehearsing their ideas within a peer community of practice during the peer and self-evaluation activities.

Because it comes from your own group, it's pitched at the right level for the class. Some lecturers can be just too difficult to follow – this way you can relate to it. You are still learning, but on a level you can understand. Students are likely to use other forms of expression that the lecturers couldn't. Students can relate it to other modules we've done, that perhaps the module tutor
doesn't know about. So we can tie all these modules together through our presentations, and the other students know what we are on about. (2)

Interestingly, given the recent debates about the deleterious effects of modularisation fragmenting the experience of assessment and feedback (Jessop & Tomas, 2017), this student assumes the authority required to make coherent sense of the programme best lies with the students, rather than the staff. Further, a notable aspect of the engineering interviews was the extent to which the students also foregrounded the ways in which the AfL approaches encouraged them to build their own learning communities outside of the classroom, which was also promoting a more contextualised understanding of the subject.

Capacity to engage in an expert community of practice

Among engineering students autonomy in a subject-specific rather than a generic academic sense (Candy, 1991) began to appear as a result of the assessment activities.

You can pass an exam without knowing anything about the subject at all. Whereas this really is embedded in you – you really learn the subject quite well in depth. So you feel you can come away with the confidence and talk to the other lecturers in a conversation on a reasonable technical level. If you went and spoke to the same people after an exam, you wouldn't have an idea what they were talking about. (2)

This student was conscious of starting to 'think like an engineer,' monitoring his own research findings and interpretations against others'. This student's experience of developing autonomy seems more connected with the specific subject than with generic study skills and relates to the general sense of the overall realism (Villarroel et al, 2018) that the AfL environment appeared to afford to the engineers. This realism was embodied by the authenticity of the problem-solving assignment.

In this module … I can decide what's interesting for me. We're not supposed to know everything about everything. So you have to see for yourself. Make decisions. You've got some basic techniques and you have to think about whether you use them or not, and what kind. I think that's what you have to do in your job. The boss doesn't come and say 'Come on, I will now explain to you how to do this.' You just have to find out how to do it. And so I think it's more helpful for reality. (2)
Discussion

Knight and Yorke (2003) make a strong case for actively involving students in self and peer-assessment activities that, instead of being designed to pin down achievement in terms of grades or marks, take on a different tone when explicitly used to give students feedback on their performance, for formative purposes which support learning. Going further, more recent work suggests that the act of making judgements and generating feedback, rather than receiving it, is the most powerful learning element for students involved in peer review (Nicol, 2014; Tai et al, 2017). This view of self- and peer-assessment activities as useful elements of AfL can be seen as a positive response to the pressures on assessment which currently threaten to squeeze out formative assessment as add-on tutor feedback. Both case studies overcame this pressure by using self- and peer evaluation practices as a method of teaching and learning. More importantly, in so doing, the locus of control can be seen to shift, albeit gradually, from teacher to learner in the feedback process. The principle of responsibility sharing (Nash & Winstone, 2017) is increasingly being foregrounded as a fundamental aspect of effective assessment and feedback processes in higher education. After all, while the quality and timing of teacher feedback are important, without interpretation and action on the learner's part, transmitted feedback is little more than inert data (Boud & Molloy, 2013). Our study suggests the review activities supported a move towards student agency in the feedback process, from initial reliance on tutors to issue feedback towards a developing awareness that feedback can derive from the learner, by better understanding criteria, standards, goals (Nicol, 2014) and by developing a sense of 'social relatedness' (Boekaerts & Minneart, 2003). What is important, however, is that the context supports their competency development. There are a series of well-known developmental models of university students (Baxter Magolda, 1992; Perry, 1999), which illustrate changes in concepts of learning and knowledge, and in the personal capabilities to cope with and make sense of academic learning over time at university. With respect to autonomy in learning, perhaps all students need to work out what is required in a new context. This seems particularly important in the case of transitioning students whose prior experiences of learning and assessment have prompted a surface approach to learning in which the learner feels heavily dependent on the teacher to issue instruction and directive feedback missives. Our research into the process of innovation in case study 1, though, also highlighted some interesting insights into the difficulties many of the first-year students experienced while learning to notice what their university tutors took for granted in the review process. The tutors clearly felt that, as O'Donovan (2017: 630) suggests, it would be useful to design evaluative exercises which helped students discuss, share and develop 'understandings of the epistemic assumptions implicit in disciplines, [and] assessment tasks'. These tutors, however, underestimated the extent of the confusion many students would
initially experience when analysing the examples. Participant observations revealed that, during the review of the sample responses, many students failed to appreciate the tutors' view of high-quality work.

I can't see why the lecturers think that one's so great. If you ask me, it has too many people in it!

We infer that absolutist views of knowledge, as well as a need to get to grips with the writing conventions of academic inter-textuality, underpin this student's sense of dissonance with the tutors' view of quality. This research suggests that the first case study mainly prompted students to use the review activities to systematically and gradually build up the skills and concepts of procedural autonomy, by carefully scaffolding their experience and encouraging them to reconceptualise the nature of assessment, feedback and learning. However, most students needed longer and more structured interactions to notice the disciplinary aspects of high-quality work their teachers prized so highly. Sophisticated views of autonomy take time and experience to gradually develop. As Ecclestone (2002) suggests, students need to develop a sense of procedural autonomy before they are ready to approach subject knowledge in a critically autonomous manner. The developmental model of autonomy that emerged from the interviews also has interesting resonance with Lea and Street's (1998) developmental model of academic writing. 'Critical autonomy' corresponds to Lea and Street's notion of 'academic socialisation' in which students are inducted into the 'university culture', largely by focusing attention on learning appropriate approaches to learning tasks. Lea and Street highlight how often this is viewed as fairly unproblematic, a process of acquiring a set of rules, codes and conventions, which are often implicit. Many developments in self- and peer-assessment, especially when using exemplars, have focused on a form of 'academic socialisation': making these tacit codes or 'rules' of the assessment game explicit and transparent to students (Hawe et al, 2019). On one level, this could be read as undermining autonomy, because it implies coercing students to accept, rather than interrogate, tutors' assessment criteria and judgements. It has little in common with the 'higher' experiences of learning and epistemological development (Perry, 1999) which characterise an ideal learner who is capable of working in an 'emancipatory' manner, with 'the capacity to bring into consciousness the very ways in which … knowledge is constructed and therefore to go beyond it' (Brew, 1995, p. 53). This level of learner control and capacity to challenge also informs the higher levels of the 'academic literacies' approach (Lea & Street, 1998) that emphasises academic writing as a site of ideological assumptions, discourse and power. That being said, there are problems of educational practice in higher education being driven conceptually by developmental models such as Perry's. According to O'Donovan's (2017) study in the UK, relatively few students present the 'higher' levels of ways of knowing, and maybe this is unsurprising
given the increasing diversity of undergraduate populations of many universities. In terms of the development of autonomy, then, perhaps we should not frequently expect to see subject-matter autonomy (Candy, 1991). Instead, it might be more helpful to recognise that a prerequisite for such epistemological autonomy might lie in conscious and systematic attempts to develop students' critical autonomy, and this needs addressing throughout a programme, helping students to grow in confidence and experience (Price et al, 2012).
Conclusion

These innovations worked because they gradually introduced students to situations in which they became involved in self-evaluation in a purposeful and scaffolded manner via reviewing others' work. This helped students begin to develop the necessary skills, insights and concepts that underpin different levels of autonomy in learning. The innovations were possible because the lecturers were committed to the belief that developing student autonomy takes time and practice, and is worth devoting considerable amounts of collaborative time and effort to. In both cases, lecturers were open to the idea of completely redesigning the whole of their teaching programme to incorporate self and peer evaluation, not simply as a 'one-off' session, but as an integrated and embedded aspect underpinning the whole course design and curriculum delivery. In practical terms this meant reducing the tutors' control of content delivery and trusting the students to respond to the innovative pedagogical methods and actively engage with the learning material. It also involved tutors clearly taking care to explain the educational rationale for their approaches, sharing this regularly with their students to manage their expectations. We know this has been successful because the research studies indicated marked shifts in students' approaches to exercising autonomy, and the questionnaire data illuminated how they responded to the distinctiveness of AfL approaches (McDowell et al, 2011). In addition, the students reported seeing this as a 'much better way of learning than just sitting in a lecture, with everything going way over your head. Let's face it: what do you really learn from that?'.
References

Baxter Magolda, M. B. (1992) Knowing and Reasoning in College: Gender-Related Patterns in Students' Intellectual Development. San Francisco, CA: Jossey-Bass.
Boekaerts, M. and Minneart, A. (2003) Assessment of students' feelings of autonomy, competence, and social relatedness: a new approach to measuring the quality of the learning process through self and peer assessment. In: Segers, M., Dochy, F., Cascallar, E. (eds) Optimising New Modes of Assessment: In Search of Qualities and Standards. Dordrecht: Kluwer, pp. 225–239.
Boud, D. (1995) Enhancing Learning Through Self Assessment. London: Routledge Falmer.
Boud, D. and Falchikov, N. (2007) Rethinking Assessment in Higher Education: Learning for the Longer Term. Abingdon: Routledge.
Boud, D. and Molloy, E. (2013) Rethinking models of feedback for learning: the challenge of design. Assessment and Evaluation in Higher Education 38 (6): 698–712.
Brew, A. (1995) What is the scope of self assessment? In: Boud, D. (ed.) Enhancing Learning Through Self Assessment. London: Kogan Page, pp. 48–62.
Candy, P. C. (1991) Self-Direction for Lifelong Learning: A Comprehensive Guide to Theory and Practice. San Francisco, CA: Jossey-Bass.
Carless, D. (2005) Prospects for the implementation of assessment for learning. Assessment in Education: Principles Policy and Practice 12 (1): 39–54.
Carless, D. (2015) Exploring learning-oriented assessment processes. Higher Education 69 (6): 963–976.
Carless, D. (2017) Scaling up assessment for learning: progress and prospects. In: Carless, D., Bridges, S., Chan, C., Glofcheski, R. (eds) Scaling up Assessment for Learning in Higher Education (The Enabling Power of Assessment series, vol. 5). Singapore: Springer, pp. 3–17.
Carless, D. and Boud, D. (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment and Evaluation in Higher Education 43 (8): 1315–1325.
Ecclestone, K. (2002) Learning Autonomy in Post-16 Education: The Politics and Practice of Formative Assessment. London: Routledge Falmer.
Hawe, E., Lightfoot, U., Dixon, H. (2019) First-year students working with exemplars: promoting self-efficacy, self-monitoring and self-regulation. Journal of Further and Higher Education 43 (1): 1–15.
Jessop, T. and Tomas, C. (2017) The implications of programme assessment patterns for student learning. Assessment and Evaluation in Higher Education 42 (6): 990–999.
Knight, P. T. and Yorke, M. (2003) Assessment, Learning and Employability. Buckingham: Society for Research into Higher Education/Open University Press.
Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Lea, M. and Street, B. (1998) Student writing in higher education: an academic literacies approach. Studies in Higher Education 23 (2): 157–172.
McDowell, L., Wakelin, D., Montgomery, C., King, S. (2011) Does assessment for learning make a difference? The development of a questionnaire to explore the student response. Assessment and Evaluation in Higher Education 36 (7): 749–765.
Mclean, H. (2018) This is the way to teach: insights from academics and students about assessment that supports learning. Assessment and Evaluation in Higher Education 43 (8): 1–13.
Nash, R. A. and Winstone, N. E. (2017) Responsibility-sharing in the giving and receiving of assessment feedback. Frontiers in Psychology 8: 1519.
Nicol, D. (2014) Guiding principles for peer review: unlocking learners' evaluative skills. In: Kreber, C., Anderson, C., McArthur, J., Entwistle, N. (eds) Advances and Innovations in University Assessment and Feedback. Edinburgh: Edinburgh University Press, pp. 197–224.
Nicol, D., Thomson, A., Breslin, C. (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment and Evaluation in Higher Education 39 (1): 102–122.
Nicol, D. and Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
O'Donovan, B. (2017) How student beliefs about knowledge and knowing influence their satisfaction with assessment and feedback. Higher Education: The International Journal of Higher Education Research 74 (4): 617–633.
Perry, W. G. Jr (1999) Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York, NY: Holt, Rinehart and Winston.
Price, M., Rust, C., O'Donovan, B., Handley, K., Bryant, R. (2012) Assessment Literacy: The Foundation for Improving Student Learning. Oxford: Oxford Centre for Staff and Learning Development, Oxford Brookes University.
Sadler, D. (1989) Formative assessment and the design of instructional systems. Instructional Science 18: 119–144.
Sadler, D. R. (2010) Beyond feedback: developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35 (5): 535–550.
Sambell, K., McDowell, L., Brown, S. (1997) 'But is it fair?' An exploratory study of student perceptions of the consequential validity of assessment. Studies in Educational Evaluation 23 (4): 349–371.
Sambell, K., McDowell, L., Montgomery, C. (2013) Assessment for Learning in Higher Education. Abingdon: Routledge.
Tai, J., Ajjawi, R., Boud, D., Dawson, P., Panadero, E. (2017) Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education 76 (3): 467–481.
To, J. and Carless, D. (2016) Making productive use of exemplars: peer discussion and teacher guidance for positive transfer of strategies. Journal of Further and Higher Education 40 (6): 746–764.
Tomlinson, M. (2012) Graduate employability: a review of conceptual and empirical themes. Higher Education Policy 25 (4): 407–431.
Villarroel, V., Bloxham, S., Bruna, D., Bruna, C., Herrera-Seda, C. (2018) Authentic assessment: creating a blueprint for course design. Assessment and Evaluation in Higher Education 43 (5): 840–854.
16 Assessing simulated professional practice in the performing arts
Kathy Dacre
Introduction

Our 2015 Higher Education Academy (HEA) funded research project at Rose Bruford College of Theatre and Performance, 'Embedding Employability in the Curriculum', explored the vocational nature of each degree programme and advised colleagues, 'we need to take a holistic view of what we are offering, one based on the views of industry and alumni not merely the views of academics who may not have practised for some time'. This advice led to changes within our own curriculum and a wider appreciation of the innovative learning environment offered by simulated professional practice, the creation of a learning environment for students that replicates the professional environment they will encounter on graduating. But it also brought to the fore some of the difficulties of assessing student achievement in simulated environments. This chapter describes the assessment innovations while also sharing the challenges that we encountered and key factors which continue to require creative solutions. An example of our use of simulated professional practice as an assessment activity may be found on the Rose Bruford College of Theatre and Performance website. Here, you will be invited to a final-year performance season of plays performed and produced by students, yet closely following practices found in professional theatres throughout the UK, from box office and front of house procedures to creative team and performer preparation and production. As a learning mode, simulated professional practice is found most frequently in conservatoires, university performing arts departments and in education and health related degree programmes. It is work-based learning in an environment which mirrors the professional workplace and it offers an innovative approach to structured learning across higher education: an approach which could be used to introduce students to new experiences in a wide variety of work in such areas as publishing, marketing, research, management, government, commerce and local government. All these areas require skills that are tested in simulated practice, for instance, collaboration, competence, application of acquired knowledge, problem solving, compromise and discovery. Creating a simulation of professional practice for students to engage in allows
for innovations in content and assessment which, as practice research becomes an acknowledged part of arts based learning in UK higher education, may inspire a wider use of such environments. The simulated professional season has always been seen by staff, students and the performing arts industry as a vital component in learning and teaching in the performing arts but its organisation in higher education demands more complex planning than is usual in professional theatres. At my own college, Rose Bruford, for instance, for one undergraduate season students from four degree programmes (acting, actor musicianship, American theatre arts and European theatre arts) need to demonstrate their mastery of practice in performance roles, while students from seven design and technical arts degrees (costume production, scenic arts, theatre design, lighting design, creative lighting control, performance sound, stage and event management) need to be assigned roles in which they too can demonstrate their expertise. We are talking of creating in this production season an effective learning environment for around 200 students. Our former students will testify that the production season was one of the most effective learning events in their degree programmes and so the amount of organisation involved is certainly compensated for by the high-quality learning which takes place. Our simulated professional season and the assessment of student achievement it involves can be seen as an innovative higher education learning and teaching practice in that it places employability firmly at the foreground of final-year summative assessment. A range of descriptions for the previously mentioned degree programmes at Rose Bruford point students towards 'fully mounted productions in professional spaces with a public audience' promising 'you will develop a range of creative and transferrable skills that will encourage you to become an independent thinker and motivated artist, an articulate reflective and enterprising practitioner equipped to succeed in an increasingly competitive profession'. Students, we hope, will make 'an effective and innovative contribution to the performing arts industries' and 'develop work through the performance to a standard that meets industry expectations'.1 So this fictional workplace season introduces students to a work environment they will encounter in their professional life and the Rose Bruford House Rules present in detail every procedure, from professional conduct, to back stage etiquette, behaviour protocols in technical rehearsals, risk assessment, the use of firearms and pyrotechnics and the performing rights that practitioners need to negotiate for the simulated season just as they will later in the workplace.2 The House Rules are a good model for simulated practice situations in other disciplines. They give students the confidence that they are learning real industry practice. Since our employability research we have been involving industry professionals more extensively in discussing and developing a range of intended learning outcomes for the simulated professional season so that assessment prepares students for professional life. Acting students should be able to 'play and embody objectives, actions and living relationships',
American theatre arts students will be able to 'embody the director's interpretive approach to the text with spontaneity, precision and depth' while stage and event management students will 'develop work through to performance to a standard that meets industry expectations' and assessment measures their engagement with these outcomes.3 The production season is also innovative in that it is an interdisciplinary event. Actors work alongside lighting designers and creative lighting control students in creating mixed media performances which mingle video and live performance. Scenic arts students might explore with actors the director Kantor's use of objects as a vital element in performing his work while performance sound students work with performers on music scores and soundscapes. With this fusion of creative skills and artistic experiment, the understanding of and respect for a wide range of disciplines is assessed and becomes an important part of student learning. Employability in professional spheres beyond the performance world often relies on the ability to collaborate but in the performance industries this ability is an essential part of the creative company process. So, the simulated professional season is innovative in higher education environments in that it foregrounds the importance of collaboration in artistic creation and each programme mentions collaboration as an intended learning outcome. The performance sound students will 'work within a collaborative team, contributing to and helping to develop the production through to performance'. For the stage and event management students, the season will 'enable you to establish and maintain effective relationships in collaborative team work in complex situations'. Students from the performance programmes will 'work cooperatively and productively with others in a team frequently under demanding conditions' and 'understand the importance of collaborative work in the company process'. Successfully fulfilling these learning outcomes will enable a student to work collaboratively with practitioners from other theatre disciplines.4 So the final year conservatoire simulated professional production season has much to offer as an effective learning environment. Yet managing the season as a reliable assessment practice presents a number of challenges. Some years ago we worked on a HEA funded research project, 'Assessing Assessment', at Rose Bruford, in which we examined the rationale, type and purpose of all our assessment assignments on undergraduate degrees. We acknowledged that good assessment of students' knowledge, skills and abilities is absolutely crucial to the process of learning and that an assessment scheme should allow students to demonstrate their achievement of the intended learning outcomes of the programme and provide enough evidence of students' achievement to enable robust decisions to be made about their intended academic qualification. We recognised, however, the myriad challenges that face tutors when assessing practical work and these become particularly challenging in simulated professional performance practice. The first challenge is ensuring a parity of opportunity for students to demonstrate their achievement. How in one production season of 10 plays are
design, management and technical arts students given parity of opportunity? At Levels 4 and 5 of their programmes, students are assigned particular roles in college productions but when they come to the final year season they then negotiate the choice of role with their tutor. Prior to such discussions there will be a careful consideration by staff of student numbers and possible production roles, which must marry up. Students are asked to choose the type of production they would like to work on, which may range from classic text to new writing or improvised work, but they have no guarantee of their first choice. It is sometimes argued by colleagues that we are not a democracy and students have to realise that tutors might know better and be able to identify what they need to learn. 'We feel you ought to be working on this' is a phrase often used by staff. With different students working on different tasks for this production module the intended learning outcomes are broadly drawn and a specific brief is negotiated with each student. Colleagues are united in recognising that the season has immense strength as a pedagogical model. Students have to think in the moment, deal with pressure and respond to the circumstances in which they find themselves. There are other organisational concerns in ensuring the learning experience of every student involved in the season. Scenic arts and costume production students need to respond to the ideas and vision of professional visiting designers and this vision needs to be carefully managed so the production moves beyond simulated professional practice to become a structured learning experience for each student. Lighting design, creative lighting control and performance sound students will work in tandem with a visiting director and in a similar way the opportunities offered to students need to be monitored by tutors. In some ways the final season is not a 100% simulation of professional practice. Although students might manage their own production budgets, for instance, the head of production keeps a benevolent eye on spending, providing a safety net for students who might otherwise run over budget. Parity of opportunity to fulfil the intended learning outcomes is no less a problem when casting students from the School of Performance and the choice of plays for the season does not strictly follow professional practice. Whereas the Manchester Royal Exchange or the Bridge Theatres can plan a season based on a particular theme or genre and then cast accordingly, tutors in a conservatoire have their company of actors in place before they have chosen the plays and need to keep parity of student experience and exposure to different types of material at the forefront of any choice. So the season is planned with student numbers in mind and, as students have a tendency to immediately do a line count on roles assigned to them, parity in casting often takes precedence. For our acting and actor musician undergraduate students classic, comic, musical and contemporary plays form the mix, with new writing often commissioned for the particular group, but all, unlike on the professional stage, tend to be large cast. The European theatre arts programme does not have such a parity problem as their final year production is a devised performance piece created with and for the cast by a professional
ensemble practitioner, and the American theatre arts programme have come up with an innovative solution to parity. Their programme director, Steven Dykes, is a practising playwright who has over the years written and adapted scripts specifically designed to showcase the performance skills and experience of the cohort of students in the simulated professional season.

A second challenge is designing intended learning outcomes for the season that help students realise exactly what they have to do and then enable tutors to assess their achievement. As recognised by Race (1996) more than two decades ago, 'It can be difficult to agree on assessment criteria for practical skills. There may be several ways of performing a task well, requiring a range of alternative assessment criteria' and with the current focus on intended learning outcomes in higher education, the task has not become any easier. Learning outcomes tend to focus on skills and replication rather than learning and innovation. Constructive alignment (Biggs & Tang, 2011) in assessment schemes has become constrictive alignment.

Learning outcomes are justified as proof of a new concern with the quality of teaching and student learning. In reality, they are part of the drift in higher education towards skill-programming and away from the cultivation of cognitive freedom and love of thinking. (Noonan, 2016)

Kleiman, in his recent work on assessment, argues that intended learning outcomes militate against intellectual experimentation and discovery, that creativity cannot be predetermined, that outcomes promise certainty when learning might be unpredictable and that they foster a climate that inhibits the capacity to deal with uncertainty (Kleiman, 2017). These four concerns would appear to be paramount when facing the challenge of designing outcomes for the simulated professional season and prompt us to use outcomes as simply guides for, and not determinants of, judgements.

Facing this challenge brings us to the great question of what kind of knowledge we are actually assessing in a simulation of professional practice. Looking closely at the knowledge we are assessing in our performance season has made us aware of the relative ease with which we can assess competence in practical skills but the difficulty posed in assessing practical knowledge. This can present a major problem. We realised that we were failing to fully recognise and assess the different kinds of knowledge – explicit, implicit, tacit and intuitive – that students might acquire and demonstrate in simulated professional practice. Explicit knowledge is knowledge that has been articulated, proven or demonstrated as best practice and this is assessed in the professional skills the students show in production. Implicit knowledge is knowledge that is implied or inferred by how the student behaves or performs in their role. It is knowledge that they have not articulated but have acquired with immersion in, and focus on, practical experience and can be recognised in confident practice.
The third category of knowledge, tacit knowledge, cannot be articulated yet is crucial in creative and innovation-led environments. It is gained through training and personal exposure and experience and describes the phenomenon that 'we know more than we can tell' (Polanyi, 1974). Such knowledge is referred to as embodied knowledge. For this reason theatre training has often relied heavily on contact with professional practitioners as only they have the tacit knowledge which is embedded in their professional practice. The academic-isation of training for the performing arts to some extent poses a difficulty. While academics are highly competent assessors of explicit knowledge their tacit knowledge base is bound to be more limited as far as professional theatre is concerned, and robust assessment of this deep professional knowledge would benefit from an assessment team which includes professional practitioners from beyond the institution. Kleiman (2008) quotes Baer and McKool (2009) in advocating a 'consensual assessment technique'.

The most valid assessment of the creativity of an idea or creation in any field is the collective judgement of recognized experts in that field. (Baer & McKool, 2009, p. 2)

Yet however the team of assessors is made up for this project there is still the challenge of deciding whether the assessment marks arrived at are criterion referenced, which relies upon students being able to demonstrate that they have fulfilled the learning outcomes, or norm referenced … affected by the quality of the work of the students within that year cohort. And how can assessors be objective when marking practical work in a creative discipline where knowledge gained can be intuitive?

While the concept of tacit knowledge began to be articulated in the 1960s, intuitive knowledge has a longer history of consideration in philosophy, psychology and pedagogy (Westcott, 1968). The Oxford English Dictionary defines intuition as 'the ability to understand something instinctively, without the need for conscious reasoning'. Students often come to understand and master a given technique or skill at an intuitive level and working at such a level often enables students to master a given task more rapidly than those searching for conscious assimilation and comprehension. In his essay 'The Anatomy of Intuition', Claxton identifies six varieties of intuition and his classification identifies and extends the learning that might be assessed in simulated professional practice (Claxton, 2004).
• expertise – the unreflective execution of intricate skilled performance;
• implicit learning – the acquisition of such expertise by non-conscious or non-conceptual means;
• judgement – making accurate decisions and categorisations without, at the time, being able to justify them;
• sensitivity – a heightened attentiveness, both conscious and non-conscious, to details of a situation;
• creativity – the use of incubation and reverie to enhance problem solving;
• rumination – the process of 'chewing the cud' of experience in order to extract its meaning and its implications.
An awareness of the types of knowledge a student might be called upon to use within a simulated performance season adds to the challenge of assessment through learning outcomes, but it brings a greater understanding of how assessing student achievement in such a complex and innovative learning environment needs to take us into new and innovative assessment methodologies.

I have left discussion of one essential element in the simulated professional production season until the close of this chapter and yet as an assessment tool it matters a great deal to students – the audience, the public outside the student world. Having a real live audience adds an immediacy to the work, a responsibility to the public, a sense that the show must run like clockwork and affect the hearts and minds of those who sit in the auditorium. The student practitioners have a story to tell and they are responsible for ensuring its impact on this small society. A simulated professional environment in all areas of higher education can capture this. It gives a sense of mission which adds huge importance to the learning process and raises the expectations of students and staff. The students become closely engaged with the assessment process rather than alienated from it (Case, 2008). Their personal assessment comes with the applause as the lights dim and their all-important ability for self-assessment is enhanced by this engagement with others. Marie Hay (2008), in exploring the vital capacity for self-assessment by students in the performing arts, makes reference to the thoughts of Kögler on the power of dialogue between performer and audience.

The other (or audience) becomes the point of departure for critical insight into the self. The insight thereby provided, to be sure, is never pure, context-free or absolute. Yet if adequately developed the perspective from the other's point of view proves all the more valuable, because it sheds light on ourselves that we could not have generated by ourselves. (Kögler, 1999, p. 252)

Perhaps the audience, the public observer, is the really innovative assessment tool that we ignore.
Conclusion

This chapter has pointed to several of the challenges that confront colleagues in the performing arts when they organise a learning environment that mirrors the professional workplace and then assess the achievements of individual students within this event. It has also pointed out the significant opportunities for developing innovative assessment strategies that enhance employability, interdisciplinary understanding, collaborative skills and an awareness
of creativity and differing types of knowledge. At Rose Bruford College, we have found that these opportunities far outweigh the challenges presented in creating a simulated professional learning environment in which we assess student achievement. The use of simulated professional practice in higher education can not only engage and motivate students; it can offer the most significant assessment experience of their degree course and bring an important and valuable understanding of a world beyond the conservatoire or university.
Notes

1 Rose Bruford College programme specifications and module descriptors.
2 The Rose Bruford House Rules are produced by Anthony Sammut, Head of Productions.
3 Extract from Rose Bruford College programme specifications and module descriptors.
4 Ibid.
References

Baer, J. and McKool, S. (2009) Assessing creativity using the consensual assessment technique. In: Handbook of Research on Assessment Technologies, Methods, and Applications in Higher Education. Hershey, PA: Information Science Reference, pp. 65–77.
Biggs, J. and Tang, C. (2011) Teaching for Quality Learning at University: What the Student Does (4th edn). Maidenhead: Society for Research into Higher Education and Open University Press.
Case, J. (2008) Alienation and engagement: development of an alternative theoretical framework for understanding student learning. Higher Education 55: 321–332.
Claxton, G. (2004) The anatomy of intuition. In: Atkinson, T. and Claxton, G. The Intuitive Practitioner. Maidenhead: OUP, pp. 32–52.
Hay, M. (2008) Assessment for reflective learning in the creative arts. International Journal of Learning 15 (7): 131–138.
Kleiman, P. (2017) Radical re-alignments: innovations in curriculum design and assessment. Postgraduate conference presentation, Rose Bruford College.
Kleiman, P. (2009) Design for Learning: A Guide to the Principles of Good Curriculum Design. Palatine Working Paper. Lancaster: Palatine/Higher Education Academy.
Kleiman, P. (2008) Towards transformation: conceptions of creativity in higher education. Innovations in Education and Teaching International 45: 209–217.
Kögler, H. (1999) The Power of Dialogue: Critical Hermeneutics After Gadamer and Foucault. Hendrickson, P. (trans.) London: MIT Press.
Noonan, J. (2016) Ten theses in support of teaching and against learning outcomes. Jeff Noonan: Interventions and Evocations (blog post). www.jeffnoonan.org/?p=2793 (accessed 6 January 2019).
Polanyi, M. (1974) Personal Knowledge: Towards a Post-Critical Philosophy. Chicago, IL: University of Chicago Press.
Race, P. (1996) The art of assessing 2. New Academic 5 (1).
Westcott, M. R. (1968) Towards a Contemporary Psychology of Intuition. New York, NY: Holt, Rinehart and Winston.
17 Archimedean levers and assessment
Disseminating digital innovation in higher education
Paul Maharg
Introduction

For almost 30 years I have been involved in the introduction of digital modes of legal education in law schools across a range of jurisdictions internationally. These have included the design and development of simulation environments, early webcast environments, multimedia legal skills development, the analysis of student and staff use, and the construction of pedagogies around the digital environments. An underlying aim throughout was to effect a transformation of conventional pedagogies and improve the conditions for learning by creating new, disciplinary-specific forms of digital environments. At many points the interdisciplinary teams in which I worked would be creating what Shulman (2005) has described as 'shadow pedagogies' (i.e. those that challenge the orthodoxy of the hegemonic or 'signature' pedagogies). Often, the interdisciplinary team I was involved with would design and implement a pilot, and that would be amended, expanded or altered subsequently, depending on many factors.

This chapter focuses on the next stage, dissemination and further implementation, and attempts some answers to an apparently simple question: why is it that so many promising digital educational innovations, conceived, theorised, carefully implemented and with equally promising results, fail to be taken up more generally in higher education? Given the volume of digital projects, resources, courseware and much else that has been developed globally since the development of the web, and the cost of developing them, it is an important question from a financial perspective. But it is a crucial question from educational and cultural perspectives, too. There are many aspects to this problem, not least in the literature of innovation dissemination and change management.1 There, the literature is considerable, and contains key works that further our understanding of the diffusion of innovation.2 In another sense, the problem can be cast as one consequence of the tension between signature and shadow pedagogies, for the hegemonic forms of disciplinary learning will rarely offer an environment conducive to the growth and flourishing of shadow, subaltern or emerging pedagogies. In this chapter, we consider another aspect of the problem, namely
the dichotomy between forms of research such as randomised controlled studies that act as laboratories for experiments conducted upon teaching, learning and assessment innovations, and the working out of those results in the actual world of lecture theatres, seminar rooms, libraries, texts, dialogues, screens, online assessment and examination halls. This chapter is therefore a critical reflection on the categories of educational research and innovation dissemination, and on how our understanding of these two categories can skew our vision of the potential of both.
Latour and Pasteur: levers for change

Bruno Latour's work on Louis Pasteur is a good place to begin. Latour's work generally has focused on critique of the social, on theories and assemblages of the social, and meta-comment upon practices and concepts such as social science research, modernity, laboratory practice and much else (see Latour, 2007, 1999). His work provides a useful analysis of the relationship of the concepts of innovation and dissemination.3

Latour analysed in detail the experimental methods Louis Pasteur used in his famous research carried out in the 1870s on the anthrax bacillus and the subsequent development of a vaccine that was successful in eradicating anthrax, then prevalent in French agriculture (Latour, 1993, 1983). Latour uses Pasteur's work on anthrax as a case study of 'the construction of the laboratory and its position in the societal milieu' (Latour, 1983, p. 143), and he explores how Pasteur's strategy succeeded by destabilising the concept of laboratory work and the concept of vaccine field trials, where he worked with farmers, veterinarians and others.

Latour sets the scene. First Pasteur sets up a makeshift laboratory on a farm in Beauce where, among other work, Pasteur and his assistants 'pinpoint all the variations in the onset and timing of the outbreaks of anthrax and [see] how far these could fit with their one living cause, the anthrax bacillus' (Latour, 1983, p. 145). Pasteur then removes the isolated bacillus to his Paris laboratory, and cultivates it under highly controlled conditions, experimenting with it on animals and focusing on the behaviour of the bacillus in different contexts. He demonstrates mastery of it. As a result, he and his assistants begin the process of accounting for the puzzling and apparently random variations in anthrax outbreaks on farms – whether weak or strong, their infectiousness, their geographical spread, and the like. They develop a vaccine.

In the third stage, Pasteur moves back to the farm and conducts a large-scale field trial with the vaccine. But Latour is careful to describe how Pasteur proceeds, which is not to accept the given conditions of the farm, but to extend the laboratory to the farm.

Pasteur cannot just hand out a few flasks of vaccine to farmers and say: 'OK, it works in my lab, get by with that.' If he were to do that, it would not work. The vaccination can work on the condition that the
farm chosen in the village of Pouilly le Fort for the field trial be in some crucial respects transformed according to the prescriptions of Pasteur's laboratory. (Latour, 1983, p. 151)

Pasteur engages in a form of negotiation with the agricultural societies sponsoring him, and the veterinarians and farmers with whom he is working. According to Latour, Pasteur is in effect advising that:

on the condition that you respect a limited set of laboratory practices – disinfection, cleanliness, conservation, inoculation gesture, timing and recording – you can extend to every French farm a laboratory product made at Pasteur's lab. (Latour, 1983, p. 150)

Under these conditions the vaccine can transform aspects of French agricultural practices; and Pasteur goes on to prove convincingly in the field trial that vaccinated livestock survive and unvaccinated animals die.4

Latour also points to the way that Pasteur draws the attention and confidence of stakeholders. Agricultural societies are relatively uninterested in Pasteur's early work on the farm. When they hear of the work in the lab, they become more engaged; and are especially involved in the field trials of the vaccines. Latour imagines the strategy of Pasteur with regard to farmers and veterinarians: 'If you wish to solve your anthrax problem, come to my laboratory' (Latour, 1983, p. 147). Pasteur can take this strategy because anthrax is already perceived to be a serious problem for veterinarians, farmers and the interests of French agriculture. As a result the passage of innovative scientific practices to French farms becomes easier, and the incentive for veterinarians and farmers to learn something of basic microbiological science is much more powerful.

Of course, Pasteur's laboratory is only one type among many, each with different methodologies, often dependent upon the area of science and the research undertaken within the lab. In the later nineteenth century one might think of the laboratory methods of William Thomson, Lord Kelvin, almost contemporaneous with Pasteur, which present another interesting example of the boundary of field and lab, particularly Thomson's involvement with transatlantic telegraph cable-laying.5 In the twentieth century, the labs of particle physicists, accelerator builders and microphysicists have entirely different aims, purposes, cultures and subcultures, for example; and their work has social impact in timeframes that are different to those of Pasteur's lab.6

Nevertheless, even the critics of Latour's views of scientific method and the culture of laboratories prove the point about the social conception of scientific experimentation and the dissemination of innovation. Success or failure of the uptake of scientific innovation is not a scientific matter alone: it is also a social achievement.7 One of Latour's critics, Pam Scott, discusses this point in
her article analysing a case of failure to develop a research programme around a specific virus at the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO). The virus in question was foot and mouth disease (FMD), which is not present in Australia due in part to Australia's stringent quarantine and immigration controls. When it was proposed that live FMD virus be introduced to CSIRO in order to develop, inter alia, a vaccination programme, the stakeholder and general public response was so great that the project had to be abandoned. Innovative virology research failed not because the scientific basis of the project was doubtful, but on social, economic and cultural grounds.8

In their perceptive article on the context of innovative digital education, Squire and Shaffer apply Latour's insights into Pasteur's method to digital education.9 On the topic of 'scaling up' research findings in a process of dissemination they agree with Latour: 'we should conceive of the role of research on educational technology as also being about finding ways to reorganise schools in more fundamental ways … that serve as models of large-scale transformation of schooling'. They go on to advocate for the 'transformation of schools by adapting them more broadly to the conditions under which meaningful learning takes place' (Squire and Shaffer, 2006, pp. 14–15).

Latour's insights into Pasteur's methods, applied by Squire and Shaffer to education, offer us much to think about as regards the boundaries between laboratory studies and pilots and their dissemination in higher education curricula. In educational research, there are, of course, many studies of assessment innovations, with multiple methodologies in use to evaluate assessment practices and effectiveness. Nevertheless, the studies often focus upon single pilots of innovations, or upon use of the innovation with a single cohort of students where the innovation is developed, implemented and the results analysed. The results are of course useful in themselves, but anyone wishing to put them into practice in programmes and institutions faces something of the dilemma that Latour outlines in his analysis of Pasteur's method.

A good example of this is the SIMPLE project (SIMulated Professional Learning Environment), funded by Jisc, the HEA through its law subject centre at Warwick University Law School, and the British and Irish Law Education Technology Association, and carried out in 2006–08. In that period, the project team based at the Glasgow Graduate School of Law at Strathclyde University developed open-source simulation software that included a case management platform, technical and educational documentation, the tools for academics to build sims within the SIMPLE environment, and much else, and ran simulations in a variety of social science and related disciplines and professions.10 In law schools, this enabled students to carry out simulations of legal matters across a broad range of transactions. At Strathclyde, we developed a Deweyan framework for learning and assessment called 'transactional learning', with seven key characteristics.11 In the project, our partners were not just law schools but included social work, architecture and management science, and we liaised closely with a sister project in
the Netherlands, Sieberdam. Our project was thus interdisciplinary, as the literature recommended (Ducoffe et al, 2006), and we published widely in legal educational and other disciplinary forums. The software is still in use at Strathclyde Law School. And yet in the decade since the project finished, there have been fewer than 10 initiatives to take up the software and use it in digital sims.

As we saw in the introduction to this chapter, part of the problem is of course that simulation is not what Shulman calls a 'signature pedagogy' for law, or indeed many disciplines in higher education: it is a 'shadow' pedagogy, seeking to emerge from those shadows. But part of the problem is also that simulation software cannot just be taken up and implemented. Much as Pasteur's flasks of vaccine could not simply be used in farms without a transformation of practices on the farms, so too simulation requires a transformation of many aspects of curricular practices, not least in assessment, both formative and summative. One centre that did use it extensively and successfully was the Legal Workshop in the Australian National University College of Law.12 It succeeded only because it redeveloped the software to suit the local situation and, more importantly, made transformational changes to the curriculum of the postgraduate diploma in legal practice, particularly assessment practices.

It is useful to contrast SIMPLE digital simulation with another simulation heuristic, the use of 'simulated clients' in legal education. The method was adapted from medical education, where simulated patients/persons have been in use for almost half a century and have been extensively researched. A correlative study was carried out at the Glasgow Graduate School of Law in 2006 that confirmed many medical educational studies, and which proved that properly trained simulated clients were as effective as tutors in assessing the client-facing skills of students and novice lawyers (Barton et al, 2006). Assessment criteria were formed using, inter alia, Delphi processes, and we developed a Creative Commons library of materials that others could draw upon, globally. Around 12 projects were formed with partners internationally. The regulator for legal education in England and Wales, the Solicitors Regulation Authority, used the method as a keystone component in their development of the Qualifying Lawyers Transfer Scheme, an assessment of the knowledge and professional skills of foreign-qualified lawyers wishing to practise in England and Wales (see, for example, Fry et al, 2012).

What is interesting is that during the development of SIMPLE and simulated clients, we were aware of Latour's analysis of Pasteur and of Squire and Shaffer's application of it to digital education. We found it easier to adapt the simulated clients project than SIMPLE for a number of reasons, the following four being most salient.

1. The digital environment of SIMPLE, we discovered, needed to be much more adapted to local circumstances than we had previously considered was necessary. The learning and assessment designs, encoded in the
software, while sufficient as descriptions of how the software could be enacted, were not sufficiently open to allow for this to happen.
2. The constructivist tools and curricular approaches required student freedom in the creation and use of resources online, which ran counter to the managerialist, top-down approaches implicit in many learning management systems (Barton et al, 2007).
3. If simulation learning were to be embedded in a curriculum, the typical 'snapshot' assessment cultures of conventional curricula required a major shift to assessment practices that were based more on concepts of replay, remix, feedforward and feedback that enabled more sophisticated iterations of learning (Barton et al, 2016).
4. As we remarked in the final SIMPLE project report, conservation of prior technologies is a powerful centripetal force in academic curriculum design. If SIMPLE were to succeed, staff needed to engage in composing and orchestrating the curriculum much more than they had before. That required training and reflection, particularly on how new forms of assessment could be constructed to evaluate new forms of learning.

All four points are striking confirmations of Latour's construction of the relationship between initial research-based innovation and its dissemination in Pasteur's work. By contrast, the use of simulated clients in the assessment of interviewing skills was easier to implement because the transformational changes were limited to a small group of simulated clients and faculty involved in their training and use; and the deployment of simulated clients generally in the curriculum could follow a path recognisable from prior faculty use of actors as clients. In addition, the power and reliability of the heuristic was relatively easy to prove, and thus regulators such as the Solicitors Regulation Authority could be persuaded of the rigour of the assessment. In addition to being used successfully in the Qualifying Lawyers Transfer Scheme, the method will also comprise a core element of Part 2 of the Authority's Solicitors Qualifying Examination, a common-entry examination for the profession of solicitor in England and Wales.13
Conclusions

The dissemination of project and pilot results into wider gains in higher education is complex, politically, culturally, economically and in many other ways. We can frame the difficulties that innovators are faced with by describing the innovations as, in Shulman's terms, shadow pedagogies attempting to emerge from the hegemonic power of conventional pedagogies. But, in a sense, we need a more teleological approach, and in this chapter we have seen how Latour's construction of Pasteur's practices gives us an Archimedean lever for dissemination of research. Latour's analysis reveals the scientist's success to be not only based upon his scientific brilliance, but also upon his understanding
of the relation between research and dissemination of that research. If innovators are to make a difference beyond their own practices and theories, they probably need to fashion such levers for themselves and their work. In this respect, the social context of change is critical, and innovators probably need to give thought to that as well as to their innovations.
Notes

1 See, for example, Hazen et al (2012), who note that there is no comprehensive framework for the dissemination of educational technology innovations. They developed a stage-model for dissemination.
2 See, for instance, Rogers (2003), Diffusion of Innovations, fifth edition, which is particularly good on diffusion networks. Rogers notes in his preface that there were in 2003 approximately 4000 research items on the subject of innovation diffusion alone.
3 Latour is also a rigorous self-critic – see his insightful critique of his actor-network theory at Latour (1999, p. 15): 'I will start by saying that there are four things that do not work with actor–network theory; the word actor, the word network, the word theory and the hyphen! Four nails in the coffin'.
4 Although, as Latour points out, the field 'trial' is more of a performance because Pasteur knows from his work in the laboratory that under the right conditions the vaccine will succeed.
5 Another example is Rutherford's laboratories in the context of his work on radiation and neutrons. For further analysis of the boundaries between field and lab, see Kohler (2002a, 2002b) and Kirk and Ramsden (2018), who analyse Howard Liddell's work at the Cornell 'Behaviour Farm' in the 1920s to reveal the ways in which field and laboratory were reconceptualised there to produce a productive 'hybrid' place for the study of animal behaviour.
6 Some of their cultures and sub-cultures have been examined in detail by Peter Galison (1997) in his study of the materialities of microphysics research. See also Suchman (2007) and PARC (2011).
7 A point that Latour makes clear in his account of a technological failure, namely the Aramis project, a highly innovative personal transportation system developed in Paris. See Latour (1996).
8 See Scott (1991), whose title alludes to the frequent reference in Latour to the Archimedean trope of science as a lever to the world. The full narrative of the failure is described in the second half of Scott's article. She quotes CSIRO's own review of the affair, which included an analysis of 'broader social developments' that contributed to the failure, the list of which is worth reproducing here:

[A] philosophical change from regarding science as value-free, disinterested pursuit of knowledge to considering it as a social activity as subject to misconceptions, biases and prejudices as other activities; the growing public awareness of the impacts, both good and bad, of science and technology on society, an awareness being heightened by the dramatic implications of the revolutionary advances in computers, robotics and biomedicine; an increasingly pluralistic society and the trend from so-called representative democracy to participatory democracy, with the public, or active sections of it, more reluctant to leave decisions wholly to elected representatives and their expert
advisers; reflecting and reinforcing these trends, the change in the media's reporting of science and technology from a fairly descriptive coverage of specific research to a more critical, conflict-laden coverage. (Scott, 1991: 29, citing No author, 1983/84)

9 I discuss some aspects of Shaffer and Squire's analysis in Maharg (2007), where I acknowledge my debt to their use of Latour in the context of learning within the digital domain.
10 See http://simplecommunity.org for technical documentation and for the project's final report.
11 Transactional learning was defined as: active learning through performance in authentic transactions involving reflection in and on learning, deep collaborative learning, and holistic or process learning, with relevant professional assessment that included ethical standards (Maharg, 2007).
12 See Ferguson and Lee (2012). For an account of the positive effects these approaches to learning and assessment have on student wellbeing, see Tang and Ferguson (2014).
13 For information on the Solicitors Qualifying Examination, see the Solicitors Regulation Authority website: www.sra.org.uk/sra/policy/sqe.page (accessed 6 January 2019).
References

Barton, K., Cunningham, C., Jones, G. T., Maharg, P. (2006) Valuing what clients think: standardized clients and the assessment of communicative competence. Clinical Law Review 13 (1): 1–65.
Barton, K., Garvey, J. B., Maharg, P. (2016) 'You are here': learning law, practice and professionalism in the Academy. In: Bankowski, Z., Maharg, P., del Mar, M. (eds) The Arts and the Legal Academy: Beyond Text in Legal Education. Abingdon: Routledge, pp. 189–212.
Barton, K., McKellar, P., Maharg, P. (2007) Authentic fictions: simulation, professionalism and legal learning. Clinical Law Review 14 (1): 143–193.
Bruner, J. (1962) The new educational technology. American Behavioral Scientist 6 (3): 5–7.
Ducoffe, S. J., Tromley, C. L., Tucker, M. (2006) Interdisciplinary, team-taught, undergraduate business courses: the impact of integration. Journal of Management Education 30 (2): 276–294.
Ferguson, A. and Lee, E. (2012) Desperately seeking relevant assessment – a case study on the potential for using online simulated group-based learning to create sustainable assessment practices. Legal Education Review 22: 121–146.
Fry, E., Crewe, J., Wakeford, R. (2012) The Qualified Lawyers Transfer Scheme: innovative assessment methodology and practice in a high stakes professional exam. Law Teacher 46 (2): 132–145.
Galison, P. (1997) Image and Logic: A Material Culture of Microphysics. Chicago, IL: University of Chicago Press.
Hazen, B. T., Wu, Y., Sankar, C. S., Jones-Farmer, L. A. (2012) A proposed framework for educational innovation dissemination. Journal of Educational Technology Systems 40 (3): 301–321.
Kirk, R. G. W. and Ramsden, E. (2018) Working across species down on the farm: Howard S. Liddell and the development of comparative psychopathology, c. 1923–1962. History and Philosophy of the Life Sciences 40 (1): 24.
Kohler, R. E. (2002a) Labscapes: naturalizing the lab. History of Science 40 (130): 473–501.
Kohler, R. E. (2002b) Landscapes and Labscapes: Exploring the Lab-Field Border in Biology. Chicago, IL: University of Chicago Press.
Latour, B. (2007) Reassembling the Social: An Introduction to Actor-Network-Theory (new edn). Oxford: Oxford University Press.
Latour, B. (1999) On recalling ANT. Sociological Review 47 (1): 15–25.
Latour, B. (1996) Aramis, or the Love of Technology. Cambridge, MA: Harvard University Press.
Latour, B. (1993) The Pasteurization of France. Cambridge, MA: Harvard University Press.
Latour, B. (1983) Give me a laboratory and I will raise the world. In: Knorr-Cetina, K. D. and Mulkay, M. (eds) Science Observed: Perspectives on the Social Study of Science. London: Sage, pp. 141–170.
Maharg, P. (2007) Transforming Legal Education: Learning and Teaching the Law in the Early Twenty-First Century. London: Routledge.
No author. (1983/84) CSIRO: Whiter than white. Search 14 (11–12): 298.
PARC. (2011) Busting the myth of the giant green button: a brief history of corporate ethnography. UX Magazine (612): https://uxmag.com/articles/busting-the-myth-of-the-giant-green-button (accessed 6 January 2019).
Rogers, E. M. (2003) Diffusion of Innovations (5th edn). New York, NY: Free Press.
Scott, P. (1991) Levers and counterweights: a laboratory that failed to raise the world. Social Studies of Science 21 (1): 7–35.
Shulman, L. (2005) Signature pedagogies in the professions. Daedalus (Summer): 52–59.
Squire, K. D. and Shaffer, D. W. (2006) The pasteurization of education. In: Tettegah, S. Y. and Hunter, R. C. (eds) Technology and Education: Issues in Administration, Policy, and Applications in K12 Schools, Volume 8. New York, NY: Emerald, pp. 43–55.
Suchman, L. A. (2007) Human-Machine Reconfigurations: Plans and Situated Actions (2nd edn). Cambridge: Cambridge University Press.
Tang, S. and Ferguson, A. (2014) The possibility of wellbeing: preliminary results from surveys of Australian professional legal education students. Queensland University of Technology Law Review 14 (1). DOI: https://doi.org/10.5204/qutlr.v14i1.521.
Part IV
Assessing professional development
18 Developing the next generation of academics
The graduate teaching assistant experience
Karen Clegg and Giles Martin

Introduction

Graduate teaching assistants (GTAs), typically PhD students, contribute to the undergraduate student experience in various ways; they provide insight into cutting-edge research, assist with the development of academic skills, assess and provide feedback on student work and often provide pastoral care and advice. Some may also design and deliver modules and contribute to exam boards. They are, we argue, a vital resource to universities and enrich the student experience. Drawing on a decade of experience in directing and delivering Higher Education Academy (D1) accredited programmes for PhD students who wish to pursue an academic career, this chapter explores:
• the use of portfolio-based assessment in developing the next generation of academics;
• the value of communities of practice;
• the impact of GTA training on employability;
• the various design choices and challenges that have influenced our programmes.
We hope that, by taking a critical view of our own experience, we can help practitioners wishing to support GTAs to identify approaches suitable for their own institutional and cultural contexts.
Context The UK’s Teaching Excellence Framework provides a framework for closer scrutiny on the quality of teaching. In this context it is vital that institutions train, support and properly remunerate GTAs who support the undergraduate experience. Having appropriately trained GTAs is not just good for institutional reputation, it demonstrates a commitment to providing research students who teach with the expertise to support fee-paying undergraduates. Institutions have a duty of care as employers to equip GTAs with the skills, confidence and experience to deliver high-quality learning experiences and to support their career and professional development.
This chapter draws on two case studies, at the University of York and Bath Spa University. Karen Clegg has been programme director at the University of York for the past decade and for four of these years, Giles Martin was the external examiner. Giles has led on various accredited programmes and is now at Bath Spa University. We draw here on our combined experience as developers and designers of Higher Education Academy (now AdvanceHE) accredited programmes that are mapped to the UK Professional Standards Framework (UKPSF).
Case study 1: York Learning and Teaching Award

The York Learning and Teaching Award (YLTA) programme was launched in 2007 and is designed exclusively for GTAs. Like many other programmes in the UK, YLTA is externally accredited by AdvanceHE and leads to Associate Fellowship (AFHEA) status. The programme has more than 350 alumni, many of whom are teaching at institutions across the world. The nine-month, 20-credit level-7 programme comprises four compulsory modules:
I Supporting Student Learning in Higher Education.
II Pedagogy and Academic Practice.
III Learning and Teaching Symposia.
IV Professional Practice and Academic Career Development.
Students can also access a number of additional, optional modules on topics such as 'Hidden Aspects of an Academic Career', 'Structuring and Designing Teaching Sessions' and 'Supporting Students with a Disability'. The programme is aimed exclusively at postgraduate (doctoral) research students from all disciplines. At York, 37% of postgraduate researchers are from outside the European Union, of whom a small proportion already hold academic posts in their own countries, and this diversity is represented on the programme with a fairly even distribution across arts and humanities, social science and science faculties, with a roughly 50% non-European Union intake and a 60:40 female to male ratio. From just 17 participants in 2007, numbers grew and were later capped at 50 to enable the cohort approach, central to the programme design, to be maintained.

Supervision/mentors

Students are matched with an experienced (FHEA) member of academic/teaching staff as their 'YLTA supervisor'. The supervisors play a key part in the success of the programme and the development of the individual GTA's academic practice. They meet with the students at least once a term, review teaching plans, observe teaching and give feedback, and hold group discussions around pedagogical issues and areas of practice of concern to the GTAs. The
supervisors are drawn from a range of disciplines and are allocated four or five students each. From the outset, the programme was designed to enable students and staff to learn from each other and to enable a cross-fertilisation of ideas between disciplines, an aspect of the programme greatly valued, as exemplified by one long-standing supervisor:

One of the key benefits, in my opinion, is in the mixed groups of students from across our three faculties, and their integration within academic supervision groups, with supervisors who are rarely from their own discipline. The students learn much from each other in this context, and share experiences and good practice as to how their home departments work to deliver teaching appropriate to their discipline. As a supervisor, I also learn considerably from these interactions, and value the different perspectives, methods and experiences we all bring to our teaching, be it in lecture room, seminar, lab, class or field work.

As supervisors do not assess their own supervisees, both are equal partners in a community of practice pursuing an enhanced student experience.

Assessment

Like many accredited PGCert-style higher education programmes, YLTA is assessed through a portfolio of evidence and reflection, demonstrating successful achievement of the programme outcomes. Successful completion of the programme rests on the student's ability to demonstrate the relevant criteria for AFHEA through their account of practice. This means that students must evidence engagement at a practical level with the areas of activity, core knowledge and professional values. They must articulate how they have personally approached, delivered and evaluated their teaching through a minimum of two session plans, two reflective logs and two peer observation reports.

Having researched and written about assessment in higher education for many years (Hinett, 2002a, 2002b; Clegg, 2004; Bryan & Clegg, 2006), I was keen, in designing the programme, to use authentic assessment, promulgated by the late (great) Peter T. Knight. I wanted to engage students in assessment that would develop, test and prove their competence as facilitators of student learning in a robust way which would confirm to them and prospective employers that they were a safe, skilled and innovative pair of hands. Consequently, the assessments on the programme were designed to be inclusive, relate to real-life teaching scenarios and speak in mode and form to different learning preferences. Additionally, students are required to give a 10-minute, conference-style presentation to an audience of peers and academics on an aspect of academic practice that they find challenging and interesting. Students generate their own questions, which are ratified by the external examiner to ensure parity of complexity. All presentations must include reference
to pedagogy and personal reflection. Topics over the years have included explorations of the research–teaching nexus, engaging students for whom English is not their first language, using rhetoric and debate to inspire students, and developing a gender inclusive classroom. Other students and supervisors provide qualitative feedback on presentations, extracts of which can be used as evidence in their portfolio.

Students peer observe each other's teaching, with strong encouragement to observe someone from a different discipline to enable a wider exposure to different teaching approaches. As observers, they must provide formative feedback using a template structured around the UKPSF. In a matter of months, students become accomplished at spotting practice that works and noticing peaks and troughs in student engagement. They can provide feedback to their peers about what they observed and make suggestions about what might be done differently in the future. This raises their self-assessment ability, a skill crucial to their development as reflective practitioners, with a noticeable improvement throughout the programme in their judgement of the quality of their own work in relation to the expected standard, the UKPSF (Boud et al, 2018). The ability to notice when teaching practice impacts, negatively or positively, on others is crucial to their ability to become truly impactful practitioners. On this topic, Mortiboys says:

a teacher should develop and employ emotional intelligence to complement the subject expertise and pedagogical skills that we already offer to learners. When you are with a group of learners, you have the chance to connect with them beyond the transmission of discussion of ideas and facts, and thereby to transform the experience both for you and for them. If you do not use emotional intelligence in your teaching, the value of both your knowledge of the subject and your learning and teaching methods can be seriously diminished. (Mortiboys, 2012)

This ability to tap into intuition and 'reflect-in-action' (Schön, 2016) is what distinguishes adequate practitioners from the truly great. For this reason, one module on reflective practice engages students in peer review of each other's reflective teaching logs, encouraging them to go beyond pure description of what happened in the classroom to an analysis and evaluation of what they did that effected change and positive engagement.

The problem we found with pass/fail assessment is the lack of acknowledgement of exceptional talent. Recognising this, we introduced a 'prize for outstanding portfolio'. To be considered prize-worthy, 'the portfolio must demonstrate that the student has engaged with and fulfilled more of the UKPSF than required at D1 level and have been independently nominated by both markers as suitable for a prize'. On the whole this seems to work, as we are able to identify which areas of the UKPSF the portfolio has exceeded. In writing this chapter, it strikes me that perhaps the distinguishing
feature of an outstanding portfolio and, indeed, academic, is the ability to tap into and evidence emotional intelligence in supporting student learning.

Initially a physical file, the portfolio is now created online, which enables the students to personalise their portfolios and add in documents, photographs, audio and video files. The portability of this means that portfolios can be made available to multiple markers, to prospective employers, and can be added to over time as a record of continuing professional development.

Career enhancement

The underlying principle of the programme is to enable the next generation of academics to develop their professional practice in a supportive environment. To this end, the programme includes a module on career development and professional practice. Students find a real example of their 'dream job' and bring this and their CV to the session. They peer review each other's CVs in the context of the job specification and identify strengths and gaps in experience. Having peer observed each other and developed positive relationships through supervisory and cohort groups, they are able to provide useful feedback and remind each other about things that they have done to support student learning that they may have missed off their CV and could include in job applications. They peer teach from start to finish, developing coaching skills almost subconsciously and regulating their feedback through experience.

Pros and cons

The programme design is not without problems: it is time and resource intensive. Supervisors receive a respectable honorarium but it does not cover the considerable discretionary effort and personal commitment they give to the programme. The requirement to attend the four face-to-face modules causes problems for research students, who are often away conducting field work or at conferences, and for those who have personal or mental health issues. Leaves of absence and one-to-one catch-up sessions have to be accommodated, creating an additional administrative burden. Yet 11 years have produced some quite brilliant teachers, many of whom have gone on to academic roles in prestigious universities across the globe, and two current supervisors are alumni of the programme!
Case study 2: Bath Spa University, graduate teaching assistants and academic staff combined

At Bath Spa University, GTAs and professional support staff join new academics on the first half of the postgraduate certificate (30 credits, AFHEA accredited). The taught programme runs over nine months and is based around ten 'sessions' in three phases (planning for teaching, supporting student learning, reflecting and evidencing) and individual tutorials linked to
teaching observations. The focus is on continually developing direct engagement with students and their individual roles as teachers as part of continuing professional development cycles, which differs slightly from the emphasis at York: equipping GTAs with the skills and cultural know-how to support students at any university and to be able to articulate this to employers. Like the York programme, assessment consists of an online portfolio of evidence and an accompanying reflective commentary discussing their development as a teacher. This works equally well for professional support staff and early career academics as it does for GTAs and enables individuals to reflect on their role and how they support students.

Design issues: to integrate graduate teaching assistants or not?

Given a subject mix including education, art and design, creative industries and liberal arts, a high proportion of the participants at Bath Spa come with industry experience and professional identities, as opposed to the overwhelming majority arriving via the straight academic/research route at the University of York and similar institutions. As part of a smaller institution, individuals often also have wider remits to their job roles, with technical and professional service staff more integrated into the teaching and taking on greater responsibility and team-teaching (see Chapter 16). The lines are more blurred regarding the responsibilities of different roles, making separate cohorts less appropriate – the participants can and need to learn more about working together. Recognising the various prior levels and types of experience and the often limited enhancement opportunities, the approach taken at Bath Spa is to develop the individual and to help them recognise their role as a teacher, understanding their context and using their experience and expertise. This is part of becoming a teacher and developing a new identity (which is not without challenges!). The individual is responsible for making their learning specific to their role and for evidencing how they have successfully supported students.

Cross-department or single cohort?

The cross-disciplinary aspect of groups is widely mentioned as a benefit at both York and Bath Spa, but is sometimes a hurdle initially. We have observed how cross-disciplinary discussions can help break out of the silo of the discipline: the similarities or common principles across disciplines can help students understand key concepts for teaching and learning, while the differences can also be instructive in understanding context and disciplinary thinking. We have found it important for students to be encouraged to consider both. However, this is not always seen immediately and requires a broader level of reflection on their actual teaching practice. GTAs are often very discipline focused. In a course run over 9–12 months, there is time for such transitions in thinking, but also time to consider both the practical and the theoretical, both generic and discipline-specific support. Becoming a teacher takes time.
Choice: small or large mentoring team?

Like York, the course at Bath Spa uses teaching observations by course tutors, discipline mentors and peers, which can also be used as evidence for the portfolio. This supervisory and tutorial aspect of the course is important, but time intensive. As part of the wider course team, mentors can be invaluable in adding the perspective of their departments and disciplines and in signposting discipline-specific pedagogy. There is another major choice to make. Bath Spa has a large number of staff involved as mentors in an informal fashion. They usually hold Fellowship status, observe teaching and hold tutorials, but do not assess. This larger pool of voluntary mentors means that a greater number of people across the university are engaged. They have fewer responsibilities than colleagues at York, so recruitment is easier, but they are not paid. However, managing this group is more difficult, quality can be more variable and getting them together as a course team poses problems.

By contrast, York has a small team of dedicated, paid, trained supervisors, each with four or five students from mixed disciplines. They are all highly committed and learn from the students. My observation as external examiner is that their input has been very valuable. However, mentoring can be a catalyst for developing practices in the rest of the university and there is a notable advantage to having a greater proportion of the academic staff with some involvement, understanding and stake in the programme and the development of junior colleagues. Small mentoring teams like those at York also assess portfolios (although not the work of their own supervisees), so mentors have assessment expertise and there is significant input and commitment from them to programme design and development discussions. With the larger team of mentors at Bath Spa, this is limited to mentor representatives at committees and focus groups, so the assessment responsibilities and expertise stay within the course team. However, assessment consistency and course development are easier to manage.

In both our experiences, deploying graduates of such programmes (or PGCerts) as mentors/supervisors is productive as they have been through similar courses and transitions to develop reflective writing. At both institutions mentors report varied motivations for getting involved, including an altruistic spirit of collegiality to support GTAs, a desire to develop their own practice through helping another, or gathering evidence as part of the journey to Senior Fellow of the HEA.
Challenges for graduate teaching assistants

GTAs starting their teaching career are, in the terminology of Lave and Wenger (2000), 'legitimate peripheral participants' in the 'community of practice'. GTAs are normally facilitating small group classes and particularly
parts of courses which involve active learning (i.e. seminars, labs, problem classes, rather than lectures). PhD students, often at risk of imposter syndrome, commonly do not see themselves as teachers, with choice and agency, particularly when given instructions for sessions by an academic seeking consistency across groups. In addition to feelings of inexperience, or uncertainty over their role, this can make them feel even more 'peripheral'. We have found this particularly true for laboratory demonstrators, who may struggle to see their role beyond that of a 'lifeguard', helping when things go wrong. Reflecting on those limitations and the methods within their power to change, while analysing and understanding those they cannot, is important for all new teachers, but more acute for GTAs. Observations, mentoring, peer discussions and feedback have all been found to help. The 'other' (expert and peer) can help ask questions and provide ideas within the confines of the GTA's context.

The value of the peer community, including the department, should not be underestimated. Working with these colleagues as part of course learning activities helps connect GTAs with their discipline and with those in roles they aspire towards. The opportunity to have their voice and experience heard through departmental learning and teaching committees and institutional learning and teaching conferences, and to gain recognition through teaching awards, further validates their experience and provides equity with academic staff.
Consequences for assessment:
• overly positive or negative reflections;
• designing assessment and guidance to cater for limitations of role;
• guidance on interpreting assessment's purpose and requirements.
Developing reflective practice and writing

At the beginning of the course, students' familiarity, experience and comfort with reflective writing varies enormously. At the end of the course, it is commonly one of the major differentiators in achievement in assessment. The academically unusual personal writing could involve 'failures' or one's weaknesses, which some avoid, wishing to save face or to focus on showing good teaching, as if applying for a prize/job. Alternatively, some also misunderstand reflection to mean simply talking about what needs to change: 'All my feedback is positive, I've nothing to reflect on!' paraphrases a common comment. The higher achievers tend to make the shift from single-loop to double-loop reflection (Argyris, 1991), questioning and discussing their starting assumptions and methodologies, and taking a critical approach to the evidence. On the other hand, the challenge for some students is to reflect on feedback comments beyond the surface level (see Chapters 10 and 11). Failed portfolios are usually descriptive of both teacher actions and feedback,
characterised by simply stating feedback and jumping straight to actions, a 'they said, we did' approach. A non-reflective approach is usually quickly identified by assessors, yet students' difficulties in reflection indicate a potential 'threshold concept' (Land & Meyer, 2016) for such programmes. It is a greater challenge to some students than others: some are writing in a second language, or come from different cultures – not just international, but also disciplinary. Both institutions have increased formative feedback on reflection, both on written work and on presentations, to help students grasp what constitutes quality reflection. The cross-disciplinary aspect of workshops can provide a useful catalyst, as comparisons with other disciplines and practices help question one's own, exposing differences and similarities.
Consequences for assessment:
• support and guidance on reflection and reflective writing;
• feedback and feedforward on draft writing;
• presentations and cross-disciplinary discussions help.
Evidencing practice

In advancing their future careers, teachers are often required to present evidence, use it effectively and select examples from practice to demonstrate what the reader is looking for. Our portfolios form the start of ongoing teaching evidence. However, we assess less the evidence itself than how it is used – a fundamental and important difference. Portfolio-based assessment allows students significant flexibility and personalisation, within a set structure of outcomes/criteria to be demonstrated. Students can select examples from their practice and relevant evidence as appropriate to their preferred approach. Nevertheless, we found it important to emphasise that students must integrate evidence, and be clear why this evidence is relevant and what specifically within it the reader should view.

At both institutions, certain evidence (e.g. observations) must be collected ahead of specified workshops, requiring organisation and timing. This can be aided by requiring participants to provide a list of allocated teaching/facilitation prior to starting the programme, with the added benefits of ensuring that departments plan their teaching allocation and that GTAs are paid for the formal teaching engagements they deliver. Nonetheless, these structures never fit 100% of the cohort. At York, research trips/fieldwork are a cause of difficulty. At Bath Spa, non-standard teaching patterns (e.g. block and part-time teaching) and a variety of non-classroom teaching environments are the challenge.
Consequences for assessment:
• timely opportunities to build evidence;
• using evidence appropriately.
Conclusions: what have we learnt about supporting and assessing graduate teaching assistants?

Our overriding experience is that GTAs thrive as a community of practice, particularly when respected, valued and given equal opportunity to share their practice at institutional level. The following elements are common to both programmes and have proven to support the development of these communities and to add value to the GTA experience:
• micro-teaching and teaching observations;
• peer feedback;
• self-assessment;
• practical/group activities and tasks;
• mentors/supervisors;
• reflective writing and presentations;
• student-led sessions;
• mixing students from different disciplines.
At York, there is also tangible evidence of the positive impact of integrating a careers and employability-focused element into the programme so that GTAs have the best possible chances of competing for academic positions. Resource, time, institutional size, pedagogical values and a commitment from senior management to supporting GTAs' careers as well as employing their skills are all key factors affecting the design of the programme. If you are developing or redeveloping a programme, you may also wish to consider whether to use the programme as a catalyst for discussions about learning, teaching and assessment; if so, then who you involve, how many and in what roles are all worth considering.

Developing GTAs is, from our perspective, a win–win experience for both the institution and the individuals. Universities can be assured that the GTAs they put before undergraduates are capable and confident facilitators of student learning; the GTAs gain skills and experience in communication and facilitation, and a recognition of the importance of equality, diversity and inclusivity in whatever organisation they work in, academic or not. Developing opportunities and assessment that authentically assess that contribution is a privilege and one which we hope you will enjoy as much as we do.
Footnote

This chapter is dedicated to all the students and supervisors, the YLTA team and the three external examiners (including my co-author, Giles Martin) who were part of the YLTA programme under my direction from 2007 to 2018, and with whom it has been my absolute pleasure to work. Karen Clegg.
References

Argyris, C. (1991) Teaching smart people how to learn. Harvard Business Review 69 (3): 99–109.
Boud, D., Ajjawi, R., Dawson, P., Tai, J. (2018) Developing Evaluative Judgement in Higher Education. Abingdon: Routledge.
Clegg, K. (2004) 'Playing Safe': Learning and Teaching in Undergraduate Law. Nottingham: UK Centre for Legal Education.
Clegg, K. and Bryan, C. (eds) (2006) Innovative Assessment in Higher Education. Abingdon: Routledge.
Hinett, K. (2002b) Improving Learning through Reflection: Part One and Part Two. York: Higher Education Academy.
Hinett, K. and Knight, P. T. (2002) Summative assessment in higher education: practices in disarray. Studies in Higher Education 27 (3): 275–286.
Land, R., Meyer, J., Flanagan, M. T. (eds) (2016) Threshold Concepts in Practice. Educational Futures: Rethinking Theory and Practice. Rotterdam: Sense Publishers.
Lave, J. and Wenger, E. (2000) Legitimate peripheral participation in communities of practice. In: Cross, R. L. Jr and Israelit, S. B. (eds) Strategic Learning in a Knowledge Economy: Individual, Collective and Organizational Learning Process. Boston, MA: Butterworth-Heinemann, pp. 167–182.
19 Practitioner perspectives
Using the UK Professional Standards Framework to design assessment
Cordelia Bryan, Thomas Baker, Adam Crymble, Fumi Giles and Darja Reznikova

Introduction

In 2005, the Higher Education Academy (HEA) ran its sector-wide consultation exercise on the establishment of a professional standards framework to enhance teaching quality at institutional and individual levels across the UK. The professional standards framework quickly became known as the UK Professional Standards Framework (UKPSF) and the current version was revised in 2011. It identifies the diverse range of teaching and support roles and environments within higher education that are reflected and expressed in its Dimensions of Professional Practice. The dimensions are commonly abbreviated as: 'A' for areas of activity undertaken by teachers and support staff; 'K' for core knowledge needed to carry out those activities as appropriate to one's job description; and 'V' for professional values that individuals performing these activities should exemplify. These abbreviations are used in the vignettes below to indicate which dimensions they address.1

Just as this book argues for an integrated approach to assessment for learning, so too the UKPSF encourages and facilitates an integrated approach to academic practice through its emphasis on the dual nature of academic professionalism, encouraging and recognising continuing professional development in both subject discipline and pedagogical scholarship. We begin with an example of assessments for learning on a postgraduate certificate in learning and teaching in higher education which uses all the dimensions of the UKPSF. This is then followed by four vignettes from different subject disciplines illustrating how the UKPSF has been used in their assessment design.
Postgraduate certificate in learning and teaching in higher education using all dimensions of the UKPSF

The programme requires students to self-assess and track their progress against the UKPSF dimensions at the beginning and end of module 1, as illustrated in the assessment brief in Box 19.1. They mark themselves against the areas of activity, knowledge and values on a scale of 0–4 and then write a rationale for their self-assessment scores. The scoring ranges from 0, which indicates no experience and therefore an inability to provide any evidence at this time, to 4, which signifies considerable experience and an ability to produce evidence that demonstrates the learning outcome has been met. Students typically score a few 1s and 2s at the beginning of the module but would need to demonstrate evidence of progressing to at least 3s in all areas by the end.
Box 19.1 Self-assessment grid and reflective statement

The grid should be completed at the beginning and end of module 1. It should be used a third time as a final checklist for your portfolio at the end of module 2. This grid enables you to track your own progress and to provide evidence for meeting the course learning outcomes while also demonstrating your commitment to and active engagement with the UKPSF. The first time you complete the grid it will be used as a diagnostic tool to enable you to take stock of your current learning and teaching situation on joining the programme. You will use the same grid at the end of module 1, when it acts as part of your summative assessment. To indicate your progress, please use a different colour or symbol in the appropriate column.

Name……………………………………………………………………..

Complete the grid below using the key to determine your own professional context for each category. The grid is based on the UKPSF, which underpins the whole programme. The UKPSF may be seen at: www.heacademy.ac.uk/ukpsf. Once you have completed the grid, write a short Reflective Statement (1000 words when enrolling and 1500–2000 words at the end of module 1), explaining why you have positioned yourself as you have and describing how you see your current context and yourself as a supporter of learning.

Key
0 I have not really considered how to do this; nor do I have any direct experience to reflect upon. I can thus produce no evidence of engaging in the activity/achieving the required learning outcome.
1 I have started to think about this; I have only limited experience to draw upon. However, I can produce little or no evidence to demonstrate that I have engaged in the activity/achieved the required learning outcome.
2 I have thought about this; I have had experience of doing it. However, I can produce little or no evidence to demonstrate that I have engaged in the activity/achieved the required learning outcome.
3 I have thought carefully about this; I have had experience of doing it. I can produce evidence that demonstrates that I have engaged in the activity/achieved the required learning outcome.
4 I have thought carefully about this; I have various experiences of doing it. I can produce substantial evidence that demonstrates that I have engaged in the activity/achieved the required learning outcome.
Table 19.1 Self-assessment grid
Areas of activity, knowledge and values embedded in the learning outcomes for both modules 1 and 2 (each item is scored in columns 0–4, using the key above)

Areas of activity
A1 Design and plan learning activities and/or programmes of study
A2 Teach and/or support learning
A3 Assess and give feedback to learners
A4 Develop effective learning environments and approaches to student support and guidance
A5 Engage in continuing professional development in subjects/disciplines and their pedagogy, incorporating research, scholarship and the evaluation of professional practices

Core knowledge
K1 The subject material
K2 Appropriate methods for teaching, learning and assessing in the subject area and at the level of the academic programme
K3 How students learn, both generally and within their subject/disciplinary area(s)
K4 The use and value of appropriate learning technologies
K5 Methods for evaluating the effectiveness of teaching
K6 The implications of quality assurance and quality enhancement for academic and professional practice with a particular focus on teaching

Professional values
V1 Respect individual learners and diverse learning communities
V2 Promote participation in higher education and equality of opportunity for learners
V3 Use evidence-informed approaches and the outcomes from research, scholarship and continuing professional development
V4 Acknowledge the wider context in which higher education operates recognising the implications for professional practice
Table 19.1 (Cont.) Areas of activity, knowledge and values embedded in the learning outcomes for both modules 1 and 2 (columns 0–4 as above)

When you complete Module 1 Reflecting on Theory and Practice, tick or cross the appropriate column in the same way. You should now be scoring 3s and 4s and provide evidence for this in your Reflective Statement 2 (1500–2000 words).
1. Critically analyse and evaluate your own practice in relation to contemporary pedagogical theory
2. Apply theoretical principles to the development of course design, project planning and assessment
3. Reflect on your practice to identify scope for enhancement within the broader context of contemporary pedagogic research
4. Use your knowledge of how students learn to inform theoretical debate and approaches to practice-based problems

When you complete Module 2 Evaluation and Enhancement, please check that ALL areas are now ticked in the appropriate column, including any further evidence provided in your Action Research Report (summative assessment, 5000 words).
1. Make a presentation to peers in a professional context
2. Conduct a small-scale action research project within a particular sphere of professional practice
3. Evaluate the outcomes of your project according to appropriate theoretical principles
4. Relate the outcomes of your project to contemporary pedagogic research and principles to inform your current and potential professional responsibilities
The following quotation is from an acting-for-film student on this postgraduate certificate programme and indicates recognition of the value of this integrated approach.
I have found the UKPSF (HEA, 2011) invaluable in providing a framework of expectations within Higher Education (HE). This framework – as in this self-assessment – has helped me identify areas of professional practice I believe I display competence in, while challenging me to discover those areas which need more attention. In practice, I have begun to assess my teaching, and my thought process as a teacher, against the expectations of this framework, and this has allowed me to develop a more reflective perspective in my work.
(2018 PG Cert student, Rose Bruford College of Theatre and Performance)

As this quotation illustrates, the UKPSF enables students to become critically reflective and determine for themselves their own pedagogical priorities in relation to their practice. Many students have prioritised assessment design as key to enhancing their students' learning. Authors of each of the vignettes have either successfully completed a Postgraduate Certificate in Learning and Teaching in Higher Education aligned to the UKPSF or else have independently achieved Fellowship of the Higher Education Academy via a recognised continuing professional development scheme. They each map their innovative assessment against the UKPSF using the abbreviations A1–5 for areas of activity, K1–6 for core knowledge and V1–4 for the professional values. They have also indicated which of the eleven conditions of assessment is addressed within their vignette.
Vignette 1 Encouraging deep searching in historical databases
Adam Crymble (University of Hertfordshire, PGCert Learning and Teaching in Higher Education)

UKPSF evidence: A1, A2, A4, K2, K4, V3
Eleven conditions of assessment for learning:
1 – Assessed tasks capture sufficient study time and effort.
3 – These tasks engage students in productive learning activity.

Students were given an advanced searching assignment that forced them to read and engage closely with the historical materials, bypassing their tendency to keyword search and find quick quotes for their essays. This helped them to develop better research practices. As Mills Kelly (2013) has noted, since the late 1990s, millions of pages of historical material have been digitised, allowing historians to bring original documents into the undergraduate curriculum at scale for the first time. He called this a shift from a 'pedagogy of scarcity' to one of abundance. But assumptions about students' so-called 'digital nativism' meant that they were rarely taught how to search deeply. Instead, many
students adopted keyword searching strategies to find quotes to be dropped into essays with little effort. Similar to 'question spotting' – the process of avoiding the curriculum and instead only conducting activities that lead to marks – this approach meant that they did not take the time to really understand if what they had quoted was representative. If students were working strategically in search of marks, the solution was to reward deep searching by assessing it.

Rather than show that they could find an interesting quote in one source, first-year students in my history of Britain and Africa class were challenged to work in groups to find 50 historical records that mentioned a person of African descent living in eighteenth-century London, from within a database of 127 million words. The task was fraught with hidden challenges. The most obvious keyword, 'black', was much more likely to return false matches – a stolen 'black mare', for example. These false positives forced students to slow down and read each match. In the process they began to read a significant number of documents and understood the material in a way that they had not previously.

Students then submitted a 750-word report on their 50 records and their strategies for finding them, before presenting the most interesting examples to the class, giving the entire group new insight into what was in the collection. The reports were assessed (20%) as were the presentations (10%). Students peer reviewed the contributions of their group mates, with any non-performers penalised. Finally, armed with this new research, students were asked to write an essay on black experience in eighteenth-century London (20%).

From a scholarly perspective, students should have been doing this deep searching and careful reading anyway. But by breaking down the research process into concrete stages and rewarding good research practice rather than just the finished product, the students learned to work in a new way. With any luck, their improved essays will reinforce those new habits in future.
Vignette 2 Interdisciplinarity: using self-feedback to enhance integrative and conscious practice
Darja Reznikova

Research conducted with students from the following institutions:
• Rose Bruford College, London/UK, Postgraduate Certificate in Learning and Teaching in Higher Education
• Trinity Laban – Conservatoire for Music and Dance, London/UK
• Doreen Bird College of Professional Dance and Musical Theatre, London/UK
• Dance Professional – Vocational College for Dance Performance, Pedagogy and Choreography, Mannheim/Germany
UKPSF evidence: A2, A4; K2, K3; V1, V3.
Eleven conditions of assessment for learning:
3 – These tasks engage students in productive learning activity.
6 – Sufficient feedback is provided, both often enough and in enough detail.
9 – Feedback focuses on learning rather than on marks or students themselves.
12 – Feedback is understandable to students, given their sophistication.

Moving with, across and beyond the usually fragmented disciplines such as dance, voice and drama, a new kind of integrative practice (SoundBody) was formed as a way of training and assessing performing arts students. Via a transdisciplinary approach the voice served as an integrating agent of the whole body, culminating in the blurring of discipline boundaries. Essentially, a unitary type of inquiry emerged, which, as claimed by Stember (Chettiparamb, 2007), is highly conducive to enabling students to take a more holistic and individual approach to their learning. While recent studies have confirmed great benefits of interdisciplinary working, institutional structures driven by rigid assessment, time and resources militate against the emergence of innovative and original endeavours, promoting replication and formulation as well as a conformist way of thinking (Chettiparamb, 2007).

In order to provide a learning environment in which autonomy, interdependence and increased consciousness could flourish, it was essential to remove the usually strictly formulated learning outcomes with their inherent expectations. Instead, fundamental interdisciplinary qualities such as intuition, playfulness and experimentation were encouraged. The students were verbally guided to immerse themselves in integrative voice and movement work as well as to participate in mutual discourse on their experiences. However, the usual summative 'assessment' was substituted with an ongoing 'self-evaluation', or rather 'self-awareness', which opened up pathways for independent student-centred learning. Space and time were given for the students to 'uncover' the way their own bodies work, to process information (mental and physical) and to better understand how they learn. Thus, the attempt at an open-ended learning environment made not only for a more exciting and creative platform but also emphasised the student as producer, rather than consumer, of knowledge. After overcoming initial feelings of bewilderment about such freedom, the students gradually developed a certain trust in their own knowing, valuing feedback from the 'self' rather than merely from the 'other'. Demonstrating self-awareness of the interconnectedness of all the
bodily systems emerged as a new and central aim. Interconnectedness is indispensable to a professional performer. Moreover, every executed sound-movement task, whether improvised or choreographed, provided instantaneous personal, unique and comprehensible feedback experienced in the body, rather than existing in the form of an abstract thought or number. Even working collaboratively, sharing each other's journeys and providing continuous formative feedback via collective dialogue not only encouraged autonomous meta-learning skills, but equally underscored knowledge gained through the self.

As a result of the teacher stepping back from the process as assessor, the students had the freedom to make their connections and conclusions through themselves (self-reflection) and through each other (peer feedback). Their learning was enhanced: as highlighted by Rogers (1969, in Bryan, 2015), learning is maximised when judgements by the learner (in the form of self-assessment) are emphasised and judgements by the teacher are minimised. The emphasis, however, remained precisely on 'conscious self-awareness' rather than 'critical judgement', moving toward, as Barnett (1997) put it, a process of releasing ourselves from the shackles of beliefs or knowledge systems which serve to limit human potential. SoundBody practice is continuously developing, with the hope that it will be integrated into the regular curriculum of a performer's training regime in the near future.
Vignette 3 Design, build and test: objective assessment in engineering
Thomas Baker

Research conducted with students from University of Hertfordshire, School of Engineering and Technology, PGCert Learning and Teaching in Higher Education.

UKPSF evidence: A1–3; K2, K3; V3, V4.
Eleven conditions of assessment for learning: While the assessment in this case study reflects most of the eleven conditions, the focus is how a design and build assessment in an engineering context provides constant, immediate feedback of a nature that is meaningful for student learning – capturing the focus of conditions 5–9.

One of the challenges of many forms of assessment is that the quality and objectivity of the feedback – as well as its prompt delivery – are largely dependent on the tutor and can, therefore, vary from one
academic to another. The subject area of engineering provides an opportunity to give instantaneous and continuous feedback (through the duration of the assignment) which is completely unbiased and independent, with a minimum of tutor dependency. This opportunity arises due to the physical nature of engineering, where an assessment can be structured as a design, build and test activity. In such an assessment, the product or process being designed, built and tested provides instantaneous, continuous and objective feedback: the process or product being made either works or it doesn't! In addition, this approach to assessment in engineering presents opportunities for students to learn by doing – an approach underpinned by Claxton et al (2011) and Claxton (2015).

One example of this approach is in Thomas Baker's engineering fundamentals module. The objective of this first-year module is to familiarise students with basic mechanical engineering elements such as bearings, gears, pulleys, fasteners and so on, as well as basic engineering skills such as measuring length, applying torques and preparing technical drawings. The assessment is to design and build a simple craft which will – without steering from the operator – proceed along a pre-specified path on the floor and park in a specified area. Students are given a large set of assorted components from which to build their craft and may construct their craft as they wish – there is no prescribed way to tackle the task. In this respect it mirrors industry, where engineers have freedom to choose how to approach a problem and determine which components to use to achieve the desired outcome. Students thus plan the design in their own time and are given three practical sessions to build and perfect their design – each build session provides immediate feedback from the craft they are building! The final session is an assessed final build where students demonstrate their craft in action. Marks are awarded for the accuracy with which the craft manages to park in the parking space, the energy required to do so, safety of operation and the accuracy of the built craft compared to the technical drawings which accompany it (again mimicking industry). Students typically work in groups of four to six.

Feedback and a provisional mark are given immediately following the session, in a meeting with each group, and students are invited to record the feedback on their phones. Since many of the performance metrics can be measured, the marks are available immediately and are completely objective. The feedback meeting begins with students being asked to reflect on the session and continues with a discussion on how the design and build might be improved. The session concludes with asking if they would like any further feedback, if they are satisfied with their feedback and if they feel that the marks have been awarded fairly. In almost all cases this is affirmative. If not, further feedback is provided.
This method has resulted in excellent student feedback in the area of feedback and assessment, with these areas being among the top-scoring categories in the end-of-module student feedback questionnaires.
Vignette 4 'Counting' formative assignments to enhance oral and written language skills
Fumi Giles

Research conducted with students from University of Hertfordshire, PGCert in Learning and Teaching in Higher Education.

UKPSF evidence: A3, A4, K4, V1, V3
Eleven conditions of assessment for learning:
2 – Quantity and distribution of student effort
5 and 6 – Quantity and timing of feedback
8 – Quality of feedback.

In Level 6 Japanese modules, there are four separate skill-based assessments: reading, writing, listening, and speaking. Each assessment is equally weighted (25%) towards the final mark. To improve productive skills (speaking and writing), development assignments are introduced to support student learning as detailed by Nicol and Macfarlane-Dick (2007). Two speaking and five writing development assignments are scheduled per semester. Assignments carry 10% towards the summative assessment and are designed to encourage students to get into the routine of practising these skills regularly by distributing effort evenly across the module. This approach not only reduces the stress associated with summative tests, it also provides the opportunity for students to identify and learn a wider range of vocabulary appropriate to each relevant topic, thereby ensuring they apply previously learnt vocabulary and grammar as much as possible. Detailed written/verbal feedback is provided after each assignment, which assists students to progress steadily. Topics are carefully selected for the development assignments to support those featured in the end-of-semester assessments.

The speaking assignments require students to record a monologue using their mobile phones (K4). For many, listening to their own voice on record can make them feel self-conscious. In particular, students with less confidence found the exercise quite uncomfortable. However, this challenge resulted in students focusing their efforts on checking all aspects of their assignment. Principal attention was paid to their pronunciation, intonation and communication fluency before submitting their recording online. The first assignment (5%) was scheduled a few weeks before the
speaking test (15%), which enabled students to receive useful feedback before the test, thus maximising their mark. The second assignment (5%) was after the test, towards the end of the term, in order to offer another opportunity which could contribute extra marks to their final grade. Regular blog writing assignments using a predetermined theme and strict word count challenged students to write regularly and concisely in Japanese. The five assignments set (10% in total) before the final test (15%) provided students with an accumulation of feedback to act upon to further improve their work and learning. Additionally, one of the blogs required students to write their impressions of one of their peers' blogs, providing the opportunity for collaborative learning.

The diverse but structured range of development assignments enables students to become better linguists by challenging them to adopt different approaches to learning, practising and improving not only their productive skills, but also their receptive skills (reading and listening). Students are further encouraged to participate in the formative assignments as their accumulated 'points' are converted into a small percentage which contributes to their summative grade. Continuous engagement with these tasks and a dialogic approach to feedback make student progress explicit, leading to attainment of the best possible results.
Note
1 For a full description of the UKPSF see Advance HE, UK Professional Standards Framework (UKPSF): www.heacademy.ac.uk/ukpsf (accessed 6 January 2019).
References

Barnett, R. (1997) Higher Education: A Critical Business. Buckingham: SRHE/Open University Press.
Bryan, C. (2015) Enhancing student learning. In: Lea, J. (ed.) Enhancing Learning and Teaching in Higher Education: Engaging with the Dimensions of Practice. Maidenhead: Open University Press, pp. 20–43.
Chettiparamb, A. (2007) Interdisciplinarity: A Literature Review. Southampton: Interdisciplinary Teaching and Learning Group, University of Southampton.
Claxton, G. (2015) Intelligence in the Flesh: Why Your Mind Needs Your Body Much More Than It Thinks. New Haven, CT, and London: Yale University Press.
Claxton, G., Chambers, M., Powell, G., Lucas, B. (2011) The Learning Powered School: Pioneering 21st Century Education. Bristol: TLO.
Mills Kelly, T. (2013) Teaching History in the Digital Age. Ann Arbor, MI: University of Michigan Press.
Nicol, D. J. and Macfarlane-Dick, D. (2007) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
20 Measure for measure
Wider applications of practices in professional assessment
Chris Maguire, Angela Devereux, Lynne Gell and Dimitra Pachi

Introduction

One of the most regrettable characteristics of the programme design process within higher education is the erosion of aspirational and creative learning outcomes because of the difficulty of assessing them reliably and cost-effectively (Krathwohl et al, 1964). This is particularly true of the abilities detailed within the affective domain, which are so important to professional practice and to what the Institute for Apprenticeships refers to as meta-skills. The components of the affective domain may be summarised as:
• receptiveness – openness to others, sensitivity and listening;
• responsiveness – confidence, empathy, cooperation, proactivity;
• valuing – beliefs and attitudes;
• organisation – value system;
• characterisation – expression of value system, through ethical and professional behaviours, commitment and resilience.
In this chapter, we offer analysis of assessment instruments in four different disciplines: law, educational psychology, nursing and environmental science. Each of the assessments seeks to meet the requirements of both academic and professional standards and uses the characteristics of the affective domain. Angela Devereux sets out the complex assessment route to qualification as a duty solicitor; Lynne Gell, the assessment of the practice learning component of a nursing degree and the difficulties that the affective domain brings to assessment; Dimitra Pachi explores an assessment based on an educational psychology task. Finally, we draw on the work of Mark Davies and describe an environmental science assessment that simulates real-world practice in scientific research. While the focus here is primarily on professional competence, we are sensitive to the significant developments in the skills and employability agenda within academic education. Rich (2015) offers a list of employability skills that, he argues, should be delivered within the undergraduate curriculum. Echoing Bourdieu (1992), Rich refers to these as ‘social capital’, indicating
the affective quality of these skills, their often socially contextualised acquisition and the implications for equality of opportunity and the widening participation agenda. We believe that the lens of professional practice provides a sharper focus on shared issues that are common across the sector.

The importance of the affective domain is inherent in each of our examples and is evidenced by, for example, the high correlation between how patients feel about how medical staff have communicated with them and the success or otherwise of biomedical outcomes and likelihood of malpractice litigation (Epstein & Hundert, 2002, p. 232). This correlation illustrates the value of affective behaviours, particularly within professional practice, where time is short, clients and patients fractious and emotional, and sometimes unable to deal with the facts of their situation. Without being overly dramatic, it is true to say that without affective abilities practitioners would be less successful, if not ineffective and potentially dangerous. Affect is more than 'soft skills'; it is an awareness of the wider, personal and societal impact of practice (Shuman et al, 2005). The Office of the Independent Adjudicator reports that for 2017, 55% of all complaints related to matters of academic status and assessment and that the key, common theme was fitness to practice (OIA, 2017). We hope that the examples provided here may assist colleagues seeking to develop the design of their assessment diet to accommodate these concerns.

The examples set out below share key, common features. They:
• are multimodal;
• are integrative;
• simulate reality;
• value memory and recall in the application of subject-specific knowledge;
• require the application of skills within the affective domain, including reflexivity and awareness of the effects of emotion and limits of competence in dealing with situations in practice;
• test adherence to ethical behaviour;
• recognise the danger of and seek to militate against bias;
• include a performative element.
They also accord with the approach set out in Miller's Framework for Assessment (Cate et al, 2018):

• knows: recall of facts, principles and theories;
• knows how: problem solving and following procedures;
• shows how: demonstration of skills in a controlled setting;
• does: observation of performance in practice.

This reflects the professional practice context, where the ability to determine competence is of paramount importance for the individual, their (future) clients or patients and their profession. The duty solicitor assessment is the most burdensome of our examples, requiring a portfolio of 25 cases. The Royal
College of General Practitioners has adopted a similar approach to augment its assessment diet (and address concerns about the dependence on the objective structured clinical examination, OSCE, commonly used in professional education), whereby candidates are required to submit several best-case recordings of performance in a clinical setting. While the actual number required is not specified, Epstein and Hundert consider that '27 cases may be necessary [to provide confidence] in high-stakes examinations' (Epstein & Hundert, 2002, p. 230). Both these examples highlight the high resource demands of reliability, authenticity and validity of the assessment method in high-stakes assessment and, as Gell observes, in combating overt and subjective bias in the marking process.

Extrapolating from professional practice to the academic context, we can see the advantages of developing students' affective abilities in tandem with their cognitive skills. They are foundational and enabling. They act as a prism through which knowledge and cognition are able to shine more clearly; they are transferable and inter-contextual, and they are valued by employers and government.
Duty solicitors: the assessment gateway

Duty solicitors provide representation and legal advice to those charged with certain offences at an early stage in magistrates' and youth court proceedings. Their role ensures greater equality between the state and the individual and reduces the risk of unsafe convictions. The duty solicitor mitigates the individual's lack of knowledge and/or understanding of the law; inability or reluctance to confront potential evidence; and difficulty in controlling their behaviour, which may be detrimental to their position. These mischiefs inform the design of the assessments chosen to measure the candidate's ability to fulfil the role and, in doing so, achieve the important goal of retaining the confidence of the profession.

Duty solicitors are assessed by a portfolio and an Interviewing and Advocacy Assessment (IAA). The portfolio requires the submission of 25 cases which demonstrate the range with which the candidate is dealing in relation to technical requirements, complexity, quality, context, venue and caseload at one sitting. The IAA complements the portfolio by requiring the candidate to demonstrate their ability through performance. The assessment consists of:
• a review of unseen case papers (8 minutes);
• an interview (15 minutes);
• preparation to represent the client and two others in court (35 minutes);
• representation in court of the three clients in three different types of hearing (sentencing, bail and procedural issue).
The assessment criteria cover: the client relationship; information and instructions; advice and ethics.
For advocacy, the criteria are:
• the way the candidate relates to the district judge (played by an assessor);
• the ability to apply legal knowledge to the client's best advantage, in accordance with their instructions and subject to the Solicitors' Regulation Authority Code of Conduct.
The IAA simulates the work of a duty solicitor by requiring the candidate to address multiple demands, in a range of professional environments, with diverse clients and under significant time pressure. This tests memory, fact management, adaptability, resilience, interpersonal skills and emotional intelligence, all part of the affective domain and all difficult to assess through conventional means. They are assessed by the judge and the client. Students must pass all the elements; if they lack any of the abilities required they will fail. All performances are recorded and reviewed by other assessors and the external examiner to ensure objectivity.

The most frequent causes of failure are inadequate knowledge of the law and analysis of the facts and, less frequently, inability to relate to the client. Experience shows that the less able candidates may be able to pass the interview but that having then to use that information on their feet, and also prepare to represent the other two 'paper' clients, reveals the tenuous grasp they have of the facts or their lack of experience – evidenced sometimes by gaps in knowledge or by their inability to juggle several sets of facts and present the cases in sequence in a structured manner. While the assessment diet may appear burdensome (and expensive), any lighter model risks losing integration, integrity and scope. The qualities necessary to overcome the difficulties of handling the client or the court are such a vital part of professional conduct that they must be tested, and so judgement of the use and extent of 'affect' qualities must take place.

Multiple assessment components are not only a feature of high-stakes and expensive professional assessment; they commonly feature in undergraduate assessment, as illustrated by Pachi in our next example, even in those subjects without professional accreditation considerations. What, perhaps, differentiates professional assessment is the extent to which these different assessment components are designed to integrate to test the learning outcomes holistically rather than separately, which facilitates more effective testing of the affective domain as well as the cognitive.
Experiencing what it is to be an educational psychologist

The 'Educational Psychology and Special Educational Needs' level 6 module in Psychology at BPP University aims to test students' knowledge and understanding of the syllabus (both the theoretical and empirical) in educational psychology, their level of academic skills, their professional and affective skills (i.e. self-management, problem solving, participant
understanding, communication, flexibility), as well as the development of more purely affective areas (such as their motivation, attitude and confidence in psychology). The module has two summative and two formative elements, each pair focusing on different learning outcomes and learning styles. In this chapter, I focus on formative and summative assessment 1. The formative element of this assessment reflects and prepares students for the summative assessment. The summative element consists of:
• the student arranges the observation of a primary or secondary teacher delivering group instruction;
• the student completes the observation report form contemporaneously during the observation;
• the student re-works the report to add literature resources to describe and support their evaluation;
• the student formulates recommendations for improvement of the teacher's instruction based on the literature in educational psychology;
• the student gives a twenty-minute oral presentation.
The assessment incorporates authentic and performance validity (Woolfolk et al, 2013). The students are required not only to know how to do a task but to do it, which reveals their approach to process (Airasian, 2001). This assessment allows students to consolidate and deepen their learning through discovery (Bruner, 1961), applying inductive reasoning while working on their case study. This assessment tests students on a range of skills, which are all part of a realistic task that an educational psychologist would perform in practice. This contextual realism increases students' professional insight, awareness and commitment to practice. These aims follow all principles of effective testing (Slavin, 2012). Student feedback consistently reports that students valued the practical insight into educational psychology: the tasks were different from usual and helped learning, used real-life scenarios, and the module was 'action packed'.
Valuing nursing practice learning

The Nursing and Midwifery Council requires equal weighting of the assessment of practice and assessment of theory in the final award of the nursing degree, and there is an ever-increasing focus on fitness to practice. Assessments are designed to ensure that students are safe and effective in practice. To reflect this competency approach, summative assessments are graded pass/fail, with the 360 credits of the programme split equally between theory and practice. Students complete three formative assessments: two reflective accounts applying Gibbs' model of reflection, online activities and tests that simulate the demands of OSCEs, and an online test on knowledge of drugs and dosage calculations. These three elements prepare for the summative assessments,
which include OSCEs, knowledge and calculations of drug dosage, and a range of competencies and proficiencies detailed in an assessment of practice learning document.

Academics and clinicians have repeatedly debated assigning a grade in the assessment of practice learning. The challenges to doing so are variability, subjectivity and whether it is educationally appropriate. As a result, most undergraduate nurse programmes are assessed on a pass/fail basis. It is recognised that, while the varied clinical environments make assessing students' practice performance a challenge, the validity and reliability of assessment practices should be maintained irrespective of the grading system applied (Gopee, 2011). Furthermore, many assumptions about criterion-referenced assessment and the associated reluctance to grade are unfounded when dealing with competency rather than with traditional behavioural objectives. By using both criterion- and norm-referenced assessment (Cantillon & Wood, 2010), graded assessment in practice could clarify and impress on students the minimal competency requirements, while at the same time rewarding effort and achievement. Criterion-referenced assessment would better identify what students did and did not do when meeting the competency standard.

Students often comment that grading practice would be fair and beneficial: distinguishing between strategic and committed students. Mentors would be able to make finer judgements about the student's performance and would provide a better way of communicating the assessment of theory application in practice, which is fundamental to a practice-based profession such as nursing. Evaluation generates valuable discussion. Students recognise that they perform quite differently to each other and feel that their application of theory to practice and professional behaviours should not be limited by a pass/fail assessment any more than their academic work is, and that this would also assist employers in recruitment.

A study by Calman et al (2002) found that one institution had stopped grading midwifery and nursing students' clinical competence because higher than expected grades were consistently awarded. The students believed that the assessment was open to bias because it depended on 'how well you fitted in' and complained of inconsistency between assessors. Rigorous assessment rubrics and grading criteria are therefore key to ensuring validity and reliability. Educators and clinicians must work together to establish coherent, objective criteria, which will provide a rigorously determined assessed grade for the student's performance in practice learning (Wigens and Heathershaw, 2013). Carefully constructed rubrics can also combat grade inflation. Effective rubrics require three key elements: clearly defined performance criteria; detailed descriptions of what performance looks like at each level of proficiency; and a grading scale which generally uses three or four statements (Isaacson & Stacy, 2009). This would ensure that the award reflects the
student's overall nursing ability and his or her practical performance as well as his or her theoretical knowledge.
Affective benefits from correcting unintended consequences of assessment design: lessons from coastal ecology

Drawing on the work of Davies and subsequent developments to it (Davies, 2003; Davies & Maguire, 2018), we report on the experience of students on a course in coastal ecology at the University of Sunderland. The students conduct a planned project during a residential field course. They may apply to do anything they like, so long as the scientific credibility is explained and justified. They may employ as many staff, use as much equipment, and use any location they see fit, so long as requirements are fully justified within the framework of the project and are fully costed. Students are asked to imagine that they have up to three years to conduct the study. Students are responsible for: planning the study; establishing its background; identifying the scientific equipment needed; applying for funding on a research council form; giving a conference presentation; and writing the scientific paper.

Once students have completed the practical work on the field course, the requirement to give a 'conference'-style presentation and write the scientific paper exposes them to the realities of being a scientist. It also helps students gain transferable skills useful in most graduate jobs. It is novel and participative: students must work in small groups. The practical element, particularly in executing one's own designs, helps students to evaluate their own abilities and potential, while the presentation forces students to consciously reflect on and articulate their experiences and findings on the module, aiding the acquisition of deep learning. Perhaps most importantly, students regard this approach as a 'fun' way to learn, and when learning is enjoyable it often becomes 'deep'.

However, the diet of assessments was not entirely successful in its first iteration. At the start of the field trip students were reminded that they would be required to make a presentation, which then preoccupied them to the extent that they took time out from their research to prepare. This was an unintended consequence of the assessment diet, influencing student behaviour in a way that undermined the primary learning outcomes of the module. In consequence, in future iterations students were not reminded about the presentations until shortly before they were due. Naturally, this caused some anxiety, which added to the reality of the exercise. The reduced preparation time and heightened anxiety led to the presentations being more focused and effective. The exercise also built students' confidence because not only had they presented well, they had done so under pressure.
Conclusion

Each of the four scenarios seeks to simulate a real-world context. There is a correlation between the assessment stakes, the realism of the context, the pressure on the candidate and the demands on their affective domain skills. There is also a correlation with the complexity and therefore costs of assessment. While all the assessments described here are multimodal, the higher the stakes and the closer the gateway to practice, the greater the evidence base (e.g. the 25 cases required for assessment as a duty solicitor). Repeated achievement in multiple scenarios and practice contexts is required to give confidence that not only has competency been demonstrated but that it has been demonstrated consistently. This is reinforced in the integration and interdependence of the components within an assessment: the interviewing skills of the duty solicitor inform case analysis and fact management, which are the foundation for the advocacy presentation. The same approach is present in the other examples: Gell's tripartite OSCE, drug calculations and reflective accounts seek to ascertain both the specific elements of competency (e.g. administering the correct drug in the right dosage) and, holistically, the student's fitness to practice. Undergraduate psychology is perhaps the most traditional subject among the examples here; Pachi's assessment diet reflects the multimodal, contextualised approach presented by Devereux and Gell in that it requires a contemporaneous note, formal recommendations reviewed against and supported by the literature and, finally, an oral presentation. Similarly, Davies' fieldwork assessment has inherent validity, with the multimodal and integrative dimension reinforced through the demands of the (unremembered) presentation.

All the assessments described in the case studies foreground the affective domain. Each assessment requires the demonstration of interpersonal and communication skills within the professional context: the duty solicitor and the nurse must demonstrate high levels of empathy, listening skills, self-presentation, professional competence and authority, calmness, resilience and care; the fieldwork student must be an active and effective group member; the student psychologist must listen carefully, evaluate empathetically and make their recommendations fairly and dispassionately.

Pressure, in relation to assessment, has two meanings: anxiety and force. Anxiety is the more commonly understood of the two when the terms are unpacked. It is accepted as an innate part of assessment, traditionally most acutely felt by undergraduates in unseen proctored examinations. According to Van Tuleken, sitting an assessment evokes the same instinctive physiological response as animal attack and immersion in cold water (Van Tuleken et al, 2018). For some students, it is so acute and debilitating that it is a recognised medical condition. For such students, reasonable adjustments are made to the assessment method or environment, provided these do not compromise the professional competency standard. Within professional assessment, pressure is used in the traditional sense of bringing force to bear. Professional assessment actively seeks to put the student
239
Measure for measure 239 under the sort of pressure they will face in practice to ascertain whether they can cope with both the demands and the effects without becoming ineffective or damaging. This is most clearly evidenced in the complex, strictly time- constrained components in the assessment for the duty solicitor: three consecutive, inter-dependent activities of 8, 15 and 35 minutes prior to representation in court; but is also present in each of the other assessments: Gell’s OSCE’s, Pachi’s contemporaneous note, Davies’ unremembered presentation. Anxiety in the form of concern or even passion, is also a feature, if not a criterion, within these assessments. It starts with the profession and the educator’s concern to ensure that the next generation is better equipped than the previous one, and does not encounter the shortcomings the educator may have suffered in their own professional education journey. These shortcomings have been highlighted by Nicola Dandridge, the Chief Executive of the Office for Students, noting that over 25% of graduates say they are over qualified for the jobs they are doing, while employers say they are under-skilled: This apparent mismatch between what a university education may deliver and what employers say they need underlines the importance of keeping employability in sharp focus throughout students’ experience of higher education. (Dandridge, 2018) We would argue that both a sophisticated and transformative solution, drawing on the experience of professional education, would be the foregrounding of affective abilities within programme design throughout higher education and their evaluation in the assessment diet. We suggest that this would go beyond the employability agenda and the swamp of practice and help address the complex issues arising from matters such as grade inflation, the Prevent duty, citizenship and graduate skills, while firmly keeping in view the value of a liberal education and how these all engage with the rich diversity of human nature.
References

Airasian, P. W. (2001) Classroom Assessment: Concepts and Applications (4th edn). New York, NY: McGraw-Hill.
Bourdieu, P. (1992) The Logic of Practice. Cambridge: Polity Press.
Bruner, J. S. (1961) The act of discovery. Harvard Educational Review 31: 21–32.
Calman, L., Watson, R., Norman, I., Redfern, S., Murrells, T. (2002) Assessing practice of student nurses: methods, preparation of assessors and student views. Journal of Advanced Nursing 38 (5): 516–523.
Cantillon, P. and Wood, D. (2010) ABC of Learning and Teaching in Medicine (2nd edn). Chichester: Wiley-Blackwell.
Cate, O. T., Custers, E. J. F. M., Durning, S. J. (eds) (2018) Principles and Practice of Case-based Clinical Reasoning Education: A Method for Preclinical Students. Heidelberg: Springer Open.
Dandridge, N. (2018) Nicola Dandridge: Improving graduate employability. Office for Students, 17 September (blog post). www.officeforstudents.org.uk/news-blog-and-events/news-and-blog/nicola-dandridge-improving-graduate-employability (accessed 6 January 2019).
Davies, M. S. (2003) Producing a research proposal, paper and presentation. Real World FDTL Project.
Davies, M. S. and Maguire, C. W. (2018) Unintended Consequences of Assessment Design. A Research Interview. Unpublished.
Epstein, R. M. and Hundert, E. M. (2002) Defining and assessing professional competence. JAMA 287 (2): 226–235.
Gopee, N. (2011) Mentoring and Supervision in Healthcare (2nd edn). London: Sage.
Isaacson, J. J. and Stacy, A. S. (2009) Rubrics for clinical evaluation: objectifying the subjective experience. Nurse Education in Practice 9 (2): 134–140.
Krathwohl, D. R., Bloom, B. S., Masia, B. B. (1964) Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain. New York, NY: David McKay.
Nursing and Midwifery Council (2010) Standards for Pre-Registration Nursing Education. London: NMC.
OIA (2017) Annual Report 2017. Reading: Office of the Independent Adjudicator.
Rich, J. (2015) Employability: Degrees of Value. Occasional Paper 12. Oxford: Higher Education Policy Institute.
Shuman, L., Besterfield-Sacre, M., McGourty, J. (2005) The ABET 'professional skills' – can they be taught? Can they be assessed? Journal of Engineering Education 94 (1): 41–55.
Slavin, R. E. (2012) Educational Psychology: Theory into Practice (10th edn). Boston, MA: Allyn and Bacon.
Van Tulleken, C., Tipton, M., Massey, H., Harper, M. C. (2018) Open water swimming as a treatment for major depressive disorder. BMJ Case Reports 2018. DOI: 10.1136/bcr-2018-225007.
Wigens, S. and Heathershaw, R. (2013) Mentorship and Clinical Supervision Skills in Healthcare. Andover: Cengage Learning.
Woolfolk, A., Hughes, M., Walkup, V. (2013) Psychology in Education (2nd edn). Harlow: Pearson.
Conclusion
Resilience, resourcefulness and reflections
Cordelia Bryan and Karen Clegg
Context: the need for innovation

There is now, more than ever, an urgent need for innovation, given that the current discourse of assessment continues to emphasise measurement and grades over opportunity. This discourse is a symptom of modernity and our concern with accounting for and legitimising knowledge. We argue that as practitioners we need to acknowledge the shortcomings of educational assessment and let go of the idea that we can claim a scientific 'reliability' in what is essentially a subjective practice. As argued in the first edition, assessment is a 'frail and flawed' technology. Yet still we cling to the notion of 'objective' assessment, invigilated exams and online individual assessments, largely due to our desire to be seen to be 'rigorous' and our fear of the threat of plagiarism.

We live now in what Barnett (2000) describes as a 'high risk, super complex' society, 'characterised by uncertainty and unpredictability'. What is called for to counteract the increasing corporatisation, commodification and objectification of knowledge production may be no less than anarchism. Anarchism, as a body of theories and practices advocated by Rouhani (2012), provides support for creative, non-coercive, practical learning spaces – this is what we hope this collection of innovative assessments achieves within existing neoliberalising institutions. Collectively, here, we offer some uniquely situated case studies for educational change, most of which are firmly rooted within a grounded, liberating, student-led critical pedagogy.

In this context, we need to know what, how and the extent to which a student can apply knowledge. As Birenbaum and Dochy (1996: 4) suggest, successful functioning in this era demands an adaptable, free-thinking, autonomous person who is a self-regulated learner, capable of communicating and cooperating with others. Twenty-first century higher education is responding to this shifting nature of learning and societal requirements, as exemplified in the case studies in this book. It is notable how few are concerned with the assessment of content knowledge and how many focus on the assessment of skills, abilities and capabilities (Kane and Banham, Chapter 8; Gilbert and Bryan, Chapter 13; Clegg and Martin, Chapter 18; and Maguire, Chapter 20). These qualities are what Claxton (1999) refers to as the learner's 'toolkit', which he defines as the three 'Rs': resilience, resourcefulness and reflection, borrowed here for the title of our conclusion, as we advocate that successful academic practitioners also need these qualities if we are to support ourselves and our students to cope with the super-complexity of the western world.

Interwoven throughout the chapters is the idea that good assessment involves active engagement with real-life learning tasks. Researchers such as Birenbaum (2003), Gulikers et al. (2004) and Villarroel et al. (2018) maintain that this focus on 'authenticity', as defined by the relationship of the task to the context in which it is set, is what sets innovative assessment apart from more traditional assessments. If assessment tasks are representative of the context being studied and both relevant and meaningful to those involved, then they may be described as 'authentic'. This marriage between context and tasks is demonstrated by several case studies, and in particular those in Part IV, where the focus is on helping learners in different subject disciplines develop professional knowledge, skills and attitudes.

In many UK universities, work is being developed to equip PhD students and researchers with a range of skills to enhance their academic practice and performance as researchers and as supporters of learning. This commitment to the development of future university teachers illustrates a most welcome shift in values and a recognition that professional development is not just about meeting a series of assessments to show competence but, as Elton (2006) advocated, is about developing 'connoisseurship'. This interpretivist approach is akin to that traditionally found in arts and design programmes, where assessments rely on the 'connoisseurship' of professionally developed 'examiners' – a concept originally conceptualised by Eisner (1985) – who would, of course, have had to go through a process of formation or enculturation. Several of the case studies in this edition offer different ways of guiding students through a similar process of formation in an attempt to develop their evaluative skills. This master–apprentice partnership, which ideally leads to a sense of 'shared connoisseurship', is fundamental to assessment for learning. McDowell, to whom this collection is dedicated, identified the key components as being:
• authenticity
• enhancing formative assessment
• active and participatory learning
• feedback through dialogue and participation
• student autonomy (McDowell, 2012).
Assessment and emotion

The emotional impact of assessment has, until relatively recently, been underrated, but it is central to motivation and to case studies in this collected volume (Chapters 10, 11 and 13). Being assessed is an emotional experience. Having a judgement made about the quality of your work can be a potentially humiliating experience, which is why Pitt (Chapter 10) explores students' ability to process especially negative emotions when receiving grades that may not match their expectations, and Winstone and Nash (Chapter 11) seek to develop students' feedback literacy and their 'proactive recipience'. Gilbert and Bryan (Chapter 13) focus on group work and processes of collaboration to enhance students' rational compassion. They emphasise the need for training and assessing students' emotionally intelligent actions, taken towards shaping positive group dynamics and interactions. They provide evidence that these are cognitive, deliberative behavioural skills that can be, and are being, trained and assessed in higher education, in very little time and at no financial cost.

Research shows that events which are highly charged with emotion tend to be well remembered (Cannon & Edmondson, 2005), which would account for the minute details people can recall about sitting their finals, taking a driving test or going through their PhD viva. For many learners, it is the explicit recognition of having done well, or discovering how to achieve excellence, that drives learning. Of course, there are those who learn just for the love of understanding, but in western society that learning is verified and celebrated through certificates, post-nominals and promotion. Assessment outcomes can mark success, status and public accolade. The corollary for those who do not do well is lack of confidence, a well-documented inhibitor to learning which can result in student dropout (Yorke, 2002; Nicol & Macfarlane-Dick, 2007; Kilbride, 2014).

Dweck's widely cited work on self-theories explains how learners develop either an entity (fixed) or incremental (developmental) theory of intelligence. Those students who subscribe to an entity view believe that assessment is an all-encompassing activity that defines them as people. If they fail at the task, they are failures. Those students who feel that intelligence is incremental, she claims, have little or no fear of failure (Dweck, 2009). Good assessment, then, should help students appreciate challenge while also being sensitive to the range of negative emotions associated with fear of failure. The more we can, as Rouhani (2012) advocates, develop and support non-coercive, practical learning environments that enable students to experiment, to get things wrong and to learn from their mistakes, the less likely they are to adopt an entity view. By developing supportive, compassionate and authentic learning environments we may also help to reduce the fear of failure and protect student wellbeing.

Assessment also has the power to combat some of our concerns about plagiarism. We cannot legislate for the determined student intent on doing well at the expense of others, but we can address poorly designed assessment such that the majority of students are inspired enough not to compromise their academic integrity. Innovative, authentic assessment that provides opportunities for students to evaluate their performance against negotiated criteria, and that offers timely, quality feedback and feedforward, goes a long way towards combating the temptations of 'cheat sites' and essay mills by using formats for which such sites present difficulties.
What the case studies show is that good assessments centre on the process of learning and examine the extent to which an individual has increased skills and understanding – and can articulate that. This process can be evidenced through, for example, oral examination, viva, debate, portfolio, skeleton drafts, reflective logs, teamwork projects and any other method that requires the student to account for the process of learning and the links and connections that prompted him or her to relate one piece of information to another and to his/her own experience. Accounting for how you learnt, and offering a rationale for choosing to cite one particular source over another, is very difficult if you are not the author or orator of the particular piece of work. It is apparent, particularly in speech, when students are 'waffling', and it is very difficult, even for the most practised orator, to pass off the work of someone else as his/her own when faced with inquisitive peers ready to ask challenging questions.

Carroll (2013) suggests that in designing assessment, tutors should ensure that tasks include reference to current affairs (not dated ones that enable old essays to be recycled) and require students to access primary sources. The more the task involves the student conducting research or referring to recent references, the less opportunity there is likely to be for plagiarism. There are endless possibilities for the learner to demonstrate thinking and engagement with the task, all of which make learning and teaching more interesting and plagiarism or cheating by impersonation more difficult. In developing a set of guiding principles for innovative assessment, it may be useful to extend the definition of 'authentic' to include the verification and authentication of the ownership of the assessed work ('whodunnit?'), as Race (2006) suggests. In the light of debate suggesting that academics are not beyond passing off someone else's work as their own (Anonymous Academic, 2017), perhaps we should also consider building into any new framework of professional standards a concept of academic scholarship legitimised by data protection.
Levers for change

Adachi and Tai (Chapter 5) highlight the transformative role of self- and peer-assessment in helping students to develop their critical thinking and understanding of standards. Their work corroborates earlier research (Cowan, 1998; Sadler, 1998; Hinett, 1999; Boud, 2000, 2013), illustrating how involving students in discussions about quality and standards provides them with better insight into the assessment process. Gibbs (Chapter 2) also highlights the value of using peer feedback and exemplars in that they provide prompt and efficient feedback to students. However, none of this denies the real need for students to develop a sense of ownership of their work and to see learning as a series of incremental steps facilitated by assessment tasks.

In most subjects, especially those like education studies, law and health that are related to a profession, there are rules, theories and conventions that are used to provide the foundation for the curriculum. However, in postgraduate research programmes such as taught doctorates, there is an expectation that the conventions will be questioned and that current thinking will be challenged. The same is true of master's-level courses in creative disciplines such as art and fashion, where the emphasis on originality requires that boundaries are pushed. On the other hand, few traditional assessment modes lend themselves to judging what is inherently subjective. Dacre's case study (Chapter 16) outlines some employability challenges in assessing simulated professional practice in the performing arts, where students are assessed on the effectiveness of their participation in a variety of roles ranging from performance to stage design. In this instance, complexities relating to parity of experience, enhancing employability and applying rigorous assessment all call for imaginative solutions. It is posited that even the audience response (as one element of assessment) may be usefully taken into consideration.

Maharg (Chapter 17) uses an example of digital modes of legal education to illustrate complexities which often prevent wide-scale take-up of innovations in learning and assessment. He focuses on the next stage, the dissemination and further implementation required if innovations are to bring about wide-scale transformation, and attempts some answers to an apparently simple question: why is it that so many promising digital education innovations, conceived, theorised, carefully implemented and with equally promising results, fail to be taken up more generally in higher education? If innovators are to make a difference beyond their own practices and theories, he argues, we need to give equal thought to the social context of change and determine our own levers for that change. This is indeed a challenge, and one which we extend to all innovators in the academic community, although it goes beyond the remit of this volume.

One major lever for change, however, is advocated by Jessop (Chapter 3), who reports on the work of Transforming the Experience of Students through Assessment (TESTA), as originated by Graham Gibbs and colleagues (Jessop et al., 2011). The TESTA project, and the work carried out in the UK, India and Australia, sought to understand assessment patterns on degree programmes and subsequently advocated that institutions adopt a programme-wide approach to assessment. TESTA has thus given the sector new ways of thinking about assessment and feedback. First, it has prompted universities and programme teams to take a more holistic approach to the design of assessment, one which looks outside the 'module box'. This in itself is prompting a 'step change' in curriculum design. Second, TESTA has encouraged a more critical approach to the technical rational quality assurance machinery, so that the paper trail of audit and accountability contained in templates and validation documents is increasingly seen as a second-order issue, with more ownership of curriculum design vested in disciplinary academic teams. This is a welcome shift in perspective which directly addresses student satisfaction issues (for example, UK National Student Survey feedback) and aims to place student learning at the heart of curriculum design.
Fostering and assessing professional judgement

If students are to be capable of dealing with the unknown they need to be able to create solutions. Much has been written on fostering creativity in education (see Shaheen, 2010 for an overview of the literature), with a general shift in the discourse which recognises that the concept of creativity may not just be applicable to 'high flyers' but that it may encompass a whole continuum from replication, through formulation and innovation, to origination (Fennell, 1993). In other words, everyone is capable of creative action. Fennell's model widens the focus, shifts it away from the 'genius end' of creativity that used to be the main focal point for discussion and allows for a developmental approach to be taken. Employing methods of assessment so that students get used to developing effective learning is central to many of the innovations in this volume.

Assessing creative actions or creative products requires that we view assessment differently and think of the process almost as an art form which seeks both to make sense of and to make judgements about the unknown. The professional judgement required to assess the unknown need not, and should not, mean exclusively tutor judgement of student work. It is a concept which requires training, education and acculturation towards becoming professional 'connoisseurs'. Developing professional judgement therefore applies equally to both student and 'professional', who may then engage in a discourse about the creative process or product before negotiating and agreeing upon a grade.
Where do we go from here?

As practitioners we need to be mindful that there are usually no quick fixes and that as innovators we constantly seek to find a balance between the gains and risks of different modes of assessment. For example, sometimes it may be appropriate to go for quick and efficient feedback at the expense of detail and depth, while at other times we may choose to balance this with a task which requires students to adopt a deep approach to learning. Sometimes, if we're honest, we go for what's easiest. Further research in the development of creativity and professional judgement is needed so that innovative and creative assessment solutions may become common practice within the higher education community. As we have evidenced throughout the book, modern society requires a fundamentally different conceptual discourse for assessment, akin to what Broadfoot advocated two decades ago:

What is urgently needed now is the beginnings of an active search for a more humanistic, even intuitive, approach to educational assessment which is more in keeping with the spirit and needs of the times. (Broadfoot, 2000: 201)

It is the privilege of having achieved economic stability that allows western society to be concerned with information, and allows our educational challenges to be with helping learners discern between what knowledge is needed, how to look for it and how to use it with integrity. As Norton (Chapter 9) asserts, assessment design, marking and feedback are fundamentally contextualised in what Malcolm and Zukas (2009) describe as the messy experience of academic work. We need to enter into a discourse on the perceived 'expectation gap' between what students think is acceptable and the reality of what is required of our citizens. We also need to take into consideration the 'view from the ground', namely academic perceptions of assessment practice, which may go some way towards explaining why there can still be a mismatch between the espoused and the actual assessment practice. For that reason, Norton describes assessment as a 'wicked problem': it cannot easily be defined; it changes as we study it, so that there are rarely, if ever, definitive solutions; and every response creates new problems or challenges of its own.

What we are proposing is a move from techno-rationalism to a more sophisticated, intuitive concept of assessment that accepts and embraces the subjectivity of judgement, drawing on a pedagogy of narrative disclosure. In disclosing our own struggles and continuous grappling with the 'wicked problem' of assessment, we aim to model for our students an openness to multiple perspectives and to engage them too in the discourse of assessment. As academic practitioners we therefore need to be alert to our own unconscious biases, in a way similar to that advocated by Brookfield (2016) when he writes about diversity, and to admit that we do not, in fact, have assessment 'stitched up'. Furthermore, it is laudable to focus our own continuing professional development on the search for multiple assessment options from which we may then select and adapt approaches for our context to best serve our diverse student body.

The chapters in this book have provided evidence to support the case for a re-conceptualisation of assessment as an instrument of empowerment and liberation rather than one of measurement and limitation. The case studies offered here form part of a growing evidence base of innovative practice (Schwartz & Webb, 2002; Mockler, 2011; Finley & McNair, 2013) illustrating that assessment can serve both a political and an educational purpose. This new assessment culture is defined by the following characteristics:
• active participation in authentic, real-life tasks that require the application of existing knowledge and skills;
• participation in a dialogue and conversation between learners (including tutors);
• engagement with and development of criteria and self-regulation of one's own work;
• employment of a range of diverse assessment modes and methods adapted from different subject disciplines;
• opportunity to develop and apply attributes such as reflection, resilience, resourcefulness and professional judgement and conduct in relation to problems;
• acceptance of the limitations of judgement and the value of dialogue in developing new ways of working;
• communities of practice as vital elements of learning.
This edited collection contributes to a new assessment paradigm built on the evidence base of research into student learning. What the authors in this book share is a common belief that assessment and learning should be seen in tandem. Each should contribute to the other. Collectively we have acknowledged that there are problems with existing assessment and that we are 'short-changing' students when we fail to provide them with the feedback they need in order to improve. The clichéd feedback phrase 'could do better' applies to ourselves as practitioners and agents of change, not just to our students. By definition, to be 'innovative' means improving and advancing our academic practice. It does not signal an achieved state of excellence but a constant search for enhancement. We invite others to join us in our search for excellence and to take the moral high ground, stand up for what research tells us is right and commit to better assessment processes.
References

Anonymous Academic (2017) Plagiarism is rife in academia, so why is it rarely acknowledged? Guardian, 27 October. www.theguardian.com/higher-education-network/2017/oct/27/plagiarism-is-rife-in-academia-so-why-is-it-rarely-acknowledged (accessed 7 January 2019).
Barnett, R. (2000) Realizing the University in an Age of Supercomplexity. Buckingham: SRHE and Open University Press.
Birenbaum, M. (2003) New insights into learning and teaching and their implications for assessment. In: Segers, M., Dochy, F., Cascallar, E. (eds) Optimising New Modes of Assessment: In Search of Qualities and Standards. Dordrecht: Kluwer Academic, pp. 13–36.
Birenbaum, M. and Dochy, F. J. R. C. (eds) (1996) Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge. Boston, MA: Kluwer.
Boud, D. (2000) Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education 22 (2): 151–167.
Boud, D. (2013) Enhancing Learning through Self Assessment. London: Routledge Falmer.
Broadfoot, P. (2000) Assessment and intuition. In: Atkinson, T. and Claxton, G. (eds) The Intuitive Practitioner. Buckingham: Open University Press, pp. 199–219.
Brookfield, S. (2016) White teachers in diverse classrooms: using narrative to address teaching about racial dynamics. In: Scott, C. and Sims, J. (eds) Developing Workforce Diversity Programs, Curriculum and Degrees in Higher Education. Hershey, PA: IGI Publishing.
Cannon, M. D. and Edmondson, A. C. (2005) Failing to learn and learning to fail (intelligently): how great organizations put failure to work to innovate and improve. Long Range Planning 38: 299–319.
Carroll, J. (2013) A Handbook for Deterring Plagiarism in Higher Education (2nd edn). Oxford: Oxford Centre for Staff and Learning Development, Oxford Brookes University.
Claxton, G. (1999) Wise Up: The Challenge of Lifelong Learning. New York, NY: Bloomsbury.
Cowan, J. (1998) On Becoming an Innovative University Teacher: Reflection in Action. Buckingham: SRHE and Open University Press.
Dweck, C. (2009) Who will the 21st century learners be? Knowledge Quest 38 (2): 8–9.
Eisner, E. W. (1985) The Art of Educational Evaluation. London: Falmer Press.
Elton, L. (2006) Academic professionalism – the need for change. In: Bryan, C. and Clegg, K. (eds) Innovative Assessment in Higher Education. Abingdon: Routledge, pp. 209–215.
Falchikov, N. (2013) Improving Assessment Through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education. London: Routledge.
Fennell, E. (1993) Categorising creativity. Competence and Assessment 23: 7.
Finley, A. and McNair, T. (2013) Assessing Underserved Students' Engagement in High-Impact Practices. Washington, DC: Association of American Colleges and Universities.
Gulikers, J., Bastiaens, T., Kirschner, P. (2004) Perceptions of authentic assessment: five dimensions of authenticity. Paper presented at the Second Biannual Joint Northumbria/EARLI SIG Conference, Bergen.
Hinett, K. (1999) Improving Learning Through Reflection – Part 2. York: Higher Education Academy.
Jessop, T., El-Hakim, Y. and Gibbs, G. (2011) The TESTA project: research inspiring change. Educational Developments 12 (4): 12–16.
Kilbride, D. (2014) Recognising self-esteem in our pupils: how do we define and manage it? Research in Teacher Education 4 (2): 17–21.
Malcolm, J. and Zukas, M. (2009) Making a mess of academic work: experience, purpose and identity. Teaching in Higher Education 14 (5): 495–506.
McDowell, L. (2012) Assessment for learning. In: Clouder, L., Broughan, C., Jewell, S., Steventon, G. (eds) Improving Student Engagement and Development Through Assessment: Theory and Practice in Higher Education. London: Routledge, pp. 73–85.
Mockler, N. (2011) Beyond 'what works': understanding teacher identity as a practical and political tool. Teachers and Teaching: Theory and Practice 17 (5): 517–528.
Nicol, D. J. and Macfarlane-Dick, D. (2007) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education 31 (2): 199–218.
Race, P. (2006) The Lecturer's Toolkit (3rd edn). Abingdon: Routledge.
Rouhani, F. (2012) Practice what you teach: facilitating anarchism in and out of the classroom. Antipode 44 (5): 1726–1741.
Sadler, D. R. (1998) Formative assessment: revisiting the territory. Assessment in Education: Principles, Policy and Practice 5 (1): 77–84.
Schwartz, P. and Webb, G. (2002) Assessment: Case Studies, Experience and Practice from Higher Education. Abingdon: Routledge.
Shaheen, R. (2010) Creativity and education. Creative Education 1 (3): 166–169.
Villarroel, V., Bloxham, S., Bruna, D., Bruna, C., Herrera-Seda, C. (2018) Authentic assessment: creating a blueprint for course design. Assessment and Evaluation in Higher Education 43 (5): 840–854.
Yorke, M. (2002) Employability in Higher Education: What it is – What it is Not. Learning and Employability, Series 1. York: Higher Education Academy.
Index
Note: Page numbers in italic refer to figures; page numbers in bold refer to tables. 3P model 69–71 academic practices 3, 6, 54–5, 241–2, 247 accredited programmes 3, 6, 209; see also GTAs (graduate teaching assistants); YLTA (York Learning and Teaching Award) Adachi, C. 4, 64 advanced searching assignment 224–5 AdvanceHE 6, 12 AEQ (Assessment Experience Questionnaire) 28, 30–1, 39 AfL (assessment for learning) 2, 5–6, 24, 51–3, 116, 173–4, 176–7, 185, 187, 242; case studies 177–8, 179–84, 185–6, 187 agency 37, 44, 131, 185 alienation 37, 38, 45 Alverno College, United States 2, 4, 56–8 anarchism 241 anthrax 199–200 assessment 1, 2, 4–6, 9, 14, 22–4, 50–1, 61, 111, 232, 246 assessment as learning 163–4, 165–70, 171 assessment conditions 34 assessment design 2–3, 6, 15, 17, 55–6, 61–2, 112, 115, 118, 244, 247; constructive alignment 17, 69, 194; fit-for-purpose model 53–5, 56; programmatic 42, 43, 46–7 Assessment Experience Questionnaire see AEQ (Assessment Experience Questionnaire) assessment for learning see AfL (assessment for learning)
assessment literacy 14, 51, 66, 67, 125, 131, 133 assessment methods 22, 27–8, 175–6, 233, 238–9, 246, 247–8 assessment of learning 28–9, 30–1, 164, 165, 244 assessment practices 2, 4, 6, 9, 14, 111, 112, 113, 114–8, 247; good practice 16–17, 18, 113 assessment tactics 27, 31–4 assessment tasks 3, 51–3, 242, 244 assessors 55, 65, 68 assignments 31–4, 54, 61, 77–8 AUSSE (Australian Survey of Student Engagement) 16 Australia 1, 11, 12, 15, 16, 27, 36, 64, 103 authentic assessments 14, 15, 42, 43, 44, 60–1, 242, 243 autonomy 5, 6, 174, 175, 177, 179–84, 186–7 awareness 17, 131, 148, 181–2 Baker, T. 6, 227–9 Bales, R. F. 78, 79 Banham, T. 5 Barnett, R. 227, 241 barriers, feedback 130, 131 Bath Spa University 210, 213–15, 217 Beard, C. 121, 122 Bearman, M. 111, 113, 169 behavioural skills 152, 243 Bennett, S. 111, 113, 169 Biglan, A. 115 Birenbaum, M. 241 Black and minority attainment gap 155–8 blogging, individual 167
Bloxham, S. 51–3 BME (black and minority ethnic) students 155–6 Boud, D. 66, 67, 68, 111, 113, 169 Boyd, P. 51–3 Bradford University 58 Brookfield, S. 247 Brown, S. 4 Bryan, C. 5, 6, 243 Bryan, C. and Clegg, K. 1, 130, 151 Bryant, R. 116 Calman, L. 236 Canada 1, 12, 16 Capp (occupational psychologists) 104 Carroll, J. 244 CBM (certainty-based marking) 5, 141–3, 144–8 certainty levels 142–3 Chartered Management Institute 101 Cheng, J. 65 Chia, D. 65 Claxton, G. 3, 195–6, 241–2 Clayton, B. 121, 122 Clegg, K. 6, 210 Cochrane, R. 59 coding system, written feedback 79, 80, 81, 86 cognisance 131, 134 collaboration 18, 32, 104, 148, 151, 177, 192, 243 collaborative technologies 163, 164, 171 colluders 5, 152, 153 communities of practice 3, 18, 174 compassion-focused pedagogy 152–4, 156, 157, 158 compassionate micro skills 5, 152, 153, 154, 155–7, 158, 159–60, 243 computer-based feedback 32–3 confidence-based marking see CBM (certainty-based marking) constructive alignment 17, 69, 194 Course Experience Questionnaire 118 creativity 171, 194–6, 246 critical autonomy 175, 181–4, 186, 187 critical thinking abilities 65, 66–8, 71 Crymble, A. 6, 224–5 Cullen, R. 116 Dacre, K. 6, 245 Dandridge, N. 239 Davies, M. 231, 238, 239 Dawson, P. 64, 111, 113, 169
Dearing Report (1997) 10, 13 DEFT (Developing Engagement with Feedback Toolkit) 133–4 Devereux, A. 231, 238 dialogic assessment 58, 59–60 digital communication 164, 167–8 digital environments 198 digital storytelling 165–7 digital technologies 6, 15, 32–3, 164, 198, 201, 245 Dimensions of Professional Practice 220, 221–4 Dirkx, J. M. 122 Dochy, F. J. R. C. 241 duty solicitors 231, 233–4, 238, 239 Dweck, C. 3, 55, 57, 243
feedback workshops 133, 134 feedforward 52, 60, 243 Fennell, E. 246 Firth, R. 59 fit-for-purpose model 53–5, 56 Fitness, J. 121 FMD (foot and mouth disease) 200–1 Forbes, D. 25–6, 27 formative assessments 9, 14, 22, 24, 36, 39, 40, 41–2, 51, 54, 174; presentations 91–2 Forsyth, R. 116 Fransson, A. 23
Gardner-Medwin, A. R. 5 Gell, L. 231, 233, 238, 239 Germany 13–14 Gibbs, G. 36, 47, 50, 77, 79, 86, 244 Gilbert, T. 5, 243 Giles, F. 6, 229–30 Glover, C. 4 good feedback practice 77, 78–9, 81, 86, 97, 117 Google 158 grade inflation 11–12 graduate teaching assistants see GTAs (graduate teaching assistants) Groessler, A. 169 group dynamics 151, 152, 243 group presentations 95–6 group work 5, 65, 151–2, 153, 154–8, 243 GTAs (graduate teaching assistants) 6, 209, 215–17, 218, 242; Bath Spa University 210, 213–15, 217; University of York 210–13, 214, 215, 217, 218
Hall, M. 111, 113 Handley, K. 116, 121, 132 Hattie, J. 129 Hay, M. 196 Haynie, A. 169 HEA (Higher Education Academy) 6, 12, 36, 220 Higgs, B. M. 169 higher education 1, 9, 10, 36, 37–8, 111, 151 higher education policies 4, 17–18 higher education providers 10–11, 13, 17–18 honours degree system, UK 112 Humberstone, B. 121, 122
Hutchinson, S. 5 Hyland, F. 78–9
IAA (Interviewing and Advocacy Assessment) 233–4 inclusive assessment 14 innovation dissemination 198–9, 201, 203–4 innovative assessment 1–2, 6, 27, 111, 113, 114, 117–18, 241–8 interventions 129, 132, 133, 135 investment 12–13
Jenkins, M. 6 Jessop, T. 4, 245 judgements 26–7, 66, 67, 68, 246; see also peer assessments; self-assessments
Kane, S. 5 Kelly, M. 224 King, H. 3–4, 37 Kleiman, P. 194, 195 Klemmer, S. R. 65 Knight, P. T. 185 knowledge 141, 144, 147, 148, 194–7 Koller, D. 65 Kulkarni, C. 65 language skills 229–30 Latour, B. 199–200, 203–4 Laurillard, D. 163–4 law assessment 6, 231, 232, 233–4, 238, 239; duty solicitors 231, 233–4, 238, 239 Le, H. 65 Lea, M. 186 learning environments 121, 124–5, 126, 127, 243; see also AfL (assessment for learning); simulated professional practice learning-oriented assessments 42, 44 learning outcomes 17, 22, 123–4, 194 learning technologies 163–4 legal education 198, 201–3, 245; see also law assessment Leszsz, M. 154 Lievens, F. 105–6 McDowell, L. 2, 51, 53, 164, 242 Macaulay, J. 169 Macdonald, R. 79, 87 Macfarlane-Dick, D. 77, 79, 97
Maguire, S. 6 Maharg, P. 4, 6, 245 Malcolm, J. 112, 247 marking 112, 116, 118, 143, 247 Martin, G. 6, 210 MCQs (multiple-choice questions) 143–4 mechanised feedback 32–3 Medland, E. 112 Mentkowski, M. 56, 57 Mercer-Mapstone, L. 169 Millar, J. 121, 132 MIT (Massachusetts Institute of Technology) 22–3 modular degrees 36, 40–1, 44, 46, 47 Molloy, E. 111, 113, 169 Monash University, Australia 103 monopolisers 5, 152, 153 Montgomery, C. 51, 53 Moreale, E. 79 Murrells, T. 236 Nash, R. A. 5, 243 National Committee of Inquiry into Higher Education see Dearing Report (1997) negative marking schemes, fixed 143 Nicol, D. J. 77, 79, 97 Norman, I. 236 Norton, B. 6 Norton, L. 5, 6, 247 NSS (National Student Survey) 16, 37, 39, 112, 113, 115, 118, 135, 245 NSSE (National Survey of Student Engagement) 16 nursing assessment 6, 231, 232–3, 235–7, 238 objective assessment 227–9 O'Connor, E. 155 O'Donovan, B. 116, 186–7 Office for Students 10, 12 Open University 31, 33, 78; feedback 79, 81–5, 86 oral presentations see presentations Orr, S. 59 Oxford University 24 Pachi, D. 231, 234, 238, 239 Papadopoulos, K. 65 Pasteur, L. 6, 199–200, 202, 203–4 Paul (case study) 125–6
peer assessments 4, 5, 26–7, 32, 65, 67, 96, 174, 185, 186, 244; see also judgements peer reviews 3, 33, 65, 168, 176, 244 performing arts 151, 225–7, 245; simulated professional practice 190, 191–6, 197 Pitt, E. 5, 45, 243 plagiarism 15, 243, 244 portfolio-based assessments 6, 54, 133, 217 portfolio presentations 167 practice, communities of see communities of practice presentations 5, 88–91, 92–7, 98–9, 244; feedback 97–8; formative assessments 91–2; summative assessments 91, 93 presentation skills 5, 88, 90, 91, 92, 93–5 Price, M. 116, 121, 132 procedural autonomy 175, 180–1, 186 professional development 6, 12, 247 professional judgements 246 professional practice 3, 6, 59, 175, 232–3, 245; assessment 231, 238–9; simulated 190–7, 245 Professional Standards Framework, UK 6, 12 programmatic design 42, 43, 46–7 psychosocial processes of group dynamics 154–5, 157–8 QAA (Quality Assurance Agency) 10, 15 quality assurance 11, 18, 47, 112, 245 Quality Code for Higher Education, UK 10–11 Quesada-Sierra, V. 116 quiet students 5, 152, 153 Raaper, R. 116 Race, P. 194, 244 Ramley, J. A. 111 Raw, S. 79 recipience skills 131–3 Redfern, S. 236 Reimann, N. 116 Reznikova, D. 6, 225–7 Rich, J. 231–2 Ringan, N. 116 Rittel, H. W. J. 111 Rodríguez-Gómez, G. 116 Rogers, C. 26 Rorty, R. 79
Rose Bruford College of Theatre and Performance 190, 191–6, 197 Rouhani, F. 241, 243 Rowe, A. 121 Rust, C. 116 Sackett, P. R. 105–6 Sadler, D. R. 58, 68 Sadler, I. 116 SAGE taxonomy (feedback recipience skills) 131–2 Sambell, A. 5 Sambell, K. 2, 5, 51, 53 SAP (self- and peer assessment) 64–5, 66, 67, 68–71 school mathematics study 144–5 Scott, P. 200–1 self-appraisal 131 self-assessments 4, 26–7, 32, 52, 65, 67, 174, 175, 176, 180–1, 185, 186, 187, 244; see also judgements self-feedback 225–7 self-tests 145–6 shadow pedagogies 198, 202, 203 Shaffer, D. W. 200–1 Sheffield Hallam University 77–8 Shreeve, A. 59 Shulman, L. 198, 202, 203 SIMPLE project (SIMulated Professional Learning Environment) 201–3 Simpson, C. 77, 79, 86 simulated clients 202–3 simulated professional practice 190–7, 202–3, 245 simulation environments 190–7, 198, 201–3, 245 Snyder, B. R. 22–3 Social Change Model 103, 104 social media 164, 167, 168–9 Soler, R. 67 Spence, J. 25–6, 27 Squire, K. D. 200–1 Strathclyde Law School 201–2 Street, B. 186 strengths-based recruitment 102–3, 104 stressors 123 Stubbs, M. 116 student learning 28–9, 30–1, 34, 43 student partnerships 14–15 subjective assessment 6 summative assessments 9, 32, 39–41, 43, 51, 55, 123, 174; presentations 91, 93
sustainable assessments 66–7 Sweeney, T. 169 Tai, J. H. M. 4, 64, 244 Teaching Quality Enhancement Fund 12 technology 2–3, 6, 15; see also digital technologies technology-enabled assessments 163, 164, 165, 169–70, 171 TEF (Teaching Excellence Framework) 1, 2, 36, 37, 209 TESTA (Transforming the Experience of Students Through Assessment) 2, 4, 36–7, 38–40, 42–7, 245 Thomson, W. (Lord Kelvin) 200 Timperley, H. 129 transferable skills 90, 93, 95, 164, 237 tuition fees 13–14 tutor-authored feedback 77–8 tutorials 24 UCL (University College London) 141, 142–3, 146–7 UK (United Kingdom) 1, 12, 16, 17, 28, 36 UKPSF (UK Professional Standards Framework) 220, 221–4 universities 1, 9, 36, 38–9, 40, 112 University of Coventry 164, 166, 168 University of Edinburgh 23, 156 University of Hertfordshire 155–6 University of York 101, 104, 164, 167–8; GTAs 210–13, 214, 215, 217, 218 USA (United States) 11, 12, 16, 26–7, 103 volition 131 Walker, R. 6 Watson, R. 236 Webber, M. M. 111 Wei, K. P. 65 Wei, W. 115 West, D. 169 Whitelock, D. 79 wicked problems 111–12, 117, 247 Winstone, N. E. 5, 45, 243 Wood, L. 121 Worthen, M. 61 Write Now CETL (Centre for Excellence in Teaching and Learning) 111, 115–16
written feedback 60, 77–8; coding system 79, 80, 81, 86; Open University 81–5, 86 Yalom, I. 153 Yanmei, X. 115
YLTA (York Learning and Teaching Award) 210–13 Yorke, M. 122, 185 YSF (York Strengths Framework) 103–9 Zukas, M. 112, 247