Handbook of Research on Technology Tools for Real-World Skill Development
Yigal Rosen, Harvard University, USA
Steve Ferrara, Pearson, USA
Maryam Mosharraf, Pearson, USA
A volume in the Advances in Higher Education and Professional Development (AHEPD) Book Series
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA, USA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

Copyright © 2016 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Handbook of research on technology tools for real-world skill development / Yigal Rosen, Steve Ferrara, and Maryam Mosharraf, editors.
pages cm
Includes bibliographical references and index.
ISBN 978-1-4666-9441-5 (hardcover) -- ISBN 978-1-4666-9442-2 (ebook)
1. Educational technology--Study and teaching--Handbooks, manuals, etc. 2. Computer literacy--Study and teaching--Handbooks, manuals, etc. 3. Problem solving--Study and teaching. I. Rosen, Yigal, 1978- editor of compilation.
LB1028.3.H36 2016
371.33--dc23
2015028792

This book is published in the IGI Global book series Advances in Higher Education and Professional Development (AHEPD) (ISSN: 2327-6983; eISSN: 2327-6991)
British Cataloguing in Publication Data A Cataloguing in Publication record for this book is available from the British Library. All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher. For electronic access to this publication, please contact: [email protected].
Advances in Higher Education and Professional Development (AHEPD) Book Series Jared Keengwe University of North Dakota, USA
ISSN: 2327-6983
EISSN: 2327-6991

Mission
As world economies continue to shift and change in response to global financial situations, job markets have begun to demand a more highly skilled workforce. In many industries a college degree is the minimum requirement, and further educational development is expected in order to advance. With these current trends in mind, the Advances in Higher Education & Professional Development (AHEPD) Book Series provides an outlet for researchers and academics to publish their research in these areas and to distribute these works to practitioners and other researchers. AHEPD encompasses all research dealing with higher education pedagogy, development, and curriculum design, as well as all areas of professional development, regardless of focus.
Coverage
• Adult Education
• Assessment in Higher Education
• Career Training
• Coaching and Mentoring
• Continuing Professional Development
• Governance in Higher Education
• Higher Education Policy
• Pedagogy of Teaching Higher Education
• Vocational Education
IGI Global is currently accepting manuscripts for publication within this series. To submit a proposal for a volume in this series, please contact our Acquisition Editors at [email protected] or visit: http://www.igi-global.com/publish/.
The Advances in Higher Education and Professional Development (AHEPD) Book Series (ISSN 2327-6983) is published by IGI Global, 701 E. Chocolate Avenue, Hershey, PA 17033-1240, USA, www.igi-global.com. This series is composed of titles available for purchase individually; each title is edited to be contextually exclusive from any other title within the series. For pricing and ordering information please visit http://www.igi-global.com/book-series/advances-higher-education-professional-development/73681. Postmaster: Send all address changes to above address. Copyright © 2016 IGI Global. All rights, including translation in other languages reserved by the publisher. No part of this series may be reproduced or used in any form or by any means – graphics, electronic, or mechanical, including photocopying, recording, taping, or information and retrieval systems – without written permission from the publisher, except for non commercial, educational use, including classroom teaching purposes. The views expressed in this series are those of the authors, but not necessarily of IGI Global.
Titles in this Series
For a list of additional titles in this series, please visit: www.igi-global.com
Furthering Higher Education Possibilities through Massive Open Online Courses
Anabela Mesquita (CICE – ISCAP / Polytechnic of Porto, Portugal & Algoritmi RC, Minho University, Portugal) and Paula Peres (CICE – ISCAP / Polytechnic of Porto, Portugal)
Information Science Reference • copyright 2015 • 312pp • H/C (ISBN: 9781466682795) • US $175.00 (our price)

Handbook of Research on Teacher Education in the Digital Age
Margaret L. Niess (Oregon State University, USA) and Henry Gillow-Wiles (Oregon State University, USA)
Information Science Reference • copyright 2015 • 722pp • H/C (ISBN: 9781466684034) • US $415.00 (our price)

Handbook of Research on Enhancing Teacher Education with Advanced Instructional Technologies
Nwachukwu Prince Ololube (Ignatius Ajuru University of Education, Nigeria), Peter James Kpolovie (University of Port Harcourt, Nigeria), and Lazarus Ndiku Makewa (University of Eastern Africa, Kenya)
Information Science Reference • copyright 2015 • 485pp • H/C (ISBN: 9781466681620) • US $315.00 (our price)

Handbook of Research on Advancing Critical Thinking in Higher Education
Sherrie Wisdom (Lindenwood University, USA) and Lynda Leavitt (Lindenwood University, USA)
Information Science Reference • copyright 2015 • 568pp • H/C (ISBN: 9781466684119) • US $325.00 (our price)

Measuring and Analyzing Informal Learning in the Digital Age
Olutoyin Mejiuni (Obafemi Awolowo University, Nigeria), Patricia Cranton (University of New Brunswick, Canada), and Olúfẹ́mi Táíwò (Cornell University, USA)
Information Science Reference • copyright 2015 • 336pp • H/C (ISBN: 9781466682658) • US $185.00 (our price)

Transformative Curriculum Design in Health Sciences Education
Colleen Halupa (A.T. Still University, USA & LeTourneau University, USA)
Medical Information Science Reference • copyright 2015 • 388pp • H/C (ISBN: 9781466685710) • US $215.00 (our price)

Handbook of Research on Innovative Technology Integration in Higher Education
Fredrick Muyia Nafukho (Texas A&M University, USA) and Beverly J. Irby (Texas A&M University, USA)
Information Science Reference • copyright 2015 • 478pp • H/C (ISBN: 9781466681705) • US $310.00 (our price)

New Voices in Higher Education Research and Scholarship
Filipa M. Ribeiro (University of Porto, Portugal), Yurgos Politis (University College Dublin, Ireland), and Bojana Culum (University of Rijeka, Croatia)
Information Science Reference • copyright 2015 • 316pp • H/C (ISBN: 9781466672444) • US $185.00 (our price)
701 E. Chocolate Ave., Hershey, PA 17033 Order online at www.igi-global.com or call 717-533-8845 x100 To place a standing order for titles released in this series, contact: [email protected] Mon-Fri 8:00 am - 5:00 pm (est) or fax 24 hours a day 717-533-8661
Editorial Advisory Board Chris Dede, Harvard University, USA Kimberly O’Malley, Pearson, USA Andreas Schleicher, Organisation for Economic Co-Operation and Development (OECD), France
List of Contributors
Anderson-Inman, Lynne / University of Oregon, USA .......................................................... 68
Andrada, Gilbert N. / Connecticut State Department of Education, USA ........................... 678
Arie, Perla / Kibbutzim College of Education Technology and the Arts, Israel ................... 528
Avni, Edith / Toward Digital Ethics Initiative, Israel .............................................................. 13
Back, Susan Malone / Texas Tech University, USA ................................................................ 42
Bakken, Sara / Pearson Education, USA ............................................................................... 360
Becker, Kirk A. / Pearson, USA ............................................................................................. 385
Beltran, Valerie / University of La Verne, USA ..................................................................... 105
Benjamin, Roger W. / CAE, USA .......................................................................................... 230
Bielinski, John / Pearson Education, USA ............................................................................. 360
Boughton, Keith A. / CTB/McGraw-Hill, USA ..................................................................... 590
Brenner, Daniel G. / WestEd, USA ........................................................................................ 191
Browder, Diane M. / University of North Carolina at Charlotte, USA ................................. 445
Brunner, Belinda / Pearson, UK ............................................................................................ 385
Buckley, Barbara C. / WestEd, USA ...................................................................................... 191
Bunch, Michael B. / Measurement Incorporated, USA ......................................................... 611
Cannella, Dolores / Stony Brook University, USA ................................................................ 163
Childress, E. Lee / Corinth School District, USA .................................................................. 284
Craven, Patrick / City & Guilds of London Institute, UK ..................................................... 415
Davidson, Rafi / Kaye Academic College of Education, Israel ............................................. 558
Decker, Jessica / University of La Verne, USA ...................................................................... 105
DiCerbo, Kristen E. / Pearson, USA ...................................................................................... 777
Eckardt, Patricia / Stony Brook University, USA .................................................................. 163
Elzarka, Sammy / University of La Verne, USA .................................................................... 105
Emira, Mahmoud / City & Guilds of London Institute, UK ................................................. 415
Erlanger, David P. / Stony Brook University, USA ................................................................ 163
Foltz, Peter W. / Pearson, USA & University of Colorado – Boulder, USA .......................... 658
Frazer, Sharon / City & Guilds of London Institute, UK ...................................................... 415
Frias, Kellilynn M. / Texas Tech University, USA ................................................................... 42
Gierl, Mark / University of Alberta, Canada .......................................................................... 590
Glassner, Amnon / Kaye Academic College of Education, Israel ......................................... 558
Greenhalgh-Spencer, Heather / Texas Tech University, USA .................................................. 42
Hanlon, Sean T. / MetaMetrics, USA .................................................................................... 284
Hao, Jiangang / Educational Testing Service, USA ............................................................... 344
Harmes, J. Christine / Assessment Consultant, USA .................................................... 137,804
He, Qiwei / Educational Testing Service, USA ...................................................................... 749
Hildreth, Bridget / University of Oregon, USA ....................................................................... 68
Janotha, Brenda / Stony Brook University, USA ................................................................... 163
Johnson, Cheryl K. / Pearson Education, USA ..................................................................... 360
Keller, Lisa / University of Massachusetts – Amherst, USA .................................................. 724
Knox, Carolyn Harper / University of Oregon, USA .............................................................. 68
Kornhauser, Zachary / CAE, USA ........................................................................................ 230
Kyllonen, Patrick / Educational Testing Service, USA .......................................................... 344
Lai, Hollis / University of Alberta, Canada ............................................................................ 590
Latifi, Syed F. / University of Alberta, Canada ...................................................................... 590
Liu, Lei / Educational Testing Service, USA .......................................................................... 344
Lorié, William / Questar Assessment, Inc., USA ................................................................... 627
Loveland, Mark T. / WestEd, USA ........................................................................................ 191
Marino, Marie Ann / Stony Brook University, USA ............................................................. 163
Matovinovic, Donna / CTB/McGraw-Hill, USA ................................................................... 590
Matzaganian, Mark / University of La Verne, USA .............................................................. 105
Miel, Shayne / LightSide Labs, USA ...................................................................................... 611
Mosharraf, Maryam / Pearson, USA ............................................................................. 319,502
Quellmalz, Edys S. / WestEd, USA ........................................................................................ 191
Rahman, Zeeshan / City & Guilds of London Institute, UK ................................................. 415
Rimor, Rikki / Kibbutzim College of Education Technology and the Arts, Israel ................ 528
Root, Jenny / University of North Carolina at Charlotte, USA .............................................. 445
Rosen, Yigal / Harvard University, USA ................................................................ 319,360,502
Rotem, Abraham / Toward Digital Ethics Initiative, Israel ..................................................... 13
Rouet, Jean-Francois / University of Poitiers, France ........................................................... 705
Salomon, Gavriel / Haifa University, Israel ............................................................................... 1
Saunders, Alicia / University of North Carolina at Charlotte, USA ...................................... 445
Silberglitt, Matt D. / WestEd, USA ........................................................................................ 191
Steedle, Jeffrey T. / Pearson, USA ......................................................................................... 230
Stenner, A. Jackson / MetaMetrics, USA & University of North Carolina, USA ................. 284
Swartz, Carl W. / MetaMetrics, USA & University of North Carolina, USA ....................... 284
Tagoe, Noel / Chartered Institute of Management Accountants, UK .................................... 385
Terrazas-Arellanes, Fatima / University of Oregon, USA ...................................................... 68
Tsacoumis, Suzanne / HumRRO, USA .................................................................................. 261
Vaughn, David / Measurement Incorporated, USA ............................................................... 611
Vavik, Lars / Stord/Haugesund University College, Norway .................................................... 1
von Davier, Alina A. / Educational Testing Service, USA ..................................................... 344
von Davier, Matthias / Educational Testing Service, USA ............................................ 705,749
Vörös, Zsofia / University of Poitiers, France ........................................................................ 705
Walden, Emily Deanne / University of Oregon, USA ............................................................. 68
Walker, Nancy T. / University of La Verne, USA .................................................................. 105
Wang, Xi / University of Massachusetts – Amherst, USA ..................................................... 724
Welsh, James L. / University of South Florida, USA ............................................................. 137
Wilson, Joshua / University of Delaware, USA ..................................................................... 678
Winkelman, Roy J. / University of South Florida, USA ........................................................ 137
Wise, Steven L. / Northwest Evaluation Association, USA ................................................... 804
Wolf, Raffaela / CAE, USA ............................................................................................ 230,472
Zahner, Doris / CAE, USA ............................................................................................. 230,472
Zapata-Rivera, Diego / Educational Testing Service, USA ................................................... 344
Zenisky, April L. / University of Massachusetts – Amherst, USA ......................................... 724
Table of Contents
Foreword by Andreas Schleicher.....................................................................................................xxviii Foreword by Chris Dede..................................................................................................................... xxx Preface..............................................................................................................................................xxxiii Acknowledgment...........................................................................................................................xxxviii
Volume I Section 1 Defining Real-World Skills in Technology-Rich Environments Chapter 1 Twenty First Century Skills vs. Disciplinary Studies?............................................................................ 1 Lars Vavik, Stord/Haugesund University College, Norway Gavriel Salomon, Haifa University, Israel Chapter 2 Digital Competence: A Net of Literacies............................................................................................... 13 Edith Avni, Toward Digital Ethics Initiative, Israel Abraham Rotem, Toward Digital Ethics Initiative, Israel Chapter 3 The Application of Transdisciplinary Theory and Practice to STEM Education.................................. 42 Susan Malone Back, Texas Tech University, USA Heather Greenhalgh-Spencer, Texas Tech University, USA Kellilynn M. Frias, Texas Tech University, USA
Chapter 4 The SOAR Strategies for Online Academic Research: Helping Middle School Students Meet New Standards................................................................................................................................................ 68 Carolyn Harper Knox, University of Oregon, USA Lynne Anderson-Inman, University of Oregon, USA Fatima Terrazas-Arellanes, University of Oregon, USA Emily Deanne Walden, University of Oregon, USA Bridget Hildreth, University of Oregon, USA Chapter 5 The Value of Metacognition and Reflectivity in Computer-Based Learning Environments............... 105 Sammy Elzarka, University of La Verne, USA Valerie Beltran, University of La Verne, USA Jessica Decker, University of La Verne, USA Mark Matzaganian, University of La Verne, USA Nancy T. Walker, University of La Verne, USA Chapter 6 A Framework for Defining and Evaluating Technology Integration in the Instruction of Real-World Skills......................................................................................................................................... 137 J. Christine Harmes, Assessment Consultant, USA James L. Welsh, University of South Florida, USA Roy J. Winkelman, University of South Florida, USA Chapter 7 Equipping Advanced Practice Nurses with Real-World Skills............................................................ 163 Patricia Eckardt, Stony Brook University, USA Brenda Janotha, Stony Brook University, USA Marie Ann Marino, Stony Brook University, USA David P. Erlanger, Stony Brook University, USA Dolores Cannella, Stony Brook University, USA Section 2 Technology Tools for Learning and Assessing Real-World Skills Chapter 8 Simulations for Supporting and Assessing Science Literacy.............................................................. 191 Edys S. Quellmalz, WestEd, USA Matt D. Silberglitt, WestEd, USA Barbara C. Buckley, WestEd, USA Mark T. Loveland, WestEd, USA Daniel G. Brenner, WestEd, USA
Chapter 9 Using the Collegiate Learning Assessment to Address the College-to-Career Space......................... 230 Doris Zahner, CAE, USA Zachary Kornhauser, CAE, USA Roger W. Benjamin, CAE, USA Raffaela Wolf, CAE, USA Jeffrey T. Steedle, Pearson, USA Chapter 10 Rich-Media Interactive Simulations: Lessons Learned....................................................................... 261 Suzanne Tsacoumis, HumRRO, USA Chapter 11 An Approach to Design-Based Implementation Research to Inform Development of EdSphere®: A Brief History about the Evolution of One Personalized Learning Platform.................................... 284 Carl W. Swartz, MetaMetrics, USA & University of North Carolina, USA Sean T. Hanlon, MetaMetrics, USA E. Lee Childress, Corinth School District, USA A. Jackson Stenner, MetaMetrics, USA & University of North Carolina, USA Chapter 12 Computer Agent Technologies in Collaborative Assessments............................................................ 319 Yigal Rosen, Harvard University, USA Maryam Mosharraf, Pearson, USA Chapter 13 A Tough Nut to Crack: Measuring Collaborative Problem Solving.................................................... 344 Lei Liu, Educational Testing Service, USA Jiangang Hao, Educational Testing Service, USA Alina A. von Davier, Educational Testing Service, USA Patrick Kyllonen, Educational Testing Service, USA Diego Zapata-Rivera, Educational Testing Service, USA Chapter 14 Animalia: Collaborative Science Problem Solving Learning and Assessment................................... 360 Sara Bakken, Pearson Education, USA John Bielinski, Pearson Education, USA Cheryl K. Johnson, Pearson Education, USA Yigal Rosen, Harvard University, USA Chapter 15 Using Technology to Assess Real-World Professional Skills: A Case Study...................................... 385 Belinda Brunner, Pearson, UK Kirk A. Becker, Pearson, USA Noel Tagoe, Chartered Institute of Management Accountants, UK
Volume II Chapter 16 Assessment in the Modern Age: Challenges and Solutions................................................................. 415 Mahmoud Emira, City & Guilds of London Institute, UK Patrick Craven, City & Guilds of London Institute, UK Sharon Frazer, City & Guilds of London Institute, UK Zeeshan Rahman, City & Guilds of London Institute, UK Chapter 17 Technology-Assisted Learning for Students with Moderate and Severe Developmental Disabilities........................................................................................................................................... 445 Diane M. Browder, University of North Carolina at Charlotte, USA Alicia Saunders, University of North Carolina at Charlotte, USA Jenny Root, University of North Carolina at Charlotte, USA Chapter 18 Mitigation of Test Bias in International, Cross-National Assessments of Higher-Order Thinking Skills.................................................................................................................................................... 472 Raffaela Wolf, CAE, USA Doris Zahner, CAE, USA Chapter 19 Evidence-Centered Concept Map in Computer-Based Assessment of Critical Thinking................... 502 Yigal Rosen, Harvard University, USA Maryam Mosharraf, Pearson, USA Chapter 20 “Visit to a Small Planet”: Achievements and Attitudes of High School Students towards Learning on Facebook – A Case Study............................................................................................................... 528 Rikki Rimor, Kibbutzim College of Education Technology and the Arts, Israel Perla Arie, Kibbutzim College of Education Technology and the Arts, Israel Chapter 21 Cross-Border Collaborative Learning in the Professional Development of Teachers: Case Study – Online Course for the Professional Development of Teachers in a Digital Age.................................. 558 Rafi Davidson, Kaye Academic College of Education, Israel Amnon Glassner, Kaye Academic College of Education, Israel
Section 3 Automated Item Generation and Automated Scoring Techniques for Assessment and Feedback Chapter 22 Using Automated Procedures to Generate Test Items That Measure Junior High Science Achievement........................................................................................................................................ 590 Mark Gierl, University of Alberta, Canada Syed F. Latifi, University of Alberta, Canada Hollis Lai, University of Alberta, Canada Donna Matovinovic, CTB/McGraw-Hill, USA Keith A. Boughton, CTB/McGraw-Hill, USA Chapter 23 Automated Scoring in Assessment Systems........................................................................................ 611 Michael B. Bunch, Measurement Incorporated, USA David Vaughn, Measurement Incorporated, USA Shayne Miel, LightSide Labs, USA Chapter 24 Automated Scoring of Multicomponent Tasks.................................................................................... 627 William Lorié, Questar Assessment, Inc., USA Chapter 25 Advances in Automated Scoring of Writing for Performance Assessment......................................... 658 Peter W. Foltz, Pearson, USA & University of Colorado – Boulder, USA Chapter 26 Using Automated Feedback to Improve Writing Quality: Opportunities and Challenges.................. 678 Joshua Wilson, University of Delaware, USA Gilbert N. Andrada, Connecticut State Department of Education, USA Section 4 Analysis, Interpretation, and Use of Learning and Assessment Data from Technology Rich Environments Chapter 27 Assessing Problem Solving in Technology-Rich Environments: What Can We Learn from Online Strategy Indicators?............................................................................................................................. 705 Jean-Francois Rouet, University of Poitiers, France Zsofia Vörös, University of Poitiers, France Matthias von Davier, Educational Testing Service, USA
Chapter 28 Analyzing Process Data from Technology-Rich Tasks........................................................................ 724 Lisa Keller, University of Massachusetts – Amherst, USA April L. Zenisky, University of Massachusetts – Amherst, USA Xi Wang, University of Massachusetts – Amherst, USA Chapter 29 Analyzing Process Data from Problem-Solving Items with N-Grams: Insights from a Computer-Based Large-Scale Assessment............................................................................................................ 749 Qiwei He, Educational Testing Service, USA Matthias von Davier, Educational Testing Service, USA Chapter 30 Assessment of Task Persistence........................................................................................................... 777 Kristen E. DiCerbo, Pearson, USA Chapter 31 Assessing Engagement during the Online Assessment of Real-World Skills...................................... 804 J. Christine Harmes, Assessment Consultant, USA Steven L. Wise, Northwest Evaluation Association, USA Compilation of References............................................................................................................. xxxix About the Contributors.................................................................................................................. cxxix Index.................................................................................................................................................. cxlvi
Detailed Table of Contents
Foreword by Andreas Schleicher.....................................................................................................xxviii Foreword by Chris Dede..................................................................................................................... xxx Preface..............................................................................................................................................xxxiii Acknowledgment...........................................................................................................................xxxviii
Volume I Section 1 Defining Real-World Skills in Technology-Rich Environments This section includes chapters on curricula and frameworks for teaching real-world skills. Chapter 1 Twenty First Century Skills vs. Disciplinary Studies?............................................................................ 1 Lars Vavik, Stord/Haugesund University College, Norway Gavriel Salomon, Haifa University, Israel This chapter addresses the tension between a discipline-based approach and a skills- and competences-based approach to today's curriculum. The competences-based approach emphasizes the cultivation of market-oriented skills and competencies that people acquire in the knowledge society; it is the driving force behind many educational reforms. The other, more traditional approach emphasizes the acquisition of well-organized disciplinary knowledge such as history and chemistry. The difference between learning guided by pre-determined educational goals, designed to build disciplinary knowledge, and the everyday, net-related, interest-driven, partly out-of-school learning of skills is too large to be ignored. Each of the two approaches has its advantages and drawbacks, but jointly they can constitute fruitful curricula. On the one hand, such curricula address the three main purposes of school – qualification, socialization and subjectification – while on the other they address the needs of cultivating 21st Century skills and competences. The latter comes to serve the attainment of the former.
Chapter 2 Digital Competence: A Net of Literacies............................................................................................... 13 Edith Avni, Toward Digital Ethics Initiative, Israel Abraham Rotem, Toward Digital Ethics Initiative, Israel This chapter presents a proposal for a conceptual framework of digital competence, which is a civil right and a need and is vital for appropriate, intelligent study and functioning in the real world, through means that technology and the internet offer the citizen. Digital competence in the 2010s is a multifaceted complex of a net of literacies that have been updated, reformulated and transformed under the influence of technology. The digital competence framework includes eight fields of digital literacies. At the top of the net is digital ethics literacy, which outlines the moral core for proper use of technology; at the base are technological literacy and digital reading and writing literacy, comprising the foundation and interface for all the digital literacies; in between are the digital literacies in these fields: information literacy, digital visual literacy, new media literacy, communication and collaboration literacy, and social media literacy. These interconnected literacies compose a synergetic complex of the digital competence framework. Chapter 3 The Application of Transdisciplinary Theory and Practice to STEM Education.................................. 42 Susan Malone Back, Texas Tech University, USA Heather Greenhalgh-Spencer, Texas Tech University, USA Kellilynn M. Frias, Texas Tech University, USA The authors describe the application of transdisciplinary theory and practice to Science, Technology, Engineering and Mathematics (STEM) education at the undergraduate level. The modular approach, which makes use of student collaboration within and across disciplines and input from outside experts, holds promise for preparing students to address society's "wicked" problems – those with interconnected causes and for which a solution often causes additional problems. Transdisciplinary theory and practice are described and their application to STEM education is proposed along with a model of measuring transdisciplinary skills. Recommendations are proposed for future research on cross-cultural/cross-disciplinary models, pedagogy, measuring student collaboration, determining effective partnership models and institutional supports, and the potential role of the social sciences in contributing to research on transdisciplinary practice and education. Chapter 4 The SOAR Strategies for Online Academic Research: Helping Middle School Students Meet New Standards................................................................................................................................................ 68 Carolyn Harper Knox, University of Oregon, USA Lynne Anderson-Inman, University of Oregon, USA Fatima Terrazas-Arellanes, University of Oregon, USA Emily Deanne Walden, University of Oregon, USA Bridget Hildreth, University of Oregon, USA Students often struggle when conducting research online, an essential skill for meeting the Common Core State Standards and for success in the real world. To meet this instructional challenge, researchers at the University of Oregon's Center for Advanced Technology in Education (CATE) developed, tested, and refined nine SOAR Strategies for Online Academic Research. These strategies are aligned with
well-established, research-based principles for teaching all students, with particular attention to the instructional needs of students with learning disabilities. To support effective instruction of the SOAR Strategies, researchers at CATE developed a multimedia website of instructional modules called the SOAR Toolkit. This chapter highlights the real-world importance of teaching middle school students to conduct effective online research. In addition, it describes the theoretical and historical foundations of the SOAR Strategies, instructional features of the SOAR Toolkit, and research results from classroom implementations at the middle school level. Chapter 5 The Value of Metacognition and Reflectivity in Computer-Based Learning Environments............... 105 Sammy Elzarka, University of La Verne, USA Valerie Beltran, University of La Verne, USA Jessica Decker, University of La Verne, USA Mark Matzaganian, University of La Verne, USA Nancy T. Walker, University of La Verne, USA The purposes of this chapter are threefold: to explore the research on and relationships among metacognition, reflection, and self-regulated learning; to analyze students' experiences with metacognition, reflection, and self-regulated learning activities in computer-based learning (CBL) courses; and to provide strategies that can be used in a CBL environment to promote students' metacognition, reflection, and self-regulation. A review of underlying frameworks for and prior study findings in metacognition and reflection is presented. Case study findings are also described and form the basis for the suggested strategies. The value and implications of using such strategies are also offered. Finally, future research should address the teaching of metacognition and reflection in CBL environments with an emphasis on real-world application. Chapter 6 A Framework for Defining and Evaluating Technology Integration in the Instruction of Real-World Skills......................................................................................................................................... 137 J. Christine Harmes, Assessment Consultant, USA James L. Welsh, University of South Florida, USA Roy J. Winkelman, University of South Florida, USA The Technology Integration Matrix (TIM) was created to provide a resource for evaluating technology integration in K-12 instructional settings, and as a tool for helping to target teacher-related professional development. The TIM comprises 5 characteristics of meaningful learning (Active, Constructive, Authentic, Collaborative, and Goal-Directed) and 5 levels (Entry, Adoption, Adaptation, Infusion, and Transformation), resulting in 25 cells. Within each cell, descriptions are provided, along with video sample lessons from actual math, science, social studies, and language arts classrooms that illustrate a characteristic at the indicated level. Throughout development, focus groups and interviews were conducted with in-service teachers and technology specialists to validate the progression of characteristics and descriptive components.
Chapter 7 Equipping Advanced Practice Nurses with Real-World Skills............................................................ 163 Patricia Eckardt, Stony Brook University, USA Brenda Janotha, Stony Brook University, USA Marie Ann Marino, Stony Brook University, USA David P. Erlanger, Stony Brook University, USA Dolores Cannella, Stony Brook University, USA Nursing professionals need to assume responsibility and take initiative in ongoing personal and professional development. Qualities required of nursing graduates must include the ability to "translate, integrate, and apply knowledge that leads to improvements in patient outcomes," in an environment in which "[k]nowledge is increasingly complex and evolving rapidly" (American Association of Colleges of Nursing, 2008, p. 33). The ability to identify personal learning needs, set goals, apply learning strategies, pursue resources, and evaluate outcomes is essential. Nursing professionals must be self-directed learners to meet these expectations. Team-based learning (TBL) is a multiphase pedagogical approach requiring active student participation and collaboration. Team-based learning entails three stages: (1) individual preparation, (2) learning assurance assessment, and (3) team application activity. Section 2 Technology Tools for Learning and Assessing Real-World Skills Chapters in this section deal with the core topic of technology tools and the wide range of applications aimed at learning and assessing real-world skills. Chapter 8 Simulations for Supporting and Assessing Science Literacy.............................................................. 191 Edys S. Quellmalz, WestEd, USA Matt D. Silberglitt, WestEd, USA Barbara C. Buckley, WestEd, USA Mark T. Loveland, WestEd, USA Daniel G. Brenner, WestEd, USA Simulations have become core supports for learning in the digital age. For example, economists, mathematicians, and scientists employ simulations to model complex phenomena. Learners, too, are increasingly able to take advantage of simulations to understand complex systems. Simulations can display phenomena that are too large or small, fast or slow, or dangerous for direct classroom investigations. The affordances of simulations extend students' opportunities to engage in deep, extended problem solving. National and international studies are providing evidence that technologies are enriching curricula, tailoring learning environments, embedding assessment, and providing tools to connect students, teachers, and experts locally and globally. This chapter describes a portfolio of research and development that has examined and documented the roles that simulations can play in assessing and promoting learning, and has developed and validated sets of simulation-based assessments and instructional supplements designed for formative and summative assessment and customized instruction.
Chapter 9 Using the Collegiate Learning Assessment to Address the College-to-Career Space......................... 230 Doris Zahner, CAE, USA Zachary Kornhauser, CAE, USA Roger W. Benjamin, CAE, USA Raffaela Wolf, CAE, USA Jeffrey T. Steedle, Pearson, USA Issues in higher education, such as the rising cost of education, career readiness, and increases in the achievement gap have led to a movement toward accountability in higher education. This chapter addresses the issues related to career readiness by highlighting an assessment tool, the Collegiate Learning Assessment (CLA), through two case studies. The first examines the college-to-career space by comparing different alternatives for predicting college success as measured by college GPA. The second addresses an identified market failure of highly qualified college graduates being overlooked for employment due to a matching problem. The chapter concludes with a proposal for a solution to this problem, namely a matching system. Chapter 10 Rich-Media Interactive Simulations: Lessons Learned....................................................................... 261 Suzanne Tsacoumis, HumRRO, USA High fidelity measures have proven to be powerful tools for measuring a broad range of competencies and their validity is well documented. However, their high-touch nature is often a deterrent to their use due to the cost and time required to develop and implement them. In addition, given the increased reliance on technology to screen and evaluate job candidates, organizations are continuing to search for more efficient ways to gather the information they need about one’s capabilities. This chapter describes how innovative, interactive rich-media simulations that incorporate branching technology have been used in several real-world applications. The main focus is on describing the nature of these assessments and highlighting potential solutions to the unique measurement challenges associated with these types of assessments. Chapter 11 An Approach to Design-Based Implementation Research to Inform Development of EdSphere®: A Brief History about the Evolution of One Personalized Learning Platform.................................... 284 Carl W. Swartz, MetaMetrics, USA & University of North Carolina, USA Sean T. Hanlon, MetaMetrics, USA E. Lee Childress, Corinth School District, USA A. Jackson Stenner, MetaMetrics, USA & University of North Carolina, USA Fulfilling the promise of educational technology as one mechanism to promote college and career readiness compels educators, researchers, and technologists to pursue innovative lines of collaborative investigations. These lines of mutual inquiry benefit from adopting and adapting principles rooted in design-based implementation research (DBIR) approaches. The purposes of this chapter are to: (a) provide the research foundation on which a personalized learning platform was developed, (b) present the evolution of EdSphere, a personalized learning platform that resulted from a deep and long-term collaboration among classroom teachers, school and district administrators, educational researchers, and technologists, and (c) describe a need for development of innovative technologies that promote college and career readiness among our earliest readers.
Chapter 12 Computer Agent Technologies in Collaborative Assessments............................................................ 319 Yigal Rosen, Harvard University, USA Maryam Mosharraf, Pearson, USA Often in our daily lives we learn and work in groups. In recognition of the importance of collaborative and problem solving skills, educators are realizing the need for effective and scalable learning and assessment solutions to promote the skillset in educational systems. In the settings of a comprehensive collaborative problem solving assessment, each student should be matched with various types of group members and must apply the skills in varied contexts and tasks. One solution to these assessment demands is to use computer-based (virtual) agents to serve as the collaborators in the interactions with students. The chapter presents the premises and challenges in the use of computer agents in the assessment of collaborative problem solving. Directions for future research are discussed in terms of their implications for large-scale assessment programs. Chapter 13 A Tough Nut to Crack: Measuring Collaborative Problem Solving.................................................... 344 Lei Liu, Educational Testing Service, USA Jiangang Hao, Educational Testing Service, USA Alina A. von Davier, Educational Testing Service, USA Patrick Kyllonen, Educational Testing Service, USA Diego Zapata-Rivera, Educational Testing Service, USA The purpose of our project is to explore the measurement of cognitive skills in the domain of science through collaborative problem solving tasks, measure the collaborative skills, and gauge the potential feasibility of using game-like environments with avatar representation for the purposes of assessing the relevant skills. We are comparing students' performance in two conditions. In one condition, students work individually with two virtual agents in a game-like task. In the second condition, dyads of students work collaboratively with two virtual agents in a similar game-like task through a chat box. Our research is motivated by the distributed nature of cognition, extant research on computer-supported collaborative learning (CSCL), which has shown the great value of collaborative activities for learning, and the Programme for International Student Assessment (PISA) framework. This chapter focuses on the development and implementation of a conceptual model to measure individuals' cognitive and social skills through collaborative activities. Chapter 14 Animalia: Collaborative Science Problem Solving Learning and Assessment................................... 360 Sara Bakken, Pearson Education, USA John Bielinski, Pearson Education, USA Cheryl K. Johnson, Pearson Education, USA Yigal Rosen, Harvard University, USA The study described in this chapter is based on a joint World ORT, Israeli Ministry of Education and Pearson initiative to provide an opportunity for international student collaboration on a series of complex science problems. Students from four schools in Israel, three in the United States, and one in Mexico participated in collaborative complex problem-solving on science topics selected by teachers at the participating schools. The intent was to expose students to the realities of collaborating with people
under unfamiliar conditions (such as different cultures, languages, and time zones) in order to reach a shared goal, and to foster the value of this practice. The chapter presents the rationale for the project, describes the Animalia mini-course in detail, presents major findings and discusses implications for future curriculum development and further research. Chapter 15 Using Technology to Assess Real-World Professional Skills: A Case Study...................................... 385 Belinda Brunner, Pearson, UK Kirk A. Becker, Pearson, USA Noel Tagoe, Chartered Institute of Management Accountants, UK Innovative item formats are attractive to the sponsors of professional certification or qualification examinations because they provide greater fidelity to the real world than traditional item formats. Using the design of the Chartered Institute of Management Accountant’s professional qualification examinations as a case study, this chapter presents an in-depth exploration of the issues surrounding the use of innovative items to assess higher-order thinking skills required for professional competency, beginning with a discussion of approaches taken by various academic disciplines to define and characterize higher order thinking. The use of innovative, authentic assessments is examined in the context of validity arguments. A framework for principled thinking about the construct map of the assessment is introduced, and a systematic process for designing innovative items to address the desired constructs is provided.
Volume II Chapter 16 Assessment in the Modern Age: Challenges and Solutions................................................................. 415 Mahmoud Emira, City & Guilds of London Institute, UK Patrick Craven, City & Guilds of London Institute, UK Sharon Frazer, City & Guilds of London Institute, UK Zeeshan Rahman, City & Guilds of London Institute, UK This chapter aims to address assessment in the modern age in terms of its importance, challenges and solutions by examining the views of 1,423 users at UK test centres following their recent experience of using two systems which employ computer-based assessment (CBA) and computer-assisted assessment (CAA). Generally speaking, based on the research, which informs the findings presented in this chapter, both systems face similar challenges but there are challenges which are specific to the CAA system. Similarly, both systems may require common solutions to improve user’s future experience, but there are solutions which are more relevant to the CAA system. The chapter concludes with a discussion around the UK apprenticeship and a case study of a pilot apprenticeship programme in which CBA and CAA are also integrated.
Chapter 17 Technology-Assisted Learning for Students with Moderate and Severe Developmental Disabilities........................................................................................................................................... 445 Diane M. Browder, University of North Carolina at Charlotte, USA Alicia Saunders, University of North Carolina at Charlotte, USA Jenny Root, University of North Carolina at Charlotte, USA For students with moderate and severe developmental disabilities, including autism spectrum disorders and intellectual disability, technology can provide critical support for learning and life functioning. A growing body of research demonstrates the benefits of technology in helping these students acquire academic skills, improve social functioning, and perform tasks of daily living. This chapter provides a description of this population and their learning needs. The research on technology applications for students with developmental disabilities is reviewed and synthesized. The review includes literature on technology to assist instruction and to provide options for student responding. Examples are provided of how technology can be applied to both instruction and assessment. Chapter 18 Mitigation of Test Bias in International, Cross-National Assessments of Higher-Order Thinking Skills.................................................................................................................................................... 472 Raffaela Wolf, CAE, USA Doris Zahner, CAE, USA The assessment of higher-order skills in higher education has gained popularity internationally. In order to accurately measure the skills required for working in the 21st century, a shift in assessment strategies is required. More specifically, assessments that only require the recall of factual knowledge have been on the decline, whereas assessments that evoke higher-order cognitive skills are on the rise. The purpose of this chapter is to discuss and offer strategies for mitigating bias for a computer-administered performance-based assessment of higher-order skills. Strategies to abate the effects of bias are discussed within the test design and test implementation stages. A case study of a successful adaptation and translation of CAE's Collegiate Learning Assessment (CLA+) is presented to guide the discussion throughout the chapter. Chapter 19 Evidence-Centered Concept Map in Computer-Based Assessment of Critical Thinking................... 502 Yigal Rosen, Harvard University, USA Maryam Mosharraf, Pearson, USA A concept map is a graphical tool for representing knowledge structure in the form of a graph whose nodes represent concepts, while arcs between nodes correspond to interrelations between them. Using a concept map engages students in a variety of critical and complex thinking, such as evaluating, analyzing, and decision making. Although the potential use of concept maps to assess students' knowledge has been recognized, concept maps are traditionally used as instructional tools. The chapter introduces a technology-enabled three-phase Evidence-Centered Concept Map (ECCM) designed to make students' thinking visible in critical thinking assessment tasks that require students to analyze claims and supporting evidence on a topic and to draw conclusions. Directions for future research are discussed in terms of their implications for technology tools in large-scale assessment programs that target higher-order thinking skills.
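The graph structure described in the Chapter 19 abstract can be made concrete with a minimal sketch, assuming a concept map is stored as a set of (concept, relation, concept) propositions and compared to a reference map by simple overlap. The class names and the overlap metric below are illustrative assumptions, not the ECCM scoring procedure used by the chapter authors.

# Illustrative sketch only: a concept map as a labeled graph, plus a simple
# proposition-overlap score. Names and metric are assumptions, not the ECCM model.
from dataclasses import dataclass, field
from typing import Set, Tuple

Proposition = Tuple[str, str, str]  # (concept, relation, concept)

@dataclass
class ConceptMap:
    propositions: Set[Proposition] = field(default_factory=set)

    def add(self, source: str, relation: str, target: str) -> None:
        """Add an arc (labeled relation) between two concept nodes."""
        self.propositions.add((source, relation, target))

def proposition_overlap(student: ConceptMap, reference: ConceptMap) -> float:
    """Fraction of reference propositions present in the student map."""
    if not reference.propositions:
        return 0.0
    shared = student.propositions & reference.propositions
    return len(shared) / len(reference.propositions)

if __name__ == "__main__":
    reference = ConceptMap()
    reference.add("claim", "is supported by", "evidence A")
    reference.add("claim", "is contradicted by", "evidence B")

    student = ConceptMap()
    student.add("claim", "is supported by", "evidence A")

    print(f"Overlap with reference map: {proposition_overlap(student, reference):.2f}")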
Chapter 20 “Visit to a Small Planet”: Achievements and Attitudes of High School Students towards Learning on Facebook – A Case Study............................................................................................................... 528 Rikki Rimor, Kibbutzim College of Education Technology and the Arts, Israel Perla Arie, Kibbutzim College of Education Technology and the Arts, Israel The current chapter deals with the use of Facebook as a social network for learning. Collaborative learning, metacognition and reflectivity are theoretically discussed and assessed in the current Facebook learning environment, as essential skills of the 21st century. The case study presented examines the relationship between attitudes and achievements of high school students learning an English play in the Facebook closed-group environment. Its findings reveal a significant improvement in students' attitudes at the end of the sessions. However, these were not found to correlate with students' final achievements. In addition, low-achieving students preferred to study collaboratively, as they did in the Facebook closed group, more than higher-achieving students. These findings may indicate the contribution of other factors to achievement in addition to positive attitudes and satisfaction in the Facebook learning environment. A metacognitive analysis of the students' written responses supports and expands the findings of this study. Chapter 21 Cross-Border Collaborative Learning in the Professional Development of Teachers: Case Study – Online Course for the Professional Development of Teachers in a Digital Age.................................. 558 Rafi Davidson, Kaye Academic College of Education, Israel Amnon Glassner, Kaye Academic College of Education, Israel The goal of this chapter is to present a theoretical and practical framework for the professional development of teachers in the digital age. The main question we ask is how to develop life competencies and skills of teachers in order to change their learning and teaching in a way that enables school graduates to acquire relevant skills for life. The chapter investigates this issue through a qualitative case study methodology. The case is an online course for teachers' professional development. The chapter presents evidence from reflective diaries, interviews and scripts of students' and teachers' discussions, focusing on identification of the effects of the course's learning environments on the development of the teachers' self-determined learning and skills. The findings indicate the useful effects of combining LMS environments with social media, such as Web 2.0 tools. The conclusions suggest new directions for teachers' professional development that encourage the design of a flexible fractal net that enables fostering teachers' leadership and innovation.
Section 3 Automated Item Generation and Automated Scoring Techniques for Assessment and Feedback The five chapters in this section address a wide range of technologies for automated scoring, automated item generation and feedback. Chapter 22 Using Automated Procedures to Generate Test Items That Measure Junior High Science Achievement........................................................................................................................................ 590 Mark Gierl, University of Alberta, Canada Syed F. Latifi, University of Alberta, Canada Hollis Lai, University of Alberta, Canada Donna Matovinovic, CTB/McGraw-Hill, USA Keith A. Boughton, CTB/McGraw-Hill, USA The purpose of this chapter is to describe and illustrate a template-based method for automatically generating test items. This method can be used to produce large numbers of high-quality items both quickly and efficiently. To highlight the practicality and feasibility of automatic item generation, we demonstrate the application of this method in the content area of junior high school science. We also describe the results from a study designed to evaluate the quality of the generated science items. Our chapter is divided into four sections. In section one, we describe the methodology. In section two, we illustrate the method using items generated for a junior high school physics curriculum. In section three, we present the results from a study designed to evaluate the quality of the generated science items. In section four, we conclude the chapter and identify one important area for future research. Chapter 23 Automated Scoring in Assessment Systems........................................................................................ 611 Michael B. Bunch, Measurement Incorporated, USA David Vaughn, Measurement Incorporated, USA Shayne Miel, LightSide Labs, USA Automated scoring of essays is founded upon the pioneering work of Dr. Ellis B. Page. His creation of Project Essay Grade (PEG) sparked the growth of a field that now includes universities and major corporations whose computer programs are capable of analyzing not only essays but short-answer responses to content-based questions. This chapter provides a brief history of automated scoring, describes in general terms how the programs work, outlines some of the current uses as well as challenges, and offers a glimpse of the future of automated scoring. Chapter 24 Automated Scoring of Multicomponent Tasks.................................................................................... 627 William Lorié, Questar Assessment, Inc., USA Assessment of real-world skills increasingly requires efficient scoring of non-routine test items. This chapter addresses the scoring and psychometric treatment of a broad class of automatically-scorable complex assessment tasks allowing a definite set of responses orderable by quality. These multicomponent tasks are described and proposals are advanced on how to score them so that they support capturing
gradations of performance quality. The resulting response evaluation functions are assessed empirically against alternatives using data from a pilot of technology-enhanced items (TEIs) administered to a sample of high school students in one U.S. state. Results support scoring frameworks leveraging the full potential of multicomponent tasks for providing evidence of partial knowledge, understanding, or skill. Chapter 25 Advances in Automated Scoring of Writing for Performance Assessment......................................... 658 Peter W. Foltz, Pearson, USA & University of Colorado – Boulder, USA The ability to convey information through writing is a central component of real-world skills. However, assessing writing can be time-consuming, limiting the timeliness of feedback. Automated scoring of writing has been shown to be effective across a number of applications. This chapter focuses on how automated scoring of writing has been extended to assessing and training real-world skills in a range of content domains. It illustrates examples of how the technology is used and considerations for its implementation. The examples include (1) formative feedback on writing quality, (2) scoring of content in student writing, (3) improving reading comprehension through summary writing, and (4) assessment of writing integrated in higher-level performance tasks in professional domains. Chapter 26 Using Automated Feedback to Improve Writing Quality: Opportunities and Challenges.................. 678 Joshua Wilson, University of Delaware, USA Gilbert N. Andrada, Connecticut State Department of Education, USA Writing skills are essential for success in K-12 and post-secondary settings. Yet, more than two-thirds of students in the United States fail to achieve grade-level proficiency in writing. The current chapter discusses the use of automated essay evaluation (AEE) software, specifically automated feedback systems, for scaffolding improvements in writing skills. The authors first present a discussion of the use of AEE systems, prevailing criticisms, and findings from the research literature. Then, results of a novel study of the effects of automated feedback are reported. The chapter concludes with a discussion of implications for stakeholders and directions for future research. Section 4 Analysis, Interpretation, and Use of Learning and Assessment Data from Technology-Rich Environments This section introduces tools for analysis, interpretation, and use of learning and assessment data in technology environments. Chapter 27 Assessing Problem Solving in Technology-Rich Environments: What Can We Learn from Online Strategy Indicators?............................................................................................................................. 705 Jean-Francois Rouet, University of Poitiers, France Zsofia Vörös, University of Poitiers, France Matthias von Davier, Educational Testing Service, USA The spread of digital information systems has promoted new ways of performing activities, whereby laypersons make use of computer applications in order to achieve their goals through the use of problem-solving strategies. These new forms of problem solving rely on a range of skills whose accurate assessment
is key to the development of postindustrial economies. In this chapter, we outline a definition of problem solving in technology-rich environments drawn from the OECD PIAAC survey of adult skills. Then we review research studies aimed at defining and using online indicators of PS-TRE proficiency. Finally, we present a case study of one item that was part of the PIAAC PS-TRE assessment. Chapter 28 Analyzing Process Data from Technology-Rich Tasks........................................................................ 724 Lisa Keller, University of Massachusetts – Amherst, USA April L. Zenisky, University of Massachusetts – Amherst, USA Xi Wang, University of Massachusetts – Amherst, USA A key task emerging in item analysis is identification of what constitutes valid and reliable measurement information, and what data support proposed score interpretations. Measurement information takes on many forms with computerized tests. An enormous amount of data is gathered from technology-based items, tracing every click and movement of the mouse and time-stamping actions taken, and the data recorded fall into two general categories: process and outcomes. Outcomes are the traditional scored answers that students provide in response to prompts, but technology-based item types also provide information regarding the process that students used to answer items. The first consideration in the practical use of such data is the nature of the data generated when learners complete complex assessment tasks. This chapter discusses some possible methodological strategies that could be used to analyze data from such technology-rich testing tasks. Chapter 29 Analyzing Process Data from Problem-Solving Items with N-Grams: Insights from a Computer-Based Large-Scale Assessment............................................................................................................ 749 Qiwei He, Educational Testing Service, USA Matthias von Davier, Educational Testing Service, USA This chapter draws on process data recorded in a computer-based large-scale program, the Programme for the International Assessment of Adult Competencies (PIAAC), to address how sequences of actions recorded in problem-solving tasks are related to task performance. The purpose of this study is twofold: first, to extract and detect robust sequential action patterns that are associated with success or failure on a problem-solving item, and second, to compare the extracted sequence patterns among selected countries. Motivated by the methodologies of natural language processing and text mining, we utilized feature selection models in analyzing the process data at a variety of aggregate levels and evaluated the different methodologies in terms of predictive power of the evidence extracted from process data. It was found that action sequence patterns significantly differed by performance groups and were consistent across countries. This study also demonstrated that the process data were useful in detecting missing data and potential mistakes in item development.
Chapter 30 Assessment of Task Persistence........................................................................................................... 777 Kristen E. DiCerbo, Pearson, USA Task persistence is defined as the continuation of activity in the face of difficulty, obstacles, and/or failure. It has been linked to educational achievement, educational attainment, and occupational outcomes. A number of different psychological approaches attempt to explain individual and situational differences in persistence, and there is mounting evidence that interventions can be implemented to increase persistence. New technological capabilities offer the opportunity to seamlessly gather evidence about persistence from individuals' interactions in digital environments. Two examples of assessment of persistence in digital games are presented. Both demonstrate the ability to gather information without interruption of activity and the use of in-game actions as evidence. They also both require consideration of the student/player model, task model, and evidence models. A design pattern outlining each of these elements is presented for use by those considering assessment of persistence in digital environments. Chapter 31 Assessing Engagement during the Online Assessment of Real-World Skills...................................... 804 J. Christine Harmes, Assessment Consultant, USA Steven L. Wise, Northwest Evaluation Association, USA The assessment of real-world skills will often require complex and innovative types of computer-based test items to provide more authentic assessment. Having information about how students remain engaged with the various innovative elements during an assessment is useful in both assessing the utility of different types of innovative test items and assessing the validity of the inferences made about the test scores of individual students. This chapter introduces the Item Engagement Index (IEI) and the Student Engagement Index (SEI) and demonstrates their use with a variety of innovative items that were pilot tested for a nursing licensure exam. The IEI provided useful information about the amount of student effort each innovative item received, while the SEI was found useful in identifying disengaged test takers. Compilation of References............................................................................................................. xxxix About the Contributors.................................................................................................................. cxxix Index.................................................................................................................................................. cxlvi
Foreword
We need to think harder about how we prepare young people for tomorrow’s world. In the past, education was about teaching students something. Now, it’s about making sure that students develop a reliable compass and the navigation skills to find their own way through an uncertain, volatile, and ambiguous world. Now, schools need to prepare students for a world in which most people will need to collaborate with people of diverse cultural origins and appreciate different ideas, perspectives and values; a world in which people need to decide how to trust and collaborate across such differences; and a world in which their lives will be affected by issues that transcend national boundaries. Technology has become the key to bridge space and time in all of this. These days, we no longer know exactly how things will unfold. We are often surprised and need to learn from the extraordinary, and sometimes we make mistakes along the way. And it will often be the mistakes and failures, when properly understood, that create the context for learning and growth. A generation ago, teachers could expect that what they taught would last their students a lifetime. Today, schools need to prepare students for more rapid economic and social change than ever before, for jobs that have not yet been created, to use technologies that have not yet been invented, and to solve social problems that we don’t yet know will arise. How do we foster motivated, engaged learners who are prepared to conquer the unforeseen challenges of tomorrow, not to mention those of today? The dilemma for educators is that routine cognitive skills—the skills that are easiest to teach and easiest to test—are also the skills that are easiest to digitize, automate, and outsource. There is no question that state-of-the-art knowledge and skills in a discipline will always remain important. Innovative or creative people generally have specialized skills in a field of knowledge or a practice. And as much as ‘learning to learn’ skills are important, we always learn by learning something. However, educational success is no longer about reproducing content knowledge, but about extrapolating from what we know and applying that knowledge in novel situations. Put simply, the world no longer rewards people for what they know—Google knows everything—but for what they can do with what they know. Because that is the main differentiator today, education today needs to be much more about ways of thinking, involving creativity, critical thinking, problem-solving, and decision-making; about ways of working, including communication and collaboration; about tools for working, including the capacity to recognize and exploit the potential of new technologies; and, last but not least, about the social and emotional skills that help us live and work together. Conventionally, our approach to problems was to break them down into manageable bits and pieces and then to teach students the techniques to solve them. But today we create value by synthesizing the disparate bits. This is about curiosity, open-mindedness, and making connections between ideas that pre
viously seemed unrelated, which requires being familiar with and receptive to knowledge in other fields than our own. If we spend our whole life in a silo of a single discipline, we will not gain the imaginative skills to connect the dots where the next invention will come from. Equally important, the more content knowledge we can search and access, the more important becomes the capacity to make sense of this content—the capacity of individuals to question or seek to improve the accepted knowledge and practices of their time. In the past, you could tell students to look into an encyclopedia when they needed some information, and you could tell them that they could generally rely on what they found to be true. Today, literacy is about managing non-linear information structures, building your own mental representation of information as you find your own way through hypertext on the Internet, and dealing with ambiguity—interpreting and resolving conflicting pieces of information that we find somewhere on the Web. Perhaps most importantly, in today’s schools, students typically learn individually and at the end of the school year, we certify their individual achievements. But the more interdependent the world becomes, the more we need great collaborators and orchestrators. Innovation today is rarely the product of individuals working in isolation but an outcome of how we mobilize, share, and link knowledge. In the flat world, everything that is our proprietary knowledge today will be a commodity available to everyone else tomorrow. Expressed differently, schools need to drive a shift from a world where knowledge is stacked up somewhere, depreciating rapidly in value, towards a world in which the enriching power of communication and collaborative flows is increasing. And they will need to help the next generation to better reconcile resilience (managing in an imbalanced world) with greater sustainability (putting the world back into balance). This is a tough agenda. What is certain is that it will never materialise unless we are able to clearly conceptualise and measure those 21st century knowledge areas and skills. Without rigorous conceptualisation, we will not be able to build meaningful curricula and pedagogies around these knowledge areas and skills. And, at the end of the day, what is assessed is what gets taught. This volume makes a major step in advancing this frontier. It examines a range of skills that are important; it looks at innovative measurement methods to make these skills amenable to quantitative assessment in ways that they become activators of students’ own learning, and it looks at how we can learn to drink from the firehose of increasing data streams that arise from new assessment modes. Andreas Schleicher Organisation for Economic Co-Operation and Development (OECD), France
Foreword
In its landmark report Education for Life and Work in the 21st Century, the National Research Council (2012) described "deeper learning" as an instructional approach important in preparing students with sophisticated cognitive, intrapersonal, and interpersonal skills. The approaches recommended by advocates of deeper learning are not new, and historically these instructional strategies have been described under a variety of terms. Until now, however, they have been rarely practiced within the schools (Dede, 2014), resulting in the sad situation that students who excel in school may struggle in the real world. And students who struggle in school are likely to sink in the real world. Various "deeper learning" approaches are described below.
• Case-based learning helps students master abstract principles and skills through the analysis of real-world situations;
• Multiple, varied representations of concepts provide different ways of explaining complicated things, showing how those depictions are alternative forms of the same underlying ideas;
• Collaborative learning enables a team to combine its knowledge and skills in making sense of a complex phenomenon;
• Apprenticeships involve working with a mentor who has a specific real-world role and, over time, enables mastery of their knowledge and skills;
• Self-directed, life-wide, open-ended learning is based on students' passions and is connected to students' identities in ways that foster academic engagement, self-efficacy, and tenacity;
• Learning for transfer emphasizes that the measure of mastery is application in life rather than simply in the classroom;
• Interdisciplinary studies help students see how differing fields can complement each other, offering a richer perspective on the world than any single discipline can provide;
• Personalized learning ensures that students receive instruction and supports that are tailored to their needs and responsive to their interests (U.S. Department of Education, 2010; Wolf, 2010; Rose & Gravel, 2010);
• Connected learning encourages students to confront challenges and pursue opportunities that exist outside of their classrooms and campuses (Ito et al., 2013); and
• Diagnostic assessments are embedded into learning and are formative for further learning and instruction (Dede, 2012).
These entail very different teaching strategies than the familiar, lecture-based forms of instruction characteristic of industrial-era schooling, with its one-size-fits-all processing of students. Rather than requiring rote memorization and individual mastery of prescribed material, they involve in-depth, dif-
ferentiated content; authentic diagnostic assessment embedded in instruction; active forms of learning, often collaborative; and learning about academic subjects linked to personal passions and infused throughout life. The chapters in this book demonstrate that new tools and media can be very helpful to many teachers who would otherwise struggle to provide these kinds of instruction for deeper learning (Dede, 2014). By analogy, imagine that you wish to visit a friend 20 miles away. You could walk (and some people would prefer to do so), but it would be much easier to use a bicycle, and it would be far easier still to use a car. In short, teachers who wish to prepare their students for the real world, as well as for further academics, don't have to use educational technology; they may prefer to walk. Realistically, however, many, if not most, teachers will be hard-pressed to get from industrial-style instruction to deeper learning without the vehicles of digital tools, media, and experiences. In an extensive review of the literature on technology and teaching for the forthcoming American Educational Research Association (AERA) Handbook of Research on Teaching (5th Edition), Barry Fishman and I (in press) note the important distinction between using technology to do conventional things better and using technology to do better things (Roschelle et al., 2000). While there may be value in doing some types of conventional instruction better (i.e., more efficiently and effectively), the real value in technology for teaching lies in rethinking the enterprise of schooling in ways that unlock powerful learning opportunities and make better use of the resources present in the 21st-century world. In our review, we consider how and under what conditions technology can be productively employed by teachers to more effectively prepare students for the challenges presented by a rapidly evolving world. We argue that technology as a catalyst is effective only when used to enable learning with richer content, more powerful pedagogy, more valid assessments, and links between in- and out-of-classroom learning. We examined the following technologies in depth:
• Collaboration tools, including Web 2.0 technologies and tools that support knowledge building;
• Online and hybrid educational environments, which are increasingly being used to broaden access to education but also have the potential to shift the way we conceive of teaching and learning;
• Tools that support learners as makers and creators, which have their deep roots in helping students learn to become programmers of computers (and not just users of them);
• Immersive media that create virtual worlds to situate learning or augment the real world with an overlay of computational information; and
• Games and simulations that are designed to enhance student motivation and learning.
This book provides examples of these and other powerful technologies to aid this type of instruction. If used in concert, these deeper-learning technologies can help prepare students for life and work in the 21st century, mirroring in the classroom some powerful methods of knowing and doing that pervade the rest of society. Further, they can be used to create a practical, cost-effective division of labor, one that empowers teachers to perform complex instructional tasks. In addition, these media can address the learning strengths and preferences of students growing up in this digital age, including bridging formal instruction and informal learning. And, finally, these technologies can provide powerful mechanisms for teacher learning; by which educators deepen their professional knowledge and skills in ways that mirror the types of learning environments through which they will guide their students.
At a time in history when civilization faces crises that we need the full capacity of people across the world to resolve, this volume provides an exemplary suite of practical ways to move forward with curricula, instruction, and assessments that are truly oriented to 21st-century life and work. Chris Dede Harvard University, USA
REFERENCES
Dede, C. (2012). Interweaving assessments into immersive authentic simulations: Design strategies for diagnostic and instructional insights (Commissioned White Paper for the ETS Invitational Research Symposium on Technology Enhanced Assessments). Princeton, NJ: Educational Testing Service.
Dede, C. (2014). The role of technology in deeper learning. New York, NY: Jobs for the Future.
Fishman, B., & Dede, C. (in press). Teaching and technology: New tools for new times. In D. Gitomer & C. Bell (Eds.), Handbook of research on teaching (5th ed.). New York, NY: Springer.
Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., & Salen, K. … Watkins, S. C. (2013). Connected learning: An agenda for research and design. Irvine, CA: Digital Media and Learning Research Hub.
National Research Council. (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. Washington, DC: The National Academies Press. Retrieved from http://www.nap.edu/catalog.php?record_id=13398
Roschelle, J. M., Pea, R. D., Hoadley, C. M., Gordin, D. N., & Means, B. M. (2000). Changing how and what children learn in school with computer-based technologies. The Future of Children: Children and Computer Technology, 10(2), 76–101. doi:10.2307/1602690 PMID:11255710
Rose, D. H., & Gravel, J. W. (2010). Universal design for learning. In E. Baker, P. Peterson, & B. McGaw (Eds.), International Encyclopedia of Education (3rd ed.). Oxford, UK: Elsevier. doi:10.1016/B978-008-044894-7.00719-3
U.S. Department of Education. (2010). Transforming American education: Learning powered by technology (National Educational Technology Plan 2010). Washington, DC: Office of Educational Technology, U.S. Department of Education.
Wolf, M. A. (2010, November). Innovate to educate: System [re]design for personalized learning. Washington, DC: Software and Information Industry Association.
Preface
Changes in the world economy, specifically toward information industries, have changed the skillset demand of many jobs (Organization for Economic Development [OECD], 2012a). Information is created, acquired, transmitted, and used—rather than simply learned—by individuals, enterprises, organizations, and communities to promote economic and social development. Major employers and policy makers are increasingly asking teachers and educators to help students develop so-called real-world skills (Gallup, 2013). While learning basic numeracy and literacy skills still is crucial to success in the job market, developing real-world skills also is essential to success in the job market and worldwide economic development. Real-world skills, or “21st century skills,” include critical thinking, collaborative problem solving, creativity, and global competency. These skills that facilitate mastery and application of science, mathematics, language arts, and other school subjects will grow in importance over the coming decade (National Research Council, 2012; OECD, 2012a, 2012b). A wide range of initiatives and programs in education promote learning and assessment of real-world skills. These include, for example, the Common Core State Standards (National Governors Association Center for Best Practices and Council of Chief State School Officers, 2010a, 2010b), Next Generation Science Standards (National Research Council, 2013), Common European Framework of Reference (Council of Europe, 2011), Partnership for 21st Century Skills (Partnership for 21st Century Skills, 2009), Education for Life and Work (National Research Council, 2012), and assessment frameworks in the Programme for International Student Assessment (PISA) (OECD, 2013). Because of the importance of promoting these skills, we have embarked on a journey to create a Handbook of Research on Technology Tools for Real-World Skill Development. Because conceptions and educational applications of real-world skills are evolving rapidly, we have welcomed a wide range of skills in the Handbook. The following four strands of skills are represented in the chapters: Thinking skills refer to higher-order cognition and dispositions such as critical thinking, complex problem solving, metacognition, and learning to learn. Social skills refer to attitudes and behaviors that enable successful communication and collaboration. Global skills refer to attitudes and behaviors that emphasize the individual’s role in, and awareness of, the local as well as the global and multicultural environment. Digital skills emphasize information and digital literacies needed in the technology-rich world in which we live. Similarly, the chapters in this Handbook describe a range of technology tools to support teaching, learning, assessment for learning (e.g., Stiggins, 2005; Wiliam, 2011), feedback for learning (e.g., Hattie, & Timperley, 2007; Shute, 2008), and scoring of student responses. For example, section 1 includes chapters on curricula and frameworks for teaching real-world skills; the chapters in section 2 describe specific technology tools for teaching, learning, and assessing real-world skills; the chapters in
section 3 describe automated scoring tools for assessment and learning; and section 4 contains chapters on techniques for analyzing data from technology-based performance assessments. Helping students learn real-world skills—that is, to internalize them and use them flexibly across a range of challenges and contexts in their everyday and work lives—is a significant educational challenge. Real-world skills cannot be taught in a single course or in a single year of schooling. And assessing real-world skills to provide feedback to guide development of those skills cannot be accomplished using conventional, largescale assessment and score reporting methods alone. The technology tools described here represent the range of current and developing capabilities of technology tools to support teaching, learning, assessment, and feedback for learning. As technology-rich environments for teaching, learning, assessment, and feedback are being integrated into educational processes, there is much to be learned about how to leverage advances in technology, learning sciences, and assessment to develop real-world skills for the 21st century. Research findings on what works best are just emerging, possibly due to the strong multi-disciplinary approaches required to extract the greatest value. This Handbook is intended to serve as a first body of research in the expanding area of technology tools for teaching, learning, assessment, and feedback on real-world skills that educators can turn to in the coming years as a reference. Our aim is to bring together top researchers to summarize concepts and findings. The Handbook contains contributions of leading researchers in learning science, educational psychology, psychometrics, and educational technology. Assuming that many readers will have little grounding in those topics, each chapter outlines theory and basic concepts and connects them to technology tools for real-world skill development. We see this as one of the most crucial contributions of the Handbook, seeking to establish strong theoretical principles that can inform educational research and practice and future research and development. The Handbook also provides brief overviews in each topic section for more knowledgeable readers. The Handbook is organized into four sections.
SECTION 1: DEFINING REAL-WORLD SKILLS IN TECHNOLOGY-RICH ENVIRONMENTS
The seven chapters in Section 1 explore conceptualization of real-world skills and the role of technology. The section includes chapters on curricula and frameworks for teaching real-world skills. To aid readers in selecting specific chapters to study, we list the technology tools described in these chapters.
Chapter 1: A principled approach for developing digital competency.
Chapter 2: A model for teaching digital competency.
Chapter 3: A model for measuring problem solving skills in science, technology, engineering, and mathematics (STEM).
Chapter 4: A model for teaching Internet research skills.
Chapter 5: Another model for teaching Internet research skills.
Chapter 6: A matrix for evaluating technology integration in K-12 instructional settings and teacher-related professional development.
Chapter 7: An online team-based learning model in nursing education.
SECTION 2: TECHNOLOGY TOOLS FOR LEARNING AND ASSESSING REAL-WORLD SKILLS
Chapters 8 through 21 deal with the core topic of technology tools and a wide range of applications aimed at learning and assessing real-world skills. The technology tools described in these chapters include the following.
Chapter 8: Technology-rich simulations for learning and assessing science skills.
Chapter 9: The Collegiate Learning Assessment, a test to evaluate the critical thinking and written communication skills of college students.
Chapter 10: Guidance, based on lessons learned from developing rich-media simulations, for assessment for organization staff promotion and development.
Chapter 11: A personalized learning platform for developing early reading.
Chapter 12: Computer agent technology for assessing collaborative problem solving skills.
Chapter 13: A model for assessing cognitive and social skills through online collaboration.
Chapter 14: An approach for technology-rich learning and formative assessment of collaborative problem solving skills.
Chapter 15: A framework for principled thinking about a construct map assessment of higher-order thinking skills.
Chapter 16: Computer-based and computer-assisted approaches for assessment of knowledge and skills.
Chapter 17: Technology tools for learning for students with moderate and severe developmental and intellectual disabilities.
Chapter 18: Strategies for mitigating bias for a computer-administered performance-based assessment of higher-order skills.
Chapter 19: An evidence-centered concept map for a critical thinking assessment.
Chapter 20: Facebook as a social network for learning.
Chapter 21: A framework for teachers' professional development in the digital age.
SECTION 3: AUTOMATED ITEM GENERATION AND AUTOMATED SCORING TECHNIQUES FOR ASSESSMENT AND FEEDBACK
The five chapters in Section 3 address a range of technologies for automated scoring, automated item generation, and learner feedback. The technology tools described in these chapters include the following.
Chapter 22: Procedures for automated generation of science items.
Chapter 23: Automated scoring approaches for development of writing proficiency.
Chapter 24: A principled framework for designing automated scoring of multicomponent assessment tasks.
Chapter 25: Automated scoring as the basis for feedback to support improvement of writing skills.
Chapter 26: Automated feedback to improve writing quality.
SECTION 4: ANALYSES OF PROCESS DATA IN TECHNOLOGY-RICH PERFORMANCE TASKS
Chapters 27 through 31 deal with analysis, interpretation, and use of learning and assessment data in technology environments. The technology tools described in these chapters include the following.
Chapter 27: Analysis of solution paths in a technology-rich problem solving assessment.
Chapter 28: Analysis of solution paths in a technology-rich critical thinking assessment.
Chapter 29: Use of a chi-square feature selection algorithm (i.e., sequential pattern mining) and an N-grams representation model to analyze process data in technology-rich problem solving tasks.
Chapter 30: Analytic methods to induce a persistence measure from gameplay clickstream data, and a design pattern to guide future development of persistence measures in digital environments.
Chapter 31: An Item Engagement Index (IEI) and Student Engagement Index (SEI) for assessing engagement during the online assessment of real-world skills.
Our goal in collecting and organizing these excellent chapters is to begin a process of crystallizing what our field has accomplished to date and what it knows, collectively, about technology tools and how those tools can be used to support and enhance teaching and learning of real-world skills. Knowing what we know should help us identify what we need to know. And it should guide further development of practical applications and empirical research on the efficacy of using technology tools for teaching, learning, assessing, and providing feedback as learners work to develop the skills they need for today's high-tech, higher-order knowledge and skills world. We hope this Handbook will serve as a tool to encourage collaborations among researchers, educators, policy makers, employers, and the general public to promote learning, assessment, and personalized feedback technologies. By compiling the rich research and knowledge in this Handbook, we hope to spark innovation in education. The Handbook is recommended reading for the following audiences:
Educators: This book will share essential insights for policy makers, principals, curriculum experts, and teachers who are interested in better understanding the practical challenges and opportunities in introducing new technology-rich programs aimed at promoting learning, assessment, and feedback on real-world skills.
Researchers: This book will provide a valuable springboard for researchers in psychology, education, assessment, and computer science to engage with the concept of technology-rich assessment and learning of higher-order thinking skills and to work on new research directions. This will be aided by the emphasis on key gaps in existing research and the details provided on what areas need more careful research and empirical validation.
General audiences with interest in upcoming trends in learning, assessment, and feedback: This book will cover a range of topics related to real-world skills and the value of real-world skills in next-generation education.
REFERENCES
Council of Europe. (2011). Common European framework of reference for languages: Learning, teaching, assessment. Strasbourg: Author.
Gallup. (2013). 21st century skills and the workplace: A 2013 Microsoft-Pearson Foundation study on 21st century skills and the workplace. Washington, DC: Author.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. doi:10.3102/003465430298487
National Governors Association Center for Best Practices and Council of Chief State School Officers. (2010a). Common core state standards for mathematics. Washington, DC: Author.
National Governors Association Center for Best Practices and Council of Chief State School Officers. (2010b). Common core state standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: Author.
National Research Council. (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. Washington, DC: The National Academies Press.
National Research Council. (2013). Next generation science standards: For states, by states. Washington, DC: The National Academies Press.
Organization for Economic Development (OECD). (2012a). Better skills, better jobs, better lives: A strategic approach to skills policies. OECD Publishing.
Organization for Economic Development (OECD). (2012b). Education at a glance 2012: OECD indicators. OECD Publishing.
Organization for Economic Development (OECD). (2013). PISA 2015 collaborative problem solving framework. OECD Publishing.
Partnership for 21st Century Skills. (2009). P21 framework definitions. Washington, DC: Author.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. doi:10.3102/0034654307313795
Stiggins, R. J. (2005). From formative assessment to assessment FOR learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4), 324–328. doi:10.1177/003172170508700414
Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.
Acknowledgment
In writing and editing this book, we have to thank the conceptual visionaries who pushed our thinking and contributed greatly to the formation of the ideas presented here. At Pearson, Dr. Kimberly O'Malley, a thoughtful leader, provided executive support during all stages of our work on the Handbook. Our colleagues Dr. Peter Foltz and Dr. Katie McClarty have been key in developing these ideas as part of the 21st Century Skills project. Prof. Andreas Schleicher of the OECD and Prof. Chris Dede of Harvard University shared their invaluable insights and directions for further research in the Foreword sections of the Handbook. And then there is the outstanding group of authors from a wide range of organizations and geographies who contributed their chapters to this volume. Along the way, the authors graciously served as each other's reviewers as we passed drafts around, nurturing each other's chapters and adding new perspectives. We thank you all for making our own work on the Handbook a great pleasure.
Section 1
Defining Real-World Skills in Technology-Rich Environments
This section includes chapters on curricula and frameworks for teaching real-world skills.
Chapter 1
Twenty First Century Skills vs. Disciplinary Studies? Lars Vavik Stord/Haugesund University College, Norway Gavriel Salomon Haifa University, Israel
ABSTRACT
This paper addresses the tension between a discipline-based and a skills- and competences-based approach to today's curriculum. The competences-based approach emphasizes the cultivation of market-oriented skills and competencies that people acquire in the knowledge society; it is the driving force behind many educational reforms. The other, more traditional approach emphasizes the acquisition of well-organized disciplinary knowledge such as history and chemistry. The differences between learning guided by predetermined educational goals, designed to acquire disciplined knowledge, and the daily, net-related, interest-driven, and partly out-of-school learning of skills are too large to be ignored. Each of the two approaches has its advantages and drawbacks, but jointly they can constitute fruitful curricula. On the one hand, such curricula address the three main purposes of school – qualification, socialization and subjectification – while on the other they address the needs of cultivating 21st Century skills and competences. The latter comes to serve the attainment of the former.
1. INTRODUCTION
Knowledge is of two kinds. We know a subject ourselves, or we know where we can find it. (Samuel Johnson, 1750)
The new digital world has led to significant changes in all walks of life, including in the school. It has been claimed that there is a need to cultivate new competencies, competencies that serve as "steering
instruments" for educational reforms. Specifically, digital competence has become a key concept in the discussion of what kinds of skills and understandings people would need in the knowledge society (e.g., Punie, 2007; see also Sefton-Green, Nixon, & Erstad, 2009; OECD, 2010). The policy documents for education in Norway elevate the concept of digital competence to a higher degree than those of other countries. The final version of the national curriculum (MER, 2006a) was central
DOI: 10.4018/978-1-4666-9441-5.ch001
with regard to Information and Communication Technologies (ICT) and educational technology in schools; it increased the status of ICT and the ability to use digital tools as the fifth basic skill in all subjects at all levels. Indeed, it has become clear that with the rapid development of technology, the widespread access to endless amounts of information, and the appeal of these developments to education, much needs to change in the institution, curriculum, and practice of schooling. Similarly, the growing dominance of the Knowledge Society over the economy, communication, job markets, and financial markets should have profound effects on education. But how? Two schools of thought and practice have emerged in response to the digital challenges: the more pragmatic and the more traditional. The basic premise of the more pragmatic school is that curricula need to be guided by the desired outcomes of school, based on market demands, and thus cultivate the mastery of market-oriented digital skills. Much like the way the younger generation engages with digital media, school needs to be learner-led rather than knowledge-led. The emphasis ought to be on the mastery of skills and far less on scholarly knowledge that can best serve these digital challenges (e.g., Loveless, 2013). On the other hand, the more traditional school is based on the assumption that education is about enabling learners to engage with discipline-based "powerful knowledge," knowledge they are not likely to acquire out of school and that is important for them as future active and educated citizens (Young & Muller, 2010).
2. THE EMERGENCE OF DIGITAL COMPETENCY IN THE CURRICULUM
The skills-and-competency approach defines the desired skills as generic, as the ability to search, produce, and communicate. More specifically: "It is the set of knowledge, skills, and attitudes
2
(thus including abilities, strategies, values, and awareness) that are required when using ICT and digital media to perform tasks; solve problems; communicate; manage information; collaborate; create and share content; and build knowledge effectively, efficiently, appropriately, critically, creatively, autonomously, flexibly, ethically, reflectively for work, leisure, participation, learning, socializing, consuming, and empowerment” (Ferrari, 2012, p. 3). Two basic assumptions underlie the digital competence approach. First, it is assumed that today’s students learn differently and think differently from their older peers; the traditional ways of schooling—mainly the top-down delivery of ready-made bodies of organized knowledge—have become of lesser relevance. The nature of learners’ relationships with information and knowledge is changing—i.e., learning is increasingly based on principles of collective exploration, play, and individuals’ interest and innovation, and school has to adapt itself to such changes (Ulbrich, Jahnke, & Mårtensson, 2011). The implication of this is that the starting point for curriculum planning should be the needs and wants of young people rather than a set of predetermined, disciplined subject matter (Reiss & White, 2013). The knowledge is not imposed from the outside, but the competencies that learners already have are capitalized upon. Thus, the approach encourages teaching that draws upon a learner’s own experiences and “everyday knowledge” and, in turn, assists learners in using their new learning in their lives and work. Secondly, it is assumed that the knowledge one needs can be easily accessed from the Internet and does not need to be learned (and then forgotten) in school. One learns from other sources at least as much. On the other hand, learners need to acquire skills of access, communication, collaboration, and processing with which they can gain the knowledge they want and need. In this respect, school is a node within a wider network of information and learning sources that spans from in school to out of school, from local to global, and is both physi-
cal and digital. The emphasis is on competencies, generic skills, and other categories of know-how (rather than know-that) instead of on a detailed mastery of knowledge or content. The competence curriculum blurs the line between school learning and everyday experience and blurs the locations for learning—the school classroom, the home, the neighborhood, the playground, and so on. It is assumed that learning can and does take place everywhere: at home, at work, and at school. Recent research on out-of-school settings focuses on how children and adolescents operate in the media ecology. Variations in use are conceptualized as being associated with friendship-driven or interest-driven communities of practice (Ito et al., 2010; Ito et al., 2013). School is being criticized for not including or valuing the emerging new media literacies and associated genres of participation. Gee (2003) and Selander (2008) advocate for the integration of new modes of learning, so-called game literacy, into existing educational culture. In his work on 21st century skills, Jenkins (2006) analyzed new media literacy in light of active media participation. Buckingham (2007) saw this as symptomatic of a much broader phenomenon—a widening gap between children's everyday "life worlds" outside of school and the emphases of many educational systems. The overall message is that school cannot remain as it is today and has to confront the Internet culture. The skills or competencies that are the target of cultivation are—to a large extent—content free and ought to be applicable to a variety of instances and contexts. They might include the ability to work in teams or to handle cognitive overload. The cultivation of such competencies is closely aligned with technology, which is both the main reason for the curricular change and the main vehicle for change. In this light, the cultivation of such skills can be accomplished by a diverse menu of contents—from motorcycle maintenance to the design of a playground—detached from any organized disciplinary bodies of knowledge.
According to Dede (2009), school traditionally separates knowledge from skill, presenting the former as “truth” without allowing the students to apply skills to re-construct the knowledge. By emphasizing the acquisition of digital skills and what have come to be known as 21st Century competencies (e.g., Lankshear & Knobel, 2008), students are to become competent in accessing and manipulating information and transforming it into their own knowledge. The knowledge economy has become a preferred vision for the future of society, with the result that the curriculum has been under intense pressure to reform. The consequence has been for reformers to put the emphasis on frameworks of skills, competencies, “know-how,” and the marginalization of content, knowledge, and “know-that.” Real disciplinary content gradually disappears from curricula. The prototypical examples of new curriculum programs show how the future of the curriculum is now in the hands of a great many varied individuals and organizations, many of them from outside the mainstream education. Close analysis of these developments shows how they are formed from an uneasy alliance of economic arguments about the need to equip students with skills for digital labor and educational ideals drawn from a history of progressivism and constructivist learning. There is a growing tendency in a variety of countries to gradually implement a competencybased curriculum, often designed by international agencies and organizations. This, for example, is highlighted by The Partnership for 21st Century Skills (P21) with close connections to the computer industry itself. The P21 consortium was founded in 2002 with the purpose of bringing together the business community, education leaders, and policymakers to position 21st century skills for all students.1 They argue that because of the rapid changes in technology and the globalization of the world’s economy, schools need to do a better job of educating all students in order to prepare them for success in the job market. Twenty-first
century skills are no longer just for those students headed to college, but essential for all students. A similar initiative was taken by the Organisation for Economic Co-operation and Development (OECD) Committee for the Definition and Selection of Competencies (DeSeCo). A number of OECD countries were asked to list which competencies they considered to be key competencies. Four groups were frequently mentioned: (1) Social Competencies/Cooperation; (2) Literacies/Intelligence and applicable knowledge; (3) Learning Competencies/Lifelong Learning; and (4) Communication Competencies (Trier, 2003). The DeSeCo committee selected Digital Competence as one of the eight key competencies. They argued that digital competence is essential in a knowledge society and guarantees more flexibility in the labor force, allowing it to adapt more quickly to constant changes in an increasingly interconnected world. It was stated that this could be a major factor in innovation, productivity, competitiveness, and quality of work (Recommendation 2006/962/EC). The guidelines for selecting key competencies emphasize measurable benefits to a wide spectrum of contexts for all individuals. It was important to set quantifiable targets, indicators, and benchmarks as a means of comparing best practice and as instruments for monitoring and reviewing the progress achieved. What is considered legitimate knowledge produced by international agencies is reformulated into national school curricula. It is up to each country to conceptualize which of the key competencies should be highlighted and how these competencies could be implemented or reconstructed at the national level and then cultivated. The Norwegian committee believes that the concept of competence is fruitful for understanding education in the knowledge society. It is interpreted somewhat differently from the OECD draft, but most of the main elements recur (NOU 2003:16, 2003, p. 73). Norway was the only
country in the EU to select digital competence as a key competency. It is also worth noting that digital skills are exactly what the EU committee outlined as digital competence:
• Search and process means being able to use different digital tools, media, and resources as well as to search for, navigate in, sort out, categorize, and interpret digital information appropriately and critically.
• Produce means being able to use digital tools, media, and resources to compose, re-apply, convert, and develop different digital elements into finished products, e.g., composite texts or design PowerPoint presentations.
• Communicate means using digital tools, resources, and media to collaborate in the learning processes and to present one's own knowledge and competence to different target groups.
• Digital judgment means being able to use digital tools, media, and resources in a responsible manner and being aware of rules for protecting privacy and the ethical use of the Internet (Ferrari, 2013).
Many curriculum initiatives are derived from a smart, cybernetic style of thinking about the future of education. The proposed re-configuration of formally schooled identities as fluid, self-fashioning digital learning identities links young people more forcefully to changing working circumstances (Loveless & Williamson, 2013). Schools are therefore pressured to ensure that students possess the flexible 'human capital' required by the high-tech economy. It represents a futuristic vision of education that extends the schooled identities of young people into an ongoing process of self-fulfillment and personal lifestyle creation that has become the characteristic feature of lifelong learning. According to Loveless & Williamson
(2013), the cybernetic style of thinking, with the metaphors of connectedness, network, flexibility, multiplicity, and interaction, creates several curriculum initiatives:
• Connected Learning: It re-imagines school as a node within a network that spans from in school to out of school, from local to global, and is both physical and digital (Ito, 2010).
• Curriculum 2.0: The curriculum is reconceived as a self-made curriculum. Deschooling seems more realistic, rejecting the idea that school should act normatively (Facer & Green, 2007).
• Learnifying the Curriculum: In the new educational technology and media, a term like "curriculum" has been replaced with popular expressions of personalized learning, learning styles, learning choice, and learning centers (Loveless & Williamson, 2013).
3. THE DISCIPLINE-BASED APPROACH
According to Wheelahan (2010), the endorsement of a generic, competency-based curriculum is just another attempt to replace what is labeled "powerful disciplinary knowledge" (often erroneously considered "knowledge of the powerful" rather than "powerful knowledge"). According to her, students' need for access to knowledge has never been greater, but it has been removed from the center of the curriculum. As a prime example, digital competency has been legitimized in the curriculum as the main tool for "learning to learn." The point here is that not only disciplinary content itself is important, as in the traditional curriculum, but that knowledge is the bearer of concepts that are tools for thinking (Moore, 2006; Young & Lambert, 2014). According to Young & Lambert (2014), this position represents a growing international
community of scholars in educational studies that “have left the social constructivism of the 1990s.” Although there are strong arguments to support each of the two approaches to the design of current curricula, still the choice between the competencybased approach and a discipline-based one is essentially a choice between two sets of socioeducational beliefs. What does society believe is the fundamental purpose of school and hence what should the outcome of school consist of? To put it bluntly: Is it mainly to serve the economy, thus be a skill-based preparation for a global market society, or is it mainly to serve intellectual development through a knowledge-based kind of schooling? Relatedly, is it to be a curriculum based on students’ experiences and preferences, or are these mere pedagogical motivators where the curriculum is subject-matter based? A distinction needs to be introduced here. We distinguish between information and knowledge. Information is piecemeal, highly context-bound, specific, and local (e.g., water boils at 100 degrees; Timbuktu has about 55,000 inhabitants). It is not part of conceptual networks nor does it mindfully (non-automatically) transfer to new instances. On the other hand, knowledge is the creation of an individual’s or a team’s deliberate construction, whereby bits and pieces of information become interconnected to create conceptual webs. It is knowledge, not information, from which meaning can be construed. The basic assumptions of the subject-based approach are that 1. School’s mission is to impart and cultivate knowledge beyond that which the student can acquire on his or her own from unstructured and unsystematic experiences; 2. The school-based knowledge should be “powerful knowledge”; it is conceptual, generalizable, systematic, discipline-based, and entails—as pointed out by John Dewey—the potency of acquiring new knowledge;
5
Twenty First Century Skills vs. Disciplinary Studies?
3. This kind of knowledge is needed for intellectual development of an educated, thoughtful, productive citizen in the knowledge society; 4. Skills and stand-alone competencies, functional as they may be, are like discrete bits and pieces of information; they do not combine into causal, correlational, temporal, or other kinds of logical networks and thus do not serve as foundations for the construction of meaningful knowledge, nor do they help the development of intelligent understandings. However, all this does not answer the question of why powerful knowledge is so important. And why, indeed, should it not be replaced by a more pragmatic, experience-based and marketoriented curriculum of skills and competencies? As mentioned above, after all is said, the issue boils down to beliefs. We believe that being able to access a bit of information here, a bit there, does not constitute what education should really be about: learning to understand parts of the social, human, and scientific world and to intelligently function in it. As argued by Sanger (2010, p. 1424), one of the founders of Wikipedia, unless one learns the basics in the known disciplines such as history, physics, or biology, “Googling a question will merely allow one to parrot an answer—not to understand it.” And it is the understanding that schooling is about. We also believe, as stated by Biesta, Arnesen, Salomon, & Vavik (2014), that ... a broad conception of education ... implies engagement with a number of ‘domains,’ such as the cognitive/scientific, the moral, the aesthetic, the political, the relational, the emotional, and so on. These are not domains in which ‘anything goes’ but they come with (historically and socially developed) standards and structures, hence it is important that students engage with these standards and structures (and do so in a structured rather than a random way) (p. 2).
Also, the argument that through the acquisition of skills and competencies one learns how to learn is questionable. Learning how to learn entails the development of learning strategies, which are often discipline-based (learning how to test hypotheses in physics is not the same as learning to reach historical conclusions from bits and pieces of information). Learning how to learn does not come alone: It also entails dispositions and attitudes (Perkins, 2014). Moreover, learning how to learn entails the development of metacognitions, goal orientation, and self-monitoring (Pintrich, 2000). There is room to question whether these can develop in the context of content-free skill acquisition (e.g., Perkins & Salomon, 1989). Lastly, one needs to master skills and competencies while engaging purposefully with real content; otherwise, it is like learning to write with a word processor without having any words to process, or like accessing a description of photosynthesis from the Internet without any botanical background or context. We shall return to this point a bit later.
4. FORMS OF KNOWLEDGE

In an attempt to understand what kind of knowledge we are talking about, Bernstein’s (2000) distinction between “horizontal” and “vertical” discourses is very helpful. The horizontal discourse corresponds to a form of knowledge that is segmentally organized and differentiated. Usually, it is understood as everyday or commonsense knowledge; it tends to be an oral, local, context-dependent and specific, tacit, and multilayered discourse. Vertical discourse, on the other hand, takes the form of a coherent, explicit, hierarchically organized structure (as in the case of the natural sciences) or the form of a series of specialized languages with specialized modes of questioning and specialized criteria of production, validation, and circulation of texts (as in the case of the social sciences and
humanities). Of importance are each discipline’s rules of argumentation and validation. It is one thing to validate a historical fact and another to validate a fact in chemistry. While a quote can serve as evidence in the study of literature, it cannot serve as valid evidence in, say, environmental studies. Democratic processes entail debates and argumentation, and knowing the appropriate rules for such argumentation must be part of the intellectual arsenal of active citizens (Banks, 2007). The way an academic discipline is structured has implications for the way in which it is translated for pedagogic purposes. The more hierarchical a body of knowledge (for example, physics), the more likely it is that pedagogy will need to be strongly sequenced, because students need to understand what came before in order to understand what comes later (Muller, 2006). Educational knowledge is thus not simply the same as everyday knowledge located in an educational context; it has a different form. Moreover, not all educational knowledge has the same form. Bernstein (2000) goes further to conceptualize the different forms taken by knowledge in terms of different “knowledge structures.” The forms taken by knowledge in different disciplines are different, as are the ways of thinking that are typical for them (it is one thing to think mathematically and another to think historically) and as are their curricular structures.
5. “CAN THE TWO WALK TOGETHER, EXCEPT THEY BE AGREED?” (AMOS, 3:3)

As pointed out by Biesta (2013), it is reasonable to expect that increasingly more learning will take place via the Internet, so that the boundaries between school, home, and other non-school sources appear to gradually diminish. Still, school is a special place—not just one more in addition to the street, the Internet, or a sports club—having its special and unique purposes and functions that society
found necessary for the education of its subsequent generations. So, can the cultivation of skills and competencies to meet 21st century demands accomplish its mission without the more traditional functions of school? And can school today accomplish its functions without the cultivation of 21st century skills and competencies?

It is interesting to examine the findings of Arnesen (2015), a survey of more than 3,000 high school students in Norway, Sweden, and Finland. Contrary to the frequently expressed argument that school must change in order to connect with young people, the findings suggest that the vast majority of Nordic students consider school learning meaningful both for their lives outside of school and for their future careers.

The emphasis on the cultivation of skills and competencies has become more visible in the Norwegian curricula (St.meld. nr. 16, 2006-2007), compared to former curricula. Knowledge in a competency curriculum is often horizontally organized. It introduces themes, projects, and problems that do not necessarily link to each other. In other words, rather than focusing explicitly on a curriculum that progresses vertically—where new work builds on old work and becomes increasingly difficult—it organizes teaching around one theme and then moves to another that may or may not be connected in any way with the former. The ontology of competency-based learning means that each of the constituent components of the model can be considered independently. The OECD requirement for selecting key competencies was closely connected to the fact that mastery of these competencies could be measured independently of each other. According to Wheelahan (2010), generic skills either become so rooted in their immediate context that they are not transferable to other contexts or become so general that they lose their direct relevance to the specific context in which they are going to be used. Moreover, emphasis on generic skills tends to under-emphasize the domain-specific knowledge of particular areas.
Figure 1. Competency and subject-based curriculum
As an example, a teacher needs specific mathematical knowledge to use an application like GeoGebra, a freely available digital tool that allows visualization of mathematical ideas. This application, which combines geometry and algebra, can be a powerful tool in mathematics education for teachers who understand the subject matter deeply. Its application in mathematics has little to do with, for example, the ability to write a complex argument or to debate ideas with others on a social media platform in a social science lesson. These two discrete applications have only one common feature: they are produced by programming languages and displayed on an ICT platform. Nobody tries to construct a common competency by looking for similarities among other kinds of artifacts used for educational purposes, such as using a microscope in biology lessons or learning to play an instrument in music. These tools have few educational commonalities, and the differences do not disappear if we turn them into digital tools. Students need access to the disciplinary system of meaning as a condition for using knowledge in contextually specific applications. For example, students need access to mathematics as a condition
for understanding and applying particular formulas and for using these formulas in different contexts. According to Biesta (2011), much disciplinary knowledge moves from the domain of certainty (the domain of “what is”) to the domain of possibility (the domain of “what might be the case”). In contrast, everyday knowledge is particularized knowledge, because its selection and usefulness are determined by the extent to which it is relevant in a particular context. If curriculum knowledge is to be defined according to more horizontal or “open source” ideals rather than by vertical hierarchy, what will give knowledge its authority, and according to what theories and accounts will knowledge “count” as worthwhile? While it is easy to see the shortcomings of the competency-based approach, the subject-matter based approach is not without flaws itself. To many students it appears boring and irrelevant, and it fails many of them. Moreover, it does not seriously face the challenges of the 21st century age of knowledge and technology. Nevertheless, we do not have the luxury of giving up the subject-matter based curriculum, for the reasons given above. Of equal importance are the three main
purposes of school: qualification (the engagement with knowledge, skills, and dispositions), socialization (the engagement with traditions and ways of being), and subjectification (the engagement with the question of the human person as a subject of action and responsibility) (Biesta, Salomon, Arnesen, & Vavik, 2014, p. 2). Only a disciplinary-based curriculum can face these challenges in a balanced way. At this point we need to follow the distinction between curriculum and pedagogy offered by Biesta (2011). Pedagogy is the collection of ways, means, and methods whereby a socially sanctioned curriculum is implemented. Pedagogy constitutes the ways in which a discipline is translated into a program fitting the subject matter and the pupils of this or that age and location; it translates a part of a subject matter into the ways it is taught. This distinction is important, as it underlies the distinction between the subject-matter curriculum and the relative emphasis on the cultivation of skills and competencies. Pedagogy comes to serve the purposes of the curriculum, and it can do so while emphasizing the cultivation of skills and competencies as part of subject-matter learning. One learns to communicate, work in teams, construct a useful spreadsheet table, or overcome cognitive overload while studying organized and systematic contents of disciplinary origin. The application, and hence the cultivation, of skills is an integral part of the learning of chosen contents that are deemed personally and socially valuable. Skills are rarely context-free; they are—in greater part at least—context and content bound (Perkins & Salomon, 1989). Newell, Shaw, & Simon’s (1960) old idea of the “general problem solver” has not held much water: it could solve simple problems but not more complex ones that required specific knowledge. The distinction between curricular contents-to-be-learned and pedagogy that emphasizes the cultivation of skills and competencies in the service of coming to master those contents points to how the discipline and the skills approaches can
co-inhabit school. Contrary to Resnick’s (2009) position, we do not advocate a “disruptive” model whereby technology is applied in a way that creates something entirely different from what “the traditional” school is. According to Resnick, who advocates the disruptive model, primary and secondary schools should model themselves after kindergartens, with students engaging in unstructured play and collaboration, without much information delivered directly by the teacher. Rather, based on the assumption that we do not want to reject the accumulated bodies of knowledge and thereby cultivate generations of “skilled ignoramuses,” we advocate the more moderate model of “sustaining innovations,” integrating the cultivation of digital skills and competencies into the acquisition of updated disciplinary knowledge.
6. CONCLUSION

Blurring the distinction between everyday and school knowledge is supposed to make school more relevant to the demands of the 21st century, to prepare the next generation for the market demands of the Global Society, and to give a greater number of learners access to the curriculum. This is to be carried out by drawing on students’ own experiences and understandings and by emphasizing the cultivation of digital skills and competencies at the expense of the acquisition of disciplinary knowledge. Very often an attempt is made to solve school failure by linking everything to the pupil’s daily life and by treating the curriculum as an instrument for motivating learners. However, we need to remember that the institution of school was developed to serve certain unique functions that no other societal organization can accomplish. It becomes clear that it is not a good idea simply to assume that the school should adopt the characteristics of digital cultures. A school that does not address its three unique functions of qualification, socialization, and subjectification (Biesta, 2010) falls short of its mission. Focusing
on mastery of skills without any organized bodies of knowledge may do no more than train empty-vessel experts. Rather, the challenge is to look through an educational lens at what information and communication technology can offer, in order to ask what could legitimately enrich the key tasks of the school and how. Disciplinary study has undergone changes in the last 15 years. The curricula at many universities and teacher training colleges are introducing new content and tools. For social science, the analysis of the urban and regional changes brought about by information technology is an important new topic for understanding the kind of economy, culture, and society in which we live. There is a need to provide in-service training to teachers who do not yet master the kinds of skills they are expected to teach. Across science, math, language, and social studies, classroom teachers weigh in on whether they are content-driven or skills-driven in their teaching. According to Cuban (2014), the dichotomy afflicts all academic subjects. We can place ourselves near the center of this continuum, not only to focus on the content but also to teach students how to read and think like a historian, geographer, or scientist. The meaning of “skill” here is the ability and capacity acquired through deliberate, systematic, and sustained effort to carry out disciplinary and interdisciplinary activities involving ideas (cognitive skills), things (technical skills), and/or people (interpersonal skills). Consequently, this approach presupposes that ICT skills are inseparable from subject-based achievements. It seems very problematic to see ICT achievements as a discrete learning area, as if it were assumed that ICT achievement transcends individual disciplines and comprises a set of knowledge, skills, and understandings that learners can readily adapt and transfer to new contexts.
REFERENCES

Arnesen, T. (2015). Internet Access in Secondary Schools and Perseverance in Academic Work: Norway/Sweden Versus Finland. Paper presented at the 2015 Annual Meeting of the American Educational Research Association, Chicago, IL.

Bernstein, B. (2000). Pedagogy, symbolic control and identity (Rev. ed.). Lanham, MD: Rowman & Littlefield.

Biesta, G. J. J. (2010). Good education in an age of measurement: Ethics, politics, democracy. Boulder, CO: Paradigm Publishers.

Biesta, G. J. J. (2011). Experience, meaning and knowledge: A pragmatist view on knowledge and the curriculum. Paper presented at the ESRC seminar series Curriculum for the 21st Century: Theory, Policy and Practice. Seminar One: Knowledge and the Curriculum, Stirling.

Biesta, G. J. J. (2012). Giving teaching back to education: Responding to the disappearance of the teacher. Phenomenology & Practice, 6(2), 35–49.

Biesta, G. J. J., Salomon, G., Arnesen, T., & Vavik, L. (2014). Twenty first century skills vs. disciplinary studies? Paper prepared for the Norwegian project “Learning in the 21st Century”.

Buckingham, D. (2007). Beyond technology: Children’s learning in the age of digital culture. Cambridge, MA: Polity.

Cuban, L. (2014). Larry Cuban on school reform and classroom practice. Retrieved November 21, 2014, from http://larrycuban.wordpress.com/

Dede, C. (2009). Technologies that facilitate generating knowledge and possibly wisdom. Educational Researcher, 38(4), 260–263. doi:10.3102/0013189X09336672
Facer, K., & Green, H. (2007). Curriculum 2.0: Educating the digital generation. In S. Parker & S. Parker (Eds.), Unlocking innovation: Why citizens hold the key to public service reform (pp. 47–58). Retrieved from http://www.demos.co.uk/files/Unlocking%20innovation.pdf

Ferrari, A. (2012). Digital Competence in Practice: An Analysis of Frameworks. JRC Technical Reports. Institute for Prospective Technological Studies, European Union.

Ferrari, A. (2013). DIGCOMP: A Framework for Developing and Understanding Digital Competence in Europe. Report EUR 26035 EN. doi:10.2788/52966

Gee, J. P. (2003). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.

Ito, M. (2010). Hanging out, messing around, and geeking out: Kids living and learning with new media. Cambridge, MA: MIT Press.

Ito, M. (2013). Connected learning: An agenda for research and design. Retrieved from http://eprints.lse.ac.uk/48114/

Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York: New York University Press.

Lankshear, C., & Knobel, M. (Eds.). (2008). Digital literacies: Concepts, policies and practices. New York: Peter Lang.

Loveless, A., & Williamson, B. (2013). Learning identities in a digital age. London: Routledge.

MER. (2006). Program for Digital Kompetanse. Oslo: Statens Forvaltningsteneste.

Moore, A. (Ed.). (2006). Schooling, society and the curriculum. Abingdon: Routledge.
Muller, J. (2006). Differentiation and progression in the curriculum. In M. Young & J. Gamble (Eds.), Knowledge, curriculum and qualifications for South African further education. Cape Town: Human Sciences Research Council.

Newell, A., Shaw, J. C., & Simon, H. A. (1960). Report on a general problem-solving program for a computer. In Information processing: Proceedings of the international conference on information processing (pp. 256–264). Paris: UNESCO House.

NOU 2003: 16. (2003). I første rekke — Forsterket kvalitet i en grunnopplæring for alle. Retrieved from https://www.regjeringen.no/nb/dokumenter/nou-2003-16/id147077/?docId=NOU200320030016000DDDEPIS&ch=1&q=NOU%202003:%2016&redir=true&ref=search&term=NOU%202003:%2016

OECD. (2010). The Definition and Selection of Key Competencies (DeSeCo). Retrieved from www.oecd.org/pisa/35070367.pdf

Perkins, D. N. (2014). Future wise: Educating our children for a changing world. San Francisco, CA: Jossey-Bass.

Perkins, D. N., & Salomon, G. (1989). Are cognitive skills context bound? Educational Researcher, 18(1), 16–25. doi:10.3102/0013189X018001016

Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press.

Punie, Y. (2007). Learning spaces: An ICT-enabled model of future learning in the knowledge-based society. European Journal of Education, 42(2), 185–199. doi:10.1111/j.1465-3435.2007.00302.x
Reiss, M. J., & White, J. (2013). An aims-based curriculum: The significance of human flourishing for schools. London: IOE Press.

Resnick, M. (2009, May 27). Kindergarten is the model for lifelong learning: Let’s keep teaching creativity throughout school and adulthood. Edutopia. Retrieved from http://www.edutopia.org/kindergarten-creativity-collaboration-lifelong-learning

Sanger, L. (2010). Individual knowledge in the Internet age. EDUCAUSE Review, (45), 14–24. Retrieved from http://www.educause.edu/ero/article/individual-knowledge-internet-age

Sefton-Green, J., Nixon, H., & Erstad, O. (2009). Reviewing approaches and perspectives on “Digital Literacy”. Pedagogies, 4(2), 107–125. doi:10.1080/15544800902741556

Selander, S. (2008). Designs for learning and ludic engagement. Digital Creativity, 19(3), 145–152. doi:10.1080/14626260802312673

St.meld. nr. 16 (2006-2007). - og ingen sto igjen…: Tidlig innsats for livslang læring. Retrieved from https://www.regjeringen.no/nb/dokumenter/stmeld-nr-16-2006-2007-/id441395/

Trier, U. P. (2003). Twelve countries contributing to DeSeCo: A summary report. In D. S. Rychen, L. H. Salganik, & M. E. McLaughlin (Eds.), Contributions to the Second DeSeCo Symposium: Definition and selection of key competencies. Retrieved from http://www.deseco.admin.ch/bfs/deseco/en/index/02.parsys.26255.downloadList.54824.DownloadFile.tmp/2003.symposiumvolume.pdf
Ulbrich, F., Jahnke, I., & Mårtensson, P. (2011). Special issue on knowledge development and the net generation. International Journal of Sociotechnology and Knowledge Development.

Wheelahan, L. (2010). Why knowledge matters in curriculum. New York, NY: Routledge.

Young, M., & Lambert, D. (2014). Knowledge and the future school: Curriculum and social justice. London: Bloomsbury Academic.

Young, M., & Muller, J. (2010). Three educational scenarios for the future: Lessons from the sociology of knowledge. European Journal of Education, 45(1), 11–27. doi:10.1111/j.1465-3435.2009.01413.x
ENDNOTE
1. The following organizations and individuals participated in making this an important policy document for education: Time Warner Foundation, Apple Computer, Inc., Cable in the Classroom, Cisco Systems, Inc., Dell Computer Corporation, Microsoft Corporation, National Education Association, SAP.
Chapter 2
Digital Competence: A Net of Literacies

Edith Avni, Toward Digital Ethics Initiative, Israel
Abraham Rotem, Toward Digital Ethics Initiative, Israel
ABSTRACT

This chapter presents a proposal for a conceptual framework of digital competence, which is a civil right and need and is vital for appropriate, intelligent study and functioning in the real world, through the means that technology and the internet offer the citizen. Digital competence in the 2010s is a multifaceted complex of a net of literacies that have been updated, reformulated, and transformed under the influence of technology. The framework of digital competence includes eight fields of digital literacies. At the top of the net is digital ethics literacy, which outlines the moral core for proper use of technology; at the base are technological literacy and digital reading and writing literacy, comprising the foundation and interface for all the digital literacies; and in between are the digital literacies in these fields: information literacy, digital visual literacy, new media literacy, communication and collaboration literacy, and social media literacy. These interconnected literacies compose the synergetic complex of the digital competence framework.
INTRODUCTION

Digital competence (hereafter Dcom) is a right and necessity of humans and citizens, fundamental for proper, intelligent functioning in the real world of the 2010s using technological means. ICT, which has become an almost inseparable part of every aspect of our lives, has vastly changed
the ways we communicate, use language, information and knowledge, think and solve problems, work, consume, and relate to culture and leisure. As a result, the knowledge and skills required by every citizen, student and graduate of the education system for coping with daily needs, functioning optimally in society and the labor market, and surviving in the competitive world, have changed. Dcom also comprises a key to suitable scholastic
DOI: 10.4018/978-1-4666-9441-5.ch002
skills in the K-12 educational system as well as a basis for acquiring an education and for continual learning and development throughout life. Dcom is a complex, multifaceted capability built on a set of traditional, familiar literacies that technology has tinted with digital features, reshaping the current character of literacies and adding new skills that were nonexistent in the pre-digital age. These digital literacies include knowledge, performance skills and high-level thinking skills, viewpoints, and values, for which the common denominator is intelligent, efficient application by digital means and on the internet, in accordance with the needs of the student and citizen in the 2010s. Dcom refers to almost every aspect of modern human functioning, the intrapersonal aspects alongside the interpersonal ones (Pellegrino & Hilton, 2012). In each of the literacies, functioning moves along a continuum of personal and collective-social context, in which one can discern a rising trend toward shared-social use over time. The abilities emphasizing more personal functioning are primarily the technological ones: digital reading and writing, information, and visual literacies. The abilities emphasizing more shared-social functioning are the communication and collaboration, new media, and social media literacies. The importance of digital competence to digital citizenry, as an answer to the contemporary needs of society, the business world, and the educational realm, raises the need for a modernized conceptualization of its framework. Over the years, corresponding to the incorporation of ICT and socialization in routine life, a variety of digital abilities have been characterized and conceptualized, focusing primarily on technological starting points, while the essence of the abilities required by the student and citizen is based on traditional literacies that have been refashioned in the technological context. Limitations of these former conceptualizations include focusing on narrower aspects rather than the broad, comprehensive
complex of required skills, as well as focusing on technology itself without sufficiently dealing with appropriate, astute implementation for the benefit of humanity and society. Because of the rapid development of technological means, these limitations have led to considerable difficulty in updating required skills in accordance with lifestyle in the digital age. The conceptual framework proposed here has been constructed with the understanding that digital competence and scholastic qualifications have become integrated, interwoven, immanent essences. In effect, there is no meaning or relevance today to learning skills that stand alone outside the context of technology. For the past decade many countries have seen the necessity of giving priority to providing their young learners and older citizens with digital competence. Initiatives for encouraging digital literacy have been increasing, among them those of government bodies in the United States (National Telecommunications and Information Administration, 2014) and in Europe (The European Commission, 2014; Measuring Digital Skills across the EU, 2014), as well as work on the related aspects of schools, education systems, and teaching (Fraillon, 2014). One such outstanding project is Education 2020 (Cross, Hamilton, Plested, & Rez, 2013), an initiative for young learners, which has been adopted by many countries in the belief that educational systems should not only accept the necessary changes but should lead them as an ecological and synergetic system, in conjunction with communities and study networks. The goal of constructing a conceptual framework of updated digital competence for the 2010s is to analyze and present the essence, terms, and content of the abilities needed for complex, multifaceted literacy functioning in the digital age. Such a defined, comprehensive framework will aid in raising public and educational awareness of the topic. We can achieve this by using clear anchors, a common, agreed-upon language, and a better understanding of the fundamental principles
to serve the public and educational discourse with all the relevant factors and components of knowledge literacy, which comprise a basis for setting out a policy for acquisition and implementation of the digital literacies and their integration in educational work. Construction of an updated Dcom framework necessitates both a critical attitude and intellectual humility with regard to its components and content, owing to awareness of the uncertainty of innovation, change, and dynamic development of the insights involved therein. We must admit that we do not yet actually know how a young person, born into and growing up in the digital age, experiences reality, both cognitively and emotionally, whether by use of devices attached to his/her body or through the digital glass, as a metaphor for the digital landscape. Nor do we know how an older person or “digital immigrant” (Prensky, 2001), in the digital environment and without human support, will experience transparency, accessibility, loss of privacy, instant communication, use of web applications and software, deciphering of digital messaging, and so on. Thus, we must be open to reception of new knowledge and skills and abandonment of those less relevant or important for today’s world. Acquiring digital competence is a dynamic, developing process lasting a lifetime. The challenge of implementing it for both students and older citizens calls for developing, spiraling learning, classifying the target communities into levels and emphasizing what is relevant for each. In the educational system, the definition of a conceptual framework offers merging of the dimensions of digital literacy with innovative pedagogy and technological learning environments, as appropriate for the scholastic range and the characteristics of the disciplines. Dcom, in the same way that it is implemented in the daily routine of life, is integrated into a synergetic system that makes significant, authentic, scholastic progress possible for all learners. This is done in up-to-date study
tracks conducted in a variety of ways throughout the day, wherever the student is to be found, as part of managing his/her life in the digital age.
BACKGROUND: FROM LITERACY TO DIGITAL COMPETENCE

The abilities required for optimal functioning of the individual in society change and evolve according to the period and culture in which he/she lives. The concept of literacy has changed and developed over the years, and today, in the 2010s, it refers to the complex, multi-dimensional facets of competence required for a lifestyle incorporating digital means. In order to clarify the framework of digital competence, we will distinguish among these concepts: literacy, digital literacy, and digital competence.
Literacy

In times past, the concept of literacy related to the ability required of a person to receive, decipher, create, and disseminate messages of various types, as was customary in the culture and period in which he or she lived. From this traditional perception of literacy, as the command of basic skills like reading and writing, the concept of literacy has broadened and now refers to an “[…] expanding set of knowledge, skills and strategies that individuals build on through life in various contexts, through interaction with their peers and the wider community” (OECD, 2013, p. 9). In a wider meaning, “The term literacy implies a universal need, a condition that must be met to enable full and equitable economic and social participation. We view literacy as a tool that may be applied to simple or more complicated contexts […]. In its broadest sense, literacy is a dynamic tool that allows individuals to continuously learn and grow.” (National Research Council, 1999, p. 16).
Digital Literacy

Integration of ICT in daily life has led to the transformation of the concept from “literacy” to “digital literacy”. Earlier, narrower definitions of digital literacy stressed the technological aspects, such as: Computer Information Literacy – CIL (Fraillon & Ainley, 2013), Information and Communication Literacy – ICT (Partnership for 21st Century Skills, 2014a), as well as other concepts such as cyber literacy, electronic literacy, new online social literacy, internet literacy, technological literacy, and mobile literacy (Lee, Lau, Carbo, & Gendina, 2013). Having digital literacy requires more than just the ability to use software or to operate a digital device; it includes a large variety of complex skills such as cognitive, motoric, sociological, and emotional skills that users must master in order to use digital environments effectively (Eshet, 2012). A more comprehensive definition is found in reference to “New Media” literacy (Ito, Horst, Bittanti, Boyd, Herr-Stephenson, Lang et al., 2008; Manovich, 2001), in the sense of access from every digital device, at any time and place, to any platform of content and information, and interactive and creative participation in information in cyberspace. Today, digital literacy (2014) comprises a broader perception of the capabilities required by the digital citizen or student, in a reality in which technology shapes conduct in the life of the individual and society. The distinctions among the various fields of digital literacy have become vague because many aspects of digital functioning interface on the screen. The literacies form a net and are no longer separate, but are components of an umbrella of interconnected and overlapping literacies that, together, comprise a freestanding field (UNESCO, 2013). Digital literacy is no longer seen as a technology-focused literacy; rather, technology constitutes a
common denominator for a broad net of multifaceted, pluralistic, interconnected, contextual, and dynamic abilities that create a new synergy among themselves (Lee et al., 2013). A comprehensive description of digital literacy is also offered by Hall, Atkins, & Fraser (2014): “Digital Literacy refers to the skills, attitudes, and knowledge required by educators to support learning in a digitally rich world. To be digitally literate, educators must be able to use technology to enhance and transform classroom practices, and to enrich their own professional development and identity. The digitally literate educator will be able to think critically about why, how, and when technology supplements learning and teaching”. While the digital student copes with an explosion of “information” (data and facts in the public domain), the main issue is constructing “knowledge,” which refers to what we know, how we personally understand it, and how we can apply it.
Digital Competence

Dcom expands the concept of digital literacy into a multidimensional framework of capabilities, which comprises a comprehensive, integrated approach to the essence of the citizen’s and student’s character in the 21st century and defines the conditions for his or her optimal functioning in society and culture. The terminology of “competence” is far more extensive than the skills and strategies entailed in a specific literacy. Competence expresses the digital citizen’s ability to successfully meet crucial requirements of life in the 21st century in a personal, proactive manner. These complex challenges and tasks, in their ever-changing contexts, require one to recruit and design cognitive and psychological resources (OECD, 2002, 2005). The cardinal place of technology in these aspects
requires the formation of an up-to-date framework in which technology is interwoven in all the components of the net of literacies. Dcom is defined by The European Parliament and The Council of the EU (2006) as one of the eight crucial qualifications for learning throughout life and participating in the information society. The wide-ranging nature of Dcom described in the document involves “The confident and critical use of Information Society Technology (IST) for work, leisure and communication. It is underpinned by basic skills in ICT: the use of computers to retrieve, access, store, produce, present, and exchange information, and to communicate and participate in collaborative networks via the Internet.” In the report “Digital Competence in Practice: An Analysis of Frameworks,” Ferrari (2012) expands on the content of Dcom: “Digital Competence is the set of knowledge, skills, and attitudes (thus including abilities, strategies, values, and awareness) that are required when using ICT and digital media to perform tasks; solve problems; communicate; manage information; collaborate; create and share content; and build knowledge effectively, efficiently, appropriately, critically, creatively, autonomously, flexibly, ethically, reflectively for work, leisure, participation, learning, socializing, consuming, and empowerment.” Dcom should thus be viewed as a whole complex of abilities that are part of the skills of the 21st century, as a key to the skills required by every graduate of the educational system and every digital citizen. Ferrari (2013) states in “A Framework for Developing and Understanding Digital Competence in Europe”: “Digital Competence can be broadly defined as the confident, critical, and creative use of ICT to achieve goals related to work, employability, learning, leisure, inclusion, and/or participation in society. Digital competence is a transversal key competence, which, as such, enables us to acquire other key competencies (e.g., language, mathematics, learning to learn, cultural awareness). It is
related to many of the 21st Century skills, which should be acquired by all citizens, to ensure their active participation in society and the economy.” In Ferrari’s report, a framework of Dcom is proposed that refers to the following fields: information, communication and collaboration, content and creation, security, and problem solving. The last of these includes identifying digital needs and resources, reaching intelligent decisions about adapting tools to a goal or need, and finding solutions by digital means through creative use of technology, such as solving technical problems. The importance of Dcom lies not merely in its capabilities for multi-faceted literacy functioning appropriate to society and culture in real life, but also in nurturing and training active, critical, young learners who, as they mature, will be aware of the power of this competence and will see it as an important means of expression and empowerment, of acquisition of knowledge, of development and consolidation of personal and social-cultural identity, and of finding suitable employment opportunities and advancement in life. The proposed framework of Dcom comprises a net of literacies based on updating the literacies valid for every period and context, with the addition of the technological layer, which shapes them as digital literacies and expands the range of abilities with new capabilities derived from the use of technology.
Ethics as the Core of Digital Competence

Dcom is not only an ability, but also comprises a modern-world view and civil awareness; therefore, at the core of the net of literacies lie the ethics of the digital citizen. The change generated by ICT is not only a technological revolution, but also a social and ethical change that far exceeds one new technology or another (Bynum & Rogerson, 2004). Digital culture is global; its community is composed of
digital citizens; it is fashioned as a complex tapestry of technology, its producers and users, its methods of use, and its social context. The concept “digital citizen” is a modernized adaptation of the classical concept of “citizen” in the reality of the digital age. “A digital citizen refers to a person utilizing information technology in order to engage in society, politics, and government participation.” (Mossberger, Tolbert & McNeal, 2011). Proper citizenship in the society of information and integration in the social communications culture, in which online presence has become an indivisible part of life, dictates ethics as a common denominator that lies at the base of the competences required in order to function in the digital age. Ribble (2014), in his proposal for an outline of the competences required of the digital citizen, emphasized the ethics of REP (Respect, Educate, and Protect) as the core of values and as a starting-off point for the digital citizen. Dcom is manifested through ethical functioning, both personal and social, of the digital citizen. Included among the personal aspects of the ethical use of technology are the values of rectitude, equity, responsibility, credibility, protection of intellectual property, and privacy policy. The interpersonal aspects relate to respect for others, sharing, tolerance, and protection of rights such as freedom of expression, as well as values of global ethics such as social responsibility and social conduct that contributes to shared communities. These interpersonal core values are also the moral base for functioning in the global, pluralistic world, in which there are encounters that cross various social and cultural borders, requiring inclusiveness, openness to listening, and tolerance of a variety of opinions and voices. The importance of ethics increases with the expansion of collaborative communities, which break down traditional boundaries and norms, and with
the increase in the perception of global sharing and the wisdom of the crowd as a social value and economic asset. Dcom, in this sense, is not a goal in itself but is rather a means of nurturing active citizenship. Ethics in Dcom is unavoidable, because both the adult citizen and the young student cope with a variety of ethical issues arising from the use of technology, issues that require ethical knowledge and the ability to make moral judgments. On the digital plane, in which activity is interactive, public, shared, and social, there are many dilemmas, such as property rights and copyrights as opposed to sharing and freedom of information, and freedom of expression as opposed to protection of privacy. The need to strengthen the crucial role of digital ethics also arises from occurrences of negative phenomena entailing exploitation of opportunities that technology presents, among them cybercrime and cyberbullying; these must be eradicated in order to protect a society functioning according to rules and norms of conduct for the benefit of its citizens (Avni, 2012; Avni & Rotem, 2009, 2010; Rotem & Avni, 2008, 2010, 2011, 2012). An ethical-educational perception views ethical literacy as a starting-off point for digital competence and as a transformative means with potential for effecting positive social change and advancing a free, egalitarian, just society. In the net of literacies of the framework of digital competence proposed here, ethical literacy constitutes “super-literacy,” which is both situated above the net of literacies and interwoven among its components. Dcom can be defined as: A human and civil need and right, vital for the intelligent, appropriate functioning of the personal, social, and professional aspects in the lives of all the citizens in society in the real world, by means offered in the current era by modern technology.
CONCEPTUALIZATION OF DIGITAL COMPETENCE IN THE NET OF LITERACIES

General Description

Conceptualization of the digital competence currently required for the student and graduate of the educational system is vital for building a basis of knowledge for educational-public discourse, for defining updated goals of education and study, and for characterizing ways of implementing them. In accordance with the above survey and reasoning, a conceptual framework of digital competence is proposed here, in which technology gives modern meaning to the net of literacies needed by the digital citizen and student in the 2010s.
The digital competency framework proposed in this article encompasses eight realms of digital literacies, organized according to the following structure (Fig. 1):

1. Literacy of digital ethics, positioned at the top of digital competence and enveloping the net of digital literacies, in which ethics comprises the crucial component of perceptions, viewpoints, and values, essential for suitable application of technology by humans in every dimension of digital competence.
2. Technological literacy and digital reading and writing literacy, comprising a functional basis and interface for all the digital literacies included in digital competency.
Figure 1. Framework of digital competence in the 2010s
3. Fields of Digital Literacies: Information literacy, visual literacy, new media literacy, communication and collaboration literacy, and social media literacy.

Premises for selecting areas of digital literacy:

1. Inclusion of a complex of abilities needed by the digital citizen and student, required for optimal, suitable functioning in life in the age of ICT.
2. Modernization and adaptation of the classical, familiar, pedagogically proven literacies of former times, for the relevant needs and emphases of modern digital literacy.
3. Addition of new aspects of knowledge, skills, and values arising from new technological means and uses.
Digital Ethics Literacy

The literacy of digital ethics is the moral-behavioral base of technological use, and is integrated as an immanent part of each one of the other digital literacies.
Digital Ethics Literacy: Background

Digital ethics is a new field derived from an overall view of ethics as an essence and a moral foundation of human society since time began. Characteristics such as accessibility, transparency, interactivity, publicity, and sharing raise new ethical issues. Principles like freedom of information, freedom of expression, perception of privacy, property, and so forth have become part of the digital citizen’s routine in daily life. Digital ethics relates to ethical, responsible, cautious behavior in the digital plane, based on knowledge of the law, moral values, observation of the rules, awareness of dangers on the internet, and protection of rights. Alongside the benefits that digital means bring the citizen, they also cast him or her into a whirlwind of uncertainty and confusion, set traps
in his or her path and expose him or her to injury. Society has not yet found effective ways to deal with these ethical challenges, which cast doubt on the perception of civil and public rights. Thus, it is important to raise awareness of the proper use of technology and to avoid dangers by developing the ability to guard against other people’s malicious intentions that might damage a person’s body, dignity, reputation, or property. The foundation of the literacy of digital ethics is ethical literacy. Ethical literacy is the personal ability of the individual to operate in the routine of his or her life with a conscious, moral standpoint, and to reach ethical decisions made in consideration of his or her obligations both to society and to himself or herself (Avni & Rotem, 2011). The literacy of digital ethics is the interface between digital literacy and ethical literacy. It includes a complex of abilities for the use and exploitation of digital means for the benefit of the individual, while relating morally to a variety of ethical aspects and issues. It is based on acquired knowledge that refers to protection of human dignity in every context, against the background of the characteristics of the digital age. Among the central issues are: the right to privacy, the right to a person’s control over information about himself or herself, his or her decisions, and his or her personal space vis-à-vis freedom of information and freedom of expression (Birenhak, 2010); and protection of intellectual property with respect to the principle of the public’s right to derive pleasure from works that are part of its heritage and cultural world. The citizen possessing literacy of digital ethics can reach ethical decisions on the basis of moral judgment and critical thinking, through deliberation on a variety of possible options in a given context; his or her ethical behavior is manifested in digital spaces. A citizen’s digital competence, therefore, must include the main components of digital ethics. In British Columbia’s Digital Literacy Framework report of the American Library Association (2013), the components of the literacy of digital ethics are
detailed, providing a full picture: internet safety, privacy and security, digital relationships and communication, cyberbullying, digital footprint and reputation, creative credit and copyright, legal and ethical aspects, and understanding and awareness of the role of ICT in society.
Digital Ethics Literacy: Definition

The ability of a person to operate in a proper, ethical manner through digital means and on the internet, through morally dealing with ethical aspects and issues entailed in the use of technology, and the ability to protect him/herself and other internet users from traps and improper use (Avni & Rotem, 2011).

Abilities Included in Digital Ethics Literacy

• Awareness: The ability to comprehend human, social, and cultural issues concerned with use of technology for better or worse, and acquaintance with principles of legal and ethical behavior within the digital environment.
• Online Presence: Management of the digital presence and fingerprints the user leaves, coming from awareness of his or her true, personal identity and consideration regarding the way to represent himself or herself online. This includes understanding of the importance of a personal label and the ability to have online influence.
• Technological Means: Proper, personal use of technological means, apparatuses, programs, applications, and the internet, while guarding the principles of the dignity of the individual and his or her property. For this purpose, technical tools and methods are necessary for securing information on one’s personal computer and in the virtual, public space.
• Intellectual Property: Comprehension of the value of guarding one’s intellectual property and copyright. Knowledge of laws and rules for protecting copyrights on the web, and of regulations for use and permission for use of materials belonging to others on the web.
• Privacy: Understanding the principle of the right to privacy and of the person’s right to control over his or her personal information and personal space with regard to the right to freedom of expression. Making decisions regarding sharing of personal information, respecting the privacy of another when publishing information about him or her on the web, avoiding exposure of personal information of people that one does not know, activating mechanisms for guarding privacy and securing information, and familiarity with mechanisms and rules for securing information.
• Connections: Developing sensitivity to methods of interpersonal communication on the web. Having the ability to distinguish between a positive connection or relationship and an inappropriate one on the web. Recognition of the internet as a global space; consciousness of respect for different cultures and inclusion of the other.
• Harm: Awareness of cyberbullying and the causes of its increase and severity, such as anonymity and mass distribution. Avoidance of deliberate, malicious injury caused by elements such as incitement, racism, and distribution of hateful messages; opposition to distribution of injurious information, either personal or public.
• Protection: Familiarity with methods of protection against harm on the internet. Ability to identify and know how to defend oneself against dangerous or injurious situations on the web, including impersonation, identity theft, bullying, incitement, phishing, addiction, and unpleasant or threatening situations; avoidance of discourse with and conveying personal details to unknown people; recognizing insecure environments; and using help channels and mechanisms for personal and technical reporting when discerning injury, threat, or danger.
• Collaboration: Understanding the value of sharing, being part of an online community, and contributing to collective wisdom, while being aware of the potential risks of sharing information.
Technological Literacy

Technological Literacy: Background

Technological literacy is an updated extension of computer skills required for intelligent operation of computer technology, information, and communication for personal use and for full participation in the global, digital society. This ability constitutes a crucial foundation for wise application of the other digital literacies included in Dcom. Technological literacy includes use of technological aids – tools, services, applications, and communication – for personal, academic, professional, and social needs. These updated, technological competences are vital in order to link the personal world of every citizen and student – which is implemented today through a personal computer or other device – to social conduct and to a variety of aspects of the reality of online life. It is highly important for the digital citizen to be able to manage him or herself through use of technology for personal needs, for professional development and acquiring an education, and for his or her overall social needs. Technological literacy is based on three principles (Media Smarts, 2010):

1. Use: Necessary knowledge and skills for using a variety of digital applications and the ability to constantly adapt to relevant, new, frequently updated technological tools and to learn to use them by making them compatible with the user’s needs in his or her work environment.
2. Critical Understanding: Comprehension of the effective, proper use of digital means, among them digital media and information, by acquaintance with the consequences of use of technology on the perceptions, beliefs, and emotions of a person, on the environment, and on the personal health of the user, and taking actions leading to minimization of problems arising from them.
3. Production: The knowledge and ability to produce content and communicate via digital technology as an active consumer and producer in digital society.

Technological Literacy: Definition

The attitude and ability to properly and effectively use digital technology in daily use as needed. Technological literacy includes: accessibility to technological means; selection of means in accordance with needs; technological operation by acquaintance with the basic principles of actions and functions; ability to constantly learn and adapt to new means by adjusting to changing needs; intelligent, proper use including development of awareness of the consequences of use of technology on the environment and on health, and acquaintance with methods and tools to minimize their potential harm.

Abilities Included in Technological Literacy

• Accessibility: Access to computer hardware and personal digital devices; knowing how to use resources such as programs and wideband services.
• Acquaintance and Use: Familiarity with the principles of current, common human-computer interfacing. Knowledge and skills of the use and operation of devices, applications, programs, tools, services, browsers, means of communication by smartphones and internet, and auxiliary applications; differentiating between hardware and software. Management of applications, operational set-ups and systems, installing programs, program updates, means of protection of hardware and software, restoration and backup.
• Management and Ensuring Security of Personal Information: Management of personal information, files, data, and personal online space using technological means and services that are compatible with the user’s needs and his or her work environment. Intelligent, secure storage of digital information, enabling easy, accessible retrieval. Conducting backup in cloud computing and personal devices. Awareness of security, management of passwords and permission to access, including restoring access permission. Use of devices preventing illegal entry into one’s computer and eliminating viruses.
• Digital Communication: Acquaintance with and use of means of communication in various media, including tools and services for storage and collaboration.
• Cloud Computing: Use of clouds of services, apps, tools, and information on the web, without dependence on personal digital devices.
• Personalizing and Interfacing: Operating a personally adapted technological infrastructure for personal needs. Producing interfaces and synchronization of information among personal, digital means, computer, tablet, smartphone, etc.
• Navigation: Intelligent, critical navigation among the multitudes of tasks and variety of digital tools.
• Production: Processing, creating, and producing information products, wisely choosing the appropriate digital means.
• Health and Ergonomics:
◦ Minimizing Electromagnetic Radiation: Consciousness of the consequences of use of technology on the environment and one’s personal health, identification of possible risks and dangers, and taking steps to minimize them, with emphasis on usage habits that minimize harm caused by cordless electromagnetic radiation and by the electric network (ELF).
◦ Suitability of Furniture and Lighting Fixtures: Making furniture and lighting suitable to the work environment according to conventional recommendations for ways to minimize physical damage, such as placement of the computer, height, distance, support, etc.
◦ Physical Awareness: Taking care to position one’s hands correctly, avoidance of unnatural posture and of lack of physical exercise for extended periods of time.
◦ Avoidance of Visual Overload: Awareness of the danger of damage to eyesight caused by prolonged focusing on the computer screen; importance of frequent blinking and resting one’s eyes.
• Ethics: Awareness of the implications of the use of technology on personal and social ethical issues, with emphasis in the field of civil rights and protection.
Digital Reading and Writing Literacy

Digital Reading and Writing Literacy: Background

In times past, the concept of "literacy" was a synonym for the basic meaning of reading and writing; at a later stage, it also included understanding and deciphering written messages in local, cultural contexts. As a central condition for optimal learning and functioning as a citizen in modern society, even before the age of digital technology, the concept was present in every educational aspect and in discourse dealing with civil welfare (Street, 1984). Digital means have expanded the traditional concept of literacy as relating to a printed or handwritten text; it now relates to dynamic, multimedia, digital texts, by means of which humans absorb knowledge, produce, share, and communicate (Rotem & Peled, 2008). As opposed to written or printed text, digital text appears on a screen as organized units of text connected by links branching out to one another and to and from other resources of information and digital media. Digital text is interactive and allows for constant editing and updating; it is published in a variety of digital environments, among them email, text messages, websites, discussion groups, social networks, smartphones, and digital books.

Digital reading and writing literacy, as a foundation stone of the digital competence of every digital student and citizen, requires activation of different abilities and skills than those needed for traditional reading and writing, and includes collective, design, and ergonomic challenges that are unique to the digital text. The reader is active and shapes the experience of interactive reading through reasonable selection of the method for navigating a text and through use of a variety of digital aids such as dictionaries, in-text search functions, and information resources. Gathering information from digital text requires scanning large quantities of material and instantaneously evaluating its credibility; thus, critical thinking has become an important component of reading literacy. Because the distinction between the receiver and the producer of digital text is often unclear, it frequently happens that the reader also becomes a writer. The digital writing process requires familiarity with a variety of tools and applications, including use of the keyboard and means of writing, processing, editing, and producing a text in various forms, applications, and formats (OECD, 2013).

Digital reading and writing are also a process entailed in human connection and in the social and cultural discourse that takes place in an increasing diversity of social platforms, on which interpersonal or public messages are exchanged. Social interaction requires dealing with legal and ethical aspects, including protection of privacy and freedom of expression. The importance of this literacy is described by the European Commission (2001) as being the "key to all areas of education and beyond, facilitating participation in the wider context of lifelong learning and contributing to individuals' social integration and personal development." Reading and writing literacy is vital as a foundation for studying and learning in the framework of the educational system; it comprises a crucial condition for successful functioning in most fields of life.
Digital Reading and Writing Literacy: Definition

The ability required to read, decipher, write, and produce interactive, linked, multimedia, digital text effectively, characterized by a variety of representations and designs, including decentralization and sharing.
Abilities Included in Digital Reading and Writing Literacy

• Characterization of Text: Familiarity with the attributes of both traditional and digital texts; recognizing the contexts in which it is appropriate to use the different types and levels of text.
• Accessibility: The ability to create text on different platforms such as a computer screen, smartphone display, tablet, website, application, message board, social network, or digital book.
• Navigation: Navigating among units of linked, digital texts that branch out through the information-rich space; using discretion regarding the proper track for reading purposes and critical consideration of suitability to the reader's needs.
• Analysis, Deciphering and Comprehension: The ability to extract information from a text and research a text by using designated means and applications.
• Writing and Editing: Acquaintance with means of digital writing in the required language, among them using the keyboard, touch and voice activation, copying and editing text, adding objects, creating links, updating, and proofreading.
• Documentation and Management: Documenting and managing digital text by means of storing, saving, securing, keeping records of different versions, restoring, following up on changes, and comparing.
• Creation and Production: Creating a linked, interactive, multimedia digital text.
• Design: Designing a digital text appropriate for the writing goals, target community, and platform, using a variety of editing and designing methods.
• Textual Representation: Selection of different modes of presenting the text and converting it to the various forms appropriate for the goals of presenting information, the target community, and the tools available to the presenter.
• Production and Publication: Producing and publicizing by various means and digital environments, in accordance with the writing goals, the recipients, and different publication platforms.
• Ergonomics and Accessibility: Dealing with the ergonomic aspects of use and study of the text, including provision of accessibility for special needs.
• Collaboration: Familiarity with the characteristics of shared writing and awareness of the implications of involvement in writing and editing in a decentralized, shared environment; dealing with norms and conventions of writing on the web.
• Ethics: Awareness and comprehension of the legal and ethical aspects relating to writing and publicizing on personal and social platforms, such as protection of privacy, freedom of expression, and fair use of network sources.
Information Literacy

Information Literacy: Background

Information, a key resource for people, society, and the economy, has become accessible to all through the internet. The citizen has become not only a consumer but also a producer and distributor of information, and shared spaces allow information sharing and the construction of collective knowledge. Coping with information is a vital asset for the personal advancement of every citizen, for the development of social and cultural resources, and for professional and economic achievement. Compared to traditional processes of accumulating edited, arranged, controlled knowledge, the current information-rich environment requires a complex, astute ability to use quantities of knowledge far greater than those available to the previous generation in a proper, critical, efficient manner, in order to derive benefit, reach decisions, or solve problems.

A decade ago, UNESCO, in its Alexandria Proclamation (2005), proclaimed that information literacy is a vital precondition for efficient participation in the information society, one of the basic human rights, and a means to reduce inequality among people. The organization defined information literacy as a key to enabling effective access to the production and transmission of content and to its use for economic, educational, and medical development as well as all other aspects of modern society (Horton, 2013). In the "Nine components of digital citizenship" (Ribble, 2014), aspects of information literacy are emphasized in the following fields: digital commerce, digital communication, and digital access.

The roots of information literacy are found in the previous era of computer technology (Zurkowski, 1974). Reference is made to an assortment of skills required for efficient use of information from different sources: to identify, locate, retrieve, store, organize, edit, analyze, process, distribute, and publicize information, in accordance with the purposes of the use of information. The customary, widely quoted definition of information literacy is that of the American Library Association – ALA (1989): "the ability to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information". Information literacy constitutes an approach that strives to develop an autonomous learner who is self-directed in his or her studies, who knows how to construct knowledge for him/herself, to think critically, and to solve problems. It occupies a central place in the definitions of "21st century skills" (Partnership for 21st Century Skills, 2014b) and in the framework of digital competence (Ferrari, 2013). Different versions of standards and characteristics have been developed for it, based on the principles set by the American Library Association – ALA (2014): Know, Access, Evaluate, Use, and Ethical/Legal. Information literacy is a basic academic ability: people who are information literate are those who have learned how to learn. Learning in an information-rich, technological environment forms the set of critical skills for optimal learning and the dimensions of information literacy necessary to cope with the new characteristics of information.
Information Literacy: Definition

The ability to recognize when information is needed and to effectively locate, evaluate, and use the needed information.
Abilities Included in Information Literacy

• Accessibility to Information: The ability to obtain and consume digital information and participate in its transmission on the internet.
• Identification and Definition of Need: Defining the question, problem, or issue and focusing on it by surveying the information.
• Presentation of Information: Acquaintance with various formats for presenting information, in accordance with the need (e.g., textual, visual, vocal, audiovisual, artistic, or scientific information) and manner of usage.
• Sources of Information: Familiarity with a variety of information resources, among them different types of information and different repositories of information sources; distinguishing among them according to their features, purposes, and target audiences.
• Search Methods: Familiarity with and selection of methods and means to find and retrieve information according to the information required and the character of the sources:
  ◦ Identification of key features and formats for the purpose of focusing and filtering the process of finding information.
  ◦ Familiarity with mechanisms for finding information such as navigation bars, hyperlinks, site guides, and internal search engines.
  ◦ Filtering out and focusing on desired information via search engines by defining relevant terms and operators.
  ◦ Help from human services and/or digital agents to find information.
  ◦ Gathering data with digital tools for preliminary inquiry, data collection, and building databases.
• Evaluation of Information: Critically evaluating the quality of the information, its relevance, credibility, and scope, according to the purpose and goal, and by means of comparison and cross-checking with additional sources.
• Information Management: Managing personal and shared information with designated tools and methods.
• Processing and Analyzing of Information: Use of methods and means for comparing, merging, processing, editing, and analyzing information in order to reach conclusions, make decisions, find solutions, and produce a suitable information product.
• Creation of a Product: Consolidating, constructing, and presenting an edited, designed information product in accordance with the goal, character of the information, target community, circumstances, and platform for presentation, through operation of suitable digital means.
• Presentation and Publication of Information: Presenting the information product clearly and in a style suitable for and supportive of the target community; publicizing it via digital means suitable to the purpose, characteristics of the information, target community, circumstances, and tools available to the user, by implementing effective presentation skills and methods to activate the target community.
• Distribution of Information: Distributing digital information to the intended target communities and relevant platforms by using distribution means and advertising strategies, branding, image, market analysis, and market promotion.
• Ethics: Awareness of, comprehension of, and attention to the responsibility to follow laws and rules of ethics regarding access to and use of information, such as intellectual property, copyrights, fair use of open or free sources, freedom of expression, privacy, and information security.
Digital Visual Literacy

Digital Visual Literacy: Background

Since the dawn of human society, visual representations have served as a means for transmitting information, messages, and meanings. Digital means have expanded the possibilities of traditional, written texts into an updated language and culture, including a dynamic, representative variety of data, information, and messages. Infographics, as a general title for the visual representation and illustration of information, was largely in the possession of advertising people and graphic artists and was intended primarily for conveying commercial themes. With the rising use of digital, visual media, it has become accessible to all. Visual culture, which enables anyone to produce a digital, visual text with its diversity of representations and platforms, has become common property (Morgan & Van Dam, 2008).

Visual images in the digital environment play a major part in a variety of personal, professional, and social fields. Daily experiences, occasions, and events are conveyed by many media via digital photos, graphics, and video clips. Advertisements, games, computer applications, smartphones, user interfaces, operational instructions for electronic and other devices, digital content, and computerized media environments are currently presented through visual images. The availability of powerful design tools enables every individual to easily create designed graphic information in a variety of ways and means of expression. Visual representations have the power to convey ethical, humane, and social messages, and to express ideas, feelings, values, and worldviews.

The growing use of visual components in digital text has increased the need of the student and citizen to develop visual literacy that will enable him or her to consume, comprehend, and use digital, visual images for thought, study, expression, creation, and production. The language of visual images requires the ability to decipher, understand, and derive meaning from visual messages as well as the ability to create and produce them. This ability deals not only with identification of visual components but also with comprehension of the context in which they appear, awareness of the viewpoint and specific intentions of the creator, identification of the target community, and comprehension of the possible effects of the images upon it. In addition, visual literacy relates to the skills of critical viewing and consumption of visual messages, avoiding being deceived or led astray by manipulations. Visual literacy of the digital citizen is manifested primarily in digital commerce and digital communication (Ribble, 2014). For the student, visual literacy constitutes an important element in learning and an effective way of handling styles of presentation and internalization of information, which increase understanding and motivation to learn.
Digital Visual Literacy: Definition

The ability to read, understand, and analyze critically and to derive meaning from messages expressed in digital, visual texts; to communicate and convey visual messages efficiently; and to produce expressions of visual messages by judging and selecting the way to represent them.
Abilities Included in Digital Visual Literacy

• Deciphering of a Multidimensional, Visual Text: Deciphering different types of digital, visual texts, including three-dimensional, static, and dynamic text. The ability to analyze and understand visual, multimedia information by integrating its details into a meaningful, interpretational piece of work. The ability to read visual user interfaces and understand visual instructions.
• Critical, Visual Reading: Critical reading, evaluation, and interpretation of the meanings and messages of visual texts, through awareness of the ease of manipulation in processing, editing, and publishing visual information.
• Production of a Visual Text: Creating, processing, and designing a text by means of visual representations, among them graphics, photography, graphic organizers, and tools of production and editing, by matching the configuration of the digital, visual text to the context, goals, intended recipients, and publicizing platforms.
• Conversion and Correspondence of Information Representations: Converting information, data, ideas, and messages into visual expression by means of digital tools.
• Ethics: Awareness of the ethical, legal, social, and economic issues included in the very meaning of creation, its copyrights, and the use of visual images.
Communication and Collaboration Literacy

Communication and Collaboration Literacy: Background

Communication literacy is the ability to use a variety of means of communication intelligently according to need. Educational discourse about communication literacy is relatively new. Beginning in the 1970s, a link was made between communication and literacy, referring to the ability to communicate in the context of reading and writing. Communication literacy in the digital age is based on integration between information and communication technology and ways of communicating between people (Approaches to Information and Communication Literacy, 2007). In the online space, there is widespread communication between the personal and the social, on a variety of platforms and through diverse means. Each channel of discourse has its own unique features, intended goals, diverse target communities, and rules for appropriate discussion. Digital means offer a new, interactive dynamic between the creators of knowledge and messages and their consumers, and enable new channels of active expression, exchange of information and ideas, learning, and personal development through social involvement. The integration of technology in daily life has greatly increased the need to instill communication literacy, the ability to decipher and produce messages conveyed through digital channels, in every student and citizen.

Components of basic communication skills include verbal, aural, and visual communication; nonverbal communication, including ways of expression such as gestures; textual communication, including various genres of writing on the web; and visual communication, including a variety of visual representations (Communication Skills, 2014). At the basic level of literacy, this refers to the ability to decipher, produce, and convey messages via means of communication. At the advanced level, it refers to comprehension of the place and contribution of communication in processes of creating meaning and in designing reality, as a platform for activity using digital communication services. The European Parliament and the Council of the EU (2006) adopted this ability as the first of eight basic skills required by every individual for personal development, self-fulfillment, active civil participation, social belonging, and employment.

Digital communication constitutes a basic condition and foundation for sharing within personal, professional, and social contexts. One of the main challenges is to train the citizen and student living in the global environment to effectively communicate and share knowledge and ideas, to learn from colleagues and experts, to produce collaborative outcomes through teamwork with nearby or distant partners, and to actively participate in collaborative communities that cross borders and cultures. A person who is literate in communication and collaboration, who knows how to use, and is capable of using, a variety of digital communication tools and services for his or her various needs, is conscious of the rights of those photographed and of the responsibility for their appropriate, secure use.
Communication and Collaboration Literacy: Definition

The ability to communicate via a variety of means of digital communication and to conduct efficient interaction that crosses boundaries and frameworks – whether interpersonal or collective, private or public, synchronous or asynchronous – by adapting the discourse to the characteristics of the means of communication. The ability to share information and messages, using collaborative digital tools for personal and social needs, and to participate in online communities and networks, with ethical awareness of global citizenry.
Abilities Included in Communication and Collaboration Literacy

• Characterization of Means of Communication: Recognition of the attributes of the various means of communication; understanding of the contexts in which it is fitting to use each of them according to need, goal, and target community; evaluation of the efficiency of means of communication and awareness of their influence.
• Accessibility and Operation: Access to diverse means of communication and the ability to handle them.
• Transmission of Messages: Effective expression of ideas, opinions, and messages using a variety of forms and contexts by digital means of communication.
• Collaboration: Cooperation, exchange of information, and efficient, respectful teamwork that appreciates each individual's contribution to the team, using digital means of communication and collaboration.
• Dynamic Communications – Cloud Services: Use of cloud computing and dynamic, mobile communications to derive meaning from messages and to publicize and share information.
• Global Citizenship: Use of means of digital communication to expand the network of acquaintances and connections crossing geographical boundaries; development of global consciousness and the practical meanings derived from it.
• Ethics: Protection of rights and appropriate rules of behavior, such as property, privacy, and freedom of expression, through awareness of the personal and social implications of behavioral norms in communication and in crossing borders.
New Media Literacy

New Media Literacy: Background

Media literacy in the 2010s has undergone significant changes, principally the consolidation of most means of media on a digital platform on the screen. This is the new media, in which a variety of media channels and different types of information, sharing, and social services are displayed together. The new media in the digital arena constitutes the "Where" (Peña-López, 2009) – the publicized platform of the interpersonal, private, and public flow of information, messages, and communication. The new media is described by Jenkins (2006) as "media ecology". Ito et al. (2008) explain the uniqueness of the term: "We have used the term new media rather than terms such as digital media or interactive media because the moniker of 'the new' seemed appropriately situational, relational, and protean, and not tied to a specific media platform".

The new media offers and creates interactive, multi-sensorial activation by means of a powerful integration of words, graphics, and sounds. In this respect, Ito et al. (2008) claim that young people develop, through personal experimentation, a wide variety of new forms of literacy, among them personal adaptation of the media for their own needs: selection of definitions, consumption of video clips, creation of new genres of presentation, remix editing, hybridizing, computer video games, sharing of information and knowledge, and even taking on traditional roles of adults, such as public and political involvement and social criticism.

Traditional media literacy has focused on the critical, efficient consumption of information and messages on the various communication channels, through awareness of the effect of structuring and forming messages on the perception of reality. Because interaction in the new media takes place with a large community whose members have changed from being media consumers to being producers of content and active partners in media, literacy has expanded to the active consumption and production of content and messages through the involvement of the individual in the interactive, social, online medium. Self-expression, conveying messages on a variety of platforms, and construction of collective meaning are important aspects for the digital citizen in a democratic society, which call for ethical and social consciousness and responsibility for content and the ways it is distributed.
Gaming as an Updated Component of New Media

The expansion of new media literacy to a diversity of forms and modes of use has elicited new abilities, which until now have not been part of the educational-public discourse concerned with the literacy competence required of the citizen and student. One of the abilities that has taken an increasing place in our daily life is the digital game (Zimmerman, 2009), which offers personal enjoyment or social entertainment through gaming communications crossing borders of both place and partners. Furthermore, digital gaming is currently used for teaching and learning through experiences, challenges, and motivation, and as a model for action in the real world. Gaming abilities involve complex aspects of reading and writing, advanced thought, problem solving, creativity, deciphering various kinds of visual messages, psychological aspects of competition, and cooperation in the social-ecological relationships of human interaction.
New Media Literacy: Definition

The ability to consume messages from a variety of digital media channels in a personally adapted and critical manner, to demonstrate involvement in interactive social media, and to produce and publicize communicative messages that have collective significance in social and cultural contexts.

Abilities Included in New Media Literacy

• Characterization of Media Means: Recognition of the attributes of new media channels consolidated on the screen or digital platform; acquaintance with modes of presenting, disseminating, and publicizing information; evaluation of the effectiveness of media methods and awareness of their influence on the user.
• Active Viewing: Critical consumption of messages and derivation of information and meaning from them through active viewing, research, and synthesis, entailing comprehension of the goal sought in publicizing the messages and the effect of their form on the perception of reality.
• Navigating, Managing and Updating: Handling attentiveness among many simultaneously appearing media channels, applying discretion in choosing how to navigate among them. Acquaintance with tools and methods for receiving constant communications information via manual and automatic update tools.
• Processing, Editing and Producing: Familiarity with tools and skills for the use of media files, downloading, editing, processing, and production of media, and publicizing messages on a variety of digital media platforms.
• Gaming: Personal and social gaming through digital processes.
• Updating and Involvement: Constant but controlled updating on what is going on in the social and cultural media environment; involvement in interactive media for personal empowerment and social contribution.
• Ethics: Awareness of the variety of options for expression and use of various media and their implications; taking legal and ethical responsibility with regard to access to and use of media.
Social Media Literacy

Social Media Literacy: Background

The increasing amount of activity conducted through online social media empowers personal, educational, professional, and social processes that are vital for every citizen and learner who participates in a collaborative community. Social media are a complex of internet services enabling multidirectional communication in shared social places through the sharing of a variety of content, including text, photographs, video, music, and internet sites. Among the online media places are social networks, apps (online applications) for sharing photos, videos, and music, online discussion boards, wikis, Twitter, blogs, and a variety of collaborative games. The use of social media in effect expands existing social reality into a rich, diverse expanse of human information and communication in a shared, digital environment; thus, involved, meaningful activity in relevant social services is currently available to everyone. Goodman (2014), for example, points out the vital importance of the abilities needed by every digital citizen in the social media context – among them, the use of media for commercial needs, understanding and coping with propagandist media, being conscious of the effect of censorship and media ownership on information published in the social media, and identification of stereotypes in communications.

Social media literacy involves the ability to make intelligent choices of relevant places for collaborating and managing online presence, through awareness of how to design a personal profile and its consequences for social interaction. This literacy includes intrapersonal, interpersonal, and social skills. Among the personal skills are self-orientation, time management, definition of limits, emotional management, self-expression, expressing opinions, designing messages, reflection and evaluation of messages, creativity, reputation management, and coping with aggression. The interpersonal and social skills include attentiveness, sensitivity, tolerance, sharing, communication, community management, discourse management, exchange of opinions, dealing with dilemmas, coping with criticism, giving and receiving feedback, and social and sometimes commercial publicity, through ethical awareness that is personal, social, and global. The competence of the digital citizen, as a person who is literate in social media in the digital world, is manifested in consciousness of the right to free access to the web, to digital commerce, and to relevant laws and rules – among them, personal security mechanisms and prevention of injury (Ribble, 2014).
Social Media Literacy: Definition

A complex of qualifications that enable interconnectedness and interaction among people via communication and sharing of information. These competences allow one to communicate in a suitable manner, to be involved, to cooperate and participate actively, and to give and take, in the social environment of communication and sharing of content. A person who is literate in social media forms his or her personality, worldview, and manner of social conduct, among other ways, through tools for collaborating and managing information found on the web.
Abilities Included in Social Media Literacy

• Characterization of the Social Media Environment: Recognition of the types of online social media places in order to make a wise selection: essence, platform, recipients, structure, types of pages, interfaces, conditions of use, and manner of conduct.
• Operation: Registration, constructing a profile, defining privacy, friends, interest groups, and levels of collaboration, monitoring notices, and posting and uploading information to various media.
• Online Presence: Construction of a personal identity, designing a profile, managing a reputation, and participating in and managing a presence, with awareness of the consequences of the digital fingerprints left by the user on the web.
• Communication and Interaction: Definition of circles of communication, management of notices, and responses, by choosing suitable communication channels (wall, chat), wording appropriate for the community, and the characteristics of the social media.
• Timing: Setting the timing of updates. Intelligent, critical expression through understanding the proper timing and frequency of promulgating messages.
• Interfaces: The ability to interface social media with mobile devices such as smartphones or personal tablets.
• Intelligent, Critical Use: Understanding modes of transmission in social media, circumstances of success in communication, and distribution of messages through analysis of data and statistics. Critical evaluation of the relevance and reliability of information and messages conveyed via the social network and their implications for the user and the information he or she shares. Awareness of the immediate, public, shared, social platform used by many interactive partners, and its implications for the motivation for writing.
• Ethics: Awareness of legal and ethical aspects of the social network: copyrights, fair and decent protection of privacy, limits of freedom of expression, understanding the significance of distributing injurious messages and information, and avoidance of such actions. Awareness of defense mechanisms such as identification of impersonators and acquaintance with mechanisms for reporting injury.
Solutions and Recommendations

The proposed framework of Dcom offers ways to deal with the vital need to define and describe a distinct conceptualization, broad awareness, and clear guidance in feasible ways, in order to be able to implement these literacies and instill them in citizens and students. This need also encompasses provision, given the current lack of efficient methods of control and evaluation. Presently, the main challenge is to progress
from terminology to action, in order to instill all aspects of Dcom in the population – not as something that is nice to have but as a competence that is vital and essential to have.

Over the years, Dcom has been a hazy, protean term bearing various names and referring to different things. Along with this, there has been a widespread, mistaken assumption regarding the "illusion of knowing" that must be uprooted: in fact, young people born into the technological age are not necessarily digitally literate simply as a result of it, nor do they necessarily construct knowledge simply by using digital technology. Data gathered from almost 60,000 Grade 8 students in more than 3,300 schools from 21 countries, or education systems within countries, suggest that the knowledge, skills, and understandings described in the CIL (computer and information literacy) scale can and should be taught. Regardless of whether or not we consider young people to be digital natives, we would be naive to expect them to develop CIL in the absence of coherent learning programs (Fraillon, 2014). Command of digital skills achieved through daily activity with computerized systems is far from constituting broad Dcom.

Dcom is a complex, multifaceted field, part of personal development and empowerment. The process of acquiring it, both as a means and as a goal, does not happen circumstantially but rather requires deliberate, explicit study and experience. Therefore, both the social system and the educational system have the obligation to provide it. Formal policies and messages presenting Dcom as a goal of highest importance raise awareness of the crucial nature of the subject, but they are not a substitute for immanent integration in the routine of training and learning. A program of action that includes a variety of methods of implementation as an answer to the various target communities, as well as the allocation of appropriate funding and other resources, must be put together and integrated into the daily routine.
Instilling Dcom in citizens and students should be conducted through a variety of channels and paths of action, via synergy among partners connected with the various aspects of digital culture, such as government offices, economic bodies, organizations, and educational systems. Suitable opportunities for training and experimenting in digital literacy must be developed and offered alongside deliberate, methodological study. For this purpose, it is important to supply the target communities with access to available, professional mediation, relevant teaching materials, guidance, and exercises. It is very important to demonstrate and present these materials in digital texts that will provide authentic experience in the reality of a lifestyle interwoven with technology. In the educational system and in training, Dcom should be implemented according to a number of principles:

• The Principle of Integration: Integrating the digital literacies in teaching programs by correlating fields of literacies to the characteristics of the disciplines. According to this principle, the teacher is responsible for integrating aspects of Dcom into his or her work, as well as demonstrating literacy work in his or her teaching and modeling literacy in his or her personal and professional conduct.
• The Principle of Technological Focusing: Instilling basic aspects of digital literacy, with the emphasis on technological literacy, as a field that stands on its own, alongside an updated definition of literacies, with the emphasis on digital reading and writing, teaching it as an integral part of language and linguistic studies.
• The Principle of Variation: There must be variation in the aspects of digital literacies learned in each grade at school. In academic programming, which of the aspects are taught at each grade level must be decided. Some of the activities should be focused and short-term, in accordance with their circumstances, and some are to be long-term and include complex, multifaceted processes and products.
• The Principle of Spiraling: Dealing with Dcom is not a one-time activity, but rather one recurring during the different stages of study over the school years in the form of a spiral, with expansion and depth.
• The Principle of Achievement: Student achievement in the field of Dcom should be measured through a system of appraisal and evaluation.
• The Principle of Modeling: Dcom should become an immanent part of all academic conduct, including through indirect demonstration, routine conduct by the administrative and teaching staff and their requirements of the students, and through integration of connected vessels and modern, technological methods and means of teaching and learning.
• The Principle of Authorization: In order to emphasize the importance of Dcom in the academic and employment worlds and to ensure sufficient competence for proper functioning by the public at large and by students, it is recommended to define a precondition required for authorization – a kind of license that is compulsory for completion of every training course, degree, etc., and that attests to the Dcom of the citizen or student.
FUTURE RESEARCH DIRECTIONS

Dcom is a complex, dynamic, developing field that offers many channels of research focusing on the effect of technological services, means, and tools on the behavior and conduct of the citizen and student. It is important to treat the framework of Dcom as a broad, open, dynamic expanse requiring constant updating, but at the same time to include a clear, defined content of knowledge, skills, and abilities for using each aspect of Dcom wisely and properly. It is imperative to be open to research, to learn the new, developing language of young people born into the digital age, and at the same time, to research the effect on migrants – older citizens who must learn how to integrate technical processes into their personal and professional lives. Research on the methods of technology use in daily occurrences, on the essential knowledge required, on the skills and viewpoints involved in ICT, and on the ethical issues entailed in the use of technology will enhance the conceptual framework and the instillment of Dcom.

In the educational field, it is recommended to research the ways the relevant educators and organizations deal with imbuing and nurturing Dcom, learning from successes and expanding support mechanisms, mediation, and teaching materials accordingly. It is important to formulate ways to evaluate the different dimensions of digital literacy, and to construct assessment rubrics for evaluation of the model. It would be interesting to examine the effectiveness of instilling Dcom by comparing various ways of teaching it, such as, for example, teaching Dcom as a separate, distinct, focused field as opposed to integrating it into the disciplines of an academic program.
CONCLUSION

This chapter deals with a proposal for a conceptual framework of Dcom, which constitutes a human and civil need and right that is essential in order to function wisely and properly in the world of the 2010s through the means offered by technology. Digital citizens and students today need complex, multifaceted qualifications, which until now have not been defined in a clear framework, have not been part of public and educational policy and civil consciousness, and are not instilled in a distinct, concrete manner in the public composed of citizens and students. The very fact of defining the essence and content of this competence is likely to contribute to the understanding of what intelligent, proper, literate functioning means within the complex of possibilities offered by technology in the digital age.

In this chapter, a new view is presented of the literacies entailed in the use of technology, as a synergetic complex. Dcom as presented here constitutes a net of literacies that do not focus on technology but on the forming and changing caused by the very presence of technology; on the knowledge, performance skills, thinking skills, viewpoints, and morals involved in conduct in a digital environment, which is the environment in which we now live and work. In this updated net, technology tints the familiar, traditional literacies in digital hues that can be found in the technological, digital reading and writing, information, visual, new media, communication and collaboration, and social media literacies, forming the updated character of the literacies and the wise, efficient functioning with them. Alongside the conventional literacies, new abilities are integrated, which are derived from technological attributes that were non-existent before the digital age. The net of literacies proposed here is based on existing concepts of types of competences and literacies; it attempts, as far as possible, to provide content, interpretation, and updating to these concepts, and in this way to rely on proven pedagogical principles that have held for every time and context, and to add to them a layer of characteristics of new abilities that are relevant for the current use of technology.

The structure of the proposed net consists of three layers:

1. Digital Ethics Literacy: Stands at the top of the net of literacies, in which ethics constitutes a vital component of the perceptions, viewpoints, and morals that are essential for the proper application of technology by humans in every dimension of digital literacy.
2. Technological Literacy and Digital Reading and Writing Literacy: Constitute a basic foundation and functional interface for all the digital literacies and are anchored as an inseparable part of them.
3. Dimensions of Digital Literacies: Correspond to the main fields that information and communication technology touches. These dimensions range from the personal use of technology to the interpersonal, social, collaborative connection: information literacy, digital visual literacy, new media literacy, communication and collaboration literacy, and social media literacy.

The importance of consolidating a framework of Dcom lies in the proposal of clear anchors that enable the formulation of a common, agreed-upon language to serve the public and educational discourse, including all the relevant authorities; to serve as a foundation of knowledge for the creation of a unified policy on the subject; and to constitute a sturdy, accepted basis for the instilment and evaluation of a complex of updated knowledge, competences, and skills regarding the literacies required by citizens and students in the 2010s.
REFERENCES

American Library Association. Digital Literacy, Libraries, and Public Policy, Information Technology Policy's Digital Literacy Task Force. (2013). BC's digital literacy framework (DRAFT). Retrieved July 20, 2014, from http://www.bced.gov.bc.ca/dist_learning/docs/digital-literacyframework-v3.pdf

American Library Association (ALA). (1989). Presidential committee on information literacy: Final report. Retrieved July 20, 2014, from http://www.ala.org/acrl/publications/whitepapers/presidential

American Library Association (ALA). (2014). Standards toolkit. Retrieved July 20, 2014, from http://www.ala.org/acrl/issues/infolit/standards/standardstoolkit

Approaches to Information and Communication Literacy. (2007). Teacher Tap: Professional development resources for education & librarians. Retrieved July 20, 2014, from http://eduscapes.com/tap/topic72.htm

Avni, E. (2012). Hitpatkhut mudaut etit shel morim digitaliim [Development of ethical awareness of digital teachers]. (Unpublished doctoral dissertation). University of Haifa, Israel. (Hebrew)

Avni, E., & Rotem, A. (2009). Pgia mekuvenet [Cyberbullying]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2009/10/cyberBullying_IA_oct_09.pdf (Hebrew)

Avni, E., & Rotem, A. (2010). Nohal shimush bemedia khevratit mekuvenet bevatei hasefer [Regulations for usage of online social media in schools]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2010/12/socail-mediaschoo-IA.pdf (Hebrew)

Avni, E., & Rotem, A. (2011). Oryanut etit baidan hadigitali – Mimiyumanut letfisat olam [Ethical literacy in the digital age – From skill to worldview]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2011/06/Ethical-LiteracyAI.pdf (Hebrew)

Avni, E., & Rotem, A. (2013). Lemida mashmautit 2020 – Tekhnologia meatzevet mashmaut [Meaningful 2020 learning – Technology that forms meaning]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2013/09/deeper-learning2020-AI-.pdf (Hebrew)

Birenhak, M. (2010). Merkhav ishi: Hazkhut lepratiut bein mishpat vetekhnologia [Personal space: The right to privacy between law and technology]. Nevo Publications. (Hebrew)

Bynum, T. W., & Rogerson, S. (2004). Editors' introduction: Ethics in the information age. In T. W. Bynum & S. Rogerson (Eds.), Computer ethics and professional responsibility (pp. 1-13). Oxford, UK: Blackwell.

Communication Skills. (2014). SkillsYouNeed. Retrieved July 20, 2014, from http://education-2020.wikispaces.com/

Eshet, Y. (2012). Thinking in the digital era: A revised model for digital literacy. Issues in Informing Science and Information Technology, 9, 267–276.

Ferrari, A. (2012). Digital competence in practice: An analysis of frameworks. JRC Technical Reports. Retrieved July 20, 2014, from http://www.ifap.ru/library/book522.pdf

Ferrari, A. (2013). DIGCOMP: A framework for developing and understanding digital competence in Europe. European Commission. Retrieved July 20, 2014, from http://ftp.jrc.es/EURdoc/JRC83167.pdf

Fraillon, J. (2014). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study. Springer International Publishing. Retrieved Nov 1, 2014, from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1009&context=ict_literacy

Fraillon, J., & Ainley, J. (2013). The IEA international study of computer and information literacy (ICILS). Australian Council for Educational Research. Retrieved July 20, 2014, from http://icils2013.acer.edu.au/wp-content/uploads/examples/ICILS-Detailed-Project-Description.pdf

Goodman, S. (2014, May 30). Social media literacy: The five key concepts [Blog post]. The George Lucas Educational Foundation, Edutopia. Retrieved July 20, 2014, from http://www.edutopia.org/blog/social-media-five-keyconcepts-stacey-goodman

Hall, R., Atkins, L., & Fraser, J. (2014). Defining a self-evaluation digital literacy framework for secondary educators: The DigiLit Leicester project. Research in Learning Technology, 22(0). doi:10.3402/rlt.v22.21440

Horton, F. W. (2013). Overview of information literacy resources worldwide. Paris: UNESCO. Retrieved July 20, 2014, from http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/news/overview_info_lit_resources.pdf

Ito, M., Horst, H. A., Bittanti, M., Boyd, D., Herr-Stephenson, B., & Lange, P. G. (2008). Living and learning with new media: Summary of findings from the digital youth project. MIT Press. Retrieved July 20, 2014, from http://digitalyouth.ischool.berkeley.edu/report

Jenkins, H. (2006). Convergence culture: Where old and new media collide. New York: New York University Press.

Lee, A., Lau, J., Carbo, T., & Gendina, N. (2013). Conceptual relationship of information literacy and media literacy in knowledge societies. World Summit on the Information Society (WSIS). UNESCO. Retrieved July 20, 2014, from http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/pdf/wsis/WSIS_10_Event/WSIS_-_Series_of_research_papers_-Conceptual_Relationship_between_Information_Literacy_and_Media_Literacy.pdf

Manovich, L. (2001). The language of new media. MIT Press.

Measuring Digital Skills across the EU: EU wide indicators of Digital Competence. (2014, May). DG Connect, European Commission. Retrieved Nov 1, 2014, from http://ec.europa.eu/information_society/newsroom/cf/dae/document.cfm?action=display&doc_id=5406

Media Smarts. (2010). Digital literacy in Canada: From inclusion to transformation. A submission to the Digital Economy Strategy Consultation. Retrieved July 20, 2014, from http://mediasmarts.ca/sites/default/files/pdfs/publication-report/full/digitalliteracypaper.pdf

Morgan, S. A., & Van Dam, A. (2008). Digital visual literacy. Theory into Practice, 47(2), 93–101. Retrieved July 20, 2014, from http://stevetrevino.pbworks.com/f/DVL.pdf

Mossberger, K., Tolbert, C. J., & McNeal, R. S. (2011). Digital citizenship: The internet, society, and participation. Scribd. Retrieved July 20, 2014, from http://www.scribd.com/doc/13853600/Digital-Citizenship-the-Internetsociety-and-Participation-By-Karen-Mossberger-Caroline-J-Tolbert-and-Ramona-S-McNeal

National Research Council. (1999). Being fluent with information technology. Washington, DC: The National Academies Press. Retrieved July 20, 2014, from http://www.nap.edu/catalog.php?record_id=6482

National Telecommunications and Information Administration. (2014). Digital literacy resources and collaboration. U.S. Department of Commerce. Retrieved July 20, 2014, from http://www.digitalliteracy.gov/

OECD - Organization for Economic Co-operation and Development. (2002). The definition and selection of key competencies. Retrieved July 20, 2014, from http://www.oecd.org/dataoecd/47/61/35070367.pdf

OECD - Organization for Economic Co-operation and Development. (2005). Definition and selection of key competencies: Executive summary. Paris: OECD. Retrieved July 20, 2014, from http://www.oecd.org/dataoecd/47/61/35070367.pdf

OECD - Organization for Economic Co-operation and Development. (2013). PISA 2015: Draft reading literacy framework. Paris: OECD. Retrieved July 20, 2014, from http://www.oecd.org/pisa/pisaproducts/Draft%20PISA%202015%20Reading%20Framework%20.pdf

Partnership for 21st Century Skills. (2014a). ICT literacy. Retrieved July 20, 2014, from http://www.p21.org/about-us/p21-framework/350-ict-literacy

Partnership for 21st Century Skills. (2014b). Framework for 21st century learning. Retrieved July 20, 2014, from http://www.p21.org/our-work/p21-framework

Pellegrino, J. W., & Hilton, M. L. (Eds.). (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. Committee on Defining Deeper Learning and 21st Century Skills, Board on Testing and Assessment and Board on Science Education, Division of Behavioral and Social Sciences and Education, National Research Council. Washington, DC: The National Academies Press. Retrieved July 20, 2014, from http://www.leg.state.vt.us/WorkGroups/EdOp/Education%20for%20Life%20and%20Work-%20National%20Academy%20of%20Sciences.pdf

Peña-López, I. (2009). Towards a comprehensive definition of digital skills. ICTlogy. Retrieved July 20, 2014, from http://ictlogy.net/20090317towards-a-comprehensive-definition-of-digitalskills/

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5). Retrieved July 20, 2014, from http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf

Ribble, M. (2014). Nine themes of digital citizenship. In Digital citizenship: Using technology appropriately. Retrieved July 20, 2014, from http://www.digitalcitizenship.net/Nine_Elements.html

Rotem, A., & Avni, E. (2008). Reshet khevratit khinuchit [Social education network]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/?page_id=2577 (Hebrew)

Rotem, A., & Avni, E. (2010). Hamoreh bemalkodet hareshet hakhevratit – Moreh or khaver? [The teacher in a social network trap: Teacher or friend?]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2010/11/teacher-student-facebook.pdf (Hebrew)

Rotem, A., & Avni, E. (2011). Yisum horaa-limida bemedia khevratit mekuvenet [Teaching-learning implementation in the online social media]. Machon Mofet Journal, 46, 42-46. Retrieved July 20, 2014, from http://www.mofet.macam.ac.il/ktiva/bitaon/Documents/bitaon46.pdf (Hebrew)

Rotem, A., & Avni, E. (2012). Muganut hamoreh haretzuya bemisgeret medinuyut hitnahalut bakita [Desired teacher protection in the framework of policy of conduct in the digital classroom]. Toward Digital Ethics Initiative. Retrieved July 20, 2014, from http://ianethics.com/wp-content/uploads/2012/04/muganutIA4-2012.pdf (Hebrew)

Rotem, A., & Peled, Y. (2008). Digital text. In A. Rotem & Y. Peled (Eds.), Likrat beit sefer mekuvan [School turns on line] (pp. 79–90). Tel Aviv: Klil Pub. (Hebrew)

Street, B. V. (1984). Literacy in theory and practice. Cambridge University Press.

The European Commission. (2001). European report on the quality of school education: Sixteen quality indicators. Luxembourg: Office for Official Publications of the European Communities. Retrieved July 20, 2014, from http://europa.eu/legislation_summaries/education_training_youth/lifelong_learning/c11063_en.htm

The European Commission. (2014). Enhancing digital literacy, skills and inclusion (Pillar VI). Digital Agenda for Europe. Retrieved July 20, 2014, from http://ec.europa.eu/digital-agenda/en/our-goals/pillar-vi-enhancing-digital-literacy-skills-and-inclusion

The European Parliament and the Council of the EU. (2006). Recommendation of the European Parliament and the Council of 18 December 2006 on key competences for lifelong learning. Official Journal of the European Union, L 394(310). Retrieved from http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:394:0010:0018:en:PDF

UNESCO – The United Nations Educational, Scientific and Cultural Organization. (2005). Beacons of the information society: The Alexandria proclamation on information literacy and lifelong learning. Retrieved July 20, 2014, from http://www.codyassociates.com/alexandriaproclamation.html

UNESCO – The United Nations Educational, Scientific and Cultural Organization. (2013). Global media and information literacy assessment framework: Country readiness and competencies. Paris: UNESCO. Retrieved July 20, 2014, from http://unesdoc.unesco.org/images/0022/002246/224655e.pdf

Zimmerman, E. (2009). Gaming literacy: Game design as a model for literacy in the twenty-first century. In B. Perron & M. J. P. Wolf (Eds.), The video game theory reader 2 (pp. 23–31). New York: Routledge.

Zurkowski, P. G. (1974). The information service environment: Relationships and priorities. Washington, DC: National Commission on Libraries and Information Science.
ADDITIONAL READING

Gura, M. (2014). Teaching literacy in the digital age: Inspiration for all levels and literacies. International Society for Technology in Education.

Jacobs, H. H. (2014). Mastering digital literacy (Contemporary Perspectives on Literacy). IN, USA: Solution Tree Press.

Leading Thinkers: Digital Media & Learning 2014. (2014). (B. Ray, S. Jackson, & C. Cupaiuolo, Eds.). Spotlight on Digital Media & Learning.
KEY TERMS AND DEFINITIONS
Communication and Collaboration Literacy: The ability to communicate through a variety of means of digital communication and to interact effectively, crossing interpersonal and collective, private and public, synchronous and asynchronous borders and frameworks, fitting the discourse to the characteristics of the means of communication; and the ability to share information and messages through collaborative digital tools for personal and social purposes and to participate in communities and online networks with ethical awareness and global citizenship.
Digital Competence: A civil right and need, vital for appropriate, intelligent study and functioning in the personal, social and professional lives of all citizens in the real world, through the means that modern technology offers the citizen. Digital Ethics Literacy: The ability of a person to operate in a proper, ethical manner through digital means and on the internet, dealing morally with the ethical aspects and issues entailed in the use of technology, and the ability to protect him- or herself and other internet users from traps and improper use. Digital Reading and Writing Literacy: The ability required to read, decipher, write and produce interactive, linked, multimedia, digital text effectively, characterized by a variety of representations and designs, including decentralization and collaboration. Digital Visual Literacy: The ability to critically read, understand and analyze, and to produce meaning from, information and messages presented in visual, digital texts; to communicate and transmit visual messages effectively; and to create and produce presentations expressing visual messages, with consideration and selection of how to present them. Information Literacy: The ability to recognize when information is needed and to effectively locate, evaluate and use the needed information. New Media Literacy: The ability to critically and suitably consume messages in a variety of digital media channels, to be involved in interactive social media, and to produce and publicize communicative, public messages bearing collective meaning in social and cultural contexts. Social Media Literacy: A complex of qualifications that enable interconnectedness and interaction among people via communication and sharing of information. These competences allow one to communicate in a suitable manner, to be
involved, to cooperate and participate actively, and to give and take in the social environment of communication and sharing of content. A person who is literate in social media forms his or her personality, worldview and manner of social conduct, among other ways, through tools for collaborating and managing information found on the web. Technological Literacy: The attitude and ability to use digital technology properly and effectively in daily life as needed. Technological literacy includes: accessibility to technological means; selection of means in accordance with needs; technological operation through acquaintance with the basic principles of actions and functions; the ability to constantly learn and adapt to new means by adjusting to changing needs; and intelligent, proper use, including awareness of the consequences of technology use for the environment and for health, and acquaintance with methods and tools to minimize potential harm.
Chapter 3
The Application of Transdisciplinary Theory and Practice to STEM Education
Susan Malone Back, Texas Tech University, USA
Heather Greenhalgh-Spencer, Texas Tech University, USA
Kellilynn M. Frias, Texas Tech University, USA
ABSTRACT
The authors describe the application of transdisciplinary theory and practice to Science, Technology, Engineering and Mathematics (STEM) education at the undergraduate level. The modular approach, which makes use of student collaboration within and across disciplines and input from outside experts, holds promise for preparing students to address society's "wicked" problems – those with interconnected causes and for which a solution often causes additional problems. Transdisciplinary theory and practice are described, and their application to STEM education is proposed along with a model for measuring transdisciplinary skills. Recommendations are proposed for future research on cross-cultural/cross-disciplinary models, pedagogy, measuring student collaboration, determining effective partnership models and institutional supports, and the potential role of the social sciences in contributing to research on transdisciplinary practice and education.
INTRODUCTION
It has been widely accepted that modern educational curricula in Science, Technology, Engineering and Mathematics (STEM) must help students develop critical thinking and problem-solving skills. One-way transfer of information served the needs of 19th-century industry but will no longer suffice in today's high-tech, multi-faceted economy. The National Academies of Science, Engineering and the Institute of Medicine (NASEIM, 2014) call for movement beyond the current
DOI: 10.4018/978-1-4666-9441-5.ch003
patchwork of educational approaches to one that is integrated and multi-sectorial. Value-added skills such as creativity, analysis and synthesis, problem definition, and innovation will be required to sustain students as they enter a continually changing work environment. In response to this need, an approach to problem solving has emerged in the scientific and education literature variously known as Interdisciplinarity, Convergence, or Transdisciplinarity (NASEIM, 2004; 2014). The American Academy of Arts and Sciences (AAAS, 2013) defines the transdisciplinary approach as "an approach that represents a functional synthesis of methodologies and a broad point of view that combines different fields. This is a step beyond interdisciplinary which borrows techniques from different fields without integrating them to yield new concepts and approaches" (p. 2). AAAS cites the example of the need to develop economically and ecologically sound replacements for fossil fuels as requiring input from chemical, systems and environmental engineering, microbiology, plant science, ecology, computational science, and economics, as well as an understanding of social change. The transdisciplinary approach supersedes multi- and interdisciplinary practices and is better suited to addressing complex, open-ended problems because it transcends singular or interdisciplinary knowledge and applies domain-specific knowledge within an integrated framework. While multidisciplinary approaches do include multiple disciplines, the differing perspectives are considered as side-by-side views on an issue. Interdisciplinarity extends beyond multidisciplinarity by integrating the theories, methods and concepts of several disciplines for the purpose of arriving at common solutions to multi-faceted projects or issues (Cronin, 2008). For example, math, physics, and engineering might be brought to bear in the design of a light-rail transportation system for a city in an interdisciplinary effort. A transdisciplinary approach would also incorporate knowledge from
other disciplines such as marketing, finance, social science, humanities, and non-academic fields. In the light-rail example, transdisciplinarity might include input from neighborhood residents, government agencies, and civil society. On a larger scale, researchers, practitioners, policy makers, and civil organizations have called upon the scientific community and society at large to address issues requiring sophisticated research and intervention involving a high degree of collaboration among disciplines. An example is the United Nations articulation of Millennium Development Goals which address several interrelated humanitarian crises including: eradicating extreme poverty and hunger; reducing child mortality; improving maternal health; combatting HIV/AIDS, malaria and other diseases; and ensuring environmental sustainability (United Nations, 2014). AAAS (2013) asserts that Transdisciplinary better captures the extent of integration required than do terms such as collaboration, multidisciplinarity or interdisciplinarity. Transdisciplinarity is “the dismantling of disciplinary boundaries, rather than ad hoc collaborations, that could transform the scientific enterprise and deliver the potential to address previously intractable problems” (p. xii.) In a study of housing and health, Lawrence (2004) describes interdisciplinary research as a mixing of disciplines, while transdisciplinary research is characterized as a fusing of disciplines. In a similar vein, the National Academies of Science, Engineering, and the Institute of Medicine (NASEIM, 2014) have focused on the term “convergence” to describe the process of thinking beyond usual paradigms and approaching issues informed by many, integrated perspectives. Furthermore, the process of convergence takes place within a network of partners forming an ecosystem that facilitates basic research as well as translational applications and the potential to benefit society. Expansion of research and teaching beyond traditional disciplines has been promoted in both scientific and business literature (Cronin,
2008; Tufano, 2014). As one progresses from interdisciplinary to transdisciplinary approaches, the collaborative products increasingly reflect an integration of conceptual and methodological perspectives (Rosenfield, 1992). The fusion of practices has generated solutions beyond the scope of a single area of study and, in some cases, new “Mega” or “Meta” disciplines have developed (Costanza, 1990; NASEIM, 2014; NRC, 2009). Figure 1 illustrates the increasing integration of disciplines as one progresses from multidisciplinarity to interdisciplinarity and ultimately to transdisciplinarity and the potential formation of new disciplines (ATLAS, 2012).
The figure illustrates how discipline-specific knowledge begins as an encased shell of knowledge. Externalities such as the need to solve increasingly complex scientific and societal problems, however, have brought disciplines together in seeking solutions to challenges. Nevertheless, a gap remains between disciplinary knowledge and solutions to complex problems. Transdisciplinary research arose from the need to address this complexity, "taking into account the diversity of real-world and scientific perceptions of problems, as well as linking abstract and case specific knowledge" (Hadorn et al., 2008, p. 19).
Figure 1. From disciplinarity to transdisciplinarity. (© 2012, The ATLAS Publishing. Used with permission.).
Numerous researchers, policy makers and professional organizations have articulated several of today's "Grand Challenges" (AAAS, 2013; Hoffmann-Riem, Joye, Pohl, Wiesmann, & Zemp, 2008; NASEIM, 2014):
• Climate change.
• Food production and security.
• Fossil fuel dependency.
• Energy and water efficiency.
• Environmental footprint of agriculture.
• Emerging diseases and interacting disease mechanisms.
• Delivery of healthcare to populations in emerging economies.
• War, poverty and strife resulting in food insecurity and the displacement of persons.
• Need for precision medicine, i.e., treatment specific to personal history, genetics and behavior.
• Economic and research challenges from China, South Korea and India.
• Industry backing away from long-term research in favor of shorter turnaround for innovations.
• Difficulty sustaining a high-tech workforce.
• Government cutbacks to research due to economic downturn, unemployment, debt, and rising healthcare costs.
While AAAS sees Grand Challenges as global, interconnected and urgent, it also identifies them as a unique opportunity to focus talent, motivate the scientific and engineering communities, inspire the next generation of STEM students, and capture the imagination of the public. Furthermore, it suggests an additional Grand Challenge: “to motivate alignment, cooperation, and integration of efforts and approaches across academia, government, and industry” (AAAS, 2014, p. 25). Thus, the need for collaboration across industry and government sectors has become acute, as has the realization that collaborative, problem-driven strategies will be required.
BACKGROUND
During World War II, massive collaborative efforts were necessary to satisfy the needs of the wartime effort. This was followed by a period characterized by a withdrawal from collaborative efforts and an emphasis on basic research that was curiosity-driven and investigator-initiated, particularly in the life sciences. Nevertheless, the government and industry did continue to sponsor applied, mission-oriented research, primarily in the physical sciences and engineering. An example of the basic research focus is the War on Cancer, which was investigator-initiated, while the Apollo Project was a massive, inter-connected, mission-driven endeavor. Thus, two separate research cultures arose and drifted further apart (AAAS, 2013). With time and the end of the Cold War, physical science and engineering research in national laboratories became more diffuse, with a greater percentage of federal funding allocated to basic research. In the physical sciences, basic and applied research gradually became isolated from each other. More recently, funding for applied research in the physical sciences and engineering has increased (AAAS, 2013). The environment in which research is conducted has become increasingly complex. Indeed, the siloing of academic work has made it increasingly difficult for engineers and scientists across disciplines to work in concert with each other. This is problematic, as most problems that scientists and engineers address have multiple components that bridge disciplinary boundaries (Crow, 2010; Griffin, 2010). Life sciences research primarily maintained its focus on basic research with an emphasis on scientific excellence as opposed to practical significance. Recent challenges in public health, particularly with regard to emerging diseases, have highlighted the need for a paradigm shift within the life sciences. The federal government has begun to promote translational research whereby discoveries in the laboratory are brought to bear
on health needs more rapidly than in the past, but the standard of success in this field often remains the individual investigator award (AAAS, 2013). By the late 20th century, basic and applied research had become increasingly isolated from each other. Nevertheless, collaborative research is emerging as the source of new knowledge and innovative solutions to problems. Research is progressing from a phase of ultra-specialization to one of collaboration among disciplines (AAAS, 2013). It has been noted that this process gradually began in the 1970s with the rise of increasingly complex challenges in industry, society, and the environment (Hadorn et al., 2008). The term "transdisciplinarity" was introduced at the first international conference on interdisciplinarity held in 1970. The focus at that time was on altering the linear structure of universities. The academic disciplines, so carefully delineated in the 19th century, were seen as leading to narrow careers and no longer serving society's purposes (Klein, 2008a; Nicolescu, 2010). In response to awareness of the interconnectedness of influences, there emerged an approach which takes into account the complexity of interacting influences and the diversity of perceptions in solving real-world problems, a process which is inherently distinct from research within traditional discipline boundaries. Perspectives taken into consideration began to include those from the public and private sectors as well as from civil society. In addition, the orientation began to be in the service of the common good or public well-being, as opposed to contributions to pure science or responsiveness to client needs. It became necessary to take into consideration conflicting values and ethical issues of fairness and justice in addition to considerations of effectiveness and cost (Hadorn, Pohl & Bammer, 2011). What also became apparent was that the issues being addressed were more complex than society had previously noticed or was willing to admit. Of particular concern are society's "wicked"
problems – those with inter-connected causes and consequences such that a proposed solution to one problem can cause further problems (Rittel & Webber, 1973). For example, a pending food crisis threatens to become one of the largest health issues of our time (Financial Post, 2009). Proposed agricultural solutions often contribute to environmental threats. Attempts to solve the problem through transportation and economic development are related to political stability, security and, in some cases, cultural norms. International aid in the form of food donations can undermine local producers and result in cessation of local food production, thus leading to further food insecurity and a spiraling down of the economy. Therefore, addressing these issues involves the interaction of social, political, managerial, economic, and biological systems. Within engineering, project requirements have changed considerably. Before the industrial revolution, functionality was the main concern. Since then, additional requirements have come to include production volume, cost reduction, production efficiency, improved appearance and marketability, quality, pollution controls, safety concerns, automation, computerization, miniaturization, complex systems integration, and resource constraints. However, the focus has been on the product as a system in itself, not on the product as part of a larger ecosystem. As an example, shifting requirements regarding environmental change cannot be solved with methods derived from the industrial age. The environment is a complex, adaptive system which defies prediction. Engineering solutions will inevitably become part of this larger ecosystem. Therefore, engineering must now consider the larger system and directly address complex adaptive systems design. The rise of systems theory (von Bertalanffy, 1969) contributed to the articulation of problems and solutions as being interconnected, a perspective known as "dependent origination" in Tibetan Buddhism. Reductionism, industrial age models, and the linear model of causality were no longer
recognized as being sufficient when working with inter-related systems. Added to this was the recognition that there would always be a degree of uncertainty in tackling complex problems. These uncertainties include unknown facts, insufficient understanding of causal relationships, and whether or not an intervention will be effective (Hadorn et al., 2008; Hadorn, Pohl & Bammer, 2011). One-way transfer of knowledge was no longer acceptable in the context of interactive relationships that spanned disciplinary boundaries and included the humanities, social science, public administration and civil society.
EMERGING PARADIGM IN STEM RESEARCH AND EDUCATION
Transdisciplinary Research and Practice
Ertas (2010) defines transdisciplinarity as the development of new knowledge, concepts, tools and technologies, shared by researchers from different disciplines. He further describes it as a collaborative process of knowledge generation and integration for designing solutions to unstructured, complex problems. Cronin (2008) proposes that transdisciplinary research is well suited to solving problems when it is difficult to grasp the complexity of the issue and there is a need to incorporate external non-academic knowledge into finding a solution. A true transdisciplinary approach involves bringing together academic researchers and non-academic experts in order to create new, boundary-spanning ideas. In describing collaborative work across disciplines, Sharp and Langer (2011) note that basic research in molecular biology and genetics is moving beyond the first iteration of interdisciplinary molecular and cellular biology, and proceeding to a transformative integration of life sciences, physical sciences, medicine and engineering. Supercomputing has made modeling an increas-
ingly powerful tool, complementing theory and experimentation. Advances in mathematics, information sciences, and computer engineering are enhancing work done in engineering and the physical, life and social sciences (AAAS, 2013). A transdisciplinary scientific collaboration in medicine found that results were enhanced by such cross-fertilization (Stokols, Harvey, Fuqua, & Phillips, 2004). These collaborations are not merely interdisciplinary efforts whereby a specialist in one field performs a function and then turns the project over to another specialist. "Rather, there must be multi-disciplinary collaboration from the start, with all participants having common reference points and language" (Sharp & Langer, 2011, p. 527). Transdisciplinary research can exist side-by-side with the more traditional approaches to scientific inquiry, which in many cases form the basis for transdisciplinary advances (NASEIM, 2014). Successful interdisciplinary researchers have integrated and synthesized disciplinary depth with breadth of methods and skills (NASEIM, 2004). The National Academies of Sciences, Engineering and the Institute of Medicine cite examples in which basic research, combined with a transdisciplinary approach, has resulted in new knowledge as well as benefits to society (NASEIM, 2014):
• Cognitive neuroscience has benefited from collaboration with behavioral, biological and medical sciences.
• Recent identification of microbes and their relationship to health and disease has benefited from integration of knowledge from:
◦ Genetics,
◦ Life and chemical sciences,
◦ Mathematical and computational tools,
◦ Public health studies,
◦ Engineering and synthetic biology,
◦ Materials science,
◦ Clinical trials and regulatory procedures,
◦ Industry partners for manufacture, and
◦ Social and behavioral targeting of intervention to populations.
• Tissue engineering for purposes of replacing or improving tissue, organs and functions combines expertise from:
◦ Developmental biology,
◦ Engineering and materials science.
• Integration of physical sciences and engineering was instrumental in the development of:
◦ Satellite-based global positioning systems,
◦ Atomic clocks which are not subject to gravitational and atmospheric influences,
◦ Nanotechnology,
◦ Biotechnology, genomics, bioinformatics.
Sharp and Langer (2011) cite the results of such collaboration as including:
• Targeted nanoparticle therapeutics.
• Personalized medicine resulting from the integration of large data sets.
• Detection of early-stage disease with the use of micro-sensors.
In the process of these collaborations, each discipline benefits from the insights and processes of the other and the results can include not only innovative solutions but also the creation of “Macro” or “Meta” disciplines such as the development of the “New Biology” (NASEIM, 2014; NRC, 2009). The merging of diverse approaches - at what has been described as a high level of integration - is cited as a crucial strategy for solving knotty problems and addressing complex intellectual questions underlying these emerging disciplines (NASEIM, 2014).
Creation of New Frameworks
Almost three decades ago, Costanza (1990) noted the emergence of "meta-disciplines" in university course offerings. Since then, interdisciplinary offerings and degrees have flourished. Within research paradigms, innovative disciplines and frameworks are developing, as in the case of the "New Biology," which resulted from a fusion of traditional biology plus physical and chemical sciences, computational science, mathematics, and engineering. Proponents of the New Biology argue that integrating life sciences with other disciplines will result in a deeper understanding of biological systems and yield insights into biology-based solutions in health, the environment, energy and the food supply (NASEIM, 2014). Klein (2008a) notes this trend and couples it with the rise of transdisciplinary research and education whereby a wider range of disciplines and stakeholders are included in the creation of knowledge and innovation. Some have extended the notion of collaboration to include the social sciences, economics, the humanities, civil society, industry, government, and nonacademic knowledge (AAAS, 2013; Donnelly, 2004; Ertas, 2010, 2012; Hadorn et al., 2008; Leavy, 2011; Rosenfield, 1992; Root-Bernstein & Root-Bernstein, 2011; Shuster, 2008). "Because convergence extends beyond basic science discovery to translational application, bringing clinical, national laboratory, and industry partners into convergent research efforts can provide valuable connections and potentially increase the impact of research" (NASEIM, 2014, p. 86). Furthermore, many convergence efforts include an entrepreneurship component that leads to the development of startup companies, new products, economic innovation, and new jobs (NASEIM, 2014). The emerging view is that scientific research must take into consideration societal purposes and practices. In addition, scientific inquiry is seen as a collective endeavor, not sim-
ply a matter for researchers to address amongst themselves. As the concept of transdisciplinarity has evolved, characteristics have been noted which extend beyond the concept of integrated collaboration among science disciplines. Science “increasingly draws upon contributions from fields such as the economic and social sciences, which have their own cultures and norms that must be considered” (NASEIM, 2014, p. 6). In order to foster common understandings and communication, researchers must learn how to incorporate the different viewpoints and areas of expertise. Ertas (2010) notes that transdisciplinarity has been described as a dialogue among researchers and society.
The Role of Dialogue and Team Efforts
There is a need in both research and teaching efforts to lower barriers to understanding. It has become widely recognized that future scientists will be required to understand a broad range of disciplines, to critically evaluate information and to be able to work as a productive member of a team (NASEIM, 2014; Colgoni & Eyles, 2010). Therefore, dialogue has been cited as a key component of effective collaboration in that it contributes to shared meaning (Hadorn et al., 2010). NASEIM (2004) recommends that researchers immerse themselves in the languages, cultures, and knowledge of their collaborators. Considerable differences exist among disciplines with regard to research methods and expectations. Ertas (2009) found that project development in bioscience is complicated by cultural, vocabulary and analytical differences among specialties as well as between subspecialties. For example, within the field of chemistry, the assumptions and approaches used by formulation chemists and medicinal synthetic chemists can differ significantly. In a transdisciplinary project conducted by Baccini and Oswald (2008), the investigators introduced individuals from each discipline to the language and concepts of the others. While this
process takes extra time, NASEIM (2004) notes that it is valuable for the building of consensus and for learning new approaches.
Transdisciplinary Education
Transdisciplinary theory and practice can serve as a model for a paradigm shift in STEM education. Three significant features of the transdisciplinary model are a) its ability to address open-ended issues, b) the incorporation of knowledge from multiple academic fields, and c) doing so within an integrative approach. A transdisciplinary approach to STEM education teaches students to engage in collaborations outside the bounds of their discipline, consider many perspectives, and engage in mutual discovery. Students preparing for careers in the STEM and business fields will benefit from an integrated curriculum that includes components of the social sciences, economics, and business while emphasizing the importance of communication and teamwork (Gehlert, 2012). The National Academies find it imperative for higher education to design programs that promote student learning that transcends traditional disciplinary boundaries and that prepares future scientists to develop convergent approaches to complex scientific questions (NASEIM, 2014). Likewise, AAAS (2013) cites the need for education to model the new paradigm in transdisciplinary problem-solving. Additionally, studies suggest students describe these types of courses as more engaging and effective (Lattuca, Voigt & Fath, 2004). In particular, information is easier to recall (Rogoff, 2003), and the organization and storage of knowledge may be enhanced (Ausubel, Novak & Hanesian, 1978). The characteristics of a transdisciplinary STEM education include: an emphasis on teamwork; bringing together non-academic experts and academic researchers from multiple disciplines; and the development and sharing of concepts, methodologies, processes, norms, and tools - all to create fresh, stimulating ideas that span boundaries.
The transdisciplinary approach teaches students to seek collaboration outside the bounds of their areas of expertise, to explore different perspectives, to express and exchange ideas, and to gain new insights. To meet this need, Colgoni and Eyles (2010) have made use of a modular, theme-based curriculum which begins with a six-week foundations module and inter-disciplinary teams of instructors within “integrated concept seminars” beginning in first year STEM studies. The emphasis is on active engagement of students in the inquiry-based process. This approach is congruent with that of the National Academies which also suggest the use of modules that can be added and removed with experience. Pilot versions of the modules could be tested during summer courses or in seminars (NASEIM, 2014). The National Academies recommend that curricula at the undergraduate level should integrate relevant physical, mathematical, computational, and engineering concepts (NASEIM, 2014). The incorporation of data gathering and analysis into foundational courses and opportunities for undergraduate research has also been cited as important for the development of future scientists and engineers (NASEIM. 2004). The key, however, is to facilitate an integrated foundation of knowledge and skills within a curriculum that balances depth and breadth. Sharp and Langer (2011) call for training that stresses deep disciplinary background, plus robust cross-disciplinary pedagogy. Suggestions for striking this balance include the National Academies citing the need for involvement of students, faculty, staff, department chairs and deans who are open to communicating across a breadth of disciplines while simultaneously facilitating depth of expertise. Klein points to the “new quadrangulation of disciplinary depth, multidisciplinary breadth, interdisciplinary integration, and transdisciplinary competencies” in the education of students (Klein, 2008a, p. 406). In her conceptualization, the tra-
ditional disciplines do not disappear but rather form the foundation for a new view that transcends traditional knowledge and prepares students for complex problem solving. Students will still need to be able to internalize and express the concepts of their traditional fields. Klein (2008a) indicates that students should develop the ability to:
• Identify information pertinent to solving a problem,
• Compare and contrast differing approaches,
• Generate an integrative framework.
Five goals for undergraduate learning in all STEM fields have been recommended by the National Academies (NASEIM, 2014):
1. Develop the intellectual capacity to address real-world, complex problems.
2. Build confidence and willingness to address problems from multiple perspectives.
3. Build the ability to communicate across disciplines.
4. Develop the ability to make decisions when faced with uncertainty.
5. Develop an understanding of the strengths and limitations of multiple perspectives.
The Academies further assert that these goals can be accomplished with a team-based, problem-solving approach in which the teams are composed of students from multiple disciplines. A problem-solving approach pushes the evolution of curricula and keeps courses fresh, a benefit for both students and faculty. Problem-solving approaches can also be an effective way to help students learn how to work in teams. An important consideration when using this type of team-based, problem-solving strategy is to form student teams that are diverse in terms of educational and personal background, to provide practice opportunities to collaborate in such environments
and because research has shown that teams that include a diverse mix of individuals may be more likely to succeed (NASEIM, 2014, p. 82). A simple, illustrative sketch of forming such diverse teams appears at the end of this subsection. The approaches described above do incorporate, in an integrated manner, several STEM disciplines in an active, problem-solving approach. However, they do not necessarily include disciplines such as social science or economics, nor do they stress public/private collaboration or engagement with civil society. Grasso and Martinelli (2007) state that those in the field of engineering must look beyond math and science in seeking solutions to entire problems, a process which involves an appreciation of complexity and inter-relatedness of systems (Leavy, 2011). "The social sciences and humanities are under-tapped resources for convergence efforts" (NASEIM, 2014, p. 14). While researchers and industry have become accustomed to working across boundaries, undergraduate courses offered by many universities have remained discipline-specific for quite some time. Unfortunately, interdisciplinary science education is an awkward fit for an academic structure built upon discipline-based departments often scattered across an institution's campus (NASEIM, 2014). Perspectives on a given problem differ depending upon the training and culture of each discipline. Even within an academic department such as engineering, the underlying assumptions, methods, and vocabulary differ for those in chemical, civil, mechanical, electrical, and computer engineering. But real-world problems do not necessarily align with traditional academic disciplines. Klein urges undergraduate students . . . to seek courses at the interfaces of traditional disciplines that address basic research problems, courses that study social problems, research experiences that span more than one traditional discipline, and opportunities to work with faculty who have expertise in both their disciplines and the interdisciplinary process. (Klein, 2008a, p. 405)
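The chapter does not prescribe a procedure for assembling the diverse teams recommended above. The following is a minimal, hypothetical sketch of one simple heuristic: a round-robin deal of students grouped by discipline so that each discipline is spread as evenly as possible across teams. All names, disciplines, and function names here are illustrative assumptions, not part of the chapter.

```python
from collections import defaultdict
from itertools import cycle

def form_diverse_teams(students, team_count):
    """Spread each discipline across teams with a round-robin deal
    (a heuristic sketch, not an optimizer)."""
    by_discipline = defaultdict(list)
    for name, discipline in students:
        by_discipline[discipline].append(name)

    teams = [[] for _ in range(team_count)]
    slots = cycle(range(team_count))
    for discipline in sorted(by_discipline):      # deterministic ordering
        for name in by_discipline[discipline]:
            teams[next(slots)].append((name, discipline))
    return teams

# Hypothetical roster of (student, discipline) pairs.
roster = [
    ("A", "mechanical engineering"), ("B", "electrical engineering"),
    ("C", "marketing"), ("D", "chemical engineering"),
    ("E", "mechanical engineering"), ("F", "marketing"),
]
for i, team in enumerate(form_diverse_teams(roster, team_count=2), start=1):
    print(f"Team {i}: {team}")
```

A round-robin deal avoids placing two students of the same discipline on a team until every team already has one, which is usually adequate for a small senior design class; a larger program might instead treat team formation as an explicit optimization problem over additional background variables.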
Modular Approaches to Transdisciplinary Learning
The National Academies recommend that components of a new curriculum be designed as modules. Such modules could integrate knowledge from several fields and facilitate transdisciplinary teaching (NASEIM, 2014). As noted earlier, Colgoni and Eyles (2010) have made use of a modular, theme-based curriculum. Increasingly, teams have become the primary unit of performance in the workplace, and collaboration skills now play a major role in career success (Katzenbach & Smith, 2003). Thus, the first step in preparing students for transdisciplinary problem-solving in the workplace is to enroll students from several perspectives in integrated courses focused on addressing real-world problems. Dr. Ertas, Professor of Mechanical Engineering at Texas Tech University, has developed a senior engineering design course based on core and supplementary modules, which is offered to students in mechanical and electrical engineering (Ertas, Maxwell, Tanik, & Rainey, 2003; Ertas, Gatchel, Rainey, & Tanik, 2007). Prior to use of the modular approach, a previous iteration of an interdisciplinary course was implemented in which faculty from differing engineering disciplines took turns teaching skills to classes composed of mechanical and electrical engineering students. The classes were not truly integrated, nor did the assignments focus on open-ended complexities. While student teams were interdisciplinary, not all the teams "jelled." Student feedback indicated that some material presented in lectures and included in tests was more representative of either electrical or mechanical engineering and therefore unfair to students from the other discipline. Class participation was lacking in some cases, especially on the part of the mechanical engineering students when the electrical engineering instructor was lecturing and vice versa. And it was possible for students
to hide in the project groups and not contribute fully to the effort. The result was that faculty who were involved with the program received six years of suboptimal student evaluations (Ertas, Frias, Tate & Back, 2015).
Solutions and Recommendations
Presently, a modular approach to undergraduate education is planned in Engineering at Texas Tech University. Students will be required to complete four Core Modules and will be allowed to choose from a variety of Supplemental Modules, based upon the particular problems and solutions they pursue. The content of the core modules is derived from engineering principles and includes information and knowledge common to multiple disciplines as well as appropriate shared concepts and methods. The modules are designed to enable students to progressively synthesize the modular information and create new transdisciplinary knowledge as they address a given research project while also incorporating domain-specific knowledge unique to their individual backgrounds. The resulting core modules consist of:
Transdisciplinarity and Complexity Management (1): This first module establishes a practical foundation for complexity management and addresses human behavior as well as societal, economic, and environmental systems. Students evaluate a system's complexity in relation to functions and qualitative factors, such as social mores and human values. The module covers a) definitions and characteristics of complexity; b) understanding complexity in thought and behavior; c) modeling of complex systems; d) tools and methods for managing complex systems; e) strategies for addressing social complexity; f) complexity and structure; g) management and integration of knowledge; h) managing complexity through systems design; and i) the process of interactive management.
Transdisciplinarity Sustainable Development (2): Transdisciplinary assessment and methodol-
ogy development for purposes of guiding research, policy and action towards sustainability is covered. Students learn broad research skills and knowledge in strategies for sustainable integration, sustainable resource use and management, environmental conflict resolution, policy formulation and decision-making. Rural and urban sustainability, ecological sustainability, and the interconnectivity of environment, economy and society will also be covered.
Transdisciplinarity Research and Discovery (3): The focus of this module is to enable students to work jointly with others across disciplines. This module covers: generic design; idea generation and management; brain-writing pool and idea structuring; tradeoff analysis methodology; collaborative activities, practice and research ethics; the transdisciplinary research process using a systems approach; the impact of social issues on design; the role of experts in transdisciplinary research processes; and transdisciplinary case studies.
Transdisciplinarity System and Product Development (4): This module teaches system and product development methods, techniques and tools so that engineers will have a big-picture view of the whole system/product lifecycle and will be able to use systematic approaches in designing and developing products and systems. Risk assessment and how to deal with uncertainties will also be covered.
Supplemental Modules are planned from which students can select information pertaining to their team project and will include units on:
• Design,
• Financial modeling and management,
• Innovation and creativity,
• Ethics,
• Statistical decisions and reliability modeling,
• Entrepreneurship,
• International studies,
• Market research,
• Social systems,
• Collaboration,
• Communication and teamwork.
In addition, students will be encouraged to seek input from experts outside the classroom and outside the university in order to gather whatever information may be needed for their projects.
Example of Project Development and Solution Approach
An example of a class project would be to design a self-sustaining rural eco-village system. The existing knowledge that would be available through modules as well as information gathered
from outside the classroom and from experts could be integrated with the students' newly generated knowledge. In Figure 2, the ATLAS Group (2012) depicts the integration of context-specific as well as generic knowledge that is brought into the problem-solution domain. Ertas (2012) postulates that students will progressively synthesize the modular content and, with integration of concepts and processes, they will launch a spiral of new knowledge. The process takes place within the task of solving a real-world, complex problem for which conventional disciplines and methods would be insufficient. Students would work in cross-functional teams, much as they would in the workplace (Parker,
Figure 2. Transdisciplinary skills and new knowledge development process. (© 2012, The ATLAS Publishing. Used with permission.).
2003). The composition of the class would consist of engineering students at Texas Tech University and would eventually expand to include chemical engineering students from Prairie View A & M University, a historically black college and university (HBCU), who will join via video conferencing. In addition, the course will also be cross-listed with a senior course in marketing from the College of Business at Texas Tech University. In this way, cultural and disciplinary diversity will be achieved. Klein (2008a) sees transdisciplinary education as a dialogue of content and process. Content designates the knowledge, principles, and methods of different disciplines as well as inter- or transdisciplinary approaches, the ability to analyze complex problems, and familiarity with problem solving strategies. Process designates knowing how to organise and participate in inter- or transdisciplinary processes and projects and knowing how to communicate across academic disciplines, and with external stakeholders. (Klein, 2008a, p. 407) In the model proposed, student transdisciplinary teams would engage in a process of "Interactive Management" – a system used in industry which is designed for management of complexity. It involves substantial communication among team members whereby the participants learn from each other (Warfield & Cardenas, 2002). In the Interactive Management process, students are assigned specific "Roles" to play, such as Planner, Facilitator, Client, Designer, Stakeholder, or Implementer. The Interactive Management process enables ideas to be grouped into clusters that can be examined internally as well as in relation to each other. Through constructive dialogue, the collective best ideas of sub-project teams emerge, and the incorrect or fuzzy ideas that teams hold at the outset are eventually recognized as incorrect or sharpened to make them useful. The process is designed to
enable teams to transcend disciplinary knowledge and create solutions, ultimately resulting in innovation and, in the case of education, improved STEM learning. In this way the transdisciplinary course mirrors the transdisciplinary process in research and practice. Use could be made of an online interactive management platform such as Lensoo, which allows team members to engage in online video conferencing and networking among both students and transdisciplinary and domain-specific experts. The Lensoo teaching and learning platform and its mobile education apps on iOS and Android are free to students in universities and colleges in the United States through the nonprofit organization, The Academy of Transdisciplinary Learning & Advanced Studies (TheATLAS). Accessibility can be expanded to international students located in other countries who are enrolled in classes collaborating with U.S. students. The Lensoo ecosystem consists of a mobile app, "Lensoo Create," integrated with a cloud-based collaborative learning infrastructure. This ecosystem is a disruptive innovation that provides an enhanced teaching and learning experience through peer-to-peer and learner-to-expert (mentor) interaction. The system provides integrated study groups, note taking, and community collaboration. In addition, it integrates self-assessments, graded tests, and issuance of certificates on course completion. The Lensoo Create app and ecosystem are currently used by over 500 professors worldwide to teach more than 300 courses in Engineering, and there are over 30,000 users in K-12, with more than 10,000 multimedia presentations created, published, and shared in STEM and other subject areas. Enabling students to dialogue within and across project teams, to discuss alternative proposals, to update their progress, and to engage external parties throughout the project is postulated as a means for improving both team projects and student learning. The process of collaboration
could itself be a focus of study as the process unfolds and an interactive platform yields data on interactions. Quantitatively, patterns of student interactions could be tracked via a process known as the Team-Based Design Structure Matrix (DSM; Tyson, 2001), which tracks information flow among team members. The following possible information flows could be captured (a minimal illustrative sketch of capturing such flows appears below):
Level of Detail: Ranges from sparse, consisting of sharing of documents and email, to rich, consisting of face-to-face interactions.
Frequency: Low to high.
Direction: One-way or two-way.
Timing: Early to late.
Use of qualitative research methods such as Discourse Analysis (Bernard & Ryan, 2010) could be applied to the processes involved in Interactive Management in order to determine whether students consider points of view beyond their own, to what degree use of power influences decisions, and to monitor evidence of teamwork, leadership and networking. Students could track this data themselves to gain insights into their collaborative processes. It is proposed that transdisciplinary integration will manifest with the application of disciplinary and transdisciplinary knowledge in the development of a research project that will eventually result in an innovation. The desired learning outcomes include the development of transdisciplinary skills through modular research projects, and learning how to collaborate with other students as well as with practitioners and domain experts from outside the classroom with whom they may interact. Student outcomes in the form of transdisciplinary skills have been depicted by the ATLAS Group (2012). (See Figure 3.)
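The chapter does not specify how a Team-Based DSM would be implemented. The following is a minimal sketch, assuming interactions are simply logged as sender/receiver/detail/timing records; all field values, role names, and function names are hypothetical, chosen only to echo the information flows listed above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Interaction:
    sender: str     # team member or role initiating the exchange
    receiver: str   # team member or role receiving it
    detail: str     # "sparse" (documents, email) or "rich" (face-to-face)
    week: int       # timing within the project (early vs. late)

def dsm_frequencies(log):
    """Tally directed sender -> receiver exchanges; each tally is one DSM cell."""
    matrix = Counter()
    for item in log:
        matrix[(item.sender, item.receiver)] += 1
    return matrix

def richness_share(log):
    """Fraction of exchanges that were 'rich' rather than sparse."""
    rich = sum(1 for item in log if item.detail == "rich")
    return rich / len(log) if log else 0.0

# Hypothetical log entries using role names from the chapter.
log = [
    Interaction("Planner", "Designer", "sparse", week=2),
    Interaction("Designer", "Planner", "rich", week=3),
    Interaction("Facilitator", "Stakeholder", "rich", week=5),
]
print(dict(dsm_frequencies(log)))
print(f"Rich interactions: {richness_share(log):.0%}")
```

Summed per sender-receiver pair, the tallies fill the cells of a DSM-style matrix, and the same log of exchanges could later feed the qualitative discourse analysis described above.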
Figure 3. Transdisciplinary skills. (© The ATLAS Publishing. Used with permission.).
It can be seen that many of the desired skills are not those typically included in assigning grades in a course. For example, non-verbal communication and thinking about one's own behavior are largely personal in nature, and it is difficult for instructors to assign student grades based on these indicators. Student assessment is further complicated by the intertwined relationship of transdisciplinary competencies. Areas for future research that could address these and other education issues are outlined in "Future Research Directions" below.
Institutional Supports
The offering of truly integrated curricula represents a cultural shift for higher education, which has traditionally been organized around discipline-based departments. While there is a significant body of research articulating the value of an interdisciplinary approach to science teaching and student learning, interdisciplinary science education still fits awkwardly into an academic structure that is layered into discipline-based departments often scattered across a campus's geography. (NASEIM, 2014, p. 81) Designing an integrated course may take several rounds of development (NASEIM, 2014). Institutional supports in the form of release time for course development, provision of support staff and graduate assistants, and revision of tenure and promotion guidelines reflective of the effort required would facilitate the expansion of these efforts across a campus. Implementation of the new paradigm will require creation of "ecosystems" or logistical partnerships that encourage interactions within and across organizations, both academic and otherwise. Such ecosystems will provide the impetus to go beyond traditional paradigms and to approach
issues from many perspectives through networks of partners and will facilitate science innovation from basic research to translational application (NASEIM, 2014). "Because convergence relies on integrating expertise from multiple fields and multiple partners, an open and inclusive culture, a common set of concepts and metrics, and a shared set of institutional and research goals are needed to support this close collaboration" (NASEIM, 2014, p. 7). The National Academies recommend "designing educational modules, hiring faculty in transdisciplinary clusters, and establishing new research institutes" (NASEIM, 2014, p. 8). In addition, the Academies identified the following essential cultural and structural elements in successful convergence ecosystems (NASEIM, 2014):
• People: Involvement of students, faculty, staff, department chairs and deans who are open to communicating across a breadth of disciplines while simultaneously facilitating depth of expertise.
• Organization: Important to have inclusive governance procedures and be open to taking risks and learning from mistakes.
• Culture: Mutual respect and willingness to share knowledge and facilitate team members becoming conversant across disciplines.
• Ecosystem: Logistical partnerships that encourage interactions among multiple partners both within and across organizations.
Other recommendations include:
• Informal gatherings for faculty interested in promoting convergence in teaching,
• Online resource for classes that incorporate convergence,
• Collaborative teaching of classes,
• Release time to develop courses,
• Provision of examples for incorporating convergence in advanced as well as introductory courses for graduates and undergraduates,
• Cross-institution collaboration,
• Modifications to tenure and promotion policies to recognize the value of transdisciplinary teaching and research.
NASEIM further recommends that institutions develop specific Memoranda of Understanding with partners that address as many contingencies as possible. While this is a time consuming process, its long-term results are well worth the effort. In some cases, scientists and educators have initiated their own supports in the form of professional networking and, in one case, the formation of a nonprofit organization, TheATLAS, which was founded as a catalyst for change in education and research with a focus on transdisciplinary approaches to solving complex scientific and societal problems. The thrust of this nonprofit lies outside the realm of traditional disciplines in science and engineering and at the nexus between disciplinary boundaries and frontiers. Services include a global information exchange through innovative publishing and an “open laboratory” for the sharing of ideas. In addition to the above practices and recommendations, it will also be necessary to identify further practices that facilitate convergence. “Continuing social science, humanities, and information-science-based studies of the complex social and intellectual processes that make for successful IDR are needed to deepen the understanding of these processes and to enhance the prospects for the creation and management of successful programs in specific fields and local institutions” (NASEIM, 2004, p. 7.) Klein (2008a) cites “cognitive flexibility” – the ability to seek solutions beyond one’s own discipline - as among the skills needed by educators in designing and leading transdisciplinary efforts in education. By modeling this orientation, educators will demon-
strate use of cognitive flexibility in overcoming cultural and disciplinary barriers to scientific discovery and the implementation of change.
FUTURE RESEARCH DIRECTIONS
Assessment/Evaluation
Methods of evaluation are not well documented in the literature on transdisciplinary research, implementation, or education. The nature of these emerging practices is complex. In education, questions arise as to the weight that must be given to the final product in relation to the collaborative and integrative processes involved in arriving at the end result. While collaboration and dialogue are essential components of transdisciplinary practice and education, the tools necessary to enhance the development of student assessment and program evaluation are also critical. Klein outlines parallels between transdisciplinary research performance and evaluation and notes that "standards must be calibrated, and tensions among different disciplinary, professional, and interdisciplinary approaches carefully managed in balancing acts that require negotiation and compromise" (Klein, 2008b, p. S116). She provides seven overarching principles for an evaluation framework: "(1) variability of goals; (2) variability of criteria and indicators; (3) leveraging of integration; (4) interaction of social and cognitive factors in collaboration; (5) management, leadership, and coaching; (6) iteration in a comprehensive and transparent system; and (7) effectiveness and impact" (Klein, 2008b, p. S116). Conventional evaluation methods are not well suited to transdisciplinary research and also fall short when considering student assessment and, more broadly, evaluation of educational efforts. Colgoni and Eyles (2010) call for the development, application and assessment of appropriate pedagogical methodologies. Likewise, the National Academies point to a need for reliable and valid
assessment and accreditation methods (NASEIM, 2004) and call for a “common set of concepts and metrics” (NASEIM, 2014, p. 7). A comprehensive approach to student assessment should take into consideration not only student and team products, but also individual student comprehension and retention, the ability to define and analyze a problem, and the ability to engage in team collaboration. Colgoni and Eyles (2010) also include the ability to communicate to professional, policy-making, and lay audiences as an essential skill. Ertas (Ertas, 2012; Ertas, et al., 2015) has proposed a multi-modal, multi-layered approach to student assessment and evaluation of the new pedagogy based on experience with successive
iterations of implementing transdisciplinarity in the classroom. Table 1 contains the operational definition for the transdisciplinary skills used in assessment in the Texas Tech University model (Ertas, 2012). Refinement of these indicators could continue with each new cohort of students. Results, particularly students' statements on questionnaires and in exit interviews, could be incorporated into continuous program improvements as well as refinement of assessment methods. In addition to focusing on student assessment, transdisciplinary educational programs should themselves be evaluated in such a way that informs continuous improvement of both pedagogical and assessment methods. Such evaluations should include a balance between (a) criteria characteristic of transdisciplinarity such as contributions to creation of an emerging discipline and whether they address real-world problems, and (b) traditional criteria, such as research excellence (NASEIM, 2004; 2014).
Table 1. Criteria for determining transdisciplinarity (indicators to measure transdisciplinarity and the degree of indication for each)

Deeper understanding of the material. Check: midterm and final exams; interactive problem solutions; to what degree and how correctly methods and fundamental concepts are used (individual and team levels).

Transdisciplinary skills. Check: social, thinking, research, communication, and self-management skills; self-efficacy; surveys of student attitudes and interest (Ertas, Kollman, & Gumus, 2011; Hollander, Loibi, & Wilts, 2004; Cai & Zhao, 2012; Chen, Gully, & Eden, 2001; Oliver, 1993).

Knowledge integration. Check: whether the content of the research outcomes reflects knowledge integration; diversity of knowledge sources; sharing from different sources; how many integrative steps were set out and how well they were carried out (Newell, 2006).

Generation of new knowledge that transcends disciplinary boundaries. Check: content of the research outcome; what kinds of existing data and information are transformed into new knowledge; knowledge assets such as intellectual capital; value of the new knowledge.

Collaboration and team processes. Check: practice of collaboration among project teams from different disciplines; interaction of social and cognitive factors in collaboration and teamwork; transdisciplinary behavioral patterns of project team members; use of external experts (Klein, 2008b; Loehr, 1991).

Innovation. Check: capture of new physical phenomena; bootstrapping of existing technologies; use of disruptive technology; patent system (Devon, 2004; Green, Gavin, & Aiman-Smith, 1995).

Creativity. Check: number of concepts generated (fluency); diversity of concepts generated (flexibility); originality of concepts generated (originality); amount of detail in concepts (elaboration).

Management, leadership, and networking. Check: how well the organizational structure fosters communication; networking among group members and project teams; joint work activities and shared decision making; leadership tasks, including cognitive, structural, and process tasks (Loehr, 1991; Zula, Yarrish, & Christensen, 2010).

Research and bibliometric indicators. Check: literature search; diversity and co-authorship of the reference publications used; content of the research outcome; possible papers resulting from the modular project; research benefit to society.
© 2012, TheATLAS Publishing. Used with permission.
include a balance between (a) criteria characteristic of transdisciplinarity, such as contributions to the creation of an emerging discipline and whether real-world problems are addressed, and (b) traditional criteria, such as research excellence (NASEIM, 2004, 2014). A process by which student assessment can be incorporated into continuous program improvement has been illustrated by the ATLAS Group (2012) and appears in Figure 4. A gap remains in the ability of conventional methods to address the complexity of transdisciplinary methods. Metrics such as documentation of communication within teams and of interactions with external parties point to the underlying collaboration involved in addressing thorny issues. Of necessity, many of the criteria and measures presented in Figure 4 and Table 1 are subjective and lend themselves to qualitative evaluation.
The increasing sophistication of qualitative research genres and the specification of their methods promise to illuminate potential means for determining how well students integrate learning of content and transdisciplinary skills within the context of solving complex real-world problems. For example, Discourse Analysis (Bernard & Ryan, 2010) could be applied to Interactive Management processes to determine whether students consider points of view beyond their own and to what degree the use of power influences decisions. Figure 4 represents a systems perspective on the intertwined processes of student assessment and program evaluation. Information derived from student assessment is used in the revision of course requirements and content, which, in turn, informs revised learning outcomes and revised methods of assessment.
Figure 4. Process of assessment/evaluation (© TheATLAS Publishing. Used with permission.)
Nevertheless, the gap remains between current means of assessment and the complexity of the projects students will be developing. Creativity on the part of faculty and openness to broader, more qualitative means of assessment will be needed as adoption of these pedagogical techniques becomes more widespread. Research on educators' methods of assessing student learning and evaluating course content and processes will help to strengthen transdisciplinary education.
Pedagogical Research

Comparison of traditional and transdisciplinary educational methods, with random assignment to transdisciplinary and control classes, would help to determine whether the two methods differ with regard to student grades and team projects. In addition, student interactions within and across teams and interactions with outside experts could be compared. Qualitative studies could focus on the role of the instructor in the two conditions and on the types of projects assigned. There is also a need for research to identify the types of projects that can be used in transdisciplinary courses. Sharing such information across traditional boundaries will strengthen the new pedagogy and contribute to its acceptance by institutional administrators, professional societies, and accreditation agencies.
Research on Student Collaboration

Consistent with the transdisciplinary approach to learning, studying, and problem solving for complex projects, STEM and non-STEM students may spend a tremendous amount of their time focusing on real-world problems. Addressing these problems will require effective opportunities for students to communicate and collaborate. Of particular importance for these students is a means for engaging with each other, since it is likely that students classified in different "majors" across the college campus may not congregate
in close proximity as would students of the same "major." Therefore, students require alternative platforms for conversation, and online means of engagement assume a strategic advantage in the functioning of student teams. Platforms that track student interactions can also serve as a basis for studying the effectiveness of those teams. The following research topics are suggested for developing critical insights about how students engage in collaborative projects on an Internet-based learning platform:

• Collaboration via an electronic platform. Previous research on collaboration among teams has focused largely on face-to-face interactions among group members; relatively little is known about how collaboration occurs among students interacting online to address a complex project. Research questions may include:
◦ Do students' team-based information flows differ as a result of an electronic platform, and if so, how?
◦ Is the centrality of information flows consistent with team leadership positions?
◦ How might information flows predict project performance?

• Measures of collaboration via an electronic platform. Unlike face-to-face communication, limited research has been done to suggest how collaboration, and the perception of collaboration, may be measured differently as a result of an online platform. Research questions may include:
◦ How is collaboration in learning measured for students utilizing electronic platforms?
◦ Are measures of collaboration consistent across different disciplines within a transdisciplinary setting?
◦ Do transdisciplinary collaborations differ from collaborations in traditional STEM courses?
Research on Cross-Disciplinary Teams

Recommendations have been made for cross-disciplinary teams that include students from various STEM disciplines as well as students from the social sciences, business, and the humanities. There is a need to determine the effect of team composition on STEM learning as well as on learning in those other disciplines. Such studies could address the following:

• The effect of including non-STEM students in a transdisciplinary course on STEM students' learning of STEM principles;
• The effect of inclusion on non-STEM students' learning of STEM principles; and
• The effect of inclusion on STEM students' learning of principles from the social sciences, business, and the humanities.
Research on Formation of Professional Partnerships

Cross-disciplinary, cross-institution, and cross-cultural partnerships result in a diversity of viewpoints, which provides fertile ground not only for scientific discovery but also for wider acceptance of the results (Baron & Kreps, 1999). The National Academies call for expanding the scope of partnerships to include those that are multinational and recommend that representatives from all of these perspectives be brought into an ecosystem of those working on educational innovation (NASEIM, 2014). The National Academies also recommend identification of evidence-based practices that facilitate convergence and the development of partnerships,
especially with small colleges and universities that serve groups which have traditionally been under-represented in STEM careers (NASEIM, 2014). Research illuminating such practices will not only nurture transdisciplinarity across subject areas, but also help to strengthen institutions individually and contribute to the greater goal of fostering transdisciplinarity more broadly.
Research on Institutional Supports

Discipline-specific cultures and institutional structures can interfere with the creation of research and educational ecosystems responsive to the promise of boundary-spanning perspectives. Because institutional supports are so important to the success of transdisciplinary research, it is vitally important that research address the factors that inhibit or facilitate its implementation and the means for effecting institutional acceptance and change. Case studies of both successful and unsuccessful models will illuminate the steps that can be taken to bring about various models of transdisciplinary STEM education. Higher education can learn from industry, which has for quite some time integrated expertise from several fields to address complex problems. Research documenting institutional practices in business and higher education can also shed light on potential solutions for STEM education. Educators can likewise capitalize on studies of student attraction to courses that integrate practices across disciplines, especially courses with social relevance.
Social Science Research on Transdisciplinary Education

The social sciences may be of help in deepening understanding of the means for facilitating the anticipated shift in educational structures and processes. Bringing these areas of study into the dialogue on transdisciplinary education may help
us understand the complex social and intellectual factors that influence acceptance of societal and institutional transitions and the successful implementation of truly transdisciplinary learning.
CONCLUSION

Growing complexity in solving scientific and societal problems will require continual adaptation by experts in their respective fields, often in ways that cannot be anticipated. Thus, educators must prepare students with the transdisciplinary experience, skills, and cognitive flexibility that will enable them to expand scientific frontiers and address complex issues as they arise. It is the responsibility of higher education to provide a foundation that will allow students to be conversant across disciplines, to analyze and synthesize information, to grasp and address complexity, to work in cross-functional teams, and to adapt to change. The future will be here whether or not academe's practices and institutions are prepared for it. The preliminary theoretical work on convergence and transdisciplinarity, coupled with pedagogical, communication, institutional, and social science research, can provide a foundation for further educational research that will help to clarify the way forward. Both quantitative and qualitative research will be needed to provide a comprehensive picture of best practices and the means for preparing students for the careers and challenges of tomorrow. Collaborative, transdisciplinary efforts by researchers from education, the STEM fields, social science, communication science, and organizational studies will be required to identify optimal means for collaborative STEM education.
REFERENCES

American Academy of Arts and Sciences. (2013). ARISE 2: Unleashing America's research & innovative enterprise. Cambridge, MA: American Academy of Arts and Sciences. Ausubel, D. O., Novak, J. D., & Hanesian, H. (1978). Educational psychology: A cognitive view (2nd ed.). Holt, Rinehart and Winston. Back, S. M. (2009). The bio-entrepreneurship MBA: Options for business schools. Journal of Commercial Biotechnology, 15(2), 183–193. doi:10.1057/jcb.2008.57 Baron, J. N., & Kreps, D. M. (1999). Strategic human resources: Framework for general managers. Wiley. Bernard, H. R., & Ryan, G. W. (2010). Analyzing qualitative data. Los Angeles: Sage. Cai, S., & Zhao, Y. (2012). A study of technical design-based instrument and multi-dimensional assessment in engineering learning. Latin American and Caribbean Journal of Engineering Education, 62, 32-42. Chen, G., Gully, S. M., & Eden, D. (2001). Validation of a new general self-efficacy scale. Organizational Research Methods, 4(1), 62–83. doi:10.1177/109442810141004 Colgoni, A., & Eyles, C. (2010). A new approach to science education for the 21st century. EDUCAUSE Review, 45(1), 10–11. Costanza, R. (1990). Escaping the overspecialization trap. In M. E. Clark & S. A. Wawrytko (Eds.), Rethinking the curriculum: Toward an integrated interdisciplinary college education (pp. 95–106). Greenwood.
Cronin, K. (2008). Transdisciplinary research (TDR) and sustainability. Environment Science and Research (ESR) Ltd. Retrieved on February 20, 2014 at: http://www.learningforsustainability. org/pubs/Transdisciplinary_Research_and_Sustainability.pdfn Crow, M. (2010, July/August). Organizing Teaching and Research to Address the Grand Challenges of Sustainable Development. Bioscience, 60(7), 488–489. doi:10.1525/bio.2010.60.7.2 Devon, R. (2004). EDSGN 497 H: Global approaches to engineering design. Retrieved on July 20. 2014 from http://web.archive.org/ web/20050801085903/http://www.cede.psu. edu/~rdevon/EDSGN497H.htm Donnelly, J. F. (2004). Humanizing science education. Science Education, 88(5), 762–784. Ertas, A. (2010). Understanding of transdiscipline and transdisciplinary process. Transdisciplinary Journal of Engineering & Science, 1(1), 55–73. Ertas, A. (2012). Integrating transdisciplinarity in undergraduate education. Transdisciplinary Journal of Engineering & Science, 3, 127–143. Ertas, A., Frias, K., Tate, D., & Back, S. M. (2015). Shifting engineering education from disciplinary to transdisciplinary practice. International Journal of Engineering Education, 31(1), 94–105. Ertas, A., Kollman, T., & Gumus, E. (2011). Transdisciplinary educational performance evaluation through survey. International Journal of Engineering Education, 27(5), 1094–1106. Ertas, A., Maxwell, T. T., Tanik, M. M., & Rainey, V. (2003). Transformation of higher education: The transdisciplinary approach in engineering. IEEE Transactions on Education, 46(1), 289–295. doi:10.1109/TE.2002.808232
Financial Post. (2009). Retrieved from http:// www.worldhunger.org/articles/08/food−crisis.htm Gehlert, S. (2012). Shaping education and training to advance transdisciplinary health research. Transdisciplinary Journal of Engineering & Science, 3, 1–10. Grasso, D. U., & Martinelli, D. (2007). Holistic engineering. The Chronicle of Higher Education, 53(28), B8. Green, S. G., Gavin, M. B., & Aiman-Smith, L. (1995). Assessing a multidimensional measure of radical technology innovation. IEEE Transactions on Engineering Management, 42(3), 203–214. doi:10.1109/17.403738 Griffin, M. (2010). How do we fix system engineering. 61st International Astronautical Congress. Prague, Czech Republic. Paper: 1IAC-10. D1.5.4 Gulowsen, J. (1972). A measure of work group autonomy. In L. Davis & J. Taylor (Eds.), Design of jobs (pp. 374–390). Harmondsworth, UK: Penguin. Hadorn, G., Biber-Klemm, S., GrossenbacherMansuy, W., Hirsch Hadorn, G., Joye, D., Pohl, C., & Zemp, E. et al. (2008). The emergence of transdisciplinarity as a form of research. In H. Hoffmann-Riem, S. Biber-Klemm, W. Grossenbacher-Mansu, D. Joye, C. Pohl, U. Wiesmann, & E. Zemp (Eds.), Handbook of Transdisciplinary Research (pp. 19–39). Springer. doi:10.1007/9781-4020-6699-3_2 Hadorn, G., Pohl, C., & Bammer, G. (2011). Problem solving through transdisciplinary research integration. Procedia: Social and Behavioral Sciences, 28, 636–639.
Hoffman-Riem, H., Biber-Klemm, S., Grossenbacher-Mansuy, W., Hirsch Hadorn, G., Joye, D., Pohl, C., & Zemp, E. et al. (2008). Idea of the handbook. In H. Hoffmann-Riem, S. BiberKlemm, W. Grossenbacher-Mansuy, D. Joye, C. Pohl, U. Wiesmann, & E. Zemp (Eds.), Handbook of transdisciplinary research (pp. 3–17). Springer. doi:10.1007/978-1-4020-6699-3_1 Katzenbach, J., & Smith, D. K. (2003). The wisdom of teams: Creating the high-performance organization. New York: Collins Business Essentials. Klein, J. T. (2004, May). Disciplinary origins and differences. Paper presented at Fenner Conference on the Environment: Understanding the population-environment debate: Bridging disciplinary divides, Canberra, Australia. Klein, J. T. (2008a). Education. In H. HoffmannRiem, S. Biber-Klemm, W. Grossenbacher-Mansuy, D. Joye, C. Pohl, U. Wiesmann, & E. Zemp (Eds.), Handbook of Transdisciplinary Research (pp. 399–410). Springer. doi:10.1007/978-14020-6699-3_26 Klein, J. T. (2008b). Evaluation of Interdisciplinary and transdisciplinary research: A literature review. Journal of Preventive Medicine, 35(2), S116–S123. doi:10.1016/j.amepre.2008.05.010 PMID:18619391 Lattuca, L. R., Voigt, L. J., & Fath, K. Q. (2004). Does interdisciplinarity promote learning? Theoretical support and researchable questions. The Review of Higher Education, 28(1), 23–48. doi:10.1353/rhe.2004.0028 Lawrence, R. J. (2004). Housing and health: From interdisciplinary principles to transdisciplinary. Futures, 36(4), 487–502. doi:10.1016/j. futures.2003.10.001 Leavy, P. (2011). Essentials of transdisciplinary research: Using problem-centered methodologies (Vol. 6). Walnut Creek: Left Coast Press.
Loehr, L. (1991). Between silence and voice: Communicating in cross-functional project teams. IEEE Transactions on Professional Communication, 34(1), 51–56. doi:10.1109/47.68428 National Academies of Science and Engineering and National Institute of Medicine (NASEIM). (2004). Facilitating interdisciplinary research. Washington, DC: National Academies Press. Retrieved on June 12, 2014 from: www.nap.edu/ catalog/11153.html National Academies of Science and Engineering and National Institute of Medicine (NASEIM). (2014). Convergence: Facilitating transdisciplinary integration of life sciences, physical sciences, engineering and beyond. Washington, DC: National Academies Press. Retrieved on February 11, 2014 from: http://www.nap.edu/ catalog.php?record_id=18722 National Research Council (NRC). (2009). A new biology for the 21st century. Washington, DC: The National Academies Press. Newell, W. H. (2006). Interdisciplinary integration by undergraduates. Issues in Integrative Studies, 24, 89–111. Nicolescu, B. (2010). Methodology of transdisciplinarity – levels of reality, logic of the included middle and complexity. Transdisciplinary Journal of Engineering & Science, 1(1), 19–38. Oliver, R. L. (1993). Cognitive, affective, and attribute bases of the satisfaction response. The Journal of Consumer Research, 20(3), 418–430. doi:10.1086/209358 Parker, G. M. (2003). Cross-functional teams. San Francisco: Jossey-Bass. Pohl, C. (2010). From transdisciplinarity to transdisciplinary research. The ATLAS Transdisciplinary- Transnational-Transcultural BiAnnual Meeting. Georgetown, TX: TheATLAS Publications.
Rittel, H. W. J., & Weber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. doi:10.1007/BF01405730 Rogoff, B. (2003). The cultural nature of human development. Oxford University Press. Root-Bernstein, R., & Root-Bernstein, M. (2011, March 16). Turning STEM into STREAM: Writing as an essential component of science education. Retrieved on July 12, 2014 from: http://www.nwp.org/cs/public/print/resource/3522 Rosenfield, P. L. (1992). The potential of transdisciplinary research for sustaining linkages between the health and social sciences. Social Science & Medicine, 35(11), 1343–1357. doi:10.1016/0277-9536(92)90038-R PMID:1462174 Sharp, P. A., & Langer, R. (2011). Promoting convergence in biomedical science. Science, 333. PMID:21798916 Shuster, D. M. (2008, August). The arts and engineering. IEEE Control Systems Magazine, 28(4), 96–98. doi:10.1109/MCS.2008.924881 Stokols, D., Harvey, R. G., Fuqua, J., & Phillips, K. (2004). In vivo studies of transdisciplinary scientific collaboration, lessons learned and implications for active living research. American Journal of Preventive Medicine, 28(2), 202–213. doi:10.1016/j.amepre.2004.10.016 PMID:15694529 TheATLAS. (2014). LENSOO: Active learning & group collaboration platform. Retrieved on July 5, 2014 at: http://www.theatlas.org/index.php?option=com_content&view=article&id=244 Transdisciplinary Skills. (2012). Retrieved on July 6, 2014 at: http://www.amersol.edu.pe/es/pyp/PYPskills.asp Tufano, P. (2014, February 24). Business schools are stuck in a self-reinforcing bubble. Business Week.
Tyson, R. B. (2001). Applying the Design Structure Matrix to system decomposition and integration problems: A review and new directions. IEEE Transactions on Engineering Management, 48(3), 292–306. doi:10.1109/17.946528 United Nations. (2014). We can end poverty: Millennium development goals and beyond 2015. Author. Von Bertalanffy, L. (1969). General system theory. George Braziller. Warfield, J. N. (1982). Interpretive structural modeling. In Group Planning and problemsolving methods in engineering (pp. 155–201). New York: Wiley. Warfield, J. N., & Cardenas, A. R. (2002). A handbook of interactive management. Palm Harbor, FL: Ajar Publishing Company. Wiesmann, U., Biber-Klemm, S., GrossenbacherMansuy, W., Hirsch Hadorn, G., Hoffmann-Riem, H., Joye, D., & Zemp, E. et al. (2008). Enhancing transdisciplinary research: a synthesis in fifteen propositions. In H. Hoffmann-Riem, S. BiberKlemm, W. Grossenbacher-Mansuy, D. Joye, C. Pohl, U. Wiesmann, & E. Zemp (Eds.), Handbook of transdisciplinary research (pp. 433–441). Springer. doi:10.1007/978-1-4020-6699-3_29 Zula, K., Yarrish, K., & Christensen, S. D. (2010). Initial assessment and validation of an instrument to measure student perceptions of leadership skills. The Journal of Leadership Studies, 4(2), 48–55. doi:10.1002/jls.20168
ADDITIONAL READING

Alexandrou, A. N., & Durgin, W. W. (1993). Interdisciplinary project approach to engineering design. Innovations in Engineering Design Education. NY: American Society of Mechanical Engineers (ASME).
Ardelt, M. (2004). Wisdom as expert knowledge system: A critical review of a contemporary operationalization of an ancient concept. Human Development, 47(5), 257–285. doi:10.1159/000079154
Eunsook, H. (2012). Engineering transdisciplinarity in university academic affairs: Challenges, Dilemmas, and Progress. Transdisciplinary Journal of Engineering & Science, 3, 58–68.
Boothby, S. (2005). Supplementing interdisciplinary studies programs with a conscious-based transdisciplinary approach to increase students’ holistic development. Retrieved on June 5, 2014 at: http://horizon.unc.edu/conferences/lc/ papers/ a5.html
Jones, J. C., Ertas, A., & Parten, M. (1995). Multidisciplinary engineering design program at Texas Tech University. The First World Conference on Integrated Design and Process Technology, IDPT, 1, 117-120.
Bowen, M. D. (2013). Technological innovation and engineering education: Beware the DaVinci requirement. International Journal of Engineering Education, 29(1), 77–84. Brown, J. S., & Duguid, P. (2000). Re-education. In J. S. Brown & P. Duguid (Eds.), The Social Life of Information (pp. 207–241). Boston: Harvard Business School Press. Committee on Prospering in the Global Economy of the 21st Century. (2007). Rising above the gathering storm: Energizing and employing America for a brighter economic future. Washington, DC: The National Academies Press. Davies, M., & Devlin, M. T. (2007). Interdisciplinary higher education: Implications for teaching and learning. Centre for the Study of Higher Education. Melbourne: Univ. of Melbourne. Ertas, A. (2000). The Academy of Transdisciplinary Education and Research (ACTER). Journal of Integrated Design and Process Sciences, 4(4), 13–19. Ertas, A., Gatchel, S., Rainey, V., & Tanik, M. M. (2007). A network approach to transdisciplinary research and education. ATLAS Publications, 3(2), 1–12. Ertas, A., & Jones, J. (1993). The Engineering Design Process. Canada: John Wiley & Sons, Inc.
Levy, F., & Murnane, R. J. (2012). The new division of labor: How computers are creating the next job market. NY: Princeton University Press. McWilliam, E., Greg Hearn, G., & Brad Haseman, B. (2008). Transdisciplinarity for creative futures: What barriers and opportunities? Innovations in Education and Teaching International, 45(3), 247–253. doi:10.1080/14703290802176097 Pitso, T. (2013). The creativity model for fostering greater synergy between engineering classroom and industrial activities for advancement of students’ creativity and innovation. International Journal of Engineering Education, 29(5), 1136–1143. Singh, M. D., & Kant, R. (2008). Knowledge management barriers: An interpretive structural modeling approach. International Journal of Management Science and Engineering Management, 3(2), 141–150. Snyder, L., Aho, A. V., Linn, M., Packer, A., Tucker, A., Ullman, J., & Van Dam, A. (1999). Being fluent with information technology. Computer Science and Telecommunications Board, National Research Council. Washington, DC: National Academy Press. Sternberg, R. J. (2003). Wisdom, intelligence, and creativity synthesized. NY: Cambridge University Press. doi:10.1017/CBO9780511509612
Tate, D., Maxwell, T. T., Ertas, A., Zhang, H.-C., Flueckiger, P., William, L., & Chandler, J. et al. (2010). Transdisciplinary approaches for teaching and assessing sustainable design. International Journal of Engineering Education, 26(2), 1–12. The Academy of Transdisciplinary Learning & Advanced Studies [TheATLAS]. (2014). Transdisciplinarity. www.The ATLAS.org. Thierry, R. (2004). Transdisciplinarity and its challenges: The case of urban studies. Futures, 36(4), 423–439. doi:10.1016/j.futures.2003.10.009 Thompson, N., Alforf, E., Liao, C., Johnson, R., & Matthews, M. (2005). Integrating undergraduate research into engineering: A communications approach to holistic education. The Journal of Engineering Education, 94(3), 297–307. doi:10.1002/j.2168-9830.2005.tb00854.x
KEY TERMS AND DEFINITIONS

Convergence: Scientific inquiry characterized by thinking beyond usual paradigms and approaching issues informed by many, integrated perspectives. The process of convergence takes place within a network of partners forming an ecosystem that facilitates basic research as well
as translational applications and the potential to benefit society.

Interdisciplinarity: A joint effort to solve a problem or develop a product in which experts from different disciplines exchange theories and methods in a process of sharing their areas of expertise. Unlike the multidisciplinary approach, where there is little cross-over from one area of expertise to another, in this process techniques are borrowed across different fields.

Multidisciplinarity: An approach whereby specialists from varying disciplines address common problems by each focusing on their respective areas of expertise, resulting in a side-by-side attempt to reach a solution or product.

Transdisciplinarity: A fusing of theories, methods, and expertise across disciplinary boundaries in which each discipline merges with the others in the formation of a whole that is greater than the sum of its parts. New disciplines may emerge, as in the case of the "New Biology," which is the result of collaborative efforts among traditional biology, the physical and chemical sciences, computational science, mathematics, and engineering. Transdisciplinarity may also include perspectives and methods from such disciplines as social science, economics, and public administration, as well as from civil society and a wide range of stakeholders.
Chapter 4
The SOAR Strategies for Online Academic Research: Helping Middle School Students Meet New Standards
Carolyn Harper Knox University of Oregon, USA
Fatima Terrazas-Arellanes University of Oregon, USA
Lynne Anderson-Inman University of Oregon, USA
Emily Deanne Walden University of Oregon, USA
Bridget Hildreth University of Oregon, USA
ABSTRACT

Students often struggle when conducting research online, an essential skill for meeting the Common Core State Standards and for success in the real world. To meet this instructional challenge, researchers at the University of Oregon's Center for Advanced Technology in Education (CATE) developed, tested, and refined nine SOAR Strategies for Online Academic Research. These strategies are aligned with well-established, research-based principles for teaching all students, with particular attention to the instructional needs of students with learning disabilities. To support effective instruction of the SOAR Strategies, researchers at CATE developed a multimedia website of instructional modules called the SOAR Toolkit. This chapter highlights the real-world importance of teaching middle school students to conduct effective online research. In addition, it describes the theoretical and historical foundations of the SOAR Strategies, instructional features of the SOAR Toolkit, and research results from classroom implementations at the middle school level.
INTRODUCTION

The importance of information technologies for twenty-first century academic research is well documented (Coiro, Knobel, Lankshear, & Leu,
2008; Eisenberg, 2008; Julien & Barker, 2009). Nearly three quarters of American college students use the Internet more than the library for research, while less than 10% use the library more than the Internet for this purpose (Jones, 2002). Yet
DOI: 10.4018/978-1-4666-9441-5.ch004
students and teachers alike report that college students lack the skills necessary to find relevant and high-quality information online (Jensen, 2004). The nationwide Common Core State Standards (CCSS) initiative was prompted by the desire to ensure that all U.S. students graduate from high school "college and career ready." That is, students should possess the skills necessary to earn a self-sustaining wage or participate in postsecondary education without remediation. The CCSS were meant to establish consistent educational standards for U.S. students by specifying what they should know and be able to do at the end of each grade. CCSS for English language arts and mathematics were released in 2010, and have been adopted by 46 states and the District of Columbia. Online research skills are embedded throughout the CCSS for English language arts, as well as the standards for literacy in history/social studies, science, and technical subjects. The 2010 introduction to the CCSS states:

The need to conduct research and to produce and consume media is embedded into every aspect of today's curriculum. In like fashion, research and media skills and understandings are embedded throughout the Standards rather than treated in a separate section. (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010, p. 4)

College and career readiness in today's digitally connected world requires that students be able to:

• Use digital tools and online resources strategically;
• Construct sound arguments and critique the reasoning of others;
• Communicate and collaborate effectively; and
• Solve problems, construct explanations, and design solutions.
To achieve these skills, students need instruction and practice in (a) using digital tools and online resources; (b) engaging in argument, reasoning, and problem solving; and (c) collaborating on authentic tasks that require academic reading, writing, and research (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010, p. 12). The expectation that students use the Internet to conduct research for papers and projects recognizes that the web has become a primary source of information in today's media-rich, information-saturated society (Levin & Arafeh, 2002). To successfully use the Internet for academic research, students must (a) ask questions they can test and obtain results they can analyze; (b) carry out efficient online searches that yield high-quality results; (c) decide whether results are credible; and (d) make connections between different sources (Kingsley & Tancock, 2013). Conducting academic research requires using the Internet as an "inquiry tool" to access digitized information (Frechette, 2002; Windschitl, 1998, 2000). Success depends on students' abilities to read and understand complex information at high levels (Alexander & Jetton, 2002; Bransford, Brown, & Cocking, 2000), which can be challenging for many students, but is especially difficult for students with disabilities that affect their capacity to read and comprehend text. The objectives of this chapter are to (a) highlight the real-world importance of instructing students across ability levels to conduct research online; (b) describe one evidence-based approach, the SOAR Strategies for Online Academic Research, for teaching middle school students foundational concepts, skills, vocabulary, and procedures necessary for their continued growth toward 21st-century post-secondary and career readiness; and (c) describe the details of a case study conducted in general education middle school classrooms, showing significant pre/post
gains across ability levels in both knowledge and performance when conducting research online. The chapter is divided into three major sections:

1. Background: This section describes the historical and theoretical foundations for this research and development effort.
2. Teaching Online Reading and Research: This section describes (a) issues related to information literacy for students at diverse ability levels; (b) the SOAR Strategies for online academic research, how they relate to the CCSS, and how they support students of diverse ability levels; and (c) the SOAR Toolkit, the SOAR multimedia instructional website, and how its features particularly support learning by students with diverse ability levels.
3. Evidence: This section shares and discusses research results from a case study implementation of the SOAR Toolkit in general education classes at one middle school in Connecticut.
BACKGROUND

Historical and Theoretical Foundations

Strategy-Based Instruction

The SOAR Strategies for Online Academic Research emerged from two independent lines of research. The first line of research was conducted by Dr. Lynne Anderson-Inman and colleagues at the University of Oregon's Center for Advanced Technology in Education (CATE), who studied the use of technology to support and improve students' reading, writing, and studying. Of special interest to this group was the use of digital tools, such as computer-based outlining and concept mapping applications that enable students to gather, organize, and synthesize information from textbooks and other classroom materials (Anderson-Inman,
1992, 1995; Anderson-Inman & Ditson, 1999; Anderson-Inman & Tenny, 1989). These digital “information organizers” allow students to record notes from their readings; arrange notes to synthesize information and build knowledge; and use arranged notes to study for tests, write papers, and create classroom presentations. Anderson-Inman and colleagues recognized that students need step-by-step strategies with explicit instruction on when and how to use information-organizing tools for reading, writing, and studying. This led to the development, testing, and refinement of a collection of evidence-based “computer-based study strategies” and accompanying materials to support classroom adoption (Anderson-Inman, Horney, Knox-Quinn, Ditson, & Ditson, 1997). In addition, they (a) developed strategy-based curriculum materials for using specific types of computer-based information organizers, (b) promoted their adoption in specific subject areas, and (c) incorporated emerging technologies such as mobile devices (AndersonInman, Richter, Frisbee, & Williams, 2007; Ditson, Kessler, Anderson-Inman, & Mafit, 2001; Kessler, Anderson-Inman, & Ditson, 1997). A parallel focus was investigation into the use of computer-based study strategies to support the reading, writing, and studying of students with learning disabilities. Students with learning disabilities represent the largest disability population in the U.S., and most experience significant difficulty with reading and writing (Bryant, Bryant, & Hammill, 2000; Gersten et al., 2008). By definition, students with learning disabilities have average or above-average intelligence and the potential to meet high academic standards. Yet these students struggle to succeed in school. Anderson-Inman and colleagues found that students with learning disabilities can successfully learn a variety of computer-based study strategies and apply them in appropriate situations to increase their academic success (Anderson-Inman, 1999; Anderson-Inman, Knox-Quinn, & Horney, 1996; Knox & Anderson-Inman, 2005). Stepwise strate-
gies provide students with explicit instruction on how to use technology tools to create digital study environments and how to use the computer as a “cognitive partner” (Jonassen, 1995) to compensate for learning differences. Successful application of the strategies helps students see themselves as efficient learners, capable of using digital technologies to overcome hurdles imposed by their difficulties with reading and writing (AndersonInman, 1999; Anderson-Inman, Knox-Quinn, & Szymanski, 1999; Anderson-Inman & Reinking, 1998). Results from this line of research indicate that computer-based strategy instruction empowers struggling students and improves academic performance. Recent research has validated the applicability of strategy-based instruction for students with learning disabilities. Cantrell, Almasi, Carter, Rintamaa, and Madden (2010) found that struggling readers benefit from strategy-based instruction. Their results showed that students who were given a learning strategies curriculum in reading outperformed students who did not receive the curriculum. Similarly, Bui, Schumaker, and Deshler (2006) reported that strategy-based instruction in writing was effective for both general education students and students with learning disabilities. Strategy-based instruction has also helped students with learning disabilities overcome daily challenges in the classroom, such as assignment completion. In a study by Ness, Sohlberg, and Albin (2011), students in a resource room who learned a classroom-based strategy for completing assignments improved assignment completion. In addition, strategy-based instruction has proved valuable in specific content areas. For example, students improved science achievement when they received mnemonic-based instructional strategies directly related to the science curriculum (Therrien, Taylor, Hosp, Kaldenberg, & Gorsh, 2011). In sum, instruction designed to help students with learning disabilities become “strategic learners”
(Schumaker & Deshler, 2006) has been effective at multiple grade levels and across multiple measures of academic success.
New Literacies

The second line of research was conducted by Dr. Donald Leu and colleagues at the University of Connecticut's New Literacies Research Lab. While studying the online reading skills of middle school students, Leu found evidence that reading online required different cognitive and technical skills than their paper-based analogs (Leu, 2000, 2002). Also, because skills for reading online and for reading traditional print materials are not isomorphic, the researchers found that students with high comprehension of traditional text were not necessarily skilled in reading online. The opposite was also true: A significant number of low-achieving offline readers were high-achieving online readers. Leu and colleagues found that, compared to reading print, reading online tended to be more goal-oriented and more focused on solving a specific problem. Reading online also typically occurred in the context of searching for information, often while conducting research for a school assignment (Leu, McVerry, O'Byrne, & Zawilinski, 2011). Online reading materials were clearly different, too; text tended to be shorter in length, and web pages included features that interfered with searching for information, distracting students from reading. These findings suggest that educators cannot assume that skills for reading and learning in one environment (such as printed text) will automatically transfer to, or be sufficient for, reading and learning in other environments (such as informational websites on the Internet). A primary implication of these studies is that online reading and researching must be taught explicitly, using a different approach and different materials than for print-based materials
(Leu, Kinzer, Corio, & Cammack, 2004; Leu, McVerry, O’Byrne, & Zawilinski, 2011). Leu and colleagues identified five sets of literacy skills as part of the Teaching Internet Comprehension to Adolescents project: (a) identifying the question or problem, (b) locating appropriate information through searching and reading search results, (c) evaluating the accuracy of the information found, (d) reading and synthesizing information from multiple sources, and (e) communicating a response to the question or problem they formulated (Leu & Reinking, 2005; Leu et al., 2007). These “new literacies” emerged from their observations of, and interviews with, middle school students as they navigated the Internet, read informational websites, and completed assigned academic tasks. Research at the New Literacies Research Lab highlighted the need for new instructional approaches to teach students skills and strategies to conduct academic research on the Internet. Research conducted by Anderson-Inman and colleagues suggested that strategy-based instruction could be an effective approach for addressing this need, especially for students struggling in school because of poor reading skills. These two lines of research were merged to develop a strategy-based curriculum for teaching middle school students how to conduct research online.
Project SOAR: Strategies for Online Academic Research

In 2009, with funding from the U.S. Department of Education's Office of Special Education Programs (OSEP), Drs. Lynne Anderson-Inman and Carolyn Knox launched an initiative that combined these two lines of research to develop and evaluate a curriculum teaching specific stepwise strategies designed to help all students, including students with learning disabilities, to use the computer as a cognitive partner for conducting research online. By applying principles of strategy-based instruction, Project SOAR: Strategies for Online Academic Research addressed the need for a cur-
riculum that teachers could use to teach online research in fundamentally different ways from research using only print resources (Leu, Kinzer, Corio, & Cammack, 2004). The goal of Project SOAR was to improve academic outcomes for all students by providing instruction and practice in strategies that guide them through the process of (a) locating and identifying credible information online, (b) gathering and recording that information digitally, and (c) organizing the information for use in classroom assignments and sharing with others (for example, in a paper, report, or presentation). This work led to the development of nine SOAR Strategies for Online Academic Research, designed for all middle school students but with a special emphasis on the instructional needs of students with learning disabilities. The project also created and tested an accompanying website with video-based materials, called the SOAR Toolkit, for teachers to use when teaching the strategies to middle school students in both inclusive and special education settings. Each strategy targets a step in the process of (a) developing, testing, and refining search questions; (b) evaluating search results and websites; (c) finding and reading relevant information on websites; and (d) collecting and organizing information in ways that support accomplishing assignments that meet teacher expectations. In 2010, Drs. Anderson-Inman and Knox received funding from the U.S. Department of Education’s Institute of Education Sciences (IES) to refine the SOAR Strategies and conduct classroom implementations with teachers who were not involved in the development process. The goals were to (a) investigate the feasibility of using the SOAR Toolkit by middle school teachers of inclusive general education classrooms; (b) evaluate whether instruction using the SOAR Toolkit showed promise of impacting middle school students’ abilities to search for, find, evaluate, read, and utilize appropriate information when reading and researching online; and (c) determine
the extent to which students who had successfully learned to implement the SOAR Strategies retained their skills over time. This led to three years of additional refinement of the SOAR Strategies and pilot testing of the entire SOAR Toolkit with middle school students in Connecticut. The results of the Connecticut case study are reported later in this chapter.
TEACHING ONLINE READING AND RESEARCH

Issues, Controversies, Problems

Most teens and young adults (93%) go online at least once a day and often several times a day (Pew Research Center, 2009), but very little of that time is for academic purposes. For example, Rideout, Foehr, and Roberts (2010) found that U.S. students between the ages of 8 and 18 spend an average of 7 hours and 38 minutes daily engaged with technology that provides entertainment, much of it online. The data suggest that, despite a high level of familiarity with the Internet, students at all levels of schooling are seldom adept at online research for academic purposes (Belland, 2010; Currie, Devlin, Emde, & Graves, 2010; Holman, 2011; Mullenburg & Berge, 2007). Even in college, students often prefer to obtain information quickly rather than search for credible resources (Biddix, Chung, & Park, 2011). In a 2012 survey by the Pew Research Center (Zickuhr & Smith, 2012), 75% of teachers reported that technological advances had a positive impact on education, but that students did a fair to poor job of conducting online research. Identified student deficits involved not typing effective search queries (38%), not realizing that some websites and their content were biased (71%), not determining accuracy of content (61%), and not finding more than one source to support an argument (59%). These
findings were not surprising, given that students are seldom taught the skills necessary to conduct online research with accuracy and efficiency. For struggling students, such as those with learning disabilities, conducting online research can be doubly difficult because of poor skills in decoding and vocabulary comprehension, as well as problems with language and reading comprehension—all of which affects their ability to read at grade level (Spencer, Quinn, & Wagner, 2014). In addition, students with learning disabilities often do not know, and are rarely taught, how to manage the types of Internet searches necessary for academic assignments (Roberts, Crittenden, & Crittenden, 2011).
Solutions and Recommendations

Project SOAR focused on creating a strategy-based curriculum for teaching general education middle school students, and especially students with learning disabilities, how to conduct research using the Internet. During implementation, nine SOAR Strategies were developed and tested to teach students how to efficiently and effectively search for online sources, gather information, and organize information. To support implementation of the SOAR Strategy curriculum, project staff developed an instructional website called the SOAR Toolkit. The following sections describe each of the nine SOAR Strategies in more detail, followed by a thorough description of the SOAR Toolkit.
The SOAR Strategies for Online Academic Research

Detailed below are the steps, concepts, procedures, and technology skills involved in each of the nine SOAR Strategies. The strategies are divided into three categories, and each category is linked to one or more CCSS in English language arts.
Strategies for Finding and Selecting Sources

The first four SOAR Strategies focus on locating appropriate and relevant information on the Internet. This task is aligned with one of the CCSS in English language arts for middle school students:

CCSS.ELA-Literacy.W/HST.6-8.8 Gather relevant information from multiple print and digital sources; assess the credibility of each source; and quote or paraphrase the data and conclusions of others while avoiding plagiarism and providing basic bibliographic information for sources.

SOAR Strategy 1: Starting a Web Search. This strategy teaches students how to create good search questions that will guide them to appropriate and relevant information on the Internet. The steps to Strategy 1 are illustrated in Figure 1. In Step 1, students open a "digital notebook" and brainstorm questions, phrases, and vocabulary words about their research topic. This process grounds the student, prompting reflection on what he or she knows about the topic before accessing the Internet. A digital notebook is the electronic equivalent of a paper notebook. Students use their digital notebooks throughout the research process. In this strategy, students learn to construct "Google ready" search questions. The term Google ready refers to a question that (a) starts with a questioning word, (b) has been checked for spelling and grammar, and (c) includes the most specific words the student knows about the topic. In the SOAR Toolkit, students watch an instructional video illustrating each step. In the process of implementing these strategies, students also require a variety of technical and procedural skills. Short "red-word" videos help students learn efficient ways to use the technology to accomplish each strategy, which is important for all students and especially for those students with learning disabilities who have literacy challenges.
SOAR Strategy 2: Improving a Web Search. The second strategy teaches students to refine and test a new question when search results seem poorly matched to their research topic. The steps to Strategy 2 are illustrated in Figure 2. Students learn to look through their "results list" (more than just the first few listings) to identify which results may be useful and relevant. Students learn where to look in their results list to understand which results are commercial sites and which are more likely to yield the desired information. In this strategy, students find new, more topic-specific vocabulary in their results list and "collect" these terms by copying and pasting them into their digital notebooks. By looking carefully at the vocabulary used in a results list, students understand their topic more thoroughly and learn to identify better descriptors for searching. SOAR Strategy 3: Choosing Good Sites to Open. The third strategy teaches students how to select websites from their results list for further investigation. Videos in the SOAR Toolkit prompt students to (a) look at URLs in the results list to find names of people and institutions they recognize and trust; (b) identify non-commercial websites by looking at domain names; and (c) open at least three websites they believe will have appropriate, relevant, and trustworthy information. As shown in Figure 3, one very important red-word video in this strategy is the Wikipedia video, which discusses how to use the contents of a Wikipedia site. Rather than telling students not to use Wikipedia, this video describes how to use Wikipedia entries to set the context for deeper learning and for refining search questions. Students learn (a) to collect new vocabulary from a Wikipedia entry; (b) to use the Wikipedia entry's Table of Contents to understand the structure of knowledge (topics and subtopics) related to their search topic; and (c) to look for external links that might take them to more academically appropriate websites.
Figure 1. SOAR Strategy 1: Starting a Web search
Figure 2. SOAR Strategy 2: Improving a Web search
Figure 3. SOAR Strategy 3: Choosing good sites to open
SOAR Strategy 4: Weighing a Website. The fourth strategy teaches students how to evaluate a website they have opened. As illustrated in Figure 4, the strategy introduces students to important page elements that help in evaluating a website. Students are taught practical techniques for navigating a website to find answers to their search questions, and they are prompted to reflect on whether the site's reading level is appropriate for their understanding.

Strategies for Reading and Recording Information

SOAR Strategies 5–7 focus on finding information in a website, reading for understanding, and "clipping" selected details into a digital notebook. These tasks are aligned with three of the CCSS in English language arts for middle school students:

CCSS.ELA-Literacy.RH.6-8.2 Determine the central ideas or information of a primary or secondary source; provide an accurate summary of the source distinct from prior knowledge or opinions.

CCSS.ELA-Literacy.RI.6.4 Determine the meaning of words and phrases as they are used in a text, including figurative, connotative, and technical meanings.

CCSS.ELA-Literacy.RW.6.9 Draw evidence from literary or informational texts to support analysis, reflection, and research.

SOAR Strategy 5: Finding Information in a Website. Strategy 5 teaches students how to find information embedded in a website quickly and accurately. After students determine that a website is appropriate and trustworthy, they must find the place in the website that contains the specific information they need. Students start by reviewing questions in their digital notebook to help them focus on the topics about which they are seeking
information. The most important red-word video in this strategy gives students in-depth tips for using the Find command to look for specific information within a text-heavy website. SOAR Strategy 6: Reading to Learn. With the sixth strategy, students are taught to use text-to-speech to enhance their comprehension of unknown words and phrases by having them spoken out loud. Students are also taught to reflect on their reading and check for understanding by asking themselves questions. SOAR Strategy 7: Recording Notes. The seventh strategy teaches students how to "clip" information from a website and record it in their digital notebooks in a way that cites its original source. Students use this strategy when they decide that what they've read is important to save for later. Students copy and paste "clippings" from the website's text, and "tag" the clippings so they can always find or cite the original source. Tagging is accomplished by adding the same capital letter to both the URL of the website and the clippings taken from that website. In addition, students are taught to write a brief phrase under each clipping, describing why that clipping is important or how they plan to use it later.

Strategies for Organizing and Using Information

The last two SOAR Strategies relate to organizing the information gathered from online sources for use in academic assignments. This task is aligned with two of the CCSS in English language arts for middle school students:

CCSS.ELA-Literacy.RH.6.8.8.2a Introduce a topic clearly, previewing what is to follow; organize ideas, concepts, and information into broader categories as appropriate to achieving purpose; include formatting (e.g., headings), graphics (e.g., charts, tables), and multimedia when useful to aiding comprehension.
Figure 4. SOAR Strategy 4: Weighing a Website
CCSS.ELA-Literacy.RH.6-8.1 Cite specific textual evidence to support analysis of primary and secondary sources.

SOAR Strategy 8: Creating Categories. The eighth strategy teaches students how to categorize information in their digital notebooks based on the meaning of the text they collected. Prior to implementing Strategy 8, information in students' digital notebooks has been organized by URL rather than by meaningful categories. Strategy 8 starts students on the process of creating a digital outline they will use to organize their information in relation to its significance rather than its source. With this strategy, students review the information in their digital notebooks and consider how to categorize what they have chosen to save. This process is aided by the work students did in Strategy 7, which instructed them to record why each clipping was important. Now, students can use those reasons to help organize clippings into meaningful categories in their digital outlines. During this process, students may realize they need more information in some categories or that they should discard some clippings as irrelevant.

SOAR Strategy 9: Combining Notes in an Outline. The ninth strategy helps students reorganize information into appropriate subtopics or categories. After using this strategy, students have digital outlines with clippings organized under meaningful headings. All clippings are tagged with capital letters that match tags in the reference list. Once the information has been reorganized into an outline, students can use it to complete assignments such as writing a paper, preparing a presentation, or studying for a test.
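The tagging and outlining scheme described in Strategies 7–9 can be summarized with a short sketch. The Python code below is not part of the SOAR Toolkit; it is only a hypothetical illustration (all URLs, clippings, reasons, and headings are invented) of how clippings that carry a capital-letter source tag and a brief "why it matters" note can be regrouped under meaningful headings while remaining traceable to their sources.

```python
# Hypothetical illustration of the SOAR tagging scheme (not part of the SOAR Toolkit):
# each source URL gets a capital-letter tag, each clipping carries that tag plus a
# short note on why it matters, and clippings are later regrouped by meaning.
sources = {
    "A": "https://example.org/ocean-plastics-overview",   # invented URLs
    "B": "https://example.org/recycling-statistics",
}

clippings = [
    {"tag": "A", "text": "Millions of tons of plastic enter the ocean each year.",
     "reason": "Shows the scale of the problem"},
    {"tag": "B", "text": "Only a small share of plastic is ever recycled.",
     "reason": "Supports the argument for better recycling"},
]

# Strategies 8 and 9: a student groups clippings under outline headings chosen for
# their meaning, rather than leaving them grouped by source URL.
outline = {
    "Scale of the problem": [clippings[0]],
    "Possible solutions": [clippings[1]],
}

for heading, items in outline.items():
    print(heading)
    for item in items:
        print(f"  - {item['text']} [{item['tag']}: {sources[item['tag']]}]")
```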
The SOAR Toolkit

The SOAR Toolkit contains nine instructional modules (nine web pages), one for each SOAR Strategy. The modules form a cohesive curriculum for learning how to conduct online academic
research, with each module building on the skills and strategies learned in previous modules. All modules have four major components: (a) instructional videos designed to teach the SOAR Strategies one step at a time; (b) practice exercises that quiz students about what they have learned; (c) short red-word videos for implementing a strategy efficiently; and (d) Try It! assignments, designed to help students apply what they have learned in each strategy, using one realistic and motivating research topic.

SOAR Strategy Videos

Project staff created 34 SOAR Strategy Videos in which student and teacher avatars use words, supportive graphics, and simultaneous screencasts to walk students through each step in each strategy. At the beginning of each strategy video, the teacher avatar describes when to use the strategy and briefly introduces the most important concepts, skills, and vocabulary. Following that introduction, students watch videos that teach each step in depth. The teacher avatar starts each video with a short lecture and screencast demonstration, and then a student avatar models the application of that step within a personal research project using the Internet. The student avatar thinks out loud authentically while working through the step, occasionally making mistakes and then correcting them. The student avatar also demonstrates what is being done onscreen through simultaneous screencasting. Sometimes the student avatar is surprised by online discoveries. When this happens, the avatar thinks out loud and models how to solve the problem. For example, in one video the student avatar sees that the search question is quite weak; it does not lead to the expected rich search results. The student avatar then uses what was taught in that step, reviewing the results list and identifying better, more specific terms. In that way the student avatar refines and improves the search question to generate more relevant results.
Figure 5. SOAR Strategy 9: Combining notes in an outline
Practice Exercises

After every step of each strategy, students answer three to five questions in a "Practice Exercise." This helps students think about what they have seen in the videos. Students receive immediate feedback as well as an explanation about why their response was accurate or inaccurate. Practice questions are intended to help cement into long-term memory what students learn while watching the videos (Willis, 2007). In addition to simple reinforcement, the format of student feedback is intended to support a "growth mindset" (Dweck, 2007), encouraging students to "grow their abilities" in this content area. In this way, the SOAR modules focus students on key concepts and skills, thus ensuring that concepts are understood before students move on to apply their new skills.

Figure 6. Practice exercise for SOAR Strategy 7

Red-Word Videos

Some strategies require that students know how to use certain features on the computer. There are
13 red-word videos designed to teach the technical skills associated with implementing the SOAR Strategies. These videos were created in response to early implementation results showing that most middle school students (and some high school students) were not aware of keyboard shortcuts even for commonplace commands like Copy and Paste or Find.

Try It! Application Assignments

To increase interactivity and extend the learning experience, students are given a hands-on research task after learning each strategy. Upon completing the videos and practice exercises for a strategy, students press a "Try It!" button at the bottom of the page. This opens instructions for a guided application task in which students use the strategy they just learned. To accomplish this research assignment, students access the Internet and experience the unpredictability of working on an authentic task. When they finish the assignment, students submit the results of their Try It! via an online forum built into the SOAR Toolkit. In schools where teachers and students use Google Docs for assignments, students can save their submissions in a Google Doc shared with their teacher.

The SOAR Toolkit was created for use in a variety of implementation models: (a) by students working individually at their own pace; (b) as homework assignments to prepare for in-class applications; and (c) to support whole-class demonstrations followed by independent practice. Teachers who use the SOAR Toolkit integrate the materials into their curriculum to fit their students' needs and the expectations of their discipline. For example, some teachers may choose to start a class by watching and discussing one of the red-word videos as a classroom warm-up before students work individually on SOAR Toolkit modules. Other teachers may assign students to watch a specific SOAR Strategy video as homework, and then help students apply that strategy when doing their own research during class. Special education teachers have found it helpful to work one-on-one with students as they progress through the strategies. Teachers with more advanced classes may allow students to go through the SOAR Toolkit independently.
Supporting Students with Learning Disabilities

One of the original goals for the SOAR Strategies and SOAR Toolkit was to make instructional modules universally accessible to all students in a general education classroom, including students with learning disabilities. To make the materials helpful for students with learning disabilities, project staff incorporated the following six features.
Strategic

Many students with learning disabilities are nonstrategic in their approach to reading, writing,
and studying (Anderson-Inman, 1999; Anderson-Inman, Knox-Quinn, & Szymanski, 1999). Students may find it difficult to identify or adopt a sequence of activities that enables them to meet teachers' academic expectations. Some students with learning disabilities may implement what they believe are appropriate strategies for accomplishing assigned tasks, but view any lack of success as a personal failure, caused by their own limitations, rather than as the fault of the strategy they chose. Academic research using the Internet requires reading and synthesizing information from multiple sources. When students are assigned to learn about a complex topic by bringing together information from multiple websites, they are being asked to locate credible information, analyze and synthesize the information, and use the information to construct knowledge. To do this, they must form hypotheses and test their hypotheses with the data they find online. Such higher-order learning skills can be facilitated by sophisticated and efficient use of the computer, digital tools, and online resources. The SOAR Strategies help students with learning disabilities be strategic when conducting their research by teaching them a systematic, step-by-step approach to this complicated set of tasks.
Digital

The SOAR environment for learning and working is entirely digital. Digital learning environments have a variety of advantages for students with learning disabilities: they are flexible, forgiving, and expansive (Anderson-Inman & Reinking, 1998). The flexibility of digital learning environments allows students to tackle problems and record information in a way that suits their own learning styles. The ability to cut and paste words and images from digital sources enables students to harvest what they find online, then save and organize it in a format that is meaningful to them. In addition, a digital environment is forgiving; errors can be corrected easily and without a trace, and
spell checkers can protect students from searching the Internet with misspelled variants of their topic's key words. Students have the flexibility to reorganize and move text around. A digital environment is also expansive, allowing for insertion of new information near related information recorded earlier. Online work environments can also be personalized to match students' reading and research needs (Anderson-Inman, 2009b). Choices made while working online can be saved for later. For example, students applying the SOAR Strategies learn to open new websites in tabs instead of opening (and possibly becoming confused by) multiple windows. If students have to shut down their computers in the middle of online reading and research, they can use the Bookmark All Tabs command. This saves the open tabs, allowing them to recover the same websites by selecting the Open All in Tabs command when they return to work. In this and other ways, the SOAR Strategies teach students to use standard features of an online environment to compensate for difficulties they may have in managing their learning.
Visual

Instructional modules in the SOAR Toolkit are highly visual, illustrating both the strategies and the steps in each strategy with simple videos using computer-generated avatars and voice-over narration. Besides being highly interesting to students, the videos deliver instruction without distracting the learner from the message. Research suggests that students with learning disabilities learn better from simple animation, text, and narration than from live video or complex animation with flashy, distracting movements and narration with distracting sound effects (Fassbender, Richards, Bilgin, Thompson, & Heiden, 2012; Kennedy & Deshler, 2010; Rizzo et al., 2000; Mayer, 2009; Moreno & Mayer, 2007). Students with learning disabilities tend to benefit from multimedia segments that are short (less than a few minutes) and that contain manageable, memorable lesson material. The SOAR videos are controlled by the learner and can be stopped at any point and re-watched. As illustrated in Figure 7, the technique of using avatars of different genders and ethnicities for different instructional purposes (for example, lecturer, student, peer mentor) provides additional visual appeal and instructional support.

Figure 7. Avatars used in the SOAR strategy videos
Supportive

Students with learning disabilities often struggle when reading text in traditional formats for a variety of reasons: heavy vocabulary load, distracting
images, complex sentence structure, and a high level of abstraction, to name just a few. Research suggests that many of these difficulties can be ameliorated by presenting content to students in alternate formats or embedding supports into digital versions of the text (Anderson-Inman, 2009a; Anderson-Inman & Horney, 2007). When developing the SOAR Toolkit, care was taken to ensure that the materials were maximally supportive for students with reading difficulties, including heavy reliance on narrated instructional video. When students were required to read text (for example, during the Practice Exercises), they could choose to have questions read out loud using the text-to-speech feature. Other design decisions in the SOAR Toolkit address the needs of students with disabilities who have difficulty finding information on a page or can be distracted by unnecessary features. The SOAR web pages were designed to minimize distractions and maximize efficiency. In line with research on effective user interfaces for multimedia learning (Mayer, 2008), the SOAR web pages eliminate the need for students to scroll, or to look back and forth between text placed on one page and related graphics on another. Extraneous words, pictures, and sounds were excluded, and cues were added to highlight the organization of the essential material (Leacock & Nesbit, 2007; Mayer, 2008, 2010, 2011; Nesbit, Li, & Leacock, 2006). All SOAR web pages have a consistent, clean appearance. The menu bar is situated in the same place on every page, providing unambiguous navigation throughout the SOAR Toolkit. As illustrated in Figure 8, icons are located in a logical order to deliver videos and Practice Exercises in the appropriate sequence. The video window appears next to the text and icons, and the video player can be enlarged to fill the screen.
Interactive

Students with learning disabilities are most successful with learning in general, and online
learning in particular, when they are expected to apply what they have been taught in non-routine situations (Boyle & Weishaar, 1997; Conley & McGaughy, 2012). The SOAR Toolkit includes explicit expectations for students to interact with the content independently and apply their knowledge in real-world contexts. The self-paced nature of the instructional modules requires students to make their own decisions about how to move through the activities, when to replay videos, under what circumstances to use features like text-to-speech, and when to skip, apply, or repeat steps. Most helpful for ensuring that students apply what they have learned is the Try It! assignment at the end of each strategy. Students can watch red-word technical tip videos or read hover-over etext supports to remind them of technical skills they had learned earlier. Each of the nine Try It! assignments leads students to use a strategy to accomplish one more phase in the overall SOAR research assignment. After completing each Try It! assignment, students submit their results by posting to a forum built into the SOAR Toolkit, thus facilitating collaboration between students and teachers. At the end of the ninth Try It!, students have a digital outline of clippings taken from the Internet, organized by topic and matched to references.
Authentic

Research has documented that students with learning disabilities improve their understanding of content material when instruction is designed around authentic tasks and they are given opportunities within those tasks to develop and apply new cognitive strategies (Anderson-Inman, Knox-Quinn, & Szymanski, 1999; Morocco, 2001). Not only are authentic academic tasks more engaging to students, the skills learned are also more easily transferable to future real-world activities. The SOAR Toolkit facilitates this transfer by having students practice the nine SOAR Strategies while researching one topic on the Internet.
Figure 8. SOAR Toolkit web page for SOAR Strategy 1
In the early development stages of the SOAR Toolkit, each Try It! assignment was presented in the context of a different research issue or topic. However, classroom observations showed the need to create one consistent research topic for students to use across all strategies. Students benefit from seeing an overarching goal, with all strategies applied in one continuous process rather than as distinct and separate activities (Quintana, Zhang, & Krajcik, 2005).
To feel authentic, the tasks students engage in must be as realistic as possible (Larmer & Mergendoller, 2010). Instruction that teaches students efficient online research skills requires the inclusion of features they would be expected to encounter in the real world. SOAR instruction assigns students to use the Internet to develop educated positions about complex real-world issues and create a substantial outline of evidence to support their position.
EVIDENCE
Intervention
In 2012, a case study was undertaken to (a) investigate the feasibility of using the SOAR Toolkit with middle school students in inclusive general education classes; (b) evaluate the extent to which the SOAR Toolkit shows promise of impact on middle school students’ abilities to search for, find, evaluate, read, and utilize appropriate information when conducting research online; and (c) determine the extent to which students retain these skills over time.
Using a one-group, pretest-posttest design, we investigated the effects of the SOAR Toolkit on student ability to conduct research online as measured by the Internet and Computer Skills Assessment (ICSA) and a Performance-Based Assessment (PBA) developed by project staff. The independent variable was the use of the SOAR Toolkit for approximately 5 weeks.
Setting and Participants

With support from the University of Connecticut's New Literacies Research Lab, a language arts teacher was recruited from a school district in a Connecticut town of less than 2,000 residents. This school serves pre-kindergarten through eighth grade with an enrollment of about 225 students. Students with disabilities account for almost 10% of the school's population. About 91% of students with disabilities spend most (more than 75%) of their time in general education classrooms. The total sample consisted of 84 students, 12 of whom were students with a learning disability, identified by the teacher as being average or above-average in intelligence, having an individualized educational program (IEP), and requiring academic intervention. Eighteen students were sixth graders, 44 were seventh graders, and 22 were eighth graders (see Table 1).
Table 1. Participants by grade and gender

Teacher   Grade   N    Female   Male   SWLD   General Ed
A         6       18   8        10     3      15
A         7       44   18       25     7      37
A         8       22   15       7      2      20
Total             84   41       42     12     72

Note. Table 1 includes all intervention participants. Not all participants completed both the pre- and posttests measuring dependent variables.

Teacher and Research Assistant Training

Two graduate students from the University of Connecticut, who had not been involved in developing the SOAR Toolkit, served as research assistants for the study. During the previous year, the teacher had learned the basics of the SOAR Strategies and watched other students use the SOAR Toolkit in an independent and self-paced fashion. This experience was helpful because it gave the teacher basic information about the strategies, the materials, and how students use the strategies for research. Oregon project staff met with the teacher and research assistants face to face for about three hours to introduce the project and discuss roles. Subsequent training comprised 10 hours of video teleconferencing and took place over the summer prior to implementation. The teacher and research assistants trained together so they could support each other in their classroom roles. The teacher was taught how to take an active teaching
role during the implementation and to create an observation log for each class, describing student questions and issues as well as interesting anecdotes. Project staff prepared teaching resources for the teacher, and these were discussed during trainings. The research assistants were taught how to collect data once a week in each class. It was necessary for them to understand the concepts, technical skills, and procedures students were learning in order to produce useful and knowledgeable observations of student behaviors. Observation forms were prepared and observation details agreed upon. To avoid singling out one student or one type of student for observation, the research assistants were instructed to focus on the whole class with particular attention to a small group of students in each class: two struggling students, two average students, and one advanced student. The teacher identified these students in each class.
Implementation

Each student was given an Internet-ready laptop that could access the school's wireless network. SOAR Strategies were taught during students' language arts classes four days a week for one hour per day. Students were allowed to pick up their computers, open them, log onto the SOAR Toolkit, and start working before class began. Students' work was mostly self-paced, with the teacher monitoring progress and assisting as needed. Occasionally the teacher would discuss issues and concepts in a whole-class format, often asking a student to project his or her work so the whole class could see it and discuss issues about the strategy involved. Sometimes the teacher would play and discuss a red-word video with the whole class to be sure everyone understood the concepts or technical skills. Because students worked independently using earphones, the classroom atmosphere was mostly silent. However, collaboration and peer-to-peer
sharing were encouraged. It was common to hear students quietly helping each other remember keyboard shortcuts, showing each other websites, or talking about what they had found. The teacher often answered questions from her desk, and one day per week a research assistant walked around writing observation notes and answering questions. The implementation lasted five weeks (including days when students could not work on the strategies because of fire drills, emergency practice procedures, special assemblies, and snow days). Some students finished more quickly than others, and the teacher gave them follow-up assignments so they knew what to do next when they finished. Early finishers did not disrupt the flow of the class.
Data Collection

Measures of Fidelity and Feasibility

Digital recordings. Screen-capture software, called iShowU, ran in the background of student computers, documenting their work with the SOAR Strategies. The iShowU recordings contain video of each student's computer screen, including all cursor movements, as well as the ambient sound during the time the student was working. These videos were coded using a project-created instrument.

Classroom observation logs. Observation logs were created by the research assistants during class periods when the SOAR Toolkit was used. These observations were emailed to Oregon staff on the same day they were collected, enabling project staff to respond or answer questions within one day. Communications were collated by date and organized by topic. They were also coded using standard qualitative coding procedures.

Interim, follow-up, and reflection correspondence. Oregon project staff, the Connecticut teacher, and the research assistants corresponded
by email when questions or issues arose. Sometimes Oregon staff had questions about what the observers had written, and follow-up correspondence provided a way to clarify what had happened. The project collected and collated sets of this correspondence, coding it using standard qualitative coding procedures. The Oregon staff asked the research assistants sets of focused interim and post-implementation questions that considered general as well as specific examples of student learning, retention, and confidence characteristics. Reflection notes and additional correspondence between research assistant observers, the classroom teacher, and the Oregon project staff over the following summer documented continuing discussion around questions emerging from data analysis, which took place during those months.

Teacher logs. The participating middle school teacher kept digital logs of issues, questions, anecdotes, and other helpful information. These logs were shared with project staff after the end of implementation.

Teacher survey. Project staff designed and administered a 12-item survey to the teacher. The purpose of the survey was to understand the outcomes, benefits, and barriers encountered in delivering the intervention. The survey included Likert-type scale items and prompts for unstructured responses.

Student exit interviews. At the end of the implementation, project staff asked students questions in a whole classroom discussion format. In addition, three students at differing ability levels in each class were interviewed individually. These interviews were coded using standard qualitative coding procedures.

Student documents. Performance-based pre/post tests, digital notebooks, and outlines were collected in MS Word format and analyzed in Oregon using standard qualitative coding procedures.
Dependent Measures: Impact of Intervention on Student Learning

Internet and Computer Skills Assessment (ICSA). In pre/post assessments, students were asked 24 multiple-choice questions about different aspects of their computer skills, including knowledge of keyboard shortcuts, online research skills, website reliability, gathering and outlining information from the Internet, and other knowledge needed for online research taught by the SOAR Strategies. This assessment was created by project staff and tested in prior classroom implementations, yielding a Cronbach's alpha of .70, indicating reasonable reliability of the instrument.

Performance-Based Assessment (PBA). Performance-based pre/post tests were administered to assess student application of the SOAR Strategies. In this assessment, students were presented with motivating, real-world scenarios describing opposing sides of an issue. The students were assigned to conduct online research to look for, evaluate, and organize relevant and reliable information in a way that would allow them to support their position on the issue. Two different research scenarios were presented to students to limit practice effects with the content, which could affect results. Each student was randomly assigned to one scenario as a pretest. Students who completed Scenario A as a pretest completed Scenario B as a posttest; students who completed Scenario B as a pretest completed Scenario A as a posttest. Student assessments were scored using a 21-point scoring rubric at both pretest and posttest; the posttest was administered after students had learned all nine strategies. This assessment was created by project staff and tested with groups of students participating in earlier implementations to ensure that each topic was equal in terms of ease and time needed for adequate online research. Cronbach's alpha was .74, indicating good reliability.
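The reliability figures cited above (Cronbach's alpha of .70 for the ICSA and .74 for the PBA) are internal-consistency coefficients. As a minimal sketch, assuming a students-by-items matrix of scored responses, the standard formula can be computed as shown below. The matrix here is random placeholder data (the study's item-level responses are not published), so the printed value will not reproduce the reported coefficients.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) matrix of scored responses."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                              # number of items (24 on the ICSA)
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder data only: 66 students x 24 items scored 0/1 (right/wrong).
rng = np.random.default_rng(seed=1)
responses = (rng.random((66, 24)) < 0.6).astype(int)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```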
Results
Fidelity and Feasibility

Fidelity of implementation was monitored through weekly communication, discussion of classroom observation data, iShowU digital recordings, teacher logs, and focused interim and post-implementation question sets answered by the teacher and research assistants. These data indicate that the teacher implemented the SOAR curriculum with fidelity. Fidelity of assessment was assured in two ways: (a) Oregon project staff assisted in the administration of pre/post assessments and data collection and (b) assessment administration was standardized for students by using online text and audio-based online resources. Data from the teacher survey indicated that the SOAR Toolkit was relevant for (a) addressing the CCSS in English language arts, (b) learning in other subject areas, and (c) preparing students for future grade levels. The teacher also believed that students with IEPs or in need of academic intervention benefitted from the SOAR Strategies. In terms of the instructional content, the teacher reported that students had the most difficulty determining whether a website was reliable and trustworthy, and noted the most student growth in "evaluating websites" and "technical skills." The teacher commented:

I love that the SOAR Strategies teach students a lot of necessary skills for the 21st century. The strategies are self-paced, which allows for differentiation and varied teacher support. I believe that the SOAR Strategies have the capacity to vastly improve students' digital literacy skills.

Data from classroom observations, teacher logs, and student exit interviews showed the following:

• Students were able to use the SOAR Toolkit after initial introduction and training with no further questions. Students knew how to navigate through the site to get to the section they wanted to find;
• Students used red-word videos to increase efficiency and reduce cognitive load. In many cases, using shortcuts made the difference between a student struggling to use the technology and a student feeling technologically competent;
• Students focused on the video-based instruction. The teacher and graduate students expressed surprise at the level of attention students gave the videos when working through the online instructional modules;
• Students of differing ability levels enjoyed learning the content through video avatars. They reported that they liked being able to watch a video more than one time;
• The teacher appreciated that video-based avatars delivered the same instruction no matter how many times students chose to watch the videos, and that using these online materials enabled her to provide the same instruction to all students in every class; and
• Students reported that using the text-to-speech feature was an important aspect of instruction. Students at all ability levels said they would use text-to-speech in the future.
Results show that the SOAR Toolkit is feasible for use in middle school inclusive general education classes. Students at all ability and grade levels reported that the multimedia learning environment and features available on the computer enhanced their learning experience. The participating teacher and research assistants found the SOAR
Toolkit to be easy to use and believed it to be helpful for students. In general, the teacher viewed this curriculum as useful and relevant for all students, including those with learning disabilities, as the strategies teach necessary skills, allow for differentiation, and can improve digital literacy.
Impact on Student Internet and Computer Skills

Student knowledge of the Internet and use of the computer were assessed with the Internet and Computer Skills Assessment (ICSA) before and after instruction with the SOAR Toolkit. A total of 66 students from all five groups completed the
ICSA at both pretest and posttest. Each of the 24 questions on the ICSA was worth one point, for a possible maximum of 24 points. Descriptive statistics (means and standard deviations) and t test results of the students' ICSA scores for both general education students and students with learning disabilities are reported in Table 2 and illustrated in Figure 9.
Table 2. Descriptive statistics for the Internet and Computer Skills Assessment for general education students and students with learning disabilities (SWLD)

Groups       Pretest M(SD)   Posttest M(SD)   Mean Gain   df   t value
General Ed   54%(14)         79%(14)          25%*        57   14.57
SWLD         49%(14)         70%(15)          21%*        7    2.64

Note. * Statistically significant (p < .05). Mean Gain, df, and t value describe the overall change in means from pretest to posttest.
Figure 9. Change in Internet and Computer Skills Assessment for general education students and students with learning disabilities
Statistically significant differences (p < .05) in pretest to posttest scores were obtained for both general education students and students with learning disabilities (respectively, mean difference = 25 percentage points, SD = 26%, t(57) = 14.57, p < .05; mean difference = 21 percentage points, SD = 13%, t(7) = 2.64, p < .05). These results indicate a positive and statistically significant increase in student knowledge about the Internet and computer skills. Differences in mean increases between the two groups were not statistically significant (mean difference = 4 percentage points, t(65) = -1.24, p > .05), indicating that, on average, students with learning disabilities improved as much as other students in the class. Minimal differences in students' pretest to posttest gains were observed across the three student grade levels. Descriptive statistics and t test results by grade are reported in Table 3 and illustrated in Figure 10.
Gains differed by grade level. Both sixth- and eighth-grade students obtained the same mean gain from pretest to posttest on the ICSA, which was statistically significant (respectively, mean difference = 27 percentage points, SD = 10%, t(17) = 8.93, p < .05; mean difference = 27 percentage points, SD = 13%, t(19) = 4.68, p < .05). Students in the seventh grade also obtained a statistically significant mean increase on this measure (mean difference = 23 percentage points, SD = 15%, t(28) = 8.6, p < .05); however, this difference was 4 percentage points below that of sixth- and eighth-grade students.
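The pretest-to-posttest comparisons reported above are consistent with paired (dependent-samples) t tests on matched scores. As a minimal sketch, assuming hypothetical pre/post percentage scores (the study's raw scores are not published), such a comparison could be run as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post ICSA percentage scores for eight students (illustration only).
pre  = np.array([50.0, 42.0, 58.0, 46.0, 63.0, 54.0, 38.0, 67.0])
post = np.array([75.0, 71.0, 83.0, 67.0, 88.0, 79.0, 63.0, 92.0])

gains = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)   # paired t test on matched pre/post scores
print(f"mean gain = {gains.mean():.1f} points (SD = {gains.std(ddof=1):.1f}), "
      f"t({len(gains) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```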
Table 3. Descriptive statistics for the Internet and Computer Skills Assessment by grade

Groups   Pretest M(SD)   Posttest M(SD)   Mean Gain   df   t value
6th      47%(10)         74%(12)          27%*        17   8.93
7th      54%(15)         77%(11)          23%*        28   8.6
8th      57%(13)         84%(16)          27%*        19   4.68

Note. * Statistically significant (p < .05). Mean Gain, df, and t value describe the overall change in means from pretest to posttest.
Figure 10. Change on the Internet and Computer Skills Assessment from pretest to posttest by grade
Impact on Student Performance

A Performance-Based Assessment (PBA) measured changes in students' abilities to conduct research online using the Internet. The PBA was administered both before and after using the SOAR Toolkit to teach the SOAR Strategies. A total of 74 students from the three middle school grades completed the Performance-Based Assessments for both the pretest and posttest. Means and standard deviations were calculated for general education students and students with learning
disabilities, and t tests were conducted to examine group differences. Results are reported in Table 4 and in Figure 11.
Table 4. Descriptive statistics for the Performance-Based Assessment for general education students and students with learning disabilities (SWLD)

Groups       Pretest M(SD)   Posttest M(SD)   Mean Gain   df   t value
General Ed   44%(22)         77%(27)          33%*        65   10.6
SWLD         31%(15)         64%(13)          33%*        9    4.64

Note. * Statistically significant (p < .05). Mean Gain, df, and t value describe the overall change in means from pretest to posttest.
Figure 11. Change in Performance-Based Assessment scores for general education students and students with learning disabilities
Statistically significant differences (p < .05) from pretest to posttest were obtained for both general education students and students with learning disabilities (respectively, mean difference = 33 percentage points, SD = 26%, t(65) = 10.6, p < .05; mean difference = 33 percentage points, SD = 23%, t(9) = 4.64, p < .05). This indicates a positive and statistically significant increase in student ability to perform an online search given a complex real-world scenario, a clearly formed research assignment, and access to the Internet. It is particularly noteworthy that gains in the ability to conduct online research were also educationally significant, with a mean increase of 33 percentage points for both groups (44% to 77% for general education students and 31% to 64% for students with learning disabilities). Small differences in students' pretest to posttest gains were observed for each grade level. Descriptive statistics and t test results of pretest to posttest scores for Performance-Based Assessments by grade are reported in Table 5 and in Figure 12. Gains measured by the Performance-Based Assessments differed across grade levels. Seventh-
grade students improved most with a mean gain of 38 percentage points (SD = 22%, t(38) = 10.93, p < .05). Sixth-grade students obtained a mean gain of 32 percentage points (SD = 31%, t(16) = 4.18, p < .05). Eighth-grade students obtained the lowest mean gain of 26 percentage points (SD = 25%, t(19) = 4.68, p < .05).
Retention of Student Performance over Time

Seventh-grade students completed a second Performance-Based Assessment posttest 12 months after the first posttest. The goal of the delayed posttest was to determine whether students who
Table 5. Descriptive statistics for the Performance-Based Assessment by grade

Groups   Pretest M(SD)   Posttest M(SD)   Mean Gain   df   t value
6th      37%(17)         69%(27)          32%*        16   4.18
7th      40%(23)         78%(27)          38%*        38   10.93
8th      52%(13)         78%(16)          26%*        19   4.68

Note. * Statistically significant (p < .05). Mean Gain, df, and t value describe the overall change in means from pretest to posttest.
Figure 12. Change in Performance-Based Assessment scores by grade
had received instruction with the SOAR Toolkit had retained the online research skills learned during the implementation. Of the 39 seventh-grade students who were part of the original sample, only eight completed the delayed posttest. Given that only one of the eight students had a documented disability, we analyzed these data using a three-time-point, repeated-measures design without differentiating groups. Descriptive statistics and Wilks' Lambda values are reported in Table 6 and illustrated in Figure 13. Results indicate that students not only retained, but also increased, their abilities to use online research skills 12 months after completing the intervention (mean gain from posttest 1 to posttest 2 = 9 percentage points, F(2,6) = 11.19, p = .014). Though the increase is small, from a mean of 76% to 85%, it suggests that students may have continued to use what they had learned in other contexts over the course of the intervening year.
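One common way to obtain a Wilks' Lambda test with the degrees of freedom reported above is a multivariate test of the within-subject time effect: with n = 8 students and three time points there are p = 2 difference scores per student, giving (p, n - p) = (2, 6) degrees of freedom. The sketch below, offered only as an illustration under that assumption and using invented placeholder scores (the study's raw data are not published), computes the test from Hotelling's T² on the difference scores.

```python
import numpy as np
from scipy import stats

# Hypothetical proportions for n = 8 students at pretest, posttest 1, and posttest 2.
scores = np.array([
    [0.40, 0.72, 0.84], [0.35, 0.80, 0.88], [0.55, 0.78, 0.86], [0.48, 0.70, 0.79],
    [0.30, 0.65, 0.80], [0.46, 0.82, 0.90], [0.41, 0.77, 0.83], [0.44, 0.84, 0.92],
])

n, k = scores.shape
diffs = np.diff(scores, axis=1)      # p = k - 1 = 2 difference scores per student
d_bar = diffs.mean(axis=0)
S = np.cov(diffs, rowvar=False)      # covariance matrix of the difference scores

t_squared = n * d_bar @ np.linalg.solve(S, d_bar)   # Hotelling's T^2 for the time effect
p = k - 1
F = (n - p) / (p * (n - 1)) * t_squared             # F statistic with (p, n - p) df
wilks_lambda = 1.0 / (1.0 + t_squared / (n - 1))
p_value = stats.f.sf(F, p, n - p)
print(f"Wilks' Lambda = {wilks_lambda:.3f}, F({p},{n - p}) = {F:.2f}, p = {p_value:.4f}")
```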
Discussion

The first goal of this study was to investigate the feasibility of a middle school teacher using the SOAR Toolkit in inclusive general education classes. Both the participating teacher and the graduate student research assistants found the SOAR Toolkit to be easy to use, useful for diverse students, and feasible for adoption in the
Table 6. Descriptive statistics for the Performance-Based Assessment for seventh-grade students across three time points

Grade   Pretest M(SD)   Posttest 1 M(SD)   Posttest 2 M(SD)   F       df (time, error)   p Value
7th     42%(23)         76%(16)            85%(11)            11.19   2, 6               .014

Note. F, df, and p value are from the Wilks' Lambda test of the time effect.
Figure 13. Change in Performance-Based Assessment scores across three time points for seventh-grade students
classroom. The teacher reported that the strategies taught necessary skills, allowed for differentiation, and helped to improve students’ digital literacy. In short, the participating middle school teacher was able to incorporate the SOAR Toolkit into her classroom instruction and use the materials as intended. The second goal of this study was to determine the intervention’s promise of impact on student knowledge and performance related to online research. To answer this question, two types of data were collected: a knowledge test focused on what students knew about online research and a performance test designed to assess their ability to conduct online research using the SOAR Strategies they had learned in class. On both measures there was a statistically significant improvement (p < .05) in students’ pretest-to-posttest scores. This was true for students with learning disabilities as well as for general education students. In addition, the statistically significant differences were found across all grade levels, with only minimal difference between grades. Because this case study evaluated the curriculum in an inclusive general education classroom, the number of participating students with learning disabilities was limited. Thus, these results cannot be generalized to the larger community. More large-scale research is needed in this area. Nonetheless, the educational significance of these results is potentially far reaching for the educational community, and important for teachers, schools, and school districts as they strive to meet national standards. The results presented in this chapter suggest that self-paced, multimedia, strategy-based instruction helps improve middle school students’ abilities to locate and evaluate online resources, read and record relevant information, and organize that information in ways that support classroom expectations. This is true not only for general education students, but also for students with learning disabilities when the online instructional materials have been designed with their instructional needs in mind.
The third goal of this study was to determine the extent to which students who successfully learned to use the SOAR Strategies retained their skills over time. Results from a delayed performance posttest among a small group of students revealed that the ability to perform online research using the Internet was not only retained but slightly improved 12 months post-intervention. These findings suggest that this group of students continued to use the skills and strategies learned during the intervention and likely transferred them to real-world tasks and other instructional settings.
FUTURE RESEARCH DIRECTIONS

The SOAR Project accomplished its goal of developing, evaluating, and refining effective strategies and instruction to improve the online research skills of middle school students with learning disabilities. Next steps involve more rigorous testing of the SOAR Strategies and the SOAR Toolkit, and the design of new professional development modules to prepare teachers to use these tools in diverse school settings. Future directions also go beyond these research and development efforts. The SOAR Strategies and SOAR Toolkit represent one approach to meeting the real-world need for well-planned instruction of strategies for conducting research online. Future researchers could broaden the present investigation by expanding to other populations, such as younger and older students, and by branching into different content areas. Research questions might address the ways that the strategies and instruction described in this chapter could be adapted for other student populations, such as post-secondary students, blind students, or science students. It would be valuable to learn which strategies and instructional formats are equally effective for differing disability populations, differing student age groups, and differing academic content areas. The work described here has implications for future research directions in educational
technology and for designing and using online instructional materials in the classroom. The following sections discuss four areas for future exploration.
Role of Instructional Avatars

How would learning differ for middle school students with learning disabilities if the instruction did not include avatars like those in the SOAR Toolkit? Do avatars make an important difference? Would videos of real people and demonstrations yield the same results? How would outcomes differ between text-based and avatar-based instruction?
Role of the Live Teacher

To what extent would middle school students, and especially students with learning disabilities, learn the SOAR Strategies without active support from a teacher in a classroom environment? Could schools require students to use the SOAR Toolkit on school library computers and expect the same pre/post learning gains demonstrated here? How does teaching style relate to the success of students with different initial levels of computer knowledge? Could students at high pretest levels learn the strategies more independently? Do students with low pretest scores need a more hands-on teaching approach to succeed? And how might different levels of computer knowledge affect achievement among students with learning disabilities? Would students with learning disabilities who initially have high levels of computer knowledge be able to complete the SOAR Toolkit in a more independent fashion than students with learning disabilities who start with less computer literacy?
Teacher Preparation

In what ways can pre-service teacher-education and in-service professional development programs be created to prepare teachers to effectively and
efficiently use online curriculum materials to benefit all students in the middle school classroom? How do teaching strategies change when integrating videos into instruction?
Designing for Students with Learning Disabilities

How can we improve online learning for students with disabilities? Lessons learned in the research and development of the SOAR Toolkit have significant potential to benefit the field of online learning in general, and particularly online learning by students with learning disabilities. Research reported in this chapter shows that the SOAR Toolkit is a successful model for designing instructional websites with features that support learning for diverse students. Future investigations are warranted into ways to provide:

• Unambiguous navigation,
• Minimal distractions,
• Text that is accessible to readers with diverse abilities,
• Alternative representations of the content, such as video and audio,
• Compelling avatars that match specific instructional purposes,
• Interactive, engaging, and hands-on instruction, and
• Real-world motivating topics that require complex research.
CONCLUSION

To be fully competent citizens of the 21st century, all students must know how to use the Internet to inform themselves about complex, real-world issues. Over the past several years, studies have shown that today's secondary students seldom succeed in conducting online research for
academic purposes, even though they grew up with the Internet. To bridge this gap, educators and researchers have begun building and integrating new curricula designed to teach students a consistent vocabulary for talking about online research, a standardized set of concepts and procedures related to online research, and up-to-date technical skills for efficient and effective use of online resources. This chapter discussed the historical and theoretical foundations, development, implementation, and evaluation of one such curriculum, the SOAR Strategies. In a study of general education sixth-, seventh-, and eighth-grade language arts classrooms, the nine SOAR Strategies, taught to students via the web-based SOAR Toolkit, were found to support middle school students, including those with learning disabilities, in using online resources to complete academic assignments. Results indicate that the SOAR Toolkit is a feasible and usable tool for all middle school students, and it appears to be effective in increasing students' knowledge about and ability to perform online research. The benefits also appear to be lasting, as students maintained their knowledge and skills for at least one year. Future randomized controlled testing of this curriculum with larger numbers of student participants is necessary. Nonetheless, these preliminary findings are relevant and important for all students, and especially for students with learning disabilities. The CCSS for English language arts, as well as those for literacy in history/social studies, science, and technical subjects, express the importance of digital technologies for all students. As teachers increasingly integrate instruction aligned with the CCSS into their classes, they will require evidence-based models for teaching students how to use online resources to achieve academic goals. The SOAR Strategies and SOAR Toolkit offer one evidence-based model for the design and evaluation of online instruction that supports all middle school students in addressing the CCSS and preparing for successful careers.
The authors encourage future investigations to extend this line of research by testing SOAR Strategies with different populations of students and at different academic levels, and by examining how professional development can be designed to help teachers implement these strategies in the classroom. The ultimate goal is to provide all schools with evidence-based strategies for Internet research that can be integrated into the core curriculum.
REFERENCES

Alexander, P. A., & Jetton, T. L. (2002). Learning from text: A multidimensional and developmental perspective. In M. L. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research (Vol. 3, pp. 285–310). Mahwah, NJ: Erlbaum. Anderson-Inman, L. (1992). Electronic studying: Computer-based information organizers as tools for lifelong learning. In N. Estes & M. Thomas (Eds.), Education "sans frontiers": Proceedings of the ninth annual international conference on technology and education (pp. 1104–1106). Austin, TX: The University of Texas at Austin. Anderson-Inman, L. (1995). Computer-assisted outlining: Information organization made easy. Journal of Adolescent & Adult Literacy, 39, 316–320. Anderson-Inman, L. (1999). Computer-based solutions for secondary students with learning disabilities: Emerging issues. Reading & Writing Quarterly, 15(3), 239–249. doi:10.1080/105735699278215 Anderson-Inman, L. (2009a). Supported etext: Literacy scaffolding for students with disabilities. Journal of Special Education Technology, 24(3), 1–8.
Anderson-Inman, L. (2009b). Thinking between the lines: Literacy and learning in a connected world. On the Horizon, 17(2), 122–141. doi:10.1108/10748120910965502 Anderson-Inman, L., & Ditson, L. (1999). Computer-based concept mapping: A tool for negotiating meaning. Learning and Leading with Technology, 26(8), 6–13. Anderson-Inman, L., & Horney, M. (2007). Supported etext: Assistive technology through text transformations. Reading Research Quarterly, 42(1), 153–160. doi:10.1598/RRQ.42.1.8 Anderson-Inman, L., Horney, M., Knox-Quinn, C., Ditson, M., & Ditson, L. (1997). Computerbased study strategies: Empowering students with technology. Eugene, OR: Center for Electronic Studying, University of Oregon. Anderson-Inman, L., Knox-Quinn, C., & Horney, M. (1996). Computer-based study strategies for students with learning disabilities: Individual differences associated with adoption level. Journal of Learning Disabilities, 29(5), 461–484. doi:10.1177/002221949602900502 PMID:8870517 Anderson-Inman, L., Knox-Quinn, C., & Szymanski, M. (1999). Computer-supported studying: Stories of successful transition to postsecondary education. Career Development for Exceptional Individuals, 22(2), 185–212. doi:10.1177/088572889902200204 Anderson-Inman, L., & Reinking, D. (1998). Learning from text in a technological society. In C. Hynd, S. Stahl, B. Britton, M. Carr, & S. Glynn (Eds.), Learning from text across conceptual domains in secondary schools (pp. 165–191). Mahwah, NJ: Lawrence Erlbaum.
Anderson-Inman, L., Richter, J., Frisbee, M., & Williams, M. (2007). Computer-based study strategies for handhelds. Eugene, OR: Center for Advanced Technology in Education, University of Oregon. Anderson-Inman, L., & Tenny, J. (1989). Electronic studying: Information organizers to help students study better not harder. The Computing Teacher, 16(8), 33–36. Belland, B. R. (2010). Portraits of middle school students constructing evidence-based arguments during problem-based learning: The impact of computer-based scaffolds. Educational Technology Research and Development, 58(3), 285–309. doi:10.1007/s11423-009-9139-4 Biddix, J. P., Chung, C. J., & Park, H. W. (2011). Convenience or credibility? A study of college student online research behaviors. The Internet and Higher Education, 14(3), 175–182. doi:10.1016/j. iheduc.2011.01.003 Boyle, J. R., & Weishaar, M. (1997). The effects of expert-generated versus student-generated cognitive organizers on the reading comprehension of students with learning disabilities. Learning Disabilities Research & Practice, 12(4), 228–235. Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. Bryant, D. P., Bryant, B. R., & Hammill, D. D. (2000). Characteristic behaviors of students with LD who have teacher-identified math weaknesses. Journal of Learning Disabilities, 33(2), 168–177. doi:10.1177/002221940003300205 PMID:15505946
Bui, Y. N., Schumaker, J. B., & Deshler, D. D. (2006). The effects of a strategic writing program for students with and without learning disabilities in inclusive fifth-grade classes. Learning Disabilities Research & Practice, 21(4), 244–260. doi:10.1111/j.1540-5826.2006.00221.x Cantrell, S. C., Almasi, J. F., Carter, J. C., Rintamaa, M., & Madden, A. (2010). The impact of strategy-based intervention on the comprehension and strategy use of struggling adolescent readers. Journal of Educational Psychology, 102(2), 257–280. doi:10.1037/a0018212 Coiro, J. (2008). Handbook of research on new literacies. New York: Lawrence Erlbaum Associates/Taylor & Francis Group. Conley, D. T., & McGaughy, C. L. (2012). College and career readiness: Same or different? Educational Leadership, 69(7), 28–34. Currie, L., Devlin, F., Emde, J., & Graves, K. (2010). Undergraduate search strategies and evaluation criteria: Searching for credible sources. New Library World, 111(3/4), 113–124. doi:10.1108/03074801011027628 Ditson, L., Kessler, R., Anderson-Inman, L., & Mafit, D. (2001). Concept-mapping companion (2nd ed.). Eugene, OR: International Society for Technology in Education. Dweck, C. S. (2007). Mindset: The new psychology of success. New York: Random House. Eisenberg, M. B. (2008). Information Literacy: Essential skills for the information age. Journal of Library & Information Technology, 28(2), 39–47. doi:10.14429/djlit.28.2.166 Fassbender, E., Richards, D., Bilgin, A., Thompson, W. F., & Heiden, W. (2012). VirSchool: The effect of background music and immersive display systems on memory for facts learned in an educational virtual environment. Computers & Education, 58(1), 490–500. doi:10.1016/j. compedu.2011.09.002 100
Frechette, J. (2002). Developing media literacy in cyberspace: Pedagogy and critical learning for the twenty-first century classroom. Westport, CT: Praeger Publishers. Gersten, R., Compton, D., Connor, C. M., Dimino, J., Santoro, L., Linan-Thompson, S., & Tilly, W. D. (2008). Assisting students struggling with reading: Response to intervention and multi-tier intervention for reading in the primary grades: A practice guide (NCEE 2009–4045). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Holman, L. (2011). Millennial students’ mental models of search: Implications for academic librarians and database developers. Journal of Academic Librarianship, 37(1), 19–27. doi:10.1016/j. acalib.2010.10.003 Jensen, J. (2004). It’s the information age, so where’s the information? Why our students can’t find it and what we can do to help. College Teaching, 52(3), 107–112. Jonassen, D. H. (1995). Computers as cognitive tools: Learning with technology, not from technology. Journal of Computing in Higher Education, 6(2), 40–73. doi:10.1007/BF02941038 Jones, S., & Madden, M. (2002). The Internet goes to college: How students are living in the future with today’s technology. Washington, DC: Pew Internet & American Life Project. Julien, H., & Barker, S. (2009). How high-school students find and evaluate scientific information: A basis for information literacy skills development. Library & Information Science Research, 31(1), 12–17. doi:10.1016/j.lisr.2008.10.008 Kennedy, M. J., & Deshler, D. D. (2010). Literacy instruction, technology, and students with learning disabilities: Research we have, research we need. Learning Disability Quarterly, 33, 289–298.
Kessler, R., Anderson-Inman, L., & Ditson, L. (1997). Science concept mapping companion. Eugene, OR: Center for Electronic Studying, University of Oregon.
Kingsley, T., & Tancock, S. (2013). Internet inquiry: Fundamental competencies for online comprehension. The Reading Teacher, 67(5), 389–399. doi:10.1002/trtr.1223
Knox, C., & Anderson-Inman, L. (2005). Project EXCEL: EXcellence through Computer Enhanced Learning. Final Report to the U.S. Department of Education, Office of Special Education Programs (OSEP).
Larmer, J., & Mergendoller, J. R. (2010). Seven essentials for project-based learning. Educational Leadership, 68(1), 34–37.
Leacock, T. L., & Nesbit, J. C. (2007). A framework for evaluating the quality of multimedia learning resources. Journal of Educational Technology & Society, 10(2), 44–59.
Leu, D. J. Jr. (2000). Developing new literacies: Using the Internet in content area instruction. In M. McLaughlin & M. Vogt (Eds.), Creativity and innovation in content area teaching (pp. 183–206). Norwood, MA: Christopher-Gordon.
Leu, D. J. Jr. (2002). The new literacies: Research on reading instruction with the Internet and other digital technologies. In J. Samuels & A. E. Farstrup (Eds.), What research has to say about reading instruction (pp. 310–336). Newark, DE: International Reading Association.
Leu, D. J., Kinzer, C. K., Coiro, J., & Cammack, D. (2004). Toward a theory of new literacies emerging from the Internet and other information and communication technologies. In R. B. Ruddell & N. J. Unrau (Eds.), Theoretical models and processes of reading (5th ed., pp. 1570–1613). Newark, DE: International Reading Association.
Leu, D. J., McVerry, G., O’Byrne, C. K., & Zawilinski, L. (2011). The new literacies of online reading comprehension: Expanding the literacy and learning curriculum. Journal of Adolescent & Adult Literacy, 55(1), 5–14.
Leu, D. J., & Reinking, D. (2005). Developing Internet comprehension strategies among adolescent students at risk to become dropouts. Research grant project funded by the U.S. Department of Education, Institute of Education Sciences.
Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., & O’Neil, M. (2007). What is new about the new literacies of online reading comprehension? In L. Rush, J. Eakle, & A. Berger (Eds.), Secondary school literacy: What research reveals for classroom practices (pp. 37–68). Urbana, IL: National Council of Teachers of English.
Levin, D., & Arafeh, S. (2002). The digital disconnect: The widening gap between internet-savvy students and their schools. Washington, DC: Pew Internet & American Life Project.
Mayer, R. E. (2008). Learning and instruction (2nd ed.). Upper Saddle River, NJ: Pearson.
Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press. doi:10.1017/CBO9780511811678
Mayer, R. E. (2010). Applying the science of learning. Upper Saddle River, NJ: Pearson.
Mayer, R. E. (2011). Applying the science of learning to multimedia instruction. In J. Mestre & B. Ross (Eds.), Cognition in education: Psychology of learning and motivation (Vol. 55). San Diego, CA: Academic Press. doi:10.1016/B978-0-12-387691-1.00003-X
Moreno, R., & Mayer, R. E. (2007). Interactive multimodal learning environments. Special issue on interactive learning environments: Contemporary issues and trends. Educational Psychology Review, 19(3), 309–326. doi:10.1007/s10648-007-9047-2
Morocco, C. C. (2001). Teaching for understanding with students with disabilities: New directions for research on access to the general education curriculum. Learning Disability Quarterly, 24(1), 5–13. doi:10.2307/1511292
Mullenburg, L. Y., & Berge, Z. L. (2007). Student barriers to online learning: A factor analytic study. Distance Education, 26(1), 29–48. doi:10.1080/01587910500081269
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common Core State Standards for English language arts and literacy in history/social studies, science, and technical subjects. Washington, DC: National Governors Association Center for Best Practices (NGA Center).
Nesbit, J. C., Li, J., & Leacock, T. L. (2006). Web-based tools for collaborative evaluation of learning resources. Systemics, Cybernetics, and Informatics, 3(5), 102–112.
Ness, B. M., Sohlberg, M. M., & Albin, R. W. (2011). Evaluation of a second-tier classroom-based assignment completion strategy for middle school students in a resource context. Remedial and Special Education, 32(5), 406–416. doi:10.1177/0741932510362493
Pew Research Center. (2009). Teen and young adult Internet use. Millennials: A portrait of generation next. Retrieved from www.pewresearch.org/millennials/teen-internet-use-graphic/
Quintana, C., Zhang, M., & Krajcik, J. (2005). A framework for supporting metacognitive aspects of online inquiry through software-based scaffolding. Educational Psychologist, 40(4), 235–244. doi:10.1207/s15326985ep4004_5
Rideout, V. J., Foehr, U. G., & Roberts, D. F. (2010). Generation M2: Media in the lives of 8-18 year-olds. Menlo Park, CA: Kaiser Family Foundation.
Rizzo, A. A., Buckwalter, J. G., Bowerly, T., Van Der Zaag, C., Humphrey, L., Neumann, U., & Sisemore, D. et al. (2000). The virtual classroom: A virtual reality environment for the assessment and rehabilitation of attention deficits. Cyberpsychology & Behavior, 3(3), 483–499. doi:10.1089/10949310050078940
Roberts, J. B., Crittenden, L. A., & Crittenden, J. C. (2011). Students with disabilities and online learning: A cross-institutional study of perceived satisfaction with accessibility compliance and services. The Internet and Higher Education, 14(4), 242–250. doi:10.1016/j.iheduc.2011.05.004
Schumaker, J. B., & Deshler, D. D. (2006). Teaching adolescents to be strategic learners. In D. D. Deshler & J. B. Schumaker (Eds.), Teaching adolescents with disabilities: Accessing the general education curriculum (pp. 121–156). Thousand Oaks, CA: Corwin Press.
Spencer, M., Quinn, J. M., & Wagner, R. K. (2014). Specific reading comprehension disability: Major problem, myth, or misnomer? Learning Disabilities Research & Practice, 29(1), 3–9. doi:10.1111/ldrp.12024 PMID:25143666
Therrien, W. J., Taylor, J. C., Hosp, J. L., Kaldenberg, E. R., & Gorsh, J. (2011). Science instruction for students with learning disabilities: A meta-analysis. Learning Disabilities Research & Practice, 26(4), 188–203. doi:10.1111/j.1540-5826.2011.00340.x
Willis, J. (2007). Research-based strategies to ignite student learning: Insights from a neurologist and classroom teacher. Alexandria, VA: Association for Supervision & Curriculum Development.
Windschitl, M. (1998). The WWW and classroom research: What path should we take? Educational Researcher, 27(1), 28–33.
Windschitl, M. (2000). Supporting the development of science inquiry skills with special classes of software. Educational Technology Research and Development, 48(2), 81–97. doi:10.1007/BF02313402
Zickuhr, K., & Smith, A. (2012). Digital differences. Pew Internet & American Life Project. Retrieved from www.pewinternet.org/files/oldmedia/Files/Reports/2012/PIP_Digital_differences_041312.pdf
ADDITIONAL READING
Boss, S., & Krauss, J. (2007). Reinventing project-based learning: Your field guide to real-world projects in the digital age. Eugene, OR: International Society for Technology in Education.
Christenbury, L. (2009). Handbook of adolescent literacy research. New York, NY: Guilford Press.
Dodge, B. (2007). WebQuest.org. Retrieved from webquest.org
Krauss, J., & Boss, S. (2013). Thinking through project-based learning: Guiding deeper inquiry. Thousand Oaks, CA: Corwin.
McKenzie, J. (2005). Learning to question – to wonder – to learn. Bellingham, WA: FNO Press.
McKenzie, W. (2012). Intelligence quest: Project-based learning and multiple intelligences. Eugene, OR: International Society for Technology in Education.
Meyer, A., Rose, D., & Gordon, D. (2014). Universal design for learning: Theory and practice. Wakefield, MA: CAST.
Rappolt-Schlichtmann, G., Daley, D., & Rose, L. (2012). A research reader in universal design for learning. Cambridge, MA: Harvard Education Press.
Hall, T. (2012). Universal design for learning in the classroom: Practical applications. New York, NY: Guilford Press.
KEY TERMS AND DEFINITIONS Clipping: A clipping is a meaningful chunk of text copied from the Internet and pasted into a digital notebook in order to collect relevant information about a research topic. Cognitive Partner: Students work in cognitive partnership with technology when they use it to support their cognitive processing and intellectual performance. When students create their own digital learning environment, they extend and enhance their ability to learn and use what they have learned to succeed in the classroom. Computer-based Study Strategies: Computer-based study strategies are step-by-step strategies for using common computer features in ways that support cognitively diverse students to achieve academic assignments. For example, students learn to use text-to-speech to increase reading comprehension. A strategies-based approach deconstructs academic tasks and provides students with explicit, manageable, and carefully sequenced steps they can use again and again. Digital Notebook: A digital notebook is the digital analog to a paper notebook. It is the space for student inquiry, thinking, brainstorming, reflecting, taking notes, collecting clippings of text, and synthesizing information from disparate online sources. The digital notebook is a problem space, central to each SOAR Strategy and essential to the whole research process. Google Ready: Google ready refers to making a search question viable for use in a typical search engine’s search box. This process includes starting the question with a questioning word, spell- and grammar-checking, and including in
the search question the most specific terms one knows about the topic. Growth Mindset: Growth mindset is a cognitive concept coined by Carol Dweck (2007). Students with a growth mindset are motivated to “grow their abilities” when learning new content rather than depending on “natural talent” (or feeling limited by a lack of natural talent) in that content area. Students with a growth mindset are better prepared to transfer learned skills and concepts to new learning contexts. New Literacies: New literacies refer to new forms of literacy made possible by digital technology developments. Commonly recognized examples include instant messaging, blogging, social networking, podcasting, photo sharing, digital storytelling, and conducting online searches. Tagged Text and Tagged URLs: Each clipping from a website and that website’s address can be tagged in a digital notebook or digital outline by adding a matching letter in parentheses to the end of both; for example, a clipping ending in “(A)” corresponds to the source URL that also ends in “(A).” Tagged text and tagged URLs allow the author to keep track of each clipping’s online source. This allows the author to easily navigate back to the source web page and eventually use URLs to create a reference list.
Chapter 5
The Value of Metacognition and Reflectivity in Computer-Based Learning Environments
Sammy Elzarka, University of La Verne, USA
Jessica Decker, University of La Verne, USA
Valerie Beltran, University of La Verne, USA
Mark Matzaganian, University of La Verne, USA
Nancy T. Walker, University of La Verne, USA
ABSTRACT
The purposes of this chapter are threefold: to explore the research on and relationships among metacognition, reflection, and self-regulated learning; to analyze students’ experiences with metacognition, reflection, and self-regulated learning activities in computer-based learning (CBL) courses; and to provide strategies that can be used in a CBL environment to promote students’ metacognition, reflection, and self-regulation. A review of underlying frameworks for, and prior study findings on, metacognition and reflection is presented. Case study findings are also described and form the basis for the suggested strategies. The value and implications of using such strategies are also offered. Finally, future research should address the teaching of metacognition and reflection in CBL environments with an emphasis on real-world application.
INTRODUCTION
Metacognition, reflection, and self-regulated learning are terms that are commonly used in education circles. These skills are critical to students’ success in the learning process, and it is widely recognized that students who self-regulate their learning and are in tune with their metacognitive
and reflective skills perform better than those who lack those skills. It is often assumed that learners naturally acquire these abilities over time. However, students need specific, explicit instruction on how to develop these abilities. Students should also be presented with a variety of opportunities via specific course activities to practice these skills in context.
DOI: 10.4018/978-1-4666-9441-5.ch005
The importance of designing a course experience that promotes metacognitive, reflective, and self-regulation behaviors is even more crucial in the computer-based learning (CBL) environment in which students need to take even greater control of their learning. The purposes of this chapter are threefold: to explore the research on and relationships among metacognition, reflection, and self-regulated learning; to analyze students’ experiences with metacognition, reflection, and self-regulated learning activities in CBL courses; and to provide strategies that can be used in a CBL environment to promote students’ metacognition, reflection, and self-regulation.
BACKGROUND
The ideas of metacognition and reflectivity, though not formally titled, have been topics of interest and practice for millennia. In the Greco-Roman era, Plato, Socrates, and Aristotle all emphasized the importance of self-examination and its result: self-knowledge. During the Middle Ages, Thomas Aquinas developed a sophisticated theory of self-knowledge built on a foundation of self-evaluation and self-awareness (Cory, 2013). The 20th century, however, ushered in a more refined understanding of self-examination and knowledge. William James, John Dewey, Lev Vygotsky, and Jean Piaget each contributed significant advances associated with a modern approach to metacognition and reflectivity (Dewey, 1910; Inhelder & Piaget, 1958; Fox & Riconscente, 2008). During the past few decades, metacognition and reflectivity have become common terms in educational psychology and specialized topics for research and discussion. While closely related, they have followed largely separate paths in formal research. John Flavell first used the term metacognition in the mid-1970s, indicating that it involves thinking about one’s own cognitive processes. He stated that metacognition involves two aspects:
awareness and control of cognitive processes (Flavell, 1976). Later, Flavell (1979) proposed a model for metacognition based upon two areas: metacognitive knowledge and metacognitive experiences. Metacognitive knowledge describes what is known about the factors that affect cognition, and metacognitive experiences describe the way people make conscious efforts to improve learning. He further divided metacognitive knowledge into three categories:
1. Person Variables: One’s ability to identify one’s strengths and weaknesses in the learning process.
2. Task Variables: One’s ability to identify the cognitive processes required to complete a task. Example: A student estimates the time required to read a particular journal article.
3. Strategy Variables: One’s ability to identify the strategies that must be applied in order to accomplish a task. Example: A student determines they will need to use a dictionary to look up unfamiliar words to understand the content of a technical journal article.
The professional community has also recognized the importance of metamemory, the knowledge of one’s memory, as another component of metacognition (Cavanaugh & Perlmutter, 1982). More recently, Fogarty (1994) developed a three-stage framework to assist teachers in developing student metacognitive processing, which includes planning, monitoring, and evaluation. Currently, researchers recognize and study three major factors of metacognition (Dunlosky & Metcalfe, 2009). They are:
1. Metacognitive Knowledge: Conscious knowledge that pertains to one’s cognition.
2. Metacognitive Monitoring: Assessing the progress of a particular cognitive activity.
3. Metacognitive Control: Management of an ongoing conscious activity.
Successful learners employ each of these three metacognitive components, and each of them can be taught and developed. Most teachers in higher education spend little time supporting metacognitive development, believing that students should have picked up these skills in primary and secondary school (Silver, 2013). However, a variety of studies have demonstrated that metacognitively aware learners are more strategic in their studying and perform better than unaware learners. Metacognitive instruction needs to be a part of the classroom curriculum. According to Veenman, Van Hout-Wolters, and Afflerbach (2006), metacognitive instruction needs to be embedded in specific tasks. Students should be informed about the benefits of metacognitive skills to encourage extra effort. Instruction and training of metacognitive skills cannot just be done once; they need to happen and be reinforced over an extended period of time. Metacognition is essential for successful problem solving (Siegel & Lee, 2001). It involves planning, monitoring, and reflecting (White & Frederiksen, 2005). Amzil and Stine-Morrow (2013) conducted a study of 88 third-year university students using the Metacognitive Awareness Inventory (MAI) and participants’ GPAs. They found that there was a significant difference on the MAI between high and low achievers. Specifically, students who scored strongly on the regulation factor of metacognition academically outperformed those who had lower regulation abilities. Therefore, metacognition is seen as a strong predictor of academic success (Pressley & Ghatala, 1990). Veenman, Wilhelm, and Beishuizen (2004) stated that metacognitive instruction should be provided by teachers across disciplines in order to ensure transfer across tasks and domains. This is especially true in computer-based learning environments where student and instructor backgrounds can vary widely based on experience and discipline practice. As computer-based teaching grows, issues of knowledge transfer and application will become more prominent.
Computer-based learning environments require students to be self-directed, organized, and adept problem solvers; hence, metacognitive skills form the foundation for students’ success. The idea of reflectivity received a great deal of attention and credibility as a result of John Dewey’s foundational work in 1910. Dewey believed that reflective thought was an active, persistent, and careful consideration of any form of knowledge, belief, or idea. It is a conscious exploration of one’s own thoughts and experiences (Silver, 2013). In the 1970’s, observation and reflection received prominence as a core component in Kolb’s Experiential Learning Cycle. In 1983, Schon popularized the idea that reflection on one’s experience is a method of enhancing the learning process. By the mid-1980’s, reflection gained wide scholarly interest across all major disciplines of study (Boud, Keogh, & Walker, 1985). It is a foundational component of higher order thinking. Reflectivity also plays an important role in teaching. Teachers demonstrate more proficiency when they exercise and model examination of one’s ideas and their impact on practices and applications (Howard & Aleman, 2008). Darling-Hammond (2008, p. 336) summarized the relationship between reflection and teaching: This includes reflecting on their practice, to assess the effects of their teaching and to refine and improve their instruction. Teachers must continuously evaluate what students are thinking and understanding and reshape their plans to take account of what they’ve discovered as they build curriculum to meet their goals. With the growing popularity of social media and CBL education, the scholarly community is now exploring the use of technology to expand metacognition and reflectivity in the learning process. Findings have indicated that effective use of a wide range of technologies like chats, blogs, and clickers, can improve student metacognition and reflectivity (Dewiyanti, Brand-Gruwel, &
Jochem, 2005; Yang, 2009; Bye, Smith, & Rallis, 2009; Poitras, Lajoie, & Hong, 2012; Brady, Seli, & Rosenthal, 2013).
SELF-REGULATED LEARNING
Many education institutions, especially in higher education, strive to promote students’ commitment to lifelong learning. The goal is for students to develop skills that will help them be successful in the real world and to be open to ongoing learning. Additionally, such institutions are beginning to value students developing an awareness and management of their own learning. Nilson (2013) demonstrated that allowing students to complete a college experience without a deep understanding of how they learn will result in superficial levels of knowledge and underpreparedness for post-graduation challenges in the real world. There are many concepts that capture the idea of lifelong learning, and self-regulated learning (SRL) is a common one being embraced by education researchers and practitioners. SRL “is a process in which individuals take the initiative in diagnosing their own learning needs, formulating goals, identifying human and material resources, choosing and implementing appropriate learning strategies, and evaluating outcomes” (Knowles, 1975, p. 18). SRL is also the broad concept within which metacognition falls (Nilson, 2013). There are several theoretical frameworks that underpin SRL. Lee, Choi, and Kim (2013) identified the Composite Persistence Model (CPM), Student Integration Model (SIM), and Student Attrition Model (SAM) among them. The SIM framework attempts to explain dropout or withdrawal behavior of students. The two primary components in this model are academic and social systems that impact such student behavior. SIM relates students’ engagement in an environment with their tendencies to persist. A highlight in this model is the relationship between students’ tendencies to drop out and their inability to capitalize
on prior knowledge and experience (Tinto, 1975). SAM expands the relationship between academic and environmental conditions suggesting that the impact of negative environmental factors can outweigh the positive aspects of academic experiences (Bean & Metzner, 1985). Of the three frameworks, CPM is especially relevant to SRL. CPM considers many factors including those in play prior to and after college admission. Many of these factors also relate to success in computerbased learning environments and are typically included in the exercise of best practices in such CBL settings. Examples of such factors are clear course purpose, policies, and expectations; identification with the institution; social connections; and appropriate support services (Workman & Stenard, 1996). CBL relies on the development and use of time management, organization, and self-discipline among students, all skills within the SRL realm (Grow, 1996). SRL “emphasizes autonomy and control by the individual who monitors, directs, and regulates action toward goals of information acquisition, expanding expertise, and self-improvement” (Paris & Paris, 2001, p. 89). The uses of SRL can be valuable for multiple instructional needs. For example, SRL has been shown to be a better predictor of standardized test scores than IQ or socio-economic status. The role of SRL strategies can also serve multiple goals. Because freshmen commonly underestimate the time required to complete assignments, SRL allows students to more deeply assess the scope of an academic activity. This is a great need given the fact that the inability to complete this type of assessment impacts academic performance more than other commonly used predictors such as high school grades (Nilson, 2013). There is ample research suggesting that students have the ability to develop SRL skills throughout their college experience. Training and prompting students have been shown to improve students’ SRL skills. Students with advanced SRL skills are better at representing the problem,
developing solutions, making justifications, and monitoring and evaluating (Bixler, 2008). These elements of the inquiry process relate well to Bloom’s concept of hierarchical thinking skills and the effective use of SRL to support students’ achievement of these high level skills (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956). Leong (2012) stated that SRL is a major component of the knowledge economy. His study of SRL found the following when examining behaviors of computer-based learners: • • • • •
A gender difference in online searches and seeking online learning partners, Younger students struggled more with improving their learning process, Computer-based learners were found to pay little attention to the learning process, Adult learners must be guided in creating computer-based learning communities to enhance SRL, and SRL must be deeply integrated in courses and students’ educational experiences.
In a study conducted with medical students in a clerkship, Algeria, Boscardin, Poncelet, Mayfield, and Wamsley (2014) found that students who used tablets for the purpose of promoting self-regulated learning did so in ways that were connected to their learning styles. Such students also made more effective use of resources and down time. These authors found that tablets served as powerful learning tools, especially in clinical environments. These findings were related to effort and initiative. The relationships among SRL, effort, and metacognition were validated by Puzziferro (2008). Effort regulation is students’ levels of commitment to manage tasks and challenges with regard to learning. Metacognitive regulation is students’ ability to plan, monitor, reflect, and adjust the learning process (Puzziferro, 2008). These are critical to students’ development of learning strategies and abilities to demonstrate lifelong learning.
There are models based on SRL which identify key components and innovative applications. Pintrich (2000) characterized SRL with four stages including planning and goal setting, selfmonitoring, controlling, and reflecting. Within these stages, processes are described including cognition, motivation, behavior, and context. These stages are sequential and culminate in reflection which includes making judgments about understanding and feedback analysis. Wan, Compeau, and Haggerty (2012) created a new domain called personalized SRL strategies which include nine sub-dimensions: self-evaluation, organizing and transferring information, goal setting and planning, seeking information, keeping records and monitoring, environmental structuring, selfconsequences, rehearsal, and memorization and reviewing. Students must deliberately use SRL strategies to be successful lifelong learners. SRL is critical in student-centered environments. Personal SRL strategies positively impact cognitive learning and skill development. Social SRL strategies positively impact cognitive learning and satisfaction. Students with intellectually demanding jobs use more SRL strategies such as online discussions, seeking assistance from others, and social interaction during training. Effective SRL strategies developed as a result of the explicit training provided in classes (Wan et al., 2012). Self assessment is an important key to SRL. Peer assessments also trigger active learning which is supportive of SRL. The long-term benefits of assessment are based on the SRL skills that are developed and reinforced (Mao & Peck, 2013). Active learning provides the opportunities to reflect on the concerns of academic subjects in a meaningful way (Meyers & Jones, 1993). Pantoya, Hughes, and Hughes (2013) found the impact of active learning to be greater confidence in pursuing research projects and greater clarity on learning goals and gains. Active learning can also be thought of as the involvement of participants in the learning process in order to achieve engagement in higher order thinking. This is counter to
treating learners as passive participants waiting to be filled with knowledge. Active learning was found to produce more positive results in achieving learning outcomes than passive methods. Specifically, active methods increased perception of usefulness of library resources, decreased anxiety, increased self-efficacy, and enhanced time savings and effort reduction (Detlor, Booker, Serenko, & Julien, 2012). The treatment of learners as passive participants has many more negative consequences including marginalizing them in the education experience. A lack of SRL skills is a significant factor in student dropout from CBL courses. Especially prominent in these regulation behaviors are goal commitment, locus of control (LOC), academic self-efficacy, lack of resilience, and underestimation of time required for a successful online course experience (Lee & Choi, 2011). SRL is thought to be experienced differently by novices as compared to experts in a particular field of study. One of the primary differences is related to the LOC. Novices tend to place it externally while experts place it internally. Additionally, novice thinkers have less organization in their thinking patterns when learning a new concept. This leaves them less able to make necessary adjustments when learning becomes more complex or when contradictions are discovered (Nilson, 2013). Metacognitive self-regulation (MSR) and academic locus of control (ALOC) were found to be statistically significantly higher in completers versus those who dropped out of CBL courses. MSR and ALOC accounted for 55% and 34% of the variance in completion rates, respectively (Lee, Choi, & Kim, 2013). Academic self-efficacy is confidence in one’s own learning performance. It has been found that the variables which have a direct impact on student achievement in CBL environments include effort regulation and login time. Variables with indirect impacts on student
achievement include metacognition, interaction, intrinsic goal orientation, and academic self-efficacy (Cho & Shen, 2013). The composite-based approach to learning described earlier aligns well with a framework that also informs CBL-based course design and implementation. Since learning theories and technological strategies are at the core of these approaches, the next section discusses instructional design principles and current understandings of cognitive operations.
INSTRUCTIONAL DESIGN Instructors in CBL environments often have specific challenges to face in designing their instruction and educational experiences for students. Technology has changed the roles of teachers, trainers, and learners (de Jong & Pieters, 2006). Whether the course is being developed as a new course for a CBL environment or is being redesigned from a traditional on-ground course to be implemented in a CBL setting, instructional design theory plays a guiding role. The CBL environment presents unique and stimulating challenges to teaching and learning. These include engaging students in sustainable ways that persist long after the completion of a course. Effective instruction of all types is developed with a strong grounding in instructional design theory. Constructivism, which is rooted in the research of Vygotsky and Piaget’s theories of knowledge construction, is a philosophical perspective that contends individuals form or construct what they learn and understand (Bruning, Schraw, Norby, & Ronning, 2004). This new learning is situated in contexts (Bredo, 2006). The knowledge that develops is based on the individual’s beliefs and experiences in a wide variety of situations (Cobb & Brewer, 1999). Constructivism has influenced the
areas of curriculum and instruction. Specifically, it emphasizes the need for an integrated curriculum which can be studied from multiple perspectives (Schunk, 2012). Additionally, students access content through materials and social interaction and self-regulate their learning by monitoring their progress. Managing one’s own learning is one of the most valuable and greatest challenges in the college experience. When students have to grapple with projects that are long-term, unclearly defined, and have multiple solutions, they are gaining the skills that they will use in real-life experiences (Ambrose, Bridges, Lovett, DiPietro, & Norman, 2010). In engaging, active learning classrooms, instructors challenge students to think for themselves, to brainstorm potential solutions, to reflect on all possibilities, and to try out a solution. This way of teaching is vastly different from the traditional “sage on the stage” model in which instructors share their knowledge, and students are passive followers who take notes and memorize key information from the instructor’s lectures. The burden lies with college instructors for facilitating meaningful and effective opportunities for growth in higher education courses. Therefore, educators have a responsibility to create and provide powerful learning environments that stimulate this growth. These learning environments promote active learning, utilize collaborative activities, and offer realistic learning experiences (van Merrienboer & Paas, 2003). Component Display Theory (CDT) is ideal for the development of CBL content because it examines individual components of instructional design and their relationships to learning (Mills, Lawless, & Pratt, 2006). It treats individual pieces of instruction as stand-alone lessons, but emphasizes the synergy created when such pieces are combined for a complete teaching strategy. CDT is related to CPM, which is the framework described earlier, through the identification of different types of learning including content
and performance. Content is the material with which students interact whereas performance is the nature of the interaction between learner and material; this is what learners will do with the newly acquired content (Merrill, 1983). CDT has been found to align well with cognitively intense instruction (Parra, 2012). CDT calls on associative memory to enhance academic performance. This type of memory is hierarchical and operates within a network, known as a context for learning (Merrill, 1983). A core component of CDT is learner control which allows students to adapt the instructional experience to meet their educational needs. The student control of learning allows for ownership and engagement as well as escalation on the hierarchy of learning domains, such as that presented by Bloom’s Taxonomy (Mills, Lawless, & Pratt, 2006). A key part of creating a powerful learning environment involves setting up appropriate cognitive learning objectives and course activities. A well-known classification system is Bloom’s Taxonomy. The original taxonomy included six categories: (a) knowledge; (b) comprehension; (c) application; (d) analysis; (e) synthesis; and (f) evaluation (Bloom, Englehard, Furst, Hill, & Krathwohl, 1956). This taxonomy was revised to reflect cognitive functions: (a) remember; (b) understand; (c) apply; (d) analyze; (e) evaluate; and (f) create (Anderson et al., 2001). Course design should focus on striking a balance among the levels of Bloom’s Taxonomy that are utilized. In a study regarding the use of discussion boards in courses offered in a CBL environment, researchers found that those discussion questions based at the highest levels of Bloom’s taxonomy elicited responses from students that evidenced clear and thoughtful engagement with the content. Responses from students to these questions were more developed. In addition, student interaction (students’ comments on others’ responses) was also more evident with questions or prompts at the higher levels of the taxonomy. Findings
have indicated that the levels of critical thinking engaged in by learners are impacted by the challenges presented by the instructors and by peers (Ertmer, Sadaf, & Ertmer, 2011). Recently, taxonomies have been designed to help instructors evaluate their assessments in relation to course goals or objectives. One example is Webb’s Depth of Knowledge Levels. This taxonomy was developed to ensure course assessments reach the levels of critical thinking designated by course goals and learning objectives. Level 1 (Recall) involves simple recall or application of procedures. Level 2 (Skill/Concept) requires students to move beyond simple recall with some processing or comprehension behavior. Level 3 (Strategic Thinking) involves reasoning, planning, and citing evidence. Finally, Level 4 (Extended Thinking) asks students to perform complex analyses and develop thinking, sometimes over a period of time (Holmes, 2012). This taxonomy is useful in helping instructors design course assessments at the same level of rigor and cognitive challenge as the course goals and objectives. Taxonomies are effective representations of teaching concepts because they guide practices. They also help relate multiple concepts such as learning theories in such a way as to be implementation-ready. The previously described design models have helped to create the theories described on learning and presentation. Meaning-making and developmental teaching are at the heart of these learning theories. Meaning is derived when students learn to integrate new knowledge and skills into their thinking and practices. Developmental teaching is the process of meeting learners where they currently function and building on their knowledge base with higher order thinking challenges, additional components of learning, and expanded application of learned material. This helps build efficacy which, as will be described next, is central to the success of learners in a CBL environment.
MOTIVATION AND ENGAGEMENT IN COMPUTER-BASED ENVIRONMENTS There are several important trends in motivation research that have developed in the last century which have influenced current models of motivation in education. Anderman and Dawson (2011) noted that one of the most prominent shifts in motivation research was the shift to a more cognitive and social-cognitive perspective. Self-efficacy work has served as a vehicle for that shift. It originated from Bandura’s research in the 1970’s and acknowledged the role of beliefs, particularly related to individuals’ perceptions of their abilities, in completing tasks. Self efficacy has been identified as a situational construct. Students in traditional classrooms may perceive their abilities as sufficient to succeed in the course; however, those same students may perceive their abilities less favorably if a course is offered through a CBL environment. Students with a strong sense of self-efficacy in a particular class environment may be more likely to be cognitively engaged, whereas those students with a lower sense of self-efficacy are more likely to withdraw from the course or disengage from the content (Pintrich & Schunk, 2002). Learner control is also central to the sense of efficacy and the drive to maintain engagement in the learning process. Pintrich and Schunk (2002) define motivation as “the process whereby goal-directed activity is instigated and sustained” (p. 5). According to Anderman and Anderman (2010), the classroom environment, students’ background knowledge about the task, and the specific activity affect learners’ motivation. Motivation impacts student academic success in many ways, most of which are magnified in a computer-based environment. Many computer-based courses lose students at an alarming rate (Allen & Seaman, 2010). In a study using path analysis, motivation was measured and correlated with other variables. Motivation has
been found to be impacted by learning strategies. Additionally, motivation affects self-efficacy, the value placed on a task, and course satisfaction (Wang, Shannon, & Ross, 2013). Motivation has also been identified as a determinant of student engagement (Milligan, Littlejohn, & Margaryan, 2013). Because of the linkage between motivation and self-regulated learning (Paechter, Maier, & Macher, 2010), defining and promoting self-regulated learning are key to student success. Self-regulated learning has been described as an activity requiring an orientation toward goals, self-control, and a drive toward tasks which are cognitive in nature (Pintrich, 1995). Expectancy-value theory also plays an important role, particularly for courses in a CBL environment. Expectancy-value theory is centered around students’ perceptions of assigned tasks as related to their likelihood of success and the value for the task itself (Pintrich & Schunk, 2002). Instructors can adopt behaviors to help students develop their expectation for success or to increase students’ perceived task value. These behaviors include providing specific, detailed feedback on student work, assigning tasks that are reasonably challenging without being too difficult, avoiding public comparison of students’ achievements and abilities, sharing with students the instructional purpose or goal of assignments given, modeling interest in or value of the course content and assignments, and providing autonomy for students through choice and control when possible (Pintrich & Schunk, 2002). Students who have a high expectancy for success and who value the tasks that are assigned are more likely to engage with course content in a meaningful way. To foster motivation, several qualities have been identified as important, particularly for courses in a computer-based learning environment. These include ensuring appropriate technology competence in students, use of technology tools to advance reflection and problem-solving, accurate and timely feedback that is fair and regular, as
well as proper modeling of interactive and rich discussions (Kranzow, 2013). Additionally, other practices used during the administration of courses in a CBL environment have been found to increase student engagement and motivation. These include use of short messaging services specifically for feedback and providing content that is tailored to student learning needs (Chaiprasurt & Esichaikul, 2013). Micro-blogging is a practice of blogging using SMS on mobile devices. This practice was found to enhance motivation and group dynamics in a CBL course (Pauschenwein & Sfriri, 2010). These are especially important qualities in the pursuit of sustaining student motivation and engagement beyond the initial interest in a computer-based course. Student autonomy and ownership of their learning experiences is key in the findings described above and the theories underlying metacognition, reflection, and SRL. Especially integral is the persistence of lifelong learning skills and the ability to transfer knowledge and skills to new environments and challenges. Case study findings and strategies aligned with these concepts follow.
CASE STUDY FINDINGS In recognition of the importance of developing students’ metacognitive and reflective abilities in computer-based learning environments, a focus group interview was conducted with five recent graduates (within the last year) of an online Master’s of Education program to learn about their perspectives. This program is offered at a private, liberal arts college in Southern California and draws students from multiple campuses across the state. In the Master’s of Education program, the core courses are offered fully online. The courses being offered in the fully online format are delivered using the university’s adopted learning management system, which is Blackboard. Within Blackboard, instructors use a variety of tools to deliver content and engage
learners. SoftChalk, a web-based lesson design software, is used in a similar format to lecture in a face-to-face classroom. Students read content, link to internet resources, watch videos (instructor-made and/or embedded from other sources), and participate in quizzes and activities to check their understanding of the content. In addition to SoftChalk lessons, instructors utilize collaborative tools such as discussion boards, blogs, and wikis to help students process course content. The majority of the online courses are delivered in an asynchronous fashion. Occasionally, students may be asked to attend one or more synchronous sessions throughout the course for the purpose of student presentations or other course activities. Instructors typically use either Adobe Connect or WebEx web conferencing software for these sessions. The core curriculum in the Master’s of Education program consists of four main content areas: educational assessment, methods of research, current issues in teaching, and a culminating master’s project. The focus of these classes is to give students additional skills that will help them keep current in their respective fields in order to be more effective in the classroom and also to help them develop leadership skills and knowledge to improve the field of K-12 education. The conversation of the focus group centered around the impact of class activities on students’ metacognition and self-regulated learning as seen from the students’ perspectives. The six open-ended interview questions focused on the graduates’ experiences with course activities that involved metacognition and reflection. In addition, participants shared how those experiences transferred to their jobs as classroom teachers. In analyzing the transcripts of the focus group interview, several main themes emerged. First, computer-based learning environments, by their very nature, require students to demonstrate selfregulated learning. Focus group participants discussed the general format of the computer-based learning environment and how they responded to
it. One participant stated, “Online you have to be more self-motivated and be more on top of it... so I think that it helps you self-regulate, not just your learning but also your time.” With regard to the computer-based learning environment, another participant said, “It kind of empowers you. It teaches you self-management skills.” A third participant shared, “I think I realized that I actually do have the ability to be much more self-regulated and much more effective with my time management...I think online courses are a really, really good way of teaching self-regulated learning.” Another theme that emerged from the focus group interview was the importance of interactive reflective activities in computer-based learning environments. For example, many of the focus group participants mentioned assignments that involved written reflections on class content and readings. They were required to not only post their original reflections, but also comment and respond to their classmates’ reflections to create authentic dialogue around class content. One participant commented, “The interaction helped me reflect on what we did, and ‘Oh, okay. We got it.” It’s not just from book learning or other types of learning.” Another participant shared, “I think reading other people’s posts helped to further reflect and develop your own thoughts because I would write my reflection, and then I would read somebody else’s, and they would have a completely different perspective. That would be helpful in figuring out what I actually thought.” In comparing written reflections in a computerbased learning environment to oral discussions in a traditional classroom, a third participant stated, “When I reflect in a written form, that makes it more concrete for me, and that helps me because you first of all have to think about it and formulate your thoughts enough to actually get them on to the screen.” A third theme from the focus group interview was the significance of the instructor’s role in establishing a computer-based learning
environment that enabled students to develop self-regulated learning skills. Focus group participants mentioned instructional strategies such as checkpoints, checklists, and outlines that helped them break down large projects into manageable tasks and keep them on target with deadlines. One participant commented, “I loved the check off charts which let me know where I should be every step of the way.” Another participant said, “For me, having an outline for the project made it easy to keep myself on track.” A third participant explained, “I think in each class there was some sort of way of breaking major assignments down, so I was able to be much more self-regulated and effective with my time management.” They also mentioned a heightened sense of individual accountability in the computer-based learning environment, not only for timely participation, but also the quality of their work. One participant said, “When you’re responding online, you don’t just have the teacher’s eyes on you. You have a group of people’s eyes on you. It forced me to make sure that I had a good argument or I knew what I was talking about because you know they’re going to come back with questions, and you have to know what you’re talking about…[it] kind of pushes you to another level.” Another participant shared, “Students have to be able to organize their thoughts in an academic way in an online setting because you don’t know each other...you need to be able to articulate.” A final theme present in the focus group interview was the participants’ application of skills and strategies acquired in the computer-based learning environment to their professional practice in their roles as K-12 classroom teachers. One participant spoke about implementing project checkpoints and accountability with her middle school students. She said, “I learned I need to have that level of accountability with my kids. Just saying, ‘Ok, it’s due on this date’ is not enough for them. They need the sectioned out projects where I’m checking their progress as we go.” Another participant spoke to the importance of self-regulation and time
management. She specified, “As a teacher, you’re really your own boss. Nobody’s going to sit there and say, ‘Ok, you know you have this deadline.’ I have a lot of deadlines because I have reports every single week, and I have to be responsible for those deadlines.” She also spoke about implementing project checklists and timelines with her students, saying, “I do that with my kids in my classroom too. Everything that we do, they check off. So they know each step of the way.” A third participant shared, “Having all of these different modalities of teaching and learning taught me I really need to work on doing that with my students. It’s so easy to fall into teaching in your own learning style, teaching in what’s comfortable and what you like.” Another participant spoke about the importance of being aware of learning styles and implementing this awareness in the classroom. She said, “I don’t do that with my kids, and I think that could make the work that they do together so much more impactful for them and more effective for everyone.” After analyzing and categorizing the focus group participants’ responses, it became clear that the participants further developed as selfregulated learners as a result of participation in the computer-based learning environment. They developed strategies in self-regulated learning that they are now implementing in their K-12 classrooms. However, it is also clear that the focus group participants attribute this development and progress to the computer-based learning environment itself rather than to the specific course activities and assignments. Throughout the focus group interview, participants rarely commented regarding metacognition in their own experiences as students or with the students in their classrooms. While participants spoke to implementing self-regulated learning strategies, they did not speak to honing their own metacognitive skills or to helping their students at the K-12 level develop metacognitive skills. This is a reflection of a lack of explicit instruction related to metacognition in the master’s courses
themselves. A key learning is that course activities and assignments in computer-based learning environments should be revised to more directly engage students in conversations or reflections around metacognition. In addition, course activities should more clearly emphasize the importance of providing similar opportunities for K-12 students to develop their own metacognitive skills. Based on students’ comments in the focus group interview, feedback from student course evaluations, conversations about best practices among faculty who teach in the computer-based learning environment, and a review of the literature, we have developed a list of strategies that we have found to support students in developing self-regulation and metacognitive skills and ways to assess those skills.
OVERVIEW OF STRATEGIES
The use of strategies, which emerged in the 1950s, is rooted in information processing theory (Miller, Galanter, & Pribram, 1960). A strategy “is composed of cognitive operations over and above the processes that are natural consequences of carrying out the task, ranging from one such operation to a sequence of interdependent operations” (Pressley, Forrest-Pressley, Elliot-Faust, & Miller, 1985, p. 4). To help students develop and use self-regulated learning and metacognition effectively, instructors must incorporate course strategies that align with students’ academic needs as well as their career goals. Students accustomed to lecture-style teaching are often underprepared for the challenges of learning in a CBL environment (McAllister & Watkins, 2012). Therefore, instructors need to be cognizant of how to set up their courses using techniques that will help students navigate in a CBL environment. To help students be successful, instructors must employ a variety of strategies in all phases of the course, including course design, facilitation of course activities, and assessment.
Course Design
Designing a course for a computer-based learning environment involves much planning on the part of the instructor prior to the course going live. Instructors need to create a detailed plan for assignments, activities, and assessment that will support students in understanding the content as well as developing their reflective skills. Given the unique setting of the CBL environment and the lack of face-to-face time between students and instructor, care must be taken to establish a course that is comprehensible, clear, and user-friendly. The strategies in this section should be considered as the instructor is planning for and designing the course for the CBL environment.
Course Orientation Video and Course Scavenger Hunt
Prior to the beginning of the course, the instructor should record an orientation video for students introducing them to the content and layout of the CBL environment. The video should include the organization of the CBL environment, where to find materials, and how to troubleshoot issues specific to the environment. The instructor can then create an activity such as a course scavenger hunt or a matching activity to help students check their understanding. This type of activity provides students with immediate feedback as to their understanding of the course set-up and structure. It also provides students with the opportunity to ask specific clarifying questions right from the beginning of the course. In addition, it encourages students to go back and search for information or to review a video to answer questions that they are stuck on. The instructor can also use the information gathered from this activity to reflect on their course set-up. If the instructor sees the same question asked repeatedly, they know that they need to clarify that content area in the course overview.
Instructors designing a course scavenger hunt may wish to include questions such as the following:
• Where can you find the course syllabus?
• Where can you find the rubrics for the course assignments?
• Name two places where you can find an overview of the course assignments.
• Under which course link would you find contact information for your classmates?
• What is the tool called that allows you to submit your assignments online?
• On what day/time are new modules posted?
• How many points are deducted for each day an assignment is submitted past the due date?
Time Management Questionnaire

Another key element of helping students develop self-regulated learning skills involves helping students understand the expectations of the course. A key part of this is having students understand the time commitment involved in the course. From there, students need to develop a plan for how they are going to budget their time to be successful. Instructors can provide a questionnaire at the beginning of the course that helps students develop their understanding of the course time requirements as well as providing them with guiding questions to develop a daily, weekly, or monthly time management plan. Many times, students take online classes because they think there will be less work or that there will not be any specific deadlines. Providing clear expectations of the time requirements of the course dispels these myths right from the beginning. Time management questionnaires should include questions such as the following:

• How do you think coursework in a CBL environment differs from coursework in a face-to-face environment?
• How much time have you set aside to complete coursework in a typical week?
• When are you available for virtual group meetings with your peers?
• What personal or life events do you anticipate this term that will require special scheduling?
• Do you work better in short (1 hour) blocks of time or long (2-3 hour) blocks of time? Do you prefer to work at a specific time of day?
• Do you have a suitable work environment in which to complete your classwork?
Problem-Based Learning

The actual instructional approach used by the instructor can also help develop SRL skills in students. Problem-based learning is an instructional strategy that incorporates components such as goal setting, establishing task value, and metacognition (Cho & Shen, 2013). Through problem-based learning, students are engaged in identifying a problem or question to be solved, completing research related to the problem or question, and then applying what was learned to solve the problem or question. For example, students in a graduate-level education course may be asked to design a new math curriculum for a local elementary school. In order to complete the project effectively, students are engaged in examining learning theories, instructional approaches for teaching math, and the population of the local school. Through self-directed study, students gather this information and apply it to complete the task assigned. In a CBL environment, students have access to a wide variety of tools and resources to help them complete research related to their task. The CBL environment also lends itself to documenting the research completed, helping students to document and process the information collected. Often, problem-based learning requires students to work in collaborative teams. Students can use
online collaboration tools in the CBL environment to help keep their group organized and on task. Throughout problem-based learning tasks, learning is active rather than passive, requiring students to have a strong understanding of SRL and the study skills that they find most effective.
Assignment Deadlines and Expectations

Accountability in a CBL environment is a vital key to a successful course. In designing the course, the instructor must establish clear and consistent accountability for students. Assignments must have clear criteria and deadlines, and consequences for missing deadlines must be established. For example, if a student submits an assignment after the deadline, the instructor may choose to not accept the assignment at all or to accept the assignment for partial credit. Consistent enforcement of such consequences can aid students in developing the ability to self-regulate with regards to important course timelines and due dates.

Reflective Journaling

Instructors operating in a CBL environment should also take steps to ensure accountability and student success for course activities such as assigned reading. Creating reflective journal entries or processing activities based on required readings not only establishes accountability for the assignment, but it also helps students to reflect on their understanding of what was read. These processing activities can be based on the specific content of the reading, but they can also include metacognitive questions such as the following:

• What was something new you learned from the reading?
• What questions do you have after reading?
• What strategies did you use while reading to help you comprehend the content?
• What connections did you make between what you read and information or experiences acquired previously?
• How will you use the new knowledge gained from this reading assignment?

This style of reflection moves the student beyond the content and encourages development of metacognitive skills.

Facilitating Course Activities

When setting up a CBL course, it is critical that the instructor consider how to incorporate intentional and meaningful opportunities for students to use their SRL skills. Students need opportunities to reflect on their learning and growth and get feedback from other students, as well as the instructor (Rowe & Rafferty, 2013). The strategies presented in this section can assist in providing such opportunities.

Discussion Boards, Wikis, and Blogs

One way to promote SRL is through the use of online conversation tools such as discussion boards, wikis, and blogs. These tools can be used in a variety of ways. Students can write a reflective blog entry regarding their views/background knowledge of a topic prior to viewing the new content (Bixler, 2008). Students can then go back and reflect on their opinions after the lesson. Another option is to have students use a wiki to share images and explanations that students draw from the lesson. The wiki becomes a collaborative representation of students’ understanding of the content presented, along with connections to other experiences or knowledge. The process of sharing their understanding with classmates, as well as viewing their classmates’ attempts at processing the information, can aid students in developing their own metacognition and arriving at a better understanding of how to process new information. Students can also pose comprehension questions to their peers through a discussion
board format to strengthen understanding. Using students’ contributions to the discussion boards, the instructor can ask clarifying questions that will support the students in reflecting on what they have written.
Concept Maps

Assigning students to create a concept map at the beginning of the course to show their knowledge of a given topic is an ongoing way to have students reflect on what they know and to help them build connections throughout the course. Students can use both words and images to demonstrate what they already know about, for example, the Common Core Standards. This can be shared with the instructor to help the instructor understand their students’ level of background knowledge. As the course continues, students build on their concept maps, adding their new learnings and connecting topics visually. Students can also go back and change any misinformation they had at the beginning of the course. These concept maps also become study guides for students, and instructors can analyze the concept maps at the end of the course to reflect on what connections the students made. There are many computer-based concept mapping tools available for courses being offered in a CBL environment. Many of these tools also allow for collaborative development of concept maps, as well as easy sharing of concept maps with peers or instructors.
Student Self-Assessment

Another effective self-regulation and assessment strategy is to have students assess their prior knowledge using a Likert scale. The instructor creates a list of concepts and skills that students should have prior to the course as well as content that will be developed during the course. Students assess their competence for each item on a 4-point scale, with 1 meaning “I have limited knowledge of the skill/concept” through 4 meaning “I have
enough understanding of the concept to be able to explain it to another student”. This activity requires students to reflect on their existing knowledge and to classify the level of their understanding. This information then allows the instructor to design appropriate activities to meet students’ diverse needs and to set up groups in which there are differing knowledge levels (Ambrose et al., 2010).
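Operationalizing this kind of self-assessment is straightforward in most CBL environments. The short Python sketch below is a minimal illustration only, not a tool described in the study: it aggregates hypothetical 4-point self-ratings by concept so an instructor can spot low-confidence topics and pair the most and least confident students, one simple way of forming groups with differing knowledge levels. The student names, concept labels, and pairing rule are all assumptions.

```python
# Illustrative sketch: summarizing 4-point Likert self-ratings (1 = limited
# knowledge, 4 = can explain it to another student). All data are hypothetical.
from statistics import mean

ratings = {  # student -> {concept: self-rating}
    "Ana":    {"learning theories": 1, "lesson design": 3, "assessment": 2},
    "Ben":    {"learning theories": 4, "lesson design": 2, "assessment": 3},
    "Chloe":  {"learning theories": 2, "lesson design": 4, "assessment": 1},
    "Dmitri": {"learning theories": 3, "lesson design": 1, "assessment": 4},
}

# Average self-rating per concept: low averages flag topics needing more support.
concepts = {c for r in ratings.values() for c in r}
for concept in sorted(concepts):
    avg = mean(r[concept] for r in ratings.values())
    print(f"{concept}: mean self-rating {avg:.1f}")

# Pair the most and least confident students on one concept, a simple way to
# set up groups in which there are differing knowledge levels.
target = "learning theories"
ordered = sorted(ratings, key=lambda s: ratings[s][target])
pairs = list(zip(ordered, reversed(ordered)))[: len(ordered) // 2]
print(f"Suggested mixed-knowledge pairs for '{target}':", pairs)
```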
Journals

Journals provide another opportunity for students to self-regulate and reflect on the process of their learning. When using journals, it is important for the instructor to provide prompts that will encourage this self-regulation and reflection. Some sample prompts include:

• Which content areas in this course have been easy for you to grasp?
• Why do you think that content was easy for you?
• Which content areas in this course have been more difficult for you to grasp?
• Why do you think that content was difficult for you?
• What time management strategies have you used so far in the course and how effective have they been?
• Based on what you have learned so far, what are your next steps for strengthening your knowledge in this class?
Journals are also an ideal space for students to set their own individual goals for the class and to reflect on the progress they are making towards achieving those goals.
Connective Language

Students often struggle to organize their new learning; many times they view each concept as an isolated element and therefore have trouble remembering everything independently. Instructors can help students reflect on how concepts fit together by using connective language in the videos they produce for their computer-based courses. For example, an instructor can begin a new lesson by saying, “Last week, we discussed what social reform is and its importance. This week we will build on that knowledge by digging deeper and exploring the different types of social reforms that have occurred over the last decade.” In addition to the instructor using connective terminology, it is important to provide students with opportunities to do the same. Students can be given assignments with key words such as “How does a democratic society compare to a dictatorship? How do these two types of government differ?” Students can also be asked to create a summary of the content in which they use connective terms such as:

• The overall idea/concept is…
• The supporting details are…
• This connects to what we already learned about…
• I see similarities between…
• These two concepts are related because…

Graphic organizers, such as Venn Diagrams or hierarchical charts, can be very useful tools that provide visual support for using relational terms.

Gantt Charts

Another key tool that instructors can incorporate into their CBL courses to develop self-regulation skills is the use of project checklists. To begin, the instructor should model this behavior by providing detailed instructions for large projects that include a breakdown of the required components and the steps students should take to be successful. As the class progresses, students can adopt this behavior for themselves through tools such as a Gantt Chart. The Gantt Chart helps students break large projects/assignments into manageable chunks or phases and determine the amount of time needed for each phase. It serves as a visual tool to help students focus on one component at a time and to map out a timeline to follow. To be able to break down a large task into smaller components, students must reflect on all of the steps to completing the assignment and how long it will take to finish each step.

Figure 1. Sample Gantt Chart

Cooperative Group Activities

Cooperative or collaborative assignments can be very effective in the CBL environment. Students operating in this type of environment often feel isolated from their classmates. Cooperative group assignments help them connect with their peers in the class to achieve a specific goal. When students in a CBL environment complete a group assignment, it is important to have them reflect on the process of their participation to increase their awareness of their metacognition and self-regulation skills in this unique context. Upon completion of the group assignment, students can be asked questions such as the following:

• How effective was the group in working together?
• How effective was the group in completing the assignment?
• What steps could have been taken to help the group work more effectively?
• What steps could have been taken to help you, individually, work more effectively within the group?

Such reflections help students think beyond the assignment itself and identify collaborative skills that need improvement. In addition, more formalized self and peer assessments of contributions to the group can be factored in to assignment scores or grades. Students can assign Likert scale scores or grades for each member of the group, adding justification for their scores. Formalizing the reflection in this way adds an additional layer of opportunities for students to increase their metacognitive awareness.
Simulations and Scenarios

Simulations and scenarios are very useful activities in the CBL environment. These types of learning activities help students connect their computer-based learning to real-life examples and events and provide a forum for students to develop and test out their problem solving abilities. Students may be given a scenario of a struggling business, and then they develop a business plan using their course learnings. Students in a current issues class in education may be assigned a project to educate K-12 parents about the Common Core Standards in a comprehensible way. Providing authentic projects such as these engages students in exercising problem solving skills in a safe environment. Participating in simulations and scenario activities also develops critical aptitudes for later experiences in professional settings.

Current Events

Another way to help students apply classroom training to real world problems is through dramatic videos and articles that illustrate current events and ideas. Students begin by viewing a relevant, compelling video or reading an impactful article. They follow up by reflecting on the significance of their classroom training in light of the illustration. Through leading questions, they develop their own point of view based upon knowledge and experience. Finally, they consider the implications these real life situations have on their lives. For example, students in a business ethics class might watch a “60 Minutes” video on social media’s impact on modern advertising and read a relevant article. They may be asked to evaluate how ethical considerations addressed in the class might be relevant to social media. They
could then develop their own views about the interrelationship between businesses and social media. Finally, they would consider how social media impacts them, how they might use it to their benefit in their professional careers, and what steps they might wish to consider to protect their own privacy. Computer based learning environments offer students convenient access to audiovisual resources as well as print media and a variety of ways to communicate their thoughts with other class members.
Assessment

A third area for instructors to incorporate opportunities for students’ self-regulation is in the assessment arena. In the early stages of the course, students can take a pre-assessment to gauge their locus of control orientation. It is important for students who have an external locus of control orientation to realize that they do have control over their experience and ultimate success in a computer-based learning course (Lee, Choi, & Kim, 2013). When such perceptions are identified early on by the instructor, additional strategies such as those described below can be utilized to help the student begin to develop an internal locus of control orientation.
Pre-Assessments

Non-graded pre-assessments on the content of the course can help students identify their strength areas and the content areas where they will need to dedicate more time. Starting a class in a computer-based learning environment by focusing on their prior knowledge allows students to maximize their study time. This also helps students focus on their metacognition from the very start of the course. The pre-assessment can be revisited or repeated at key points throughout the course to help students monitor their progress in relationship to the course content. It may also be helpful to ask students to reflect on their confidence related
to course content or assignments. Through self-assessment of confidence, students can begin to observe patterns in their approaches to course content or assignments, which allows them to build upon those approaches that have led to success and address any flaws that may be found.
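As a purely illustrative sketch (the chapter does not prescribe any particular tool), the following Python snippet shows one way the confidence self-assessments described above could be compared with earned scores so that over- and under-confidence become visible to the student; the assignment names, rating scales, and thresholds are assumptions.

```python
# Illustrative sketch: comparing self-rated confidence with earned scores so a
# student can see where their self-assessment was well calibrated. Hypothetical data.
records = [
    # (assignment, confidence 1-10 before submitting, score earned 0-100)
    ("Module 1 quiz",        9, 62),
    ("Reflective journal 1", 5, 88),
    ("Module 2 quiz",        7, 74),
    ("Curriculum project",   6, 91),
]

for assignment, confidence, score in records:
    # Put confidence on the same 0-100 scale as the score for a rough comparison.
    gap = score - confidence * 10
    if gap <= -15:
        note = "overconfident: revisit study strategies for this topic"
    elif gap >= 15:
        note = "underconfident: the current approach is working better than expected"
    else:
        note = "reasonably well calibrated"
    print(f"{assignment}: confidence {confidence}/10, score {score} -> {note}")
```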
Anticipation/Reaction Guides

This strategy creates an opportunity for students to reflect on what they know about a topic and what their knowledge is based on (the anticipation part) prior to interacting with the content in the computer-based learning environment. Then students participate in the course reading, instructor presentation, or problem-based learning activity. After their interaction with the content, students go back and revisit their original responses and either keep them or change them. In developing their reaction responses, students use evidence from the course presentation or reading to support their position. Table 1 shows a sample anticipation/reaction guide.

Table 1. Sample anticipation/reaction guide
Columns: My Opinion Prior to Class (Agree/Disagree) | Reason for My Opinion | Statement | My Opinion After (Agree/Disagree) | Reason for My Opinion After
Statements:
• The best way to learn another language is to spend a year in another country.
• It is easier for young children than adults to develop fluency in a language.
• Grammar rules should be the first thing taught when learning a new language.
• Knowledge in one language can help the learner master a second language.

Student Generated Test Questions

Another way to involve students in reflecting on their learning is to assign students to create review/test questions on the content areas that they have struggled with the most. This activity causes students to reflect on specific content that they have grappled with, and then they create questions that they feel will help others assess whether they understand that content. These questions can be built into a practice test that is posted on the course management system, or the instructor may use the questions in a formal assessment.

Analogies

Creating assessment activities that require students to reflect on what they have learned and how it applies to other things in their lives is a critical component to make learning meaningful. One such way to do this is by having students work
on analogies using the course content. Analogies require deep understanding of content to determine relationships between concepts. Students are given the first part of an analogy, they analyze the relationship between the two concepts, and then they complete the second half of the analogy using concepts from their background knowledge that share a similar relationship. Students then need to describe the relationship they see between the concepts. For example, in a statistics class, the instructor gives the students, “Non-parametric statistics: parametric statistics:: __________: __________.” Students might describe non-parametric statistics as being less precise than parametric statistics and then they think of something from their own lives that would have a less precise to more precise relationship. They end up creating the following analogy: “Non-parametric statistics: parametric statistics:: a stopwatch: a chronometer.” The analogies can be posted in a blog in which students also share their reasoning behind the second half of their analogies. Viewing different analogies can help students discover different relationships between the concepts and connect their new learning to an already familiar concept.
Peer Review

Students can sometimes feel isolated in a computer based learning environment. When they need clarification on an assignment, they might not know whom to turn to other than the instructor. Having students read their classmates’ work and provide constructive feedback is a natural way to build in support and reflection. This exercise prompts students to start dialoguing about the assignment, and seeing another example provides information that can also help students go back and reflect on and review their own work and then revise it. For this activity to be successful, it is important that the instructor provides specific criteria to address in the peer review. In addition, it is important to model how to give constructive feedback that is meaningful and objective. This can be achieved by the instructor providing a list of guiding questions. For example, in a teacher credential class in which students are writing lesson plans, the questions might include:

• What connection is there between the lesson objectives and the activities described in the lesson?
• What evidence is there of student engagement in the lesson?
• Which activities in the lesson might be difficult for students and why?
• How is technology incorporated in the lesson?

Assignment Reflections

Assignment-specific reflections are another valuable strategy for classes in a CBL environment. These types of reflections ask students to consider the process of completing the assignment itself. This helps students begin to make connections between their time and effort invested and the results in terms of grading and feedback from the instructor. Assignment-specific reflections can also be implemented for course assessments, asking students to think about the amount of time spent studying for the test and any study strategies they may have used. Questions may include the following:

• How many days prior to the assignment deadline did you begin working on it?
• How many drafts/attempts did you create?
• Did you solicit any peer feedback prior to submitting your assignment?
• Which (if any) strategies did you use in completing the assignment?
• On a scale from 1-10, how would you rate your effort on this assignment?
• What challenges did you face in completing this assignment?
• List three things you learned from completing this assignment.

Instructors can also require students to reflect on instructor feedback on an assignment or exam. Analyzing the feedback in the context of the effort put into the assignment or exam can help students recognize patterns and develop metacognitive or SRL skills. For example, if instructor feedback is focused on errors made, students can then chart or categorize those errors. This helps them see patterns, growth, and areas of weakness across multiple assignments. In turn, students then have a clear record of areas that require additional study or effort.

Eportfolios

Another effective assessment strategy is the use of Eportfolios. They can be used to document the student’s academic journey throughout their educational program and serve as a reflective database. Eportfolios can be very structured with specific requirements as to what students need to include, or they can give students more flexibility with guidelines such as “Upload five work samples that best show your progress and learning in the class.” Giving students the choice of what to include in their Eportfolio often leads to deeper reflection. Students can also be asked to include written reflections based on the work included, answering questions such as the following:

• Why did you include these artifacts?
• What do these artifacts illustrate about your learning?
• What were your key learnings from this course?
Eportfolios also allow for peer review and critique of student artifacts. Students can share their Eportfolios with peers for additional feedback. Finally, Eportfolios encourage students to continue to reflect on their learning after the course is completed. As new artifacts are added, students will often first review and reflect on what they have already included in their Eportfolios.
Course Reflection

At the conclusion of the course, culminating assessments and reflective activities can be an
effective tool for helping students recognize their progress and further develop metacognitive or SRL awareness and skills. Activities such as writing a letter to future students of the same course can aid in this process of reflection. Students can provide advice to future students about how to be successful on individual course assignments and in the course overall. Such reflection helps students process their own experiences in the class and identify which behaviors led them to be successful. Table 2 lists these strategies within the appropriate cognitive function categories.
Table 2. Strategies that support self-regulated learning, metacognition, and assessment in a computer-based learning environment
(Each strategy is marked against the categories it supports: Self-Regulated Learning, Metacognition, Assessment.)
Strategies: Course Orientation Video; Scavenger Hunt; Time Management Questionnaire; Problem-Based Learning; Reflective Journaling; Discussion Boards, Wikis, and Blogs; Concept Mapping; Student Self-Assessment; Connective Language; Gantt Chart; Pre-Assessment; Cooperative Group Activities; Current Events; Analogies; Peer Review; Assessment Reflections; Assessment Feedback Analysis; Student Written Assessment Questions; Eportfolios; Course Reflection

ASSESSING METACOGNITION AND REFLECTION

Effective use of assessments and evaluation is a necessary, and often overlooked, component when developing metacognition and reflectivity. When students integrate reflection and metacognition into the education experience, assessment becomes a method for improving learning in addition to being an evaluation of performance. Students who emphasize effort, reflection, and self-improvement are more motivated to learn and are more willing to accept failure as a stepping stone to further
success (Dweck, 2000, 2006). The computerbased environment is an ideal platform for student reflection and developing metacognition. The critical first step begins when online instructors prompt students by asking questions about their own experience with the learning process. These questions should cover several metacognitive components. The first component is self awareness. This component focuses on helping students reflect on their knowledge and effort related to the desired learning. Students focused on self awareness may be asked to discuss their knowledge before, during, and after a lesson. Such comparisons increase students’ metacognition as they think about what they have learned and how. Task awareness is the second component. This component involves students in thinking about the assignment they were given, the parameters of that assignment, and their degree of success. The third component is strategic awareness, which engages students in thinking about how they managed themselves and their resources during the task or assignment. As part of this process, students may be asked to discuss any specific tools or resources they utilized. Alternately, students may be asked questions about how they managed their time or progress on the assignment. Such reflections focus on helping students realize the importance of managing themselves, thus further developing skills of self regulation. The final metacognitive component is planning awareness. This component involves students in analyzing any plans or timelines they developed to help themselves be successful. It may also touch upon any scheduling or time management strategies that were used, along with how well they adhered to these plans and strategies (MacLeod, Butler, & Syer, 1996; McLoughlin, Lee, & Chan, 2006). There are numerous assessments available in a computer-based learning environment that lead to student reflection and metacognition. Involving students in self-assessment is only the first step in the process. Those teaching in computer-based environments must also assess the effectiveness
of student metacognition and reflectivity. These primarily informal assessments require careful attention from instructors in order to ensure that students are indeed developing metacognitive and reflective skills. This constitutes the second key step in the assessment process. There are several important considerations when assessing metacognition and reflectivity in online communications. In assessing students’ reflections, instructor attention should not only focus on the depth and substance of students’ thoughts, but also evidence of additional investigation and reflection on the part of the student. Students should be producing reflections that are substantive, insightful, reflective, and challenging. Instructors may elect to impose certain requirements or expectations for students’ reflections. Such requirements may relate to the length of the reflection or the content. For example, the instructor may require in an assignment that students include at least one citation from a related source and references to at least two other concepts from the course. This ensures that students are not only reflecting on the current topic, but that they are also making connections to prior learning. In addition to assessing students’ original responses, instructors can also look for interaction between students. Such interaction may take the form of comments. As with reflections, instructors may decide to set requirements on students’ comments related to the length of the comment or the content. These requirements make clear to students the expectations for the level of reflection and assist the instructor in determining whether the desired level of reflection was reached. Other components instructors can look for in students’ reflections include solutions or suggestions for issues that have been raised or demonstrating understanding and engagement with the topic. As with other assignments, detailed and specific instructor feedback can help students refine their skills of reflection. In assessing students’ metacognition, instructors need to collect evidence from students
in the form of journal entries, blogs, or other documentation. In this case, reflective exercises such as those described above may serve a dual purpose in helping the instructor assess students’ metacognitive development. Requiring students to engage in goal setting exercises at the start of a course can set the stage for later reflection related to metacognition. Students can begin the course by outlining specific goals and a plan for achieving them. This plan can then be revisited at set points throughout the course as students reflect on their progress and adjust the plan accordingly (Vonderwell, Liang, & Alderman, 2007; Wilson & Wing Jan, 2008). Evidence of metacognition may also be present in students’ reflections on their progress on an assignment or in a given course. For example, the student may reflect on initial knowledge compared with knowledge gained by the end of the project or course, making note of particular strategies or study skills that have helped them be successful. In addition, students can look forward to future courses and reflect on skills or strategies that can transfer to help them be successful.

There are assessments specifically designed to provide information related to students’ reflective and metacognitive skills, though none can provide a comprehensive evaluation. They can, however, be very useful in providing baseline data and showing growth over the duration of a program. Prominent examples of these types of assessments include the following:

1. Metacognitive Activities Inventory (MCAI), which is designed to assess students’ metacognitive skillfulness at problem solving.
2. Motivated Strategies for Learning Questionnaire (MSLQ), which addresses those portions of classroom instruction that emphasize academic motivation and self-regulated learning.
3. Learning and Study Strategies Inventory (LASSI), which is designed to measure the use of learning and study strategies.
4. Self-Regulated Learning Interview Schedule (SRLIS), which is designed to collect student responses of self-regulatory behavior over time.

Of more immediate interest to instructors are evaluations of the strategies employed to enhance student metacognition and reflectivity in a specific course. In most cases, the assessments noted above are too broad to determine the usefulness of specific classroom strategies. Instructors can develop rubrics to assess how effectively students are learning from specific strategies. The following table provides key questions to help instructors assess how effectively strategies are actually influencing student metacognition and reflectivity.

Table 3. Questions to consider when assessing strategy effectiveness

Planning: How effectively were students able to determine
• The learning objectives?
• What skills, information, and resources will be required?
• Estimate how much time will be required?

Monitoring: How effectively were students able to
• Monitor their progress during the task?
• Make mid-course corrections during the task?
• Identify key information and learnings?
• Seek help when necessary?

Evaluating: How effectively were students able to evaluate
• Their performance on the task?
• How closely their performance aligned with expectations?
• What they learned from the task?
• What gaps exist in their understanding?
• What they can do to improve in the future?

Instructors who implement a variety of SRL strategies in their classes may find it difficult to conduct formal evaluations for every task every time they teach. In those cases, it is helpful to select a couple of key strategies for closer inspection each time the class is offered. For example, an instructor in a business ethics class utilizes the Current Events strategy. Students in the class view a “60 Minutes” segment on social media, privacy, and advertising. They
then read an article on the range of information social media provides advertisers. Finally, they respond to a set of prompts on the class blog. The goal of this activity is to help students recognize the information available through social media, as well as how that information can be used to support business. At the same time, this activity allows students to consider the ethical boundaries for the use of such information as defined in class and how that impacts their own values regarding privacy. They conclude the activity by reflecting on their own current use of social media and its implications for their privacy. An instructor reflecting on this assignment after the fact might ask him or herself the following questions about each phase:

• Planning:
◦ How well did students’ blog responses illustrate their understanding of the learning objectives?
◦ How well did students integrate information from class with the article and video?
◦ Were students producing quality blogs on time?
• Monitoring:
◦ How well were students able to process through the article, video, and blog?
◦ Were those who had trouble able to seek out help and complete the task?
• Evaluating:
◦ How effectively were students able to evaluate their performance on the blogs?
◦ How effective were the discussions that ensued in the blogs?
◦ How did students feel about their performance on the blogs?
◦ How well did they identify the key learnings?
◦ How well did they align information in the video and article with facts from classroom and textbook?
◦ Were there gaps in their integration of the information?
◦ Are students improving from earlier assignments?
After consideration of the above questions, the instructor will then be able to evaluate the strategy overall to determine if there are improvements that could be made in the assignment that would assist students in future classes to be more successful. Instructors can also look for patterns across several assignments in a particular class to make improvements.
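As a hypothetical illustration of this kind of cross-assignment pattern spotting (not an instrument used in the study), the Python sketch below averages simple 1-4 instructor ratings for the planning, monitoring, and evaluating questions from Table 3 across several assignments so that consistently weak phases stand out. The assignment names, ratings, and cutoff are assumptions chosen only for the example.

```python
# Illustrative sketch: averaging 1-4 instructor ratings of how well students
# planned, monitored, and evaluated on each assignment (hypothetical data),
# so recurring weak phases stand out across a course.
from collections import defaultdict

ratings = [
    # (assignment, phase, rating on a 1-4 scale)
    ("Current events blog 1", "planning", 3), ("Current events blog 1", "monitoring", 2),
    ("Current events blog 1", "evaluating", 2),
    ("Current events blog 2", "planning", 3), ("Current events blog 2", "monitoring", 3),
    ("Current events blog 2", "evaluating", 2),
    ("Group project", "planning", 2), ("Group project", "monitoring", 3),
    ("Group project", "evaluating", 3),
]

totals = defaultdict(list)
for _, phase, rating in ratings:
    totals[phase].append(rating)

for phase in ("planning", "monitoring", "evaluating"):
    scores = totals[phase]
    avg = sum(scores) / len(scores)
    flag = "  <- consider revising related activities" if avg < 2.5 else ""
    print(f"{phase:>10}: average {avg:.2f} across {len(scores)} assignments{flag}")
```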
CONCLUSION

The findings of this study, along with the growing body of research on self-regulated learning and reflectivity, also lead to several key implications.

1. Metacognition is a critical component in the self-regulated learning process. Students who regulate their own learning effectively are more successful in understanding problems, developing solutions, and evaluating results. They also more easily adapt to changing environments, exhibit greater motivation to succeed, and function more effectively than those who have not developed these skills. Students will benefit from classes that thoughtfully integrate the development of metacognition into the curriculum.
2. Recent research indicates that metacognitive skills can be learned and that specially designed instruction does enhance student metacognition. Classes that clearly articulate how to develop metacognitive skills and provide carefully developed opportunities to practice those skills most improve students’ self-regulated learning. Instructors should offer specific activities to reinforce students’ metacognitive skills through practice.
3. Meaningful student reflection leads to enhanced higher order thinking skills and commitment to lifelong learning. Students should be given more opportunities to integrate classroom learning with specific real life experiences through reflective exercises and to share cogent ideas with others. Interactive reflective activities provide students with structured opportunities to think and then share.
4. Computer-based learning, when properly utilized, provides students with an outstanding environment to expand their metacognitive and reflective processes. It is an ideal medium for ongoing practice with specific metacognitive and reflective strategies. Instructors should utilize more computer-based learning to immerse students in an environment that requires them to exercise their metacognitive and reflection skills.
5. Computer-based learning environments require students to be more in control of their own learning. Therefore, students who do not exhibit strong self-regulated learning processes are at greater risk in these environments. Instructors utilizing computer-based learning settings should make the focus on metacognition an important component in their classes in order to better equip students for academic success. They should also take advantage of computer-based instructional strategies such as checkpoints, checklists, and outlines to reinforce task management skills.
6. Recent advancements in technology and social media have provided instructors with outstanding tools to enhance metacognition and reflectivity. While there are already tools that promote reflection such as blogs and wikis, researchers and product developers should continue to develop new applications that emphasize strategies to enhance self-regulated learning and promote higher level thinking.
7. It is also important that instructors assess the effectiveness of the metacognitive and reflective strategies they employ in their classes. Assessment of these strategies provides instructors with the means to evaluate student performance. In addition, these assessments serve as a pathway to continuous improvement of the educational process.
FUTURE RESEARCH DIRECTIONS

Most higher education institutions have recognized that it is important for their students to value lifelong learning. As the global climate grows more dynamic, knowledge of static facts alone is insufficient preparation for success in the world. Educators must prepare learners to understand the process of learning and be able to adapt to ever-changing environments and challenges. Self-motivation and self-management skills are critical components for success in education and in life. Research shows that self-regulated learning has a positive impact on both of these essential skills. Perhaps most significantly, students who develop metacognition and reflectivity in classroom settings continue to use those skills throughout their careers. Future research should identify and measure the impact of the specific strategies described in this chapter as well as best practices regarding implementation and training. Additionally, models should be developed which specifically address the teaching of metacognition and reflection in CBL environments and the implications for real world application.
REFERENCES

Alegria, D., Boscardin, C., Poncelet, A., Mayfield, C., & Wamsley, M. (2014). Using tablets to support self-regulated learning in a longitudinal integrated clerkship. Medical Education Online, 19, 1–7. PMID:24646438 Allen, E., & Seaman, J. (2010). Learning on demand: Online education in the United States. Babson Survey Research Group. Sloan Consortium. Ambrose, S., Bridges, M., Lovett, M., DiPietro, M., & Norman, M. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass. Amzil, A., & Stine-Morrow, E. (2013). Metacognition: Components and relation to academic achievement in college. Arab World English Journal, 4(4), 371–385. Anderman, E. M., & Anderman, L. H. (2010). Classroom motivation. Upper Saddle River, NJ: Pearson. Anderman, E. M., & Dawson, H. (2011). Learning and motivation. In R. E. Mayer & P. A. Alexander (Eds.), Handbook of research on learning and instruction (pp. 219-241). New York: Routledge. Anderson, W., Krathwohl, R., Airasian, R., Cruikshank, A., Mayer, R., Pintrich, P., & Wittrock, M. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s Taxonomy of educational objectives (Complete Edition). New York: Longman. Bean, J., & Metzner, B. (1985). A conceptual model of nontraditional undergraduate student attrition. Review of Educational Research, 55(4), 485–650. doi:10.3102/00346543055004485 Bixler, B. (2008). The effects of scaffolding students’ problem-solving process via questions prompts on problem solving and intrinsic motivation in an online learning environment. Dissertation Abstracts International, 68(10), 4261A.
Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (Eds.). (1956). Taxonomy of educational objectives: Handbook I, cognitive domain. New York: David McKay. Boud, D., Keogh, R., & Walker, D. (1985). Reflection: Turning experience into learning. New York: Routledge. Brady, M., Seli, H., & Rosenthal, J. (2013). “Clickers” and metacognition: A quasi-experimental comparative study about metacognitive selfregulation and use of electronic feedback devices. Computers & Education, 65, 56–63. doi:10.1016/j. compedu.2013.02.001 Bredo, E. (2006). Conceptual confusion and educational psychology. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed.; pp. 43–57). Mahwah, NJ: Erlbaum. Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction (4th ed.). Upper Saddle River, NJ: Merrill/Prentice Hall. Bye, L., Smith, S., & Rallis, H. M. (2009). Reflection using an online discussion forum: Impact on student learning and satisfaction. Social Work Education, 28(8), 841–855. doi:10.1080/02615470802641322 Cavanaugh, J., & Perlmutter, M. (1982). Metamemory: A critical examination. Child Development, 53(1), 11–28. doi:10.2307/1129635 Chaiprasurt, C., & Esichaikul, V. (2013). Enhancing motivation in online courses with mobile communication tool support: A comparative study. International Review of Research in Open and Distance Learning, 14(3), 377–400. Cho, M., & Shen, D. (2013). Self-regulation in online learning. Distance Education, 34(3), 290–301. doi:10.1080/01587919.2013.835770
Cobb, R., & Bowers, J. (1999). Cognitive and situated learning perspectives in theory and practice. Educational Researcher, 28(2), 4–15. doi:10.3102/0013189X028002004 Cory, T. C. (2013). Aquinas on human selfknowledge. New York: Cambridge University Press. doi:10.1017/CBO9781107337619 Darling-Hammond, L. (2008). The case for university-based teacher education. In M. CochranSmith, S., Feiman-Nemser, D. J., McIntyre & K. E. Demers (Eds.), Handbook of research on teacher education (pp. 333-346). New York: Routledge. de Jong, T., & Pieters, J. (2006). The design of powerful learning environments. In P. A. Alexander & P. H. Winne (Eds.), Handbook of Educational Psychology (pp. 739–745). New York: Routledge. Detlor, B., Booker, L., Serenko, A., & Julien, H. (2012). Student perception of information literacy instruction: The importance of active learning. Education for Information, 29, 147–161. Dewey, J. (1910). How we think. New York: D. C. Heath & Co. doi:10.1037/10903-000 Dewiyanti, S., Brand-Gruwel, S., & Jochem, W. (2005). Applying reflection and moderation in an asynchronous computer-supported collaborative learning environment in campus-based higher education. British Journal of Educational Technology, 36(4), 673–676. doi:10.1111/j.14678535.2005.00544.x Dunlosky, J., & Metcalfe, J. (2009). Metacognition. Los Angeles: Sage Publishing. Dweck, C. (2000). Self-theories: Their role in motivation, personality, and development. New York: Taylor & Francis. Dweck, C. (2006). Mindset: The new psychology of success. New York: Ballantine Books.
Ertmer, P. A., Sadaf, A., & Ertmer, D. J. (2011). Student-content interactions in online courses: The role of question prompts in facilitating higher-level engagement with course content. Journal of Computing in Higher Education, 23(2-3), 157–186. doi:10.1007/s12528-011-9047-6 Flavell, J. H. (1976). Metacognitive aspects of problem solving. In L. Resnick (Ed.), The nature of intelligence (pp. 231–235). Mahwah, NJ: Lawrence Erlbaum. Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. The American Psychologist, 34(10), 906–911. doi:10.1037/0003-066X.34.10.906 Fogarty, R. (1994). The mindful school: How to teach for metacognitive reflection. Glenview, IL: IRI/Skylight Publishing. Fox, E., & Riconscente, M. (2008). Metacognition and self-regulation in James, Piaget, and Vygotsky. Educational Psychology Review, 20(4), 373–389. doi:10.1007/s10648-008-9079-2 Grow, O. (1996). Teaching learners to be selfdirected. Adult Education Quarterly, 41(3), 125–149. doi:10.1177/0001848191041003001 Holmes, V. (2012). Depth of teachers’ knowledge: Frameworks for teachers’ knowledge of mathematics. Journal of STEM Education: Innovations and Research, 13(1), 55–71. Howard, T. C., & Aleman, G. R. (2008). Teacher capacity for diverse learners. In M. CochranSmith, S. Feiman-Nemser, D. J. McIntyre, & K. E. Demers (Eds.), Handbook of research on teacher education (pp. 157–174). New York, NY: Routledge. Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. London: Routledge & Kegan Paul. doi:10.1037/10034-000
Knowles, M. (1975). Self-directed learning. A guide for learners and teachers. Prentice Hall. Kranzow, J. (2013). Faculty leadership in online education: Structuring courses to impact student satisfaction and persistence. MERLOT Journal of Online Learning and Teaching, 9(1), 131–139. Lee, Y., & Choi, J. (2011). A review of online course dropout research: Implications for practice and future research. Educational Research and Technology Development, 59(5), 593–618. doi:10.1007/s11423-010-9177-y Lee, Y., Choi, J., & Kim, T. (2013). Discriminating factors between completers of and dropouts from online learning courses. British Journal of Educational Technology, 44(2), 328–337. doi:10.1111/j.1467-8535.2012.01306.x Leong, A. (2012). A comparative study of online self-regulated learning and Its effect on adult learners in the cross-strait regions. International Journal of Continuing Education and Lifelong Learning, 4(2), 81–100. MacLeod, W. B., Butler, D. L., & Syer, K. D. (1996, April). Beyond achievement data: Assessing changes in metacognition and strategic learning. New York: Annual Meeting of the American Educational Research Association. Retrieved from http://ecps.educ.ubc.ca/faculty/ Butler/Confer/AERA Mao, J., & Peck, K. (2013). Assessment strategies, self-regulated learning skills, and perceptions of assessment in online learning. The Quarterly Review of Distance Education, 14(2), 75–95. McAllister, C., & Watkins, P. (2012). Increasing academic integrity in online classes by fostering the development of self-regulated learning skills. The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 85(3), 96–101. doi: 10.1080/00098655.2011.642420
McLoughlin, C., Lee, M. J., & Chan, A. (2006, October). Fostering reflection and metacognition through student-generated podcasts. In Proceedings of the Australian Computers in Education Conference (ACEC 2006). Merrill, D. (1983). Component Display Theory. In C. Reigeluth (Ed.), Instructional design theories and models (pp. 279–333). Hillsdale, NJ: Erlbaum Associates. Meyers, C., & Jones, T. B. (1993). Promoting active learning: Strategies for the college classroom. San Francisco: Jossey-Bass. Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. New York: Holt, Rinehart, & Winston. doi:10.1037/10039-000 Milligan, C., Littlejohn, A., & Margaryan, A. (2013). Patterns of engagement in connectivist MOOCs. MERLOT Journal of Online Learning and Teaching, 9(2), 149–159. Mills, R., Lawless, K., & Pratt, J. (2006). Training groups of end users: Examining group interactions in a computer-based learning environment. Journal of Computer Information Systems, 104–109. Nilson, L. (2013). Creating self-regulated learners: Strategies to strengthen students’ self-awareness and learning skills. Sterling, VA: Stylus Publishing. Paechter, M., Maier, B., & Macher, D. (2010). Students’ expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222–229. doi:10.1016/j.compedu.2009.08.005 Pantoya, M., Hughes, P., & Hughes, J. (2013). A case study in active learning: Teaching undergraduate research in an engineering classroom setting. English Education, 8(2), 54–64. doi:10.11120/ened.2013.00014
Paris, S., & Paris, A. (2001). Classroom applications of research on self-regulated learning. Educational Psychologist, 36(2), 89–101. doi:10.1207/ S15326985EP3602_4
Pressley, M., & Ghatala, E. S. (1990). Selfregulated learning: Monitoring learning from text. Educational Psychologist, 25(1), 19–33. doi:10.1207/s15326985ep2501_3
Parra, S. (2012). Component display theory design in a foreign language unit. Journal of Applied Learning Technology, 2(3), 23–32.
Puzziferro, M. (2008). Online technologies self-efficacy and self-regulated learning as predictors of final grade and satisfaction in college level online courses. American Journal of Distance Education, 22(2), 72–89. doi:10.1080/08923640802039024
Pauschenwein, J., & Sfriri, A. (2010). Adult learners’ motivation for the use of micro-blogging during online training courses. International Journal of Emerging Technologies in Learning, 5(1), 22–25. Pintrich, P. (1995). Understanding self-regulated learning. In P. Pintrich (Ed.), Understanding selfregulated learning (pp. 3–12). San Francisco, CA: Jossey-Bass. Pintrich, P. (2000). The role of goal orientation in self-regulated learning. In M. Boekaerts, P. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 451–502). San Diego, CA: Academic Press. doi:10.1016/B978-0121098902/50043-3
Rowe, F., & Rafferty, J. (2013). Instructional design interventions for supporting self-regulated learning: Enhancing academic outcomes in postsecondary E-learning environments. MERLOT Journal of Online Learning and Teaching, 9(4), 590–601. Schon, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books. Schunk, D. H. (2012). Learning theories: An educational perspective. Boston, MA: Pearson.
Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education: Theory, research, and applications (2nd ed.). Upper Saddle River, NJ: Merrill/ Prentice Hall.
Siegel, M., & Lee, J. (2001). “But electricity isn’t static”: Science discussion, identification of learning issues, and use of resources in a problembased learning education course. Paper presented at the annual meeting of the National Association for Research in Science Teaching, St. Louis, MO.
Poitras, E., Lajoie, S., & Hong, Y. (2012). The design of technology-rich learning environments as metacognitive tools in history education. Instructional Science: An International Journal of the Learning Sciences., 40(6), 1033–1061. doi:10.1007/s11251-011-9194-1
Silver, N. (2013). Reflective pedagogies and the metacognitive turn in college teaching. In M. Kaplan, N. Silver, D. LaVaque-Manty, & D. Meizlish (Eds.), Using reflection and metacognition to improve student learning (pp. 1–17). Sterling, VA: Stylus Publishing.
Pressley, M., Forrest-Pressley, D., Elliott-Faust, D. L., & Miller, G. E. (1985). Children’s use of cognitive strategies, how to teach strategies, and what to do if they can’t be taught. In M. Pressley & C. J. Brainerd (Eds.), Cognitive learning and memory in children (pp. 1–47). New York: Springer-Verlag. doi:10.1007/978-1-4613-9544-7_1
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125. doi:10.3102/00346543045001089
van Merriënboer, J. J. G., & Paas, F. (2003). Powerful learning and the many faces of instructional design: Towards a framework for the design of powerful learning environments. In E. de Corte, L. Verschaffel, N. Entwistle, & J. J. G. van Merriënboer (Eds.), Powerful learning environments: Unravelling basic components and dimensions (pp. 3–21). Oxford, UK: Elsevier Science. Veenman, M. V. J., Van Hout-Wolters, B. H. A. M., & Afflerbach, P. (2006). Metacognition and learning: Conceptual and methodological considerations. Metacognition and Learning, 1(1), 3–14. doi:10.1007/s11409-006-6893-0 Veenman, M. V. J., Wilheim, P., & Beishuizen, J. J. (2004). The relation between intellectual and metacognitive skills from a developmental perspective. Learning and Instruction, 14(1), 89–109. doi:10.1016/j.learninstruc.2003.10.004 Vonderwell, S., Liang, X., & Alderman, K. (2007). Asynchronous discussions and assessment in online learning. Journal of Research on Technology in Education, 39(3), 309–328. doi:10.1080/15391523.2007.10782485 Wan, Z., Compeau, D., & Haggerty, N. (2012). The effects of self-regulated learning processes on e-learning outcomes in organizational settings. Journal of Management Information Systems, 29(1), 307–339. doi:10.2753/MIS0742-1222290109 Wang, C., Shannon, D., & Ross, M. (2013). Students’ characteristics, self-regulated learning, technology self-efficacy, and course outcomes in online learning. Distance Education, 34(3), 302–323. doi:10.1080/01587919.2013.835779 White, B., & Frederiksen, J. (2005). A theoretical framework and approach for fostering metacognitive development. Educational Psychologist, 40(4), 211–223. doi:10.1207/s15326985ep4004_3
Wilson, J., & Wing Jan, L. (2008). Smart thinking: Developing reflection and metacognition. Newtown, Australia: Primary English Teaching Association.
Workman, J., & Stenard, A. (1996). Student support services for distance learners. DEOSNEWS, 6(3). Retrieved June 18, 2014, from the Distance Education Online Symposium Website: http://learningdesign.psu.edu/deos/deosnews6_3.pdf
Yang, S. H. (2009). Using blogs to enhance critical reflection and community of practice. Journal of Educational Technology & Society, 12(2), 11–21.
ADDITIONAL READING

Ambrose, S., Bridges, M., DiPietro, M., Lovett, M., & Norman, M. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco: Jossey-Bass.
Bean, J. (2011). Engaging ideas: The professor’s guide to integrating writing, critical thinking, and active learning in the classroom. San Francisco: Jossey-Bass.
Boyer, E. L. (1997). Scholarship reconsidered: Priorities of the professoriate. San Francisco, CA: Jossey-Bass.
Brookfield, S. (2011). Teaching for critical thinking: Tools and techniques to help students question their assumptions. San Francisco: Jossey-Bass.
Davis, B. (2009). Tools for teaching. San Francisco: Jossey-Bass.
Doyle, T. (2011). Learner-centered teaching: Putting the research on learning into practice. Sterling, VA: Stylus Publishing.
Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey-Bass.
Kaplan, M., Silver, N., LaVaque-Manty, D., & Meizlish, D. (2013). Using reflection and metacognition to improve student learning: Across the disciplines, across the academy. Sterling, VA: Stylus Publishing.
Rickards, W., Diez, M., Ehley, L., Guilbault, L., Loacker, G., Hart, J., & Smith, P. (2008). Learning, reflection, and electronic portfolios: Stepping toward an assessment practice. The Journal of General Education, 57(1), 31–50.
Rothstein, D. (2011). Make just one change: Teach students to ask their own questions. Cambridge, MA: Harvard Education Press.
Stein, J., & Graham, C. (2013). Essentials for blended learning: A standards-based guide. New York: Routledge.
Watson, C. E., & Doolittle, P. E. (2011). ePortfolio pedagogy, technology, and scholarship: Now and in the future. Educational Technology, 51(5), 29–33.
Weimer, M. (2013). Learner-centered teaching: Five key changes to practice. San Francisco: Jossey-Bass.
Wiggins, G. P., & McTighe, J. (1998). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
Zimmerman, B. (1996). Developing self-regulated learners: Beyond achievement to self-efficacy. Washington, DC: American Psychological Association.
KEY TERMS AND DEFINITIONS

Assessment: The measurement of learning using instruments appropriate for the content.
Computer-Based Learning Environment: A virtual classroom accessed and participated in via technologies such as learning management systems (Blackboard), multimedia resources, web-based lessons (SoftChalk), etc.
Engagement: Being actively involved with and attentive to a learning environment.
Locus of Control: A person’s belief regarding the level of control he/she has over academic success.
Metacognition: Thinking about one’s own cognitive processes; awareness and control of one’s own cognitive processes.
Motivation: A person’s willingness to do something.
Reflection: Examination of one’s ideas or experiences and their impact on practices and applications.
Self-Efficacy: A person’s belief regarding his/her ability to succeed.
Self-Regulated Learning: Learning that is guided by metacognition (thinking about one’s thinking), strategic action (planning, monitoring, and evaluating personal progress against a standard), reflection, and motivation to learn.
APPENDIX

Focus Group Interview Questions

1. When you think of the online courses you took in the MEd Special Emphasis program, what metacognitive or reflective activities do you recall?
2. How did the computer-based learning (CBL) environment affect your ability to participate in these activities?
3. How have the metacognitive or reflective course activities impacted your current role as a classroom teacher?
4. As you reflect back on your experiences as a student, how might the CBL environment have been used more effectively to foster metacognition and reflection?
5. What connections do you see between metacognitive and reflective course activities and higher-order thinking skills?
6. What connections do you see between metacognitive and reflective course activities and self-regulated learning?
Chapter 6
A Framework for Defining and Evaluating Technology Integration in the Instruction of Real-World Skills

J. Christine Harmes, Assessment Consultant, USA
James L. Welsh, University of South Florida, USA
Roy J. Winkelman, University of South Florida, USA

DOI: 10.4018/978-1-4666-9441-5.ch006
ABSTRACT

The Technology Integration Matrix (TIM) was created to provide a resource for evaluating technology integration in K-12 instructional settings, and as a tool for helping to target teacher professional development. The TIM comprises five characteristics of meaningful learning (Active, Constructive, Authentic, Collaborative, and Goal-Directed) and five levels (Entry, Adoption, Adaptation, Infusion, and Transformation), resulting in 25 cells. Within each cell, descriptions are provided, along with sample video lessons from actual math, science, social studies, and language arts classrooms that illustrate a characteristic at the indicated level. Throughout development, focus groups and interviews were conducted with in-service teachers and technology specialists to validate the progression of characteristics and descriptive components.
INTRODUCTION

As schools continue to invest in technology tools and resources for instruction, it is increasingly important that teachers and school leaders are equipped to leverage this technology to support
students in learning real-world skills. A variety of organizations, agencies, practitioners and scholars have agreed on the importance of preparing students for the 21st century, and have articulated definitions and frameworks for the requisite skills and their instruction (see, for example,
International Society for Technology in Education [ISTE], 2007; Partnership for 21st Century Skills, 2011; Saavedra & Opfer, 2012; and United States Department of Education [USDOE], 2014). Common among these are critical thinking and problem solving, communication, collaboration, and creativity and innovation (National Education Association [NEA], n.d.). These skills can be most effectively taught and learned through the use of more constructivist pedagogies in environments that effectively integrate technology (Brantley-Dias & Ertmer, 2013; Saavedra & Opfer, 2012). Teachers and principals have likely received training on specific software or devices; however, there is often a need for additional training in and modeling of the most effective uses of technology for higher-order thinking skills in everyday instruction (NEA, 2008). While technology tools can provide powerful support for instruction, technology is not in and of itself an academic intervention. The model presented here provides a conceptual framework, grounded in sound pedagogy, by which specific uses of technology can be evaluated. This chapter describes the Technology Integration Matrix (TIM; http://mytechmatrix.org) and illustrates how schools can use this framework to plan and evaluate technology-rich instruction and target teacher professional development. The TIM was created to help K-12 schools support their students in learning skills necessary for their success in the 21st century by providing a common, pedagogically-centered language to describe effective technology integration. The TIM was first developed at the Florida Center for Instructional Technology (FCIT) from 2003 to 2005, and updated in 2011 (Welsh, Harmes, & Winkelman, 2011). Fostering learning environments with increasingly authentic instruction is necessary to prepare students for authentic assessments of real-world skills. The TIM provides a framework for situating technology in instructional settings while maintaining a central focus on students. It is organized according to five characteristics of
meaningful learning and five levels that describe a progression of pedagogical approaches, thus creating a five-by-five matrix. In addition to the matrix itself as a model, an interactive website, referenced above, includes supportive materials such as detailed descriptors for students, teachers, and learning environments across levels and characteristics, and 100 videos of actual classroom lessons that have been aligned to the TIM. The matrix and accompanying resources have been successfully applied to professional development and planning (e.g., Fodchuk, Schwartz, & Hill, 2014), program evaluation (e.g., Pringle, Dawson, & Ritzhaupt, 2015), and academic research (e.g., Digiovanni, 2015; Kieran & Anderson, 2014; Marcovitz & Janiszewski, 2015; Olson et al., 2015; and Sherman & Cornick, 2015) in a variety of educational contexts in the United States and several other countries. Included within this chapter are a background on technology integration and related models, a complete description of the TIM framework and its components, and an overview of tools and processes for implementing the TIM for professional development, planning, and evaluation.
BACKGROUND ON TECHNOLOGY INTEGRATION

Historically, teachers have used a variety of tools to do their jobs, all of which constitute “technology.” As antiquated as they now may seem, chalkboards were at one time a great innovation in classroom technology. The same is true for calculators, film projectors, televisions, tape recorders, dry erase markers, and even ballpoint pens (Purdue, 2015). The innovative technological tools of one generation become the conventional tools, and eventually the obsoletisms, of succeeding generations. Although technology has been a facet of every historical instantiation of the classroom (e.g., a hornbook is mid-16th-century educational technology), in
contemporary scholarship questions that purport to address “technology integration” specifically concern the role of computers (now including a broad range of different digital devices) within teaching contexts. Technology integration was defined by the Technology in Schools Task Force (2002) simply as the “incorporation of technology resources and technology-based practices into the daily routines, work, and management of schools” (p. 75), although the report goes on to state that successful integration involves other factors. Some researchers have described a shift from learning about technology, to learning from technology, to learning with technology (Ertmer & Ottenbreit-Leftwich, 2013; Saavedra & Opfer, 2012). Such a shift requires a change from an outdated transmission model of technology integration to a model that focuses “on the pedagogy that technology enables and supports, rather than on the technology itself” (Ertmer & Ottenbreit-Leftwich, 2013, p. 175). This is reflected in definitions of technology integration such as that of Davies and West (2014): “the effective implementation of educational technology to accomplish intended learning outcomes” (p. 843). Beyond a singular definition, a more complex and nuanced understanding of technology integration can be achieved by considering models of technology integration.
Models for Describing Technology Integration Various models and frameworks have been proposed over the years, in an attempt to describe and organize aspects and levels of technology integration in schools. Several of these are based on theoretical models such as the five stages of adoption within the Diffusion of Innovations Theory (Rogers, 1962, 2003) and the related Levels of Use section of the Concerns Based Adoption Model (CBAM; Hall et al., 1975; Hall, 2010), and describe the progress through which innovations become accepted and change occurs within
an existing system. The Levels of Technology Implementation (LoTi) framework is an example of a widely used model based on the CBAM levels of use that was developed as a tool for assisting school systems in measuring the degree to which teachers are implementing technology, and in modifying curricula and instructional practices (Moersch, 1995). In its updated form, the LoTi (now referring to Levels of Teaching Innovation) maintains the modified CBAM levels as its foundation, and now incorporates pedagogical changes into the descriptions of teacher progression. The levels of the LoTi for describing teacher technology use are: (a) Non-use, (b) Awareness, (c) Exploration, (d) Infusion, (e) Integration—Mechanical, (f) Integration—Routine, (g) Expansion, and (h) Refinement (Moersch, 2010). Similarly focused on the process of change and levels of use, the Apple Classrooms of Tomorrow research project (ACOT; Sandholtz, Ringstaff, & Dwyer, 1997) resulted in an empirically based description of how teachers’ technology integration developed over time. These “Stages of Instructional Evolution” were termed: Entry, Adoption, Adaptation, Appropriation, and Invention. In this framework, Entry was the very first stage as teachers unpacked and began to experiment with their new computers. At the Adoption stage, teachers’ concerns shifted from how to connect the computers to how to incorporate them into instruction, and computers were used by students for drill and practice activities. Moving to the Adaptation stage, teachers were integrating tools such as word processors and graphics programs, and productivity was a major focus. At the Appropriation stage, teachers were achieving mastery of the technology, and their personal attitudes toward the technology shifted. The final stage, Invention, was characterized by teachers using the technology to experiment with teaching styles (e.g., team teaching) and instructional strategies (e.g., project-based learning). As an empirically grounded model, the ACOT Stages of Instructional Evolution provided an early
framework for describing levels of technology implementation in the classroom, and has been a foundation for much of the subsequent research in technology integration. This model served as a starting point for the development of the levels of technology integration used in the TIM. While the models described above are centered on an individual (such as a teacher or a group of teachers), other models for technology integration have focused on a different observational unit, or have followed an approach that does not include levels. The SAMR (Puentedura, 2006) is an example of a model that is based on an aspect other than the teacher, as it focuses on the learning task. As a general framework for describing technology use in classroom settings, SAMR has been used in research studies to classify and evaluate learning activities (e.g., Romrell, Kidder, & Wood, 2014). The first level, Substitution, refers to use in which the technology replaces a conventional tool, and no change is made to instruction. In the next level, Augmentation, the technology again directly replaces a conventional tool, although some level of improvement is present. At the Modification level, the use of technology affords a significant change in the learning task. The highest level, Redefinition, encompasses activities that would not have been possible without technology. In the SAMR model, the primary distinction within each level is the use of technology-based versus nontechnology-based tools. It is noted that the use of technology affords some type of change in tasks when advancing through the levels. A popular example of a model for technology integration that is not based on levels is the Technological Pedagogical Content Knowledge model (TPACK; Koehler & Mishra, 2005, 2009). This framework is built on three types of knowledge related to teaching: content, pedagogy, and technology, and these are graphically presented as three intersecting circles. In general, content knowledge encompasses what the teacher must know about the subject matter he or she is teaching. Pedagogical knowledge refers
to the methods for teaching this content, including planning and assessment. Technology knowledge, in this model, includes an understanding of how to use various technology resources and tools, as well as when to apply them. The focus of the TPACK model is on the areas of intersection between the three circles: pedagogical content knowledge, technological content knowledge, technological pedagogical knowledge, and finally, technological pedagogical content knowledge (at the center). The TPACK has been frequently used in research studies related to technology integration (see Vogt et al., 2013 for a review).
TIM FRAMEWORK

While the TIM also reflects a progression of change, the focus in the TIM is on the pedagogy with which the technology is being incorporated, and the unit of consideration is a lesson (as opposed to a teacher, as in the LoTi, ACOT, or TPACK models, or a task, as in the SAMR). As a model for technology integration, the TIM is built on two aspects: pedagogy and technology. The pedagogical aspect is based on five interdependent characteristics of meaningful learning environments: Active, Collaborative, Constructive, Authentic, and Goal-Directed. Each of these characteristics is then described along five levels of technology integration: Entry, Adoption, Adaptation, Infusion, and Transformation. The matrix is laid out with the characteristics as rows and the levels of integration as columns, resulting in 25 cells (see Figure 1). Each cell represents a level of technology integration for one of the interdependent characteristics of the learning environment (e.g., the Adoption level of Active), and is described in detail, including the focus of the teacher and the students along with components of the learning environment.
Figure 1. Technology Integration Matrix (© 2003-2015, Florida Center for Instructional Technology, USF. Used with permission).
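To make the shape of the framework concrete, the sketch below shows one way the 25-cell matrix could be held in code: each cell is addressed by a (characteristic, level) pair and read from the three perspectives the chapter describes. It is a minimal, hypothetical sketch; the class and function names are invented for this example, and the single cell entry paraphrases this chapter's description of an Adoption-level Active lesson rather than quoting the published TIM descriptors.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

# Hypothetical names for illustration only; the published TIM materials are
# narrative descriptors and videos at http://mytechmatrix.org, not code.

CHARACTERISTICS = ("Active", "Collaborative", "Constructive", "Authentic", "Goal-Directed")
LEVELS = ("Entry", "Adoption", "Adaptation", "Infusion", "Transformation")


@dataclass
class CellDescriptor:
    """One of the 25 TIM cells, described from the three perspectives used in the chapter."""
    student: str
    teacher: str
    environment: str


# Only one cell is filled in here, loosely paraphrased from the chapter's
# description of the Adoption level of the Active characteristic.
TIM_CELLS: Dict[Tuple[str, str], CellDescriptor] = {
    ("Active", "Adoption"): CellDescriptor(
        student="Uses standard tool-based applications in conventional ways.",
        teacher="Chooses the technology and controls how it is used.",
        environment="Arranged for direct instruction, with limited, regulated access to tools.",
    ),
}


def describe(characteristic: str, level: str) -> Optional[CellDescriptor]:
    """Look up the descriptor for a single cell, if one has been entered."""
    if characteristic not in CHARACTERISTICS or level not in LEVELS:
        raise ValueError("Unknown characteristic or level")
    return TIM_CELLS.get((characteristic, level))


if __name__ == "__main__":
    cell = describe("Active", "Adoption")
    if cell is not None:
        print(cell.teacher)
```

A school using the TIM would, of course, work from the published descriptors and videos rather than a structure like this; the sketch is only meant to show that the matrix is a fixed five-by-five grid whose cells are always read from the student, teacher, and learning environment perspectives.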
Development of the TIM

The TIM was created with the purpose of forming a vocabulary related to technology integration that resonated with educators at various levels and across content areas. Development of the TIM was funded through the Elementary and Secondary Education Act, Title II, Part D of the No Child Left Behind Act of 2001 and the American Recovery and Reinvestment Act, and access to the TIM is free of charge. Conceptualization of the TIM began in 2003 with a literature review, followed by iterative classroom observations, focus groups,
and structured interviews with teachers across the state of Florida. Once the first version of the matrix was completed, it was then field tested in 2005 (Allsopp, Hohlfeld, & Kemker, 2007). This version of the TIM was widely adopted, and led to requests for additional tools and expanded explanations. To this end, a large-scale revision of the TIM began in 2009, and Version 2 was released in 2011 (Welsh, Harmes, & Winkelman, 2011). Essential to the revision process was repeating the steps of literature review, structured interviews, and focus groups. These efforts resulted in changes to the focus and clarification of some of the descriptions
in the cells of the matrix. One of the more substantial changes was the articulation for each cell in the matrix of what the teacher is doing, what the students are doing, and what the learning environment is like. In addition to solidifying the foundations of the TIM, these descriptions now serve as resources for educators to gain a deeper understanding of the characteristics and their progression.
Levels of Technology Integration

When developing the levels for technology integration in the TIM, the ACOT model, described above, provided a starting point for what such a general progression might look like. The levels from the ACOT Stages of Instructional Evolution (Entry, Adoption, Adaptation, Appropriation, and Invention; Sandholtz, Ringstaff, & Dwyer, 1997) did not provide a range broad enough to capture the possibilities for enhancing instruction (Allsopp, Hohlfeld, & Kemker, 2007). Therefore, the continuum was significantly modified and expanded to include these five levels: Entry, Adoption, Adaptation, Infusion, and Transformation. Although the names of the first three levels are the same as those from the ACOT, the explanations for all levels are quite different. The TIM levels describe a lesson, while the ACOT levels focus on the development of a teacher or group of teachers. Also, some examples of technology use included in the higher levels in the ACOT could be placed lower in the TIM, depending on the full scope of the lesson. Each level of technology integration is described below, according to its role in the TIM.

Entry

At this first level, the teacher is using technology to deliver instruction, and is the one making decisions about what technology will be used, and when. It is likely that students do not have direct contact with or use of technology tools in lessons at this level. If students are using technology, it may be related to facts or rote practice of basic skills.

Adoption

At the next level, Adoption, the teacher is still the decision-maker about the specifics of technology use. Students are using technology tools for discrete tasks, requiring only a conventional or procedural understanding.

Adaptation

This is the stage at which technology tools become more integrated within a larger lesson. Students are working from an understanding of the capabilities and uses of the technology tools, and can use them independently. The teacher still maintains decision-making on when to use technology tools, and students may begin exploring on their own how to best use them.

Infusion

Several shifts happen at this level. First, a broad range, and sufficient number, of technology tools are available to students throughout their day. Second, the focus of instruction using technology is clearly about learning, and not about the technology tools themselves. A lesson at this level will involve student decision-making, supported by teacher guidance. Infusion level work typically occurs after teachers and students have experience with a particular technology tool.

Transformation

The highest level of technology integration in this framework is Transformation. This level is marked by student self-direction in the use of technology tools in lessons that focus on higher-order learning outcomes, and may have been difficult or impossible without technology. In fact, students are often familiar enough with the technology tools that their use may extend beyond what is conventional. The teacher’s role regarding technology use at this level is that of a guide or model in the use of technology.
Underlying Attributes

The TIM levels describe a continuum of pedagogical approaches, with higher levels characterized by meaningful student choices regarding the use of technology tools to achieve learning goals. From Entry to Transformation, the TIM levels reflect differences in four underlying attributes, specifically ownership of learning, characterization of knowledge, use of technology tools, and instructional focus (see Figure 2). Most importantly, lessons at higher TIM levels typically demonstrate greater student ownership of learning. An Entry level lesson may be teacher-focused and teacher-driven, while a
Transformation level lesson is much more likely to be student-centered, with students making meaningful, informed, and strategic choices about the ways in which technology is used. Evidence suggests that student-centered teaching approaches are associated with improved academic outcomes (Cornelius-White, 2007; Polly, Margerison, & Piel, 2014; Wenglinsky, 1999, 2002). Accordingly, ISTE (n.d.) includes student-centered learning as one of 14 essential conditions for leveraging technology use in education. Another difference across levels of integration involves a shift in the focus of content in typical lessons from procedural knowledge (understanding the step-by-step sequence of actions necessary to reach a goal) to conceptual understandings (knowledge of the principles that govern a domain). While procedural knowledge facilitates practical application, conceptual understanding is necessary to generalize learning to novel situations (Baroody, 2003; Berthold & Renkl, 2009).

Figure 2. Progression across levels of integration (© 2013-2015, Florida Center for Instructional Technology, USF. Used with permission).

The progression also represents general differences in the ways in which technology tools are used. An Entry level lesson is much more likely to involve simple and conventional use of technology tools, while a Transformation level lesson is more likely to involve unconventional, innovative, or complex uses of technology tools. Finally, the spectrum describes a shift in the instructional focus away from technology tools and toward content. For example, in an Entry level lesson that involves tablet computers, the teacher might plan a portion of instructional time to teach students about the tablets (establishing routines for storage and charging, setting expectations for care and use, explaining access to assignments, etc.). By contrast, a Transformation level lesson using tablet computers would involve little if any instructional focus on the tablets. This shift can also be described as “transparent” use of technology, or “seamless” integration. Some tool-focused lessons may be seen as necessary capacity-building prerequisites to content-focused lessons. However, there is no support for the idea that tool-focused technology integration by itself will lead to greater student achievement (Ertmer & Ottenbreit-Leftwich, 2013; Jonassen, 1995).
Characteristics of Meaningful Learning Environments The underlying attributes described above reflect a general progression of pedagogical change across levels of technology integration. To provide a specific and detailed evaluation of the level of technology integration in a lesson, the TIM includes five characteristics for evaluation: Active, Collaborative, Constructive, Authentic, and Goal-Directed. These five characteristics of meaningful learning make up the rows of the matrix, and are interdependent components that enable students to engage in higher-order thinking and focus on real-world skills. The foundations for these characteristics come from the work by Jonassen, Howland, Moore, and Marra (2003) and were modified to create the five characteristics
in the TIM. Based on a constructivist learning perspective, the aspects proposed by Jonassen et al. are: Active, Constructive, Intentional, Authentic, and Cooperative. They describe Active as students working on meaningful tasks, including making adjustments and observing the results. Constructive includes students connecting new experiences and observations with prior knowledge and understanding. Intentional involves learners articulating goals and planning strategies for achieving them. Authentic is described as learning tasks situated in meaningful, real-life contexts. Cooperative focuses on students interacting in knowledge-building communities, conversing with each other to create common understandings related to their tasks. Together these five result in more meaningful learning than the individual characteristics would in isolation. In addition to the work of Jonassen et al., the characteristics used in the TIM are supported by scholarship related to constructivism and its component ideas. In general, constructivist learning approaches have been shown to enhance learning (see Bransford, Brown, & Cocking, 2000 for an overview). In particular, authentic and active learning strategies have been positively linked to student performance (King, Newmann, & Carmichael, 2009; Newmann, Marks, & Gamoran, 1996). Incorporating collaborative learning within instruction has resulted in improved student outcomes over individual or competitive learning contexts (see Goodyear, Jones, & Thompson, 2014 for a review). Teaching strategies that specifically incorporate metacognition (represented as the Goal-Directed characteristic in the TIM) have shown to positively influence students’ knowledge transfer (Bransford, Brown, & Cocking, 2000). As represented in the TIM framework, the Active characteristic focuses on the level of student engagement, and distinguishes between lessons in which students passively receive information and lessons in which students discover, process, and apply their learning. The Collaborative dimension
describes the degree to which technology is used to facilitate or support students in working together with peers and outside experts through the use of a range of technology tools. The Constructive characteristic describes student-centered instruction that allows for connecting new information to students’ prior knowledge, while allowing flexibility and choice of technology tools. The Authentic dimension focuses on relevance to the real world, using technology to facilitate learning that extends beyond the classroom. Finally, the Goal-Directed characteristic centers on technology use that supports meaningful reflection and metacognition through activities such as setting goals, planning activities, monitoring progress, and evaluating results. While the matrix shown in Figure 1 above includes a basic descriptor for each level of all of the five characteristics, there are also supportive materials that describe what might be observed in a representative lesson from each cell of the matrix, with regard to the student, teacher, and learning environment. The descriptions below present each of the five characteristics of meaningful learning by perspective (i.e., student, teacher, environment). The levels of integration (Entry, Adoption, Adaptation, Infusion, and Transformation) are described within each of the three perspectives.
Active: Student The key question for this characteristic is, “What are the students actually doing?” At the Entry end of the Active spectrum, students may be primarily receiving information from the teacher or from other sources. In a classroom at this level, the students may be watching an instructional video on a website, copying notes from a computer-based presentation, or using a computer program for “drill and practice” activities with math facts or phonics. The students may not be directly using technology at this level.
Moving toward Adoption, students are using standard tool-based applications in standard ways. Students may be working on typing a book report in a word processing application. The uses of technology are conventional and the teacher is the locus of control. The Adaptation level represents a movement from teacher control to student control in the choice of technology and how it is used. Students may have the opportunity to select the technology that they are using to accomplish a specific task. Students may be adapting the use of an application to fit their needs, which may include some unconventional uses of software. Although the students have some choices on how technology is used, their range of choices is still rather limited. At the Infusion level, students are beginning to select from many applications and types of technology throughout the day and across subject areas. Rather than having the use of technology specific to each project and controlled primarily by the teacher, the students know how to use and have access to many types of technology and use them regularly. For example, a science class may be doing collaborative group work on biomes, with each group researching and presenting a different biome. One group may be designing a 3D walk through of a biome and another group might be creating a video. In an Active Transformation lesson, the students are actively engaged in using technology as a tool rather than passively receiving information from the technology. In a Transformation level lesson, there may be elements common to the Infusion level, but the activity in which the students are engaged would be impossible without the technology tools they are using. At this level, students have options on how and why to use different tools and software. The technology itself is “no big deal”; instead the students are focused on what they are able to do with the technology. The technology itself becomes an invisible part of the learning.
Active: Teacher When considering the teacher role in the Active dimension, the primary concern is how active the students are allowed to be. In what ways does the teacher facilitate active learning for the students? At the Entry level, the teacher may be the only one actively using technology. This may include using PowerPoint to support delivery of a lecture. The teacher may also have the students complete “drill and practice” activities on computers to practice basic skills, such as typing. At the Adoption level, the teacher is probably still controlling the type of technology and how it is used. The teacher may be pacing the students through a project, making sure that they each complete each step in the same sequence with the same tool. Although the students are more active in their use of technology, the teacher still highly regulates activities. At the Adaptation level, a shift is evident from teacher control to student ownership over processes and products. Teachers working at the adaptation level allow students to choose the technology and the way it is used for some projects. The teacher acts as a facilitator toward learning. At the Infusion level, the teacher guides and advises students in their choice of technology tools. The teacher is flexible and open to student ideas. The teacher uses his or her expertise to guide, inform, and contextualize student choices of technology and applications. Lessons are structured so that students can apply technology tools across content areas and throughout the day. At the Transformation level, the teacher serves as a guide, mentor, and model in the use of technology. The teacher encourages and supports the active engagement of students with technology resources. The teacher helps students locate appropriate online resources to support their projects.
Active: Environment The degree to which the learning environment supports students’ active use of technology in
their learning may be indicated by the availability of various technology tools and the infrastructure that supports technology use, such as power and connectivity. This indicator describes the potential for integrating technology in the room for active use, not necessarily how it is being used for this lesson. For instance, a laptop on every desk means that the environment is conducive to active technology use, even if the laptops aren’t being used during a particular lesson. At the Entry level, the classroom is likely arranged for direct instruction and individual seatwork. The students may have no direct access to technology. At the Adoption level, the classroom is likely arranged the same way, but students may now have very limited and regulated access to the technology resources. At the Adaptation level, multiple technology tools are present, and there may be some flexibility in student access to these tools, although the access is still somewhat regulated by the teacher. Indications that the technology is being regulated may include sign-out lists or a rotation schedule. At the Infusion level, multiple technology tools are present and students have access to them throughout the day and across subject areas. For example, if students have established and/or posted procedures for using different types of technology or if students are accessing technology without overt direction from the teacher, that is evidence that they use technology on a regular basis. At the Transformation level, the classroom arrangement is flexible and varied, allowing different kinds of learning activities supported by various technologies, including robust access to online resources. The technology is integral to the daily operations of the classroom.
Collaborative: Student The key question regarding the Collaborative dimension is “To what degree are students working together when using technology or using technology as a way to facilitate working with others?”
Collaboration may involve students working with other students in using technology to complete tasks. It may also involve students using technology to collaborate with peers and experts outside of their classroom. At the Entry level, students primarily work alone when using technology. At the Adoption level, students have opportunities to utilize collaborative tools in conventional ways. These opportunities for collaboration with others through technology or in using technology are limited, and are not a regular part of the day. For example, students from one class may use email to collaborate with another class within the school to complete a project. At the Adaptation level, students can select technology tools for use in collaborative work and there are multiple opportunities for students to use technology in collaborative ways. For example, on one project the students may have the opportunity to choose between using a blog or a wiki to collaborate on a group presentation. At the Infusion level, technology use for collaboration by students is regular and normal. Students can select the tools they wish to use, are comfortable and familiar with collaboration tools, and will be relatively self-directed in their use. At the Transformation level, all students regularly use email, voice chat, video chat, blogs, wikis, or other technologies to work with peers and experts irrespective of time zone or physical distances.
Collaborative: Teacher What role is the teacher playing in determining how technology is used for collaboration? The teacher’s role may range from instructing students to work alone when using technology, to encouraging students to use technology as a means to work together, to seeking out ways that technology can be regularly used to facilitate collaborative experiences outside the classroom. At the Entry level, the teacher directs students to work alone on tasks involving technology. At the Adoption level, the teacher directs students
in the conventional use of tool-based software. These opportunities for collaboration with others through technology or in using technology are limited, and are not a regular part of the day. For example, the teacher may provide students with email contacts and a step-by-step guide to complete the project. At the Adaptation level, the teacher may allow students to select a tool and use it in different ways to accomplish the task at hand. For example, on a given assignment the teacher may provide multiple collaboration tools within an online course platform to complete a literature response project. At the Infusion level, control of the collaboration tools shifts from the teacher to the students. The teacher encourages students to use technology tools collaboratively across the day and guides students in making appropriate independent choices. At the Transformation level, the teacher seeks out and facilitates opportunities for collaboration that would not be possible without technology. The teacher may seek partnerships outside of the classroom to allow students to access experts and peers in other locations. At this level, the teacher may provide the framework or guiding questions and allow the students to seek collaborative partners and answers.
Collaborative: Environment For technology to be used collaboratively or for collaborative purposes, the learning environment is key. Students must have access to the appropriate tools to allow collaboration, and the environment must be set up to allow for this collaboration to take place on a regular basis. Elements of this environment may include computers, tablets, devices such as digital cameras or video cameras, and Internet access. The space must be arranged to facilitate students working together when using the technology. At the Entry level, the classroom is arranged for direct instruction and individual seat-work. Progressing to the Adoption level, the classroom environment allows for the possibility of group work. At Adaptation, desks and workstations
are arranged so that multiple students can access technology tools simultaneously. At the Infusion level, technology tools that allow for collaboration are permanently located in the classroom. And finally, at Transformation, computers connect to text, voice, and video chat applications and the network access has sufficient bandwidth to support the use of these technologies.
Constructive: Student

When considering students’ constructive technology use, the focus becomes how technology is being used in the classroom to help students build their knowledge and experiences. Building knowledge here is contrasted with receiving knowledge or memorizing facts. Distinctions among levels focus on the ways in which students are using technology to develop personal knowledge. At the Entry level, students receive information from the teacher via technology. For instance, the students may view a slideshow created by the teacher. At the Adoption level, students begin to utilize constructive tools such as graphic organizers to construct meaning. For example, at the teacher’s direction, students may use concept-mapping software to build a character map while reading a novel. The construction of the character map helps students understand relationships between ideas in the story. In this case, the teacher chooses the software and directs the way it is used. At the Adaptation level, students may have opportunities to select technology tools and use them to aid in constructing their understanding. For example, using earthquake coordinate data, students could choose to use a spreadsheet program to create a scatter plot graph that represents a rough map of the Earth’s fault lines. In this example, the teacher might allow the students to choose from among many different technology tools to build a model of the phenomenon they are studying. At the Infusion level, students choose technology tools to assist them in the construction of understanding. These choices occur throughout
the school day and across disciplines. Unlike the adaptation level, these choices are not limited to a certain lesson or single part of the curriculum. For example, students who are trying to understand how laws of force and motion are applied in a practical example might take the initiative to seek out an online simulation of a rocket launch. At the Transformation level, students use technology to construct, share, and publish knowledge in ways that would be impossible without technology. Extending the force and motion example, students might collaborate with their teacher to design, build, and launch model rockets in an open field, testing the effects of different propellants and designs. The students could record their reflections and observations to create a podcast to share with other students worldwide. In this example, in addition to the constructive elements, there are elements of active learning, collaboration, authenticity, and goal-directedness. When a lesson is at the Transformation level for one characteristic, it may also be at the Transformation level for others.
Constructive: Teacher

When examining the teacher role with regard to the Constructive dimension, the focus is on the extent to which the teacher encourages students to build knowledge through personal experience and interaction using technology. At the Entry level, the teacher uses technology to deliver information to students. At Adoption, teachers provide some opportunities for students to use technology in ordinary ways to build knowledge and experience. For instance, a language arts teacher might provide examples of character maps, guidance in the details that should be included by the students, and instructions about how those details should be represented within a concept-mapping application. In this example, the students are constructing knowledge about the relationships, but the teacher is making the choices regarding technology use. At Adaptation, the teacher allows students to select a technology
tool to use to build an understanding of a concept. The teacher gives the students access to a wide variety of software and guides them to appropriate sources of data. The teacher lets the students choose within the bounds of a given lesson or subject, and is supportive of them experimenting with the software. At the Infusion level, the teacher allows more decisions to be made by the students as they construct meaning. Increasingly, the teacher serves as a guide. The teacher consistently allows students to select technology tools from among many available choices. At the Transformation level, the teacher facilitates learning opportunities in which students regularly engage in activities that would have been impossible to achieve without technology. For example, students in a science class may use real-time video, audio, or text chat to monitor experiments performed by scientists in remote locations, testing their own hypotheses and building their understanding of the concepts demonstrated through direct observations.
Constructive: Environment

When considering the elements of the learning environment related to the Constructive dimension, emphasis is on how the learning environment affords student use of technology to build experiences, test theories, and gain direct access to data. Software that supports constructive learning includes concept mapping, simulation creation, 3D modeling, animation construction, and video editing, among many others. At the Entry level, the classroom is arranged so that all students can view the teacher’s presentation. At the Adoption level, technology tools that allow for open-ended exploration and representation are available to students on a limited basis. At the Adaptation level, classroom computers include a variety of tool-based software. At the Infusion level, classroom computers also include access to rich web-based resources. At the Transformation level, the
classroom includes robust access to a wide variety of tool-based software, access to online resources and communities, and the ability to publish new content to the Internet.
Authentic: Student

For the Authentic dimension, the primary question is “To what extent do the students use technology to engage with the world outside the classroom and to address problems that concern them and their community?” At the Entry level, students complete assigned activities that are generally unrelated to problems beyond the classroom. In some cases student activities may involve hands-on use of technology, but when considering Entry-level authenticity, the technology use doesn’t support connections beyond the classroom. At the Adoption level, students have opportunities to apply technology tools to some content-specific activities that are based on real-world problems. For example, students in a hurricane-prone area might be given an assignment to use historical data to plot on an interactive whiteboard the paths of major storms. At the Adaptation level, students have opportunities to select technology tools to solve problems based on issues in the world beyond the classroom. For example, students could create an oral history podcast consisting of interviews with parents and grandparents about the history of their community. At the Infusion level, students select appropriate technology tools to complete authentic tasks. Tasks may involve multiple subject areas. For example, in response to a community need, students might initiate a research project to study American Sign Language, using online resources and creating resource videos to teach other students. At the Transformation level, by means of technology tools, students participate in outside-of-school projects and problem-solving activities that have meaning for the students and their community.
Authentic: Teacher When considering authenticity, levels can be distinguished by the ways in which the teacher designs projects that allow students to engage with materials and tasks that have meaning to them, extend lessons beyond the classroom, and include student choice in the technology tools they employ. At the Entry level, the teacher assigns work based solely on a predetermined curriculum unrelated to the students or to issues beyond the classroom. At the Adoption level, the teacher directs students in their technology use, and activities occasionally have real-world connections. At the Adaptation level, the teacher directs the choice of technology tools and provides access to information on community and world problems. At the Infusion level, the teacher encourages students to use technology tools to make connections to their personal lives, and to choose the technology that best matches their needs. At the Transformation level, the teacher encourages and supports students’ innovative use of technology in higher-order learning activities that encourage making connections to their own lives and the world beyond the classroom.
Authentic: Environment

An authentic learning environment is one in which students have access to the technology tools and resources that allow them to complete meaningful projects and tasks. At the Entry level, resources available via technology in the classroom include primarily textbook supplementary material and reference books or websites, such as encyclopedias. At Adoption, students have access to information about community and world events, and information about primary source materials. At the Adaptation level, students have access to primary source materials. At Infusion, access is provided to rich online resources, including information outside of the school and primary source materials that are available in sufficient quantities. Once at Transformation, the learning
environment provides the capability for all students to simultaneously engage with primary source material related to their community, and to explore their own interests using a variety of technology tools and online resources.
Goal-Directed: Student This dimension focuses on the ways in which the students use technology tools to plan, organize, analyze, and reflect upon their work. At the Entry level, students receive directions, guidance, and feedback through technology, either from the teacher or through computer-based programmed instruction. For example, students may work through levels of a computer program that provides progressively more difficult phonics practice activities. At the Adoption level, from time to time, students use technology to plan, monitor, or evaluate an activity. For example, students may begin a K-W-L chart using concept-mapping software. At the Adaptation level, students may have an opportunity within one assignment or project to select technology tools to facilitate goal-setting, planning, monitoring, and evaluating specific activities. It is likely that student choices are limited to a subject area or a period of the day. At the Infusion level, students use technology tools to set goals, plan activities, monitor progress, and evaluate results throughout the curriculum. The students know how to use and have access to a variety of technologies, and the students choose which technologies to use. For example, students may choose to use a blog for peer mentoring that will help them work toward their own writing goals. At the Transformation level, students engage in ongoing metacognitive activities at a level that would be unattainable without the support of technology tools. For example, students might interact with climate researchers, politicians, and environmental groups through a class-run blog to aid in creating a recycling and waste reduction plan for their community. Through the blog, they could report on the carbon footprint of their school and
monitor their progress toward its reduction. This example involves high levels of collaboration in addition to goal-directed behavior.
Goal-Directed: Teacher

The key question in evaluating this dimension with regard to the teacher is, “To what extent does the teacher allow students to set goals, organize, and analyze their own work?” At the Entry level, the teacher uses technology to give students directions and monitor step-by-step completion of tasks. For example, the teacher may incorporate computer-based programmed instruction. At the Adoption level, the teacher leads students together through every step as they use software to plan, monitor, or evaluate an activity. At the Adaptation level, the teacher allows students to choose technology tools to set goals, plan, monitor progress, and evaluate outcomes for a given project or assignment. For example, the teacher may allow students to choose a spreadsheet program or concept-mapping software to plan and monitor progress. At the Infusion level, the teacher creates a learning environment that incorporates technology tools for planning and monitoring throughout the day and across subject areas. In the blogging example, the teacher would provide a blog space for students to publish original writing and provide feedback to one another related to their individual goals. At the Transformation level, the teacher creates a rich learning environment in which technology use is integral, seamless, and indispensable. For example, a teacher allows students to use a variety of tools (blogs, wikis, concept-mapping software, etc.) to plan, monitor, and evaluate their own work. The key element to consider is the teacher’s role in the lesson.
Goal-Directed: Environment

At the Entry level, the classroom setting includes access to skill-building websites and applications, including the ability to track students’ progress across levels. At Adoption, access is provided to software that allows students to plan, monitor, and evaluate their work. When at the Adaptation level, the setting includes access to tool-based software such as graphic organizers, calendars, spreadsheet software, and timeline software. At the Infusion level, access is provided to a variety of technology tools for all students to use in planning and monitoring their work. Finally, at Transformation, the setting includes access to a wide variety of tool-based software and robust access to online resources from which students can select and implement when planning, monitoring, and evaluating their work.

Illustrative Classroom Examples

The website that accompanies the TIM includes 100 videos of classroom lessons that illustrate each cell within the matrix. These lessons were observed and recorded in real classrooms in Florida (i.e., they were not staged). As shown in Figure 3, there are lessons for each cell representative of each of four subject areas: math, science, social studies, and language arts. Clicking on one of the subject area icons within each cell takes the user to a page for the lesson that includes the video, related objectives, procedure, materials, grade level, and the NETS profile for technology literate students (ISTE, 2007). To illustrate how these indicators change from level to level, the following section describes lessons taken from the Authentic section of the TIM. Since the Authentic characteristic tracks how closely learning is linked to activities in the real world, lessons that illustrate the progression of this characteristic are especially meaningful for this volume.

Entry

At the Entry level, student use of technology is generally unrelated to the world outside of the instructional setting.
151
A Framework for Defining and Evaluating Technology Integration
Figure 3. Resources provided on the TIM website (© 2015, Florida Center for Instructional Technology, USF. Used with permission).
the instructional setting. A typical example of technology use at this level is the “Math Skills Practice” video. In this third-grade lesson, the teacher assigns a specific math fractions drill and practice game to reinforce skills that meet a particular instructional objective. The students complete the game by practicing fraction addition problems that appear on the dashboard of a virtual car. Four flying insects arrive, each holding a possible answer. The student clicks on the insect that is holding the correct answer. The lesson is merely a multiple choice practice activity. Since we don’t drive cars by answering math problems and insects don’t generally fly around holding
answer choices, the activity bears no relation to the real world. The technology is employed in the hope of placing a more engaging frame around a worksheet of fraction addition problems. The only resource available to students in this lesson is the companion website to their primary mathematics textbook. The technology here is being utilized at a very low level.
Adoption
The Adoption level is characterized by the guided use of technology with some meaningful context. A Social Studies lesson entitled “This Day in History” exemplifies this level. The teacher of this fourth-grade lesson assigns each of her students a specific date in history. Each student uses the Internet to research what happened on the assigned date and selects the one event that is of the most interest. The students then collect images and other primary sources from public domain sites such as Wikimedia Commons and other repositories available at their school. The teacher guides the students through the conventional use of a video editing program to produce a segment for the school-wide morning news broadcast. She specifies the format, titling, and effects the students are to use. Although the teacher has selected the technology tool and determined how it is to be used, this lesson falls into the Adoption level because students have some ability to select content of interest to them, they have access to primary sources and information about events, and the final product is actually used in the daily news program for all students at the school. This is a first step beyond Entry level, but doesn’t begin to explore the opportunities technology brings to Authentic learning.
Adaptation
Technology use at the Adaptation level becomes much more student-centered. Students use the technology independently and content is connected to their own lives. This is demonstrated in the sixth-grade Science lesson, “Can Acids and Bases Remedy the Body?” The teacher has prepared a simulated stomach acid using water and vinegar for each of the four groups of students. The students use science probes connected to their laptops to measure the pH of the solution. They then introduce various over-the-counter and homemade stomach remedies and use probeware on their laptops to graph the effectiveness of each. They also test the effects of analgesics on the simulated stomach to discover which ones can cause stomach upset. With the data they collect, students evaluate the cost-effectiveness of various remedies. While the video does not specifically mention the use of outside resources, the students have their connected laptops with them in the lab, and the teacher emphasizes the open-ended nature of the lab activity and the ability of the students to follow up with their own questions. Although the choice of technology tools at the Adaptation level is still determined by the teacher, students now use the tools on their own in activities that have meaning beyond the instructional setting.
Infusion
At the Infusion level, the teacher encourages students to make connections to the outside world and students are permitted to select the most appropriate tools and use them as they see fit to accomplish the task. One of the exemplar videos from this level is a high school marketing management lesson. The assignment is to do marketing research for a local business. Most of the students select a business where they are currently working after school, so there is an immediate real-world connection. The students research the customer service offerings of the chosen business and compare the offerings to those of its nearest competitor. In the course of their research, the students use outside sources to locate information about the two businesses and the demographics of the community. Based on this information, the students develop a survey for the customers of their business, analyze the results of the survey, and develop a services promotion plan. The students utilize whatever software they deem appropriate in the course of conducting and presenting their research. The teacher notes the impact that laptops have had on this assignment. Previously, the class had access to technology only when they moved to the school’s computer lab. Now, students have continual access to the laptop when they are in class, allowing the technology to be infused throughout the assignment.
Transformation
The Transformation level is achieved when the innovative use of technology is coupled with higher order learning activities in a local or global context. This is seen in the middle school lesson called “Dollars for Darfur: Using Technology to Make a Difference by Informing, Influencing, and Impacting Others.” The unit began with a study of Holocaust literature with an emphasis on non-fiction involving teenagers. The discussion then turned to whether this type of genocide could happen now. Most of the students felt that a contemporary genocide was not possible, so the teacher sent them to the website SaveDarfur.org. There they read about the tragic situation in Darfur, where 300,000 have been killed. The students were in shock that a genocide like this was currently happening. They found other sites where they could read journalists’ accounts and view podcasts and photos of the situation in Darfur. The teacher then challenged the class to devise ways of informing others about the Darfur genocide. That open-ended assignment led to a variety of technology activities, including sending email to government leaders and public opinion influencers, creating a Google group for Darfur activism, organizing a benefit concert, creating a website with information about the genocide, and using technology to design brochures and fliers to inform others. Using these techniques, many of which would have been impossible without technology, the students were able to engage with a global community on a significant issue of importance to them.
TIM IMPLEMENTATION
Many teachers, schools, districts, and states implement the TIM as a regular part of instructional planning, practice, and evaluation. Because the TIM model is freely available online (mytechmatrix.org/), no reliable estimate can be provided for the scope of its adoption. However, FCIT has received rich anecdotal feedback from many schools regarding their uses. Separate from the TIM itself, the TIM Evaluation Tools (described below) are currently licensed for use in schools (public, private, charter, and colleges of education) in more than 25 U.S. states and three other countries. The TIM has been implemented using a wide variety of approaches, ranging from use by an individual teacher to incorporation within a large-scale system. It can serve to provide a common language for a group, or provide an organizing framework for an individual.
Strategies: Teacher-Led, Centralized, and Evaluation
Schools and school districts that align their professional development with the TIM take a variety of approaches depending on their individual contexts, the needs of their teachers and students, the demands of the curriculum, the resources available, and their vision for technology integration. The TIM framework does not dictate a single approach; instead, the framework is broad enough to allow professional development approaches adapted to local needs. The TIM provides a common language and a method of describing the pedagogical approaches to technology integration without prescribing a single approach to addressing needs.
Some schools and districts that use the TIM have adopted a teacher-led, or teacher-focused, approach. In this type of implementation, teachers use the TIM to guide lesson planning, reflection, and goal setting for personal growth. Teachers (typically grouped by grade level or subject area) observe each other’s lessons and provide structured feedback about technology integration in terms of the TIM. Teachers also have direct input into the content of professional development, targeting areas of need and aligning offerings with individual professional growth plans.
Other schools and districts adopt a more centralized model in which school or district leadership uses the TIM to articulate overall goals for teachers’ integration of technology. The TIM language may be incorporated into informal feedback provided to teachers by school leaders or technology coaches. The model may also be incorporated into formal teacher evaluation.
A third option that may exist alongside either of these models is program evaluation. In some cases, the TIM is used as a program evaluation framework, sometimes in relation to grant funding (e.g., Pringle, Dawson, & Ritzhaupt, 2015). A school or district may choose to use the TIM solely in this summative evaluation mode, with or without incorporating it into professional development or coaching.
Instructional Planning

Although the TIM can be implemented across a school system for professional development, planning, and evaluation, the model functions at the classroom level and can be implemented by an individual teacher seeking to improve technology integration in the absence of a system-wide plan. Figure 4 depicts an instructional planning model that incorporates a focus on three areas: available technology, curriculum demands, and student needs. In an ideal scenario, understanding student and curriculum needs precedes and informs the allocation of technology resources. In reality, however, technology integration often begins with a given technology resource in search of a classroom problem to solve. For example, a science teacher may learn that the school has a complete set of probeware sitting on a shelf. A media specialist may find information about purchased software that is not currently in use. A teacher may have access to tablet computers and begin by looking for a way to integrate them into his or her classroom. Regardless, as Ren (2014) expressed, “Effective technology integration requires sensitivity to the potential of various technologies as well as a profound understanding of specific disciplines and associated pedagogical practices” (p. viii). When planning technology integration, the teacher should consider each of these areas flexibly and iteratively. Regarding technology, the teacher should consider the affordances and limitations of available technologies. A teacher may ask questions such as:

• Can a given technology support use by groups, individuals, or both?
• Does it foster interaction synchronously, asynchronously, or not at all?
• In what ways does this technology limit or encourage creative expression?
• In what ways can this technology help reveal a student’s understanding of a concept?
• In what ways can this technology shape a student’s understanding?
• How similar is this technology to others these students have used previously?

The teacher must also consider the demands of the curriculum, addressing questions such as:

• What are the instructional goals of a given lesson?
• What content standards will the lesson address?
• What domain- or discipline-specific pedagogical practices may apply to the content?

Most importantly, the teacher must factor in the needs of his or her individual students. Questions include:

• What kinds of activities help my students learn?
• What are the differentiated needs of my students?
• What are the strengths and interests of my students?

Figure 4. TIM Instructional Planning Model (© 2015, Florida Center for Instructional Technology, USF. Used with permission).

Regardless of whether the teacher begins with technology, curriculum, or student needs, careful consideration of all three areas enables the teacher to make strategic choices about the TIM level that would best suit the lesson. In any of these scenarios, the TIM Evaluation Tools described in the next section provide useful data to support implementation.
TIM Evaluation Tools
Utilization of the TIM as a framework for evaluation, professional development, and planning may be informed by the application of data collection instruments. A suite of tools created by FCIT includes an interactive observation tool, a reflection tool, and a survey.
Observation Tool (TIM-O)
The TIM Observation Tool (TIM-O) is designed to guide principals, teachers, and others through the process of evaluating the TIM levels of technology integration within a particular lesson. Using the web-based instrument, the user answers a series of questions about observable activity during the lesson. The TIM-O applies an adaptive model and continues to choose and supply questions to the observer only until it has arrived at an estimated TIM level for each of the five characteristics (e.g., Active-Adoption, Collaborative-Entry, Constructive-Adoption, etc.). The questions are not designed to be exhaustive and do not explore every aspect of the lesson. Rather, the question set was designed to be as efficient as possible based on the assumption that an observer in a school setting will have a limited amount of time in which to complete the observation. When the system has a reasonable estimate of the TIM levels based on the answers provided, the TIM-O reports those estimates on an observation summary screen. The user has the option of making adjustments to any of the five levels and recording notes. As an alternative, an observer may choose to bypass the questions altogether and directly select the TIM levels. The TIM-O is designed to support a variety of different implementation contexts, including teacher-led, centralized, and evaluation models. It is important that schools or districts develop implementation strategies that specify observation procedures appropriate to their needs. For example, if the purpose of the observation is to support peer coaching, the observation notes should include specific suggestions for improvement. If the observation is part of a district-wide grant evaluation and will not be used for teacher feedback, consistency of the data is of primary importance. Each TIM-aligned observation provides a piece of evidence about a teacher’s strengths and abilities with regard to technology integration. Structured
observation data supports rich conversations between educators and goal-setting for professional growth. Multiple observations can help evaluators get a clear picture of the professional development necessary to facilitate meaningful technology integration.
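To make the adaptive questioning described above concrete, the sketch below shows one way a question-driven estimator of TIM levels could be organized. It is a minimal illustration, not FCIT’s implementation: the characteristic and level names come from the TIM, but the questions, branching rules, and function names are hypothetical.

# Minimal sketch of an adaptive, question-driven level estimator in the spirit
# of the TIM-O. The questions and decision rules below are invented examples;
# the actual TIM-O question set and branching logic are not reproduced here.

LEVELS = ["Entry", "Adoption", "Adaptation", "Infusion", "Transformation"]
CHARACTERISTICS = ["Active", "Collaborative", "Constructive", "Authentic", "Goal-Directed"]

# Each hypothetical question narrows the candidate levels for one characteristic:
# (question_text, characteristic, levels_if_yes, levels_if_no)
QUESTIONS = [
    ("Did students choose which technology tools to use?", "Active",
     {"Infusion", "Transformation"}, {"Entry", "Adoption", "Adaptation"}),
    ("Did students use technology to work with peers or outside experts?", "Collaborative",
     {"Adaptation", "Infusion", "Transformation"}, {"Entry", "Adoption"}),
    ("Was the activity connected to the world outside the classroom?", "Authentic",
     {"Adaptation", "Infusion", "Transformation"}, {"Entry", "Adoption"}),
]

def observe(ask):
    """Ask only as many questions as needed to narrow each characteristic,
    then report the lowest remaining level as the estimate for that characteristic."""
    candidates = {c: set(LEVELS) for c in CHARACTERISTICS}
    for text, characteristic, if_yes, if_no in QUESTIONS:
        if len(candidates[characteristic]) == 1:
            continue  # already pinned down; skip the question (the adaptive step)
        answer = ask(text)
        candidates[characteristic] &= if_yes if answer else if_no
    # Summary screen: an estimate the observer can still adjust by hand.
    return {c: min(levels, key=LEVELS.index) for c, levels in candidates.items()}

if __name__ == "__main__":
    canned_answers = iter([True, False, True])   # stands in for a live observer
    print(observe(lambda question: next(canned_answers)))

The real instrument’s question set is richer and its stopping rule more refined, but the pattern of skipping questions once a characteristic is pinned down is what keeps an observation efficient, as described above.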
Reflection Tool (TIM-R)
The TIM Reflection Tool (TIM-R) parallels the TIM-O but is designed to be used by the teacher to reflect on his or her own practice. The TIM-R follows the same adaptive question set to guide reflection and allows the teacher to bypass questions and choose levels directly. A teacher may choose to complete the TIM-R before a lesson, after a lesson, or both. Upon completion, the teacher may share the reflection with another user within the system. Schools may find it effective to pair observation and reflection data.
Technology Uses and Perceptions Survey (TUPS)
The Technology Uses and Perceptions Survey (TUPS) is a web-based survey that captures a wide spectrum of data regarding teachers’ beliefs about the role of technology in the classroom, confidence with technology, and the frequency with which they and their students use different tools and strategies. Survey sections include: (a) Technology Access & Support, (b) Preparation for Technology Use, (c) Perceptions of Technology Use, (d) Confidence and Comfort Using Technology, (e) Technology Integration, (f) Teacher & Student Use of Technology, and (g) Technology Skills & Usefulness. This survey is an updated and expanded version of a survey described in studies by Barron, Kemker, Harmes, and Kalaydjian (2003) and Hogarty, Lang, and Kromrey (2003). Results from the TUPS can help identify professional development needs at the teacher, school, or larger aggregate levels. Schools currently use TUPS data to identify likely early adopters of new
technology, provide a baseline for comparison, and evaluate the effectiveness of professional development programs. In one teacher-driven implementation, data were analyzed by a team of teachers who made recommendations for allocation of professional development and procurement of technology. Survey data can be particularly informative when combined with observation and reflection data.
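As a rough illustration of how section-level TUPS results might be rolled up to spot professional development needs, the snippet below averages item responses by section for each teacher and flags the lowest-scoring sections across a group. The section names follow the list above, but the items, the 1–5 scale, and the threshold are invented for this example and are not part of the actual instrument.

# Hypothetical roll-up of TUPS-style responses; only the aggregation idea is shown.
from collections import defaultdict
from statistics import mean

SECTIONS = [
    "Technology Access & Support", "Preparation for Technology Use",
    "Perceptions of Technology Use", "Confidence and Comfort Using Technology",
    "Technology Integration", "Teacher & Student Use of Technology",
    "Technology Skills & Usefulness",
]

def section_means(responses):
    """responses: list of (section, item_score) pairs for one teacher on an
    assumed 1-5 scale. Returns the mean score per section."""
    by_section = defaultdict(list)
    for section, score in responses:
        by_section[section].append(score)
    return {section: round(mean(scores), 2) for section, scores in by_section.items()}

def flag_needs(per_teacher, threshold=3.0):
    """Average across teachers and flag sections below an (invented) threshold
    as candidates for targeted professional development."""
    pooled = defaultdict(list)
    for teacher_means in per_teacher:
        for section, value in teacher_means.items():
            pooled[section].append(value)
    return [s for s in SECTIONS if pooled[s] and mean(pooled[s]) < threshold]

if __name__ == "__main__":
    teacher_a = section_means([("Technology Integration", 2), ("Technology Integration", 3),
                               ("Confidence and Comfort Using Technology", 4)])
    teacher_b = section_means([("Technology Integration", 2),
                               ("Confidence and Comfort Using Technology", 5)])
    print(flag_needs([teacher_a, teacher_b]))   # -> ['Technology Integration']

At the school or district level the same roll-up could be repeated by building, which is the kind of aggregate view described for the survey results above.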
CONCLUSION
When the first version of the TIM was released, technology integration was becoming a popular phrase, but it lacked a clear definition and detail. Thus, an important initial contribution of the TIM was presenting a comprehensive framework and vocabulary for describing effective technology use in instructional settings. Further, it has provided resources for teachers to see what lies beyond the level at which they are currently comfortable, and where they might choose to focus their technology-related goals and professional development. Compared with other frameworks for technology integration, the TIM provides a unique, research-based model for evaluating lessons, accompanied by a rich set of supporting materials. Organizations that have adopted other models for evaluating technology integration need not abandon them to embrace the TIM, but instead may find value in using them together. The SAMR and TPACK models, described earlier, are different enough in their focus and approaches that they might be considered for use in sequence with the TIM. For instance, the simplicity of the SAMR makes it particularly useful as a starting point for introducing the idea of assessing levels of classroom technology use. Once comfortable with rating individual tasks, educators could move to the TIM framework and evaluate the entire lesson in which tasks are situated, considering all five characteristics of meaningful learning. Similarly, the TPACK model might serve as a
useful complement to the TIM. A lesson might first be evaluated based on the five characteristics in the TIM; then, within each cell, the TPACK framework could be used to specify the various knowledge bases (particularly content) that would be required of a teacher to effectively use that lesson, or to modify it to change the level of integration for one or more characteristics. The unifying purpose of the various models for technology integration is to help teachers effectively incorporate technology into their instruction in ways that best facilitate student learning of academic content and real-world skills. To this end, the focus should be on technology integration in which pedagogy and instructional goals are central (Ertmer & Ottenbreit-Leftwich, 2013), rather than simply increasing technology implementation such that technology tools themselves have the greatest emphasis (ISTE, n.d.). Similarly, the TIM focuses on pedagogy related to technology use, as opposed to focusing on specific software applications or digital devices, and it was designed to be applicable across content areas. A foundational concept of the TIM is a focus on matching the technology use to the lesson, as opposed to suggesting that every teacher should be implementing transformational lessons all the time. The central goal is that teachers be comfortable designing and guiding students in lessons at all levels, so that technology integration can best match the curriculum needs. As stated in a report from the USDOE, “Technology is not a silver bullet and cannot—by itself—produce the benefits we seek in learning, but without technology, schools have little chance of rising to 21st-century expectations” (2014, p. 15). As a framework informed by research and practice, the TIM offers teachers a structure, vocabulary, and examples for meaningfully integrating technology in preparing their students for success in the 21st century workplace. Further, it provides administrators with an evaluation framework to help ensure that technology is integrated in ways that emphasize critical thinking, collaborative
problem solving, and connection to the world outside of school. With the increased emphasis on real-world skills, tools such as the TIM can be critical in helping schools and educational systems leverage technology advances to focus instruction on next generation learning.
REFERENCES
Allsopp, M. M., Hohlfeld, T., & Kemker, K. (2007). The Technology Integration Matrix: The development and field-test of an Internet based multi-media assessment tool for the implementation of instructional technology in the classroom. Paper presented at the annual meeting of the Florida Educational Research Association, Tampa, FL. Baroody, A. J. (2003). The development of adaptive expertise and flexibility: the integration of conceptual and procedural knowledge. Mahwah, NJ: Erlbaum. Barron, A. E., Kemker, K., Harmes, C., & Kalaydjian, K. (2003, Summer). Large-scale research study on technology in K-12 schools: Technology integration as it relates to the National Technology Standards. Journal of Research on Technology in Education, 35(4), 489–507. doi:10.1080/15391523.2003.10782398 Berthold, K., & Renkl, A. (2009). Instructional aids to support a conceptual understanding of multiple representations. Journal of Educational Psychology, 101(1), 70–87. doi:10.1037/a0013247 Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, school (Expanded ed.). Washington, DC: National Academy Press.
Brantley-Dias, L., & Ertmer, P. A. (2013). Goldilocks and TPACK: Is the construct “just right?”. Journal of Research on Technology in Education, 46(2), 103–128. doi:10.1080/15391523.2013.10782615 Cornelius-White, J. (2007). Learner-centered teacher-student relationships are effective: A meta-analysis. Review of Educational Research, 77(1), 113–143. doi:10.3102/003465430298563 Davies, R. S., & West, R. E. (2014). Technology integration in schools. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 841–853). New York, NY: Springer. doi:10.1007/978-1-4614-3185-5_68 Digiovanni, L. (2015). Rethinking Instructional Technology in a Graduate Early Childhood Education Class: Moving Away From TPACK. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 2006-2007). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). Ertmer, P. A., & Ottenbreit-Leftwich, A. (2013). Removing obstacles to the pedagogical changes required by Jonassen’s vision of authentic technology-enabled learning. Computers & Education, 64, 175–182. doi:10.1016/j.compedu.2012.10.008 Florida Center for Instructional Technology. (n.d.). The Technology Integration Matrix. Retrieved from http://mytechmatrix.org/ Fodchuk, A., Schwartz, K., & Hill, T. (2014, June 30). Say “hello” to TIM! (Technology Integration Matrix). Presentation at ISTE 2014, Atlanta, GA.
Goodyear, P., Jones, C., & Thompson, K. (2014). Computer-supported collaborative learning: Instructional approaches, group processes and educational designs. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. 439–451). New York, NY: Springer. doi:10.1007/978-1-4614-3185-5_35 Hall, G. E. (2010). Technology’s Achilles heel: Achieving high-quality implementation. Journal of Research on Technology in Education, 42(3), 231–253. doi:10.1080/15391523.2010.10782550 Hall, G. E., Loucks, S. F., Rutherford, W. L., & Newlove, B. W. (1975, Spring). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of Teacher Education, 26(1), 52–56. doi:10.1177/002248717502600114 Hogarty, K., Lang, T., & Kromrey, J. (2003, February). Another look at technology use in classrooms: The development and validation of an instrument to measure teachers’ perceptions. Educational and Psychological Measurement, 63(1), 139–162. doi:10.1177/0013164402239322 International Society for Technology in Education. (2007). ISTE standards for students. Retrieved from http://www.iste.org/standards/ iste-standards/standards-for-students International Society for Technology in Education. (n.d.). Student-centered learning. Retrieved from http://www.iste.org/standards/essentialconditions/student-centered-learning Jonassen, D., Howland, J., Moore, J., & Marra, R. (2003). Learning to solve problems with technology: A constructivist perspective (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall. Jonassen, D. H. (1995). Computers as cognitive tools: Learning with technology, not from technology. Journal of Computing in Higher Education, 6(2), 40–73. doi:10.1007/BF02941038
Kieran, L., & Anderson, C. (2014). Guiding preservice teacher candidates to implement student-centered applications of technology in the classroom. In M. Searson & M. Ochoa (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2014 (pp. 2414-2421). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). King, M. B., Newmann, F. M., & Carmichael, D. L. (2009, January/February). Authentic intellectual work: Common standards for teaching social studies. Social Education, 73(1), 43–49. Koehler, M. J., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research, 32(2), 131–152. doi:10.2190/0EW701WB-BKHL-QDYV Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology & Teacher Education, 9(1), 60–70. Marcovitz, D., & Janiszewski, N. (2015). Technology, models, and 21st-Century learning: How models, standards, and theories make learning powerful. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 1227-1232). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). Moersch, C. (1995, November). Levels of Technology Implementation (LoTi): A framework for measuring classroom technology use. Learning and Leading with Technology, 23(3), 40–42. Moersch, C. (2010, February). LoTi turns up the H.E.A.T. Learning and Leading with Technology, 37(5), 20–23.
National Education Association. (2008). Technology in schools: The ongoing challenge of access, adequacy and equity. Retrieved from http://www.nea.org/assets/docs/PB19_Technology08.pdf National Education Association. (n.d.). Preparing 21st century students for a global society: An educator’s guide to the “Four Cs.” Retrieved from http://www.nea.org/assets/docs/A-Guideto-Four-Cs.pdf Newmann, F. M., Marks, H. M., & Gamoran, A. (1996, August). Authentic pedagogy and student performance. American Journal of Education, 104(4), 280–312. doi:10.1086/444136 Olson, T. A., Olson, J., Olson, M., Capen, S., Shih, J., Adkins, A., . . . Thomas, A. (2015). Exploring 1:1 tablet technology settings: A case study of the first year of implementation in middle school mathematics classrooms. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 2736-2742). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE). Partnership for 21st Century Skills. (2011). Framework for 21st century learning. Retrieved from http://www.p21.org/storage/documents/1.__p21_framework_2-pager.pdf Polly, D., Margerison, A., & Piel, J. (2014). Kindergarten teachers’ orientations to teacher-centered and student-centered pedagogies and their influence on their students’ understanding of addition. Journal of Research in Childhood Education, 28(1), 1–17. doi:10.1080/02568543.2013.822949 Pringle, R. M., Dawson, K., & Ritzhaupt, A. D. (2015). Integrating science and technology: Using technological pedagogical content knowledge as a framework to study the practices of science teachers. Journal of Science Education and Technology, 24(5), 648–662. doi:10.1007/s10956-015-9553-9
Puentedura, R. R. (2006). Transformation, technology, and education. Retrieved from http:// hippasus.com/resources/tte/ Purdue University. (2015). The evolution of technology in the classroom. Retrieved from http://online.purdue.edu/ldt/learning-design-technology/ resources/evolution-technology-classroom Ren, Y. (2014). Information and communication technologies in education. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (pp. vii–xi). New York, NY: Springer. Rogers, E. M. (1962). Diffusion of innovations. New York: Free Press of Glencoe. Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press. Romrell, D., Kidder, L. C., & Wood, E. (2014). The SAMR model as a framework for evaluating mLearning. Journal of Asynchronous Learning Networks, 18(2). Saavedra, A. R., & Opfer, V. D. (2012, October). Learning 21st-century skills requires 21st-century teaching. Phi Delta Kappan, 94(2), 8–13. doi:10.1177/003172171209400203 Sandholtz, J. H., Ringstaff, C., & Dwyer, D. C. (1997). Teaching with technology: Creating student-centered classrooms. New York: Teachers College Press. Sherman, D., & Cornick, S. (2015). Assessment of PreK-12 educators’ skill and practice in technology & digital media integration. In D. Slykhuis & G. Marks (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2015 (pp. 1286-1291). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).
Technology in Schools Task Force. (2002). Technology in schools: Suggestions, tools, and guidelines for assessing technology in elementary and secondary education. Washington, DC: U.S. Department of Education, National Center for Education Statistics. United States Department of Education. (2014, June). Learning technology effectiveness. Retrieved from http://tech.ed.gov/learning-technology-effectiveness/ Vogt, J., Fisser, P., Roblin, N., Tondeur, J., & van Braak, J. (2013, April). Technological pedagogical content knowledge – A review of the literature. Journal of Computer Assisted Learning, 29(2), 109–121. doi:10.1111/j.1365-2729.2012.00487.x Welsh, J., Harmes, J. C., & Winkelman, R. (2011). Florida’s Technology Integration Matrix. Principal Leadership, 12(2), 69–71. Wenglinsky, H. (1999). Does it compute? The relationship between educational technology and student achievement in mathematics. Educational Testing Service Policy Information Center. Wenglinsky, H. (2002). The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12).
KEY TERMS AND DEFINITIONS
Active: A characteristic of meaningful learning within the TIM framework that describes how students actively create knowledge by discovering, processing, and applying their learning.
Authentic: A characteristic of meaningful learning within the TIM framework describing technology use for learning that includes experiences that have relevance to the world outside the classroom.
Collaborative: A characteristic of meaningful learning within the TIM framework that describes how technology is used to facilitate or support students in working together with peers and outside experts.
Constructive: A characteristic of meaningful learning within the TIM framework describing student-centered instruction that facilitates students connecting new information to their prior knowledge, while allowing flexibility and choice of technology tools.
Goal-Directed: A characteristic of meaningful learning within the TIM framework that describes technology use that supports meaningful reflection through activities such as setting goals, planning activities, monitoring progress, and evaluating results.
Technology Integration: The use of technology to enhance, extend, or enrich learning.
Technology: Digital devices, software, and connectivity that allow the use of digital content in the classroom. Digital devices are any hardware devices that students or teachers can use to search for, create, manipulate, or consume digital content.
TIM Evaluation Tools or TIM Tools: A web-based suite of measurement instruments that guides the collection of data about technology integration.
TIM: The Technology Integration Matrix (TIM), a pedagogically-centered model for planning, describing, and evaluating technology integration.
Chapter 7
Equipping Advanced Practice Nurses with Real-World Skills
Patricia Eckardt Stony Brook University, USA
Marie Ann Marino Stony Brook University, USA
Brenda Janotha Stony Brook University, USA
David P. Erlanger Stony Brook University, USA
Dolores Cannella Stony Brook University, USA
ABSTRACT
Nursing professionals need to assume responsibility and take initiative in ongoing personal and professional development. Qualities required of nursing graduates must include the ability to “translate, integrate, and apply knowledge that leads to improvements in patient outcomes,” in an environment in which “[k]nowledge is increasingly complex and evolving rapidly” (American Association of Colleges of Nursing, 2008, p. 33). The ability to identify personal learning needs, set goals, apply learning strategies, pursue resources, and evaluate outcomes is essential. Nursing professionals must be self-directed learners to meet these expectations. Team-based learning (TBL) is a multiphase pedagogical approach requiring active student participation and collaboration. Team-based learning entails three stages: (1) individual preparation, (2) learning assurance assessment, and (3) team application activity. National health care has undergone a dramatic restructuring where inter-professional teams comprised of nurses, physicians, dentists, social workers, dieticians, pharmacists, and ancillary paraprofessionals deliver healthcare to the US population. This transformation in the health care delivery system has included a call to educate thousands of nurses as advanced care (graduate education completed) providers, and hundreds of thousands as entry-level (undergraduate education
completed) providers, to organize and lead these healthcare teams, while delivering direct patient care and conducting outcomes effectiveness research. This is a daunting task, as nurses enter the practice of professional registered nursing from diverse trajectories. As the entry into practice differs, so do the educational pathways through nursing undergraduate and graduate studies, with many traditional professional nursing educational programs lacking the resources to
provide students with the skills required for interprofessional success. Nursing programs also need to meet the challenges of educating nurses who live and practice in communities that historically do not have access to academic medical center care and education. In this chapter, we outline how programs of nursing studies within universities and colleges can meet these challenges by incorporating innovative methods for curricula delivery and learning evaluation. Computer-based education is a critical component for successfully educating nurses, particularly when these nurses are located throughout the world serving in the armed forces or with a humanitarian mission. This chapter provides exemplars of how to prepare nurses with the real-life skills needed to practice in and lead inter-professional care delivery and research teams across their communities. These three case studies illustrate the effectiveness of three distinct computer-based innovative approaches to curricula and evaluation: a social cognitive constructivist approach to graduate nursing computer-based statistics education, a team-based learning approach to undergraduate nursing computer-based statistics education, and a hybrid (face-to-face and computer-based sessions) team science approach to advanced practice nursing education. The incorporation of these approaches within advanced and entry-level practice nursing programs can provide the essential real-world clinical practice skills needed to deliver quality patient care to complex patient populations.
BACKGROUND
Current State of Health Care Delivery System and Nursing Curriculum Response
The national healthcare delivery model has changed drastically over the past few years and
further changes are underway. These changes include who provides primary healthcare, where the healthcare is delivered, guidelines for health management of populations, and reimbursement and accountability for healthcare services payment (Dykema Sprayberry, 2014; Forbes, 2014; Scott, Matthews, & Kirwan, 2014; Spetz, 2014). The nursing workforce in the United States is approximately 3.5 million and is expected to increase over the next ten years (U.S. Department of Health and Human Services HRSA, 2014). Nurses are being called to increase their leadership skills, scientific knowledge and practice competencies, and educational preparation, and to practice to the fullest extent of their education (IOM, 2011). As the educational and competency requirements increase and role definitions for practice models expand, nursing curriculum content and delivery methods have changed in response (AACN Essentials for Education, 2010, 2011, 2012). However, faculty are insufficient in number, and often in training, to provide the expertise required to deliver the curriculum, evaluate student learning, and meet the suggested curriculum essentials (IOM, 2011). Faculty lack of preparation in research and statistical knowledge is cited across programs as a roadblock to preparing our students to meet the new educational and practice environment demands (Hayat, Eckardt, Higgins, Kim, & Schmiege, 2013). Nursing student populations are more diverse than ever. Many students are now entering nursing programs after attaining undergraduate and graduate degrees in other disciplines, and some specific student populations, such as males and minorities, are increasing as compared to the trajectories of the past twenty years (Banister, Bowen-Brady, & Winfrey, 2014). To increase the number of nurses educated to practice, and meet the new guidelines, nursing programs now offer many different pathways to the entry and advanced levels of practice (AACN, 2012). For example, some schools admit students from high school
for programs that lead to doctoral degrees, while, in contrast, others admit only master’s-prepared nurses to doctoral programs of study (Starr, 2010). In addition to the changes within the healthcare system in the US, and the nursing professional and educational environments, the way that information is shared on an individual, local, and global level has been revolutionized by the computer and the internet. Like many of the world’s citizens, nursing students and faculty rely on the internet and computers for personal, social, and educational needs. The reliance on the internet and computer usage for daily information has a significant impact on the needs and learning patterns of students and educators and should be considered when structuring curriculum delivery and evaluation (Costa, Cuzzocrea, & Nuzacci, 2014). Nursing educators have been incorporating elements of computer-based learning into nursing curriculum for both students and practitioners for over thirty-five years (Johnson-Hofer & Karasic, 1988; Love, 1974). However, the incorporation has not been uniformly implemented or evaluated across the discipline. Reasons for lack of continuity in implementation are resource availability, attitudes and beliefs towards technology adoption, and organizational constraints (Chow, Herold, Choo, & Chan, 2012). Regardless of reasons for lack of continuity across the discipline in using computer-based learning to deliver curriculum, the need for nursing students to have access to the most current state of nursing science education persists and must be addressed. Varied approaches to implementing computer-based learning that will successfully meet this need and prepare nurses for practice are currently in use. Successful adoption of computer-based delivery of curriculum requires evidence of its effectiveness in real-world settings. Due to the resources required for this type of research, investigator-initiated adoption of the various approaches to implementing computer-based learning is feasible only with a pilot or case study design. Funding mechanisms are available for
implementation and evaluation of nursing education programmatic redesign in response to the changes in the external environment (Stevens & Ovretveit, 2013). These funding opportunities are from private and governmental sources that are stakeholders in the future of the delivery of health care services (Blum, 2014; Thompson & Darbyshire, 2013). However, these funding sources remain very competitive and most require some evidence of an intervention on a pilot or case study level before consideration for funding support of research of a larger scale. Our faculty have been implementing and evaluating small investigator-initiated studies to lay the foundation for programmatic research initiatives centered on computer-based or computer-supported curriculum delivery. This chapter outlines three investigator-initiated pilot case studies on computer-based curriculum delivery: a social cognitive constructivist approach, a team-based learning approach, and an interprofessional approach to computer-based learning. Each approach provides evidence that supports further investigation on a larger scale.
THREE CHOSEN CASE STUDIES FOR COMPUTER-BASED DELIVERY OF INSTRUCTION
Background for Case Study 1: A Social Cognitive Constructivist Approach to Graduate Nursing Computer-Based Statistics Education
Graduate nursing educators are called to prepare nurse leaders who can engage in higher level practice by deriving and translating evidence from population level outcomes into practice through innovative care models (AACN, 2011). Meeting these graduate education essential outcomes requires a sound undergraduate foundational level knowledge of statistics. As a fixed estimate of prior statistical learning is not appropriate given
the variability of the population of students, an initial assumption of no prior knowledge at the outset of each learning module is wise. To teach from an initial assumption of no prior knowledge requires an individualized instruction strategy that incorporates a later adjustment in teaching to account for individual students’ prior knowledge bases, assumptions around learning, and learning styles. Individualized instruction strategy is an application of social cognitive constructivist learning theory that also provides tools for educators and learners to enhance knowledge retention and knowledge development (Luciano & Ontario Institute for Studies in Education, 2001). Learning can be framed by four principles:
1. Social and formal education is critical for learning development to occur.
2. Learning is motivated by needs of the learner.
3. Instruction should be scientifically based.
4. Instruction should consider individual differences (Schunk, 2004).
One approach to instruction is one that is learner-centered, is based on knowledge of the skills and processes employed by experts/successful learners (e.g., what does the successful quantitative researcher know), and seeks to help students develop the cognitive processes that are used by “skilled practitioners” (Schunk, Pintrich, & Meece, 2008). This approach fits well for professional nurses who must be skilled in academics of their science as well as their clinical practice. A grand theory of learning that incorporates other learning theories, such as constructivist, attribution, and goal theory, is Bandura’s Social Cognitive Theory (Zimmerman & Schunk, 2003). Social cognitive theory stresses the idea that much of learning occurs in a social environment by observing others, and that the individual learner constructs his or her own knowledge based on prior learning and new information. According to social cognitive theorists, this knowledge construction is a function of triadic reciprocity. Triadic reciprocity
is the interaction of the learner’s personal factors, behaviors, and environmental factors. Learner personal factors include age, learning style, and personal theory of intelligence. Behavioral factors to consider are self-regulatory behaviors and self-evaluative mechanisms. Lastly, environmental factors may include social setting of instruction, tools used by the instructor, and mode of delivery of curriculum (Zimmerman & Schunk, 2001). Computer-based instruction provides the opportunity for manipulation of each of the elements of the triadic reciprocity functions (students’ personal factors, behaviors, and environmental factors), and is well-suited for applied statistics content when a constructivist approach is used. In applied statistics learning, there are two distinct lines of inquiry: computation and conceptualization. Computation involves learning the use of rules, procedures, and algorithms, while conceptualization is learning to use problem-solving strategies. Graduate nursing statistical knowledge requirements should focus on conceptualization, with minimal requirements in basic computational skills. With computational skill development, students’ errors reflect their knowledge construction and arise from exposure to new problem types or poor knowledge of facts. A key goal of computation instruction is for the learner to use the most efficient strategy to solve a problem. Computational skill is first represented as declarative knowledge: facts about steps are memorized through mental rehearsal and overt practice; after more practice, representation becomes “domain-specific procedural representation” and eventually full automaticity is achieved. In conceptualization, problem solving involves problem translation and problem categorization. Here, students must accurately identify the problem type through relevant information, then select an appropriate strategy. Problem categorization requires the student to attend to problem type rather than content, resulting in deep instead of surface structure learning (Schunk, 2004). Conceptualization allows for a generalization of knowledge to a
different problem, and can incorporate elements of computational skill. Computer-based podcast and video cast libraries that are linked by problem type and also by content augment this process. Generalization is an example of authentic learning (Zimmerman & Schunk, 2003). Authentic learning is a deep understanding of learned material that can be demonstrated by the application of knowledge gained in one setting to another setting or situation. Nurses are very familiar with authentic learning in their clinical practices, as generalization needs to occur when treating multiple patients and various patient subpopulations. A social cognitive constructivist approach to instruction allows the teacher to provide authentic learning in the virtual classroom through vicarious, peer, and active learning. Vicarious instruction is when the student learns through observing an expert’s solution to a problem. In peer learning and active learning, a student accommodates or assimilates knowledge through scaffolded direction. Through a consideration and manipulation of the elements of triadic reciprocity, a constructivist approach to education promotes and demonstrates outcomes of deep authentic learning. Computer-based learning tools, such as interactive synchronous lesson reviews, podcast libraries of specific problem steps and solutions that can be reviewed as often as needed, and computer-simulated distribution construction, can be integral to vicarious, peer, and active learning for graduate nursing students. Vygotsky’s sociocultural perspective of learning provides a constructivist theory of learning within a social cognitive framework for statistical learning (Henson, 2003). This perspective of learning emphasizes the importance of society and cultural environment of learning for promoting cognitive growth and knowledge development. Social interactions during the learning process are critical, as knowledge is co-constructed between two people, such as the teacher-student dyad or the student-student dyad. In the computer-assisted learning environment the instructor can continu-
ally engage students in meaningful and challenging activities, and help them perform those activities successfully even when in an asynchronous format. The social setting of the graduate online computer-based classroom is ideal for the formation of multiple dyadic learning experiences, as well as reciprocal experiences that involve more than two persons through active and vicarious learning. Both active and vicarious learning involve self-regulation of learning. Self-regulation is an important component of learning in constructivist learning theory, and is developed through internalization (developing an internal representation) of actions and mental operations that occur in social learning interactions. Learning development occurs through the cultural transmission of tools, such as language and symbols. Language is the most critical tool as it aids in learning for self-regulation by internalizing others’ speech (professor or peers) to develop private speech to guide learning and steps of problem solving, that eventually becomes covert speech (or inner speech) to guide and direct problem solving and learning. An example of this is the mouthing of words as you read a manuscript with new language, such as this one. You have mastered reading, now you are mastering the application of Vygotsky’s learning theory components with private speech in an online nursing curriculum. The development of inner speech is nurtured by accessible and repeatable on-line lecture libraries of course lessons that students can view and play and practice as often as needed individually. Inner speech is further honed as a skillset in the asynchronous and synchronous discussion threads available to students throughout the semester. Participation is neither graded nor mandatory, but rather gentle redirection and discussion points and worked examples are inserted into the discussions by the professor. These discussion formats are not available in traditional classroom settings. The use of the tool of language for learning occurs within each student’s individual zone of proximal development (ZPD). The ZPD is the
difference between what statistics students can do on their own in constructing new knowledge, and what they can do with interaction. Interactions with peers and professors (competent others) in the ZPD promote cognitive development. Obuchenie is a term used by Vygotsky that represents the interaction between teaching and learning and occurs within the ZPD through scaffolding. Scaffolding is the guided instruction of students into new knowledge development that is outside of their current domain. Scaffolding provides continual feedback and clarification as students incorporate new learning of the language and tools of statistics into their existing framework, also known as schema (Niess, 2005). The new knowledge development and its association to existing schema generalize the knowledge and result in increased authentic learning. Scaffolding and teaching within the ZPD can be achieved with an online computer-based approach to teaching graduate statistics. Using the resources available to them (such as an online statistics knowledge portfolio, an instructor-created podcast lesson library, and virtual office “minutes”) as often as needed increases the opportunity for each student to have an educational experience that is tailored to their unique learning needs. Situated learning is a familiar model in nursing clinical practicum educational settings, and can also be applied to a cognitive constructivist approach to teaching graduate statistics. The learner begins by studying a model of an expert’s approach to the problem. Experts’ approaches tend to focus on deeper aspects of the problem rather than its surface features. The learner engages in vicarious knowledge acquisition by observing the expert with a worked example. An example of this is the observation of graduate students as the expert provides coaching for statistical approach and analysis to “real world-real time” questions brought by other students and faculty
to the virtual classroom setting. This use of “real world-real time” worked examples for statistical learning involves the use of situated cognition by the expert and the novices. Although instructional design approaches, such as novice-to-expert, typically begin with analysis of the skills entailed by the performance of an expert, this task analysis of expert performance does not capture the expert’s capacity to respond to the variability in the real work situation, whereas the situated learning worked examples do. Here the talk-alouds of the expert are genuine and demonstrate the responsiveness of the expert’s decisions for design and analysis to the incremental information elicited from and then provided by the novice practitioner engaged in the situated learning dyad. Though these exchanges require expertise and comfort in full exposition of thought by the teacher, they also provide a cross-sectional view into the workings of decision-making and refining of decisions during statistical problem solving of real issues. Situated learning examples are incorporated into a graduate statistics course with invited expert nurses bringing their research proposal or project to the virtual classroom for a statistical consult by the expert faculty. This is an introduction to application of statistics within a practice setting for students who are being prepared to lead healthcare delivery reform in their current practices. Using a social cognitive constructivist approach to the design and delivery of graduate nursing online computer-based statistics education allows the teacher to employ multiple modalities of instruction, teaching aids, and add-ons, while also individualizing student mentoring and resources to meet each student at their ZPD. The following case study (Case Study 1) describes and evaluates the effectiveness of this approach over one semester of instruction.
Application Case Study 1: The Effect of a Social Cognitive Constructivist Teaching and Evaluation Approach on Graduate Students’ Statistical Efficacy and Knowledge
The aim of this case study was to explore the effect of approaching an online graduate nursing course in statistics from a social cognitive constructivist approach for one semester (Spring 2011). The course had been developed in an online computer-based format and delivered in a static mode (read a posted presentation, answer exam questions, receive a grade) for the previous four semesters. A secondary goal of the case study was to provide data from the student and professor perspective that supported a paradigm shift for delivery of all quantitative method curriculum in the social cognitive constructivist framework within the School of Nursing graduate nursing program.
Methods
This was a retrospective observational mixed methods case study. A grounded theory approach was used for the analysis of the qualitative data, and descriptive statistics were used for the analysis of the quantitative data.
Sample
The convenience sample (n=36) consisted of graduate nursing students from one school of nursing and three graduate nursing programs from one semester. The students represented the larger nursing school graduate student population in regard to demographics (age, gender, and race), entry level into programs, program of study, and placement in course progression.
Setting
The school of nursing graduate program is part of a large suburban academic medical center and health sciences center in the Northeastern United States. Students within the graduate program have applied and been accepted into a program that was delivered completely online, except for clinical practicums that additionally required on-site intensive curriculum. Students are in their second to third semester of their programs, on average, when they take this course. This is the only graduate statistics course in the curriculum for all students in the graduate nursing program. A prerequisite of an undergraduate statistics course taken within ten years prior to admission to the program is required. This is a 15-week, 3-credit statistics course with a state-approved course outline and syllabus.
Intervention
The state-approved curriculum was unchanged for the case study. The delivery of the material and the supporting resources, presented online and via add-ons such as Skype, FaceTime, and Google Chat, were additions to the mode of delivery and support. A self-assessment of statistical knowledge was completed by students before coursework commenced. The self-assessment included twenty basic statistical knowledge questions and definitions and a 0-10 Likert scale on which students ranked their statistical knowledge and competence. After self-assessments were completed, a library of podcasts and interactive skill-building applications was made available to students on the course webpage. To support all learning styles and encourage vicarious and peer learning, students were given the choice to work individually or in teams of up to four members for all assignments. Students were given the chance to submit assignments as many times as needed to meet the course objectives. Each iteration of an assignment that was submitted was graded and given individual feedback from the instructor, using a shared computer screen of the submission with a voice-over and highlighter marking points to discuss and revisit. A worked example, or exemplar, of each assignment was available to students for comparison and contrast to their own assignments to increase knowledge building.
Discussions with students and student groups were available by appointment through non-traditional office "minutes" via phone, shared computer screen, and various applications such as Skype, Google Chat, FaceTime, GoToMeeting, and join.me. The seven course assignments each built upon the previous assignment and culminated in a final project in which students incorporated all prior summative evaluations and resubmitted their previous summative work product. The final also included a narrative section for students to describe their experience with the course delivery and their beliefs about their statistical efficacy after taking the course.
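The quantitative analysis in this case study was limited to descriptive statistics of the pre-course self-assessment described above. For instructors who want to script that kind of summary themselves, a minimal sketch is shown below; the ratings and knowledge scores are hypothetical placeholders, not data from the course.

```python
from statistics import mean, stdev

# Illustrative 0-10 self-efficacy ratings from the pre-course self-assessment
# (hypothetical values, not the actual course data).
ratings = [2, 5, 1, 4, 6, 3, 0, 5, 2, 7]

# Percent correct on the twenty basic statistical knowledge questions
# (again, hypothetical values).
knowledge_scores = [55, 70, 40, 65, 80, 50, 45, 75, 60, 35]

print(f"Self-efficacy: mean={mean(ratings):.2f}, SD={stdev(ratings):.2f}")
print(f"Knowledge:     mean={mean(knowledge_scores):.1f}%, SD={stdev(knowledge_scores):.1f}%")
```

The same two-line summary, run on the actual responses, yields the kind of pre-course figures reported in the results below.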
Results and Discussion
A total of 35 students (97%) who began the case study course completed it. The one student who dropped the course did so after one week, after leaving the program altogether for health reasons. The students were majority white women (70%) enrolled in the neonatal nursing program (60%), with an average age of 35 years. Most students chose to work in teams (77%), while some chose to work independently (23%). All students resubmitted at least one assignment, and the majority of students (82%) resubmitted over half of the assignments. Students' pre-course self-efficacy assessment for statistics averaged 3.34 (SD 2.07). The qualitative data collected after the course ended were analyzed using a grounded theory framework. Two researchers read the responses, coded them independently, grouped the codes into themes independently, and then met to compare the common themes identified. The themes that emerged were: feeling better about statistics; importance of statistics applied to practice; pride in own ability; high satisfaction with course structure; and high satisfaction with professor availability and feedback. Saturation was reached on each emergent theme after 14 narratives were analyzed.
Feedback from students at their program exit evaluations (twelve and eighteen months after course completion) continued to demonstrate authentic learning and self-efficacious beliefs about statistical application to practice and studies. Students reported using the knowledge acquired in the statistics course within their practice settings to appraise research reports for application to patient care, and also reported using the skills in other course assignments (e.g., research and advanced pharmacology courses). One student reported at the end of the program that the "statistics course took the distance out of distance learning." Though anecdotal, this comment provided additional support for this approach to graduate statistics education. The results are limited due to the nonexperimental nature of the course and the lack of randomization, a control group, and a pre- and post-test of the same measure. The results do, however, provide support for a social cognitive constructivist approach to graduate statistics course delivery. Because this framework and approach to teaching and evaluation requires faculty time and comfort with technology, the availability of such faculty can be a limitation. An effective constructivist approach to teaching entails competence in three areas: content knowledge, pedagogical content knowledge, and pedagogical knowledge (Shulman, 1987). This requires faculty preparation in each area. There are varied approaches to attaining these faculty competencies. Some involve the use of experts in each area to co-teach or to scaffold educators through identified areas of lacking expertise, while others suggest additional education of professors or interdisciplinary approaches to statistics education (Garfield, Pantula, Pearl, & Utts, 2009). Implementing the constructivist approach requires a shift in thinking and a willingness to be open to change for nurse educators. It is likely that instructors of statistics learned statistics under the traditional problem-solving model of instruction, in a lecture-based classroom setting and with a focus on computations and methods.
Utilizing technology, incorporating recent findings from statistics education research, and planning active learning activities and exercises may initially involve substantial effort and time investment. The potential reward of such efforts may include transforming student attitudes and mindsets toward statistics into ones of fun, excitement, enthusiasm, and deeper understanding of its relevance and practical use.
Conclusion
There are no known standardized competency guidelines or curriculum standards for teaching statistics to nursing students. The AACN Essentials publications only vaguely refer to a need for students to learn something about statistics. This is especially problematic for nurse faculty developing curricula for graduate nursing students. For example, the PhD Essentials publication (AACN, 2010) includes only a single mention of statistics, on page 5, citing the phrase "Advanced research design and statistical methods" in the section on Expected Outcomes and Curricular Elements. This brief mention is not informative or useful in developing course objectives or deciding on course content, pedagogy, or depth of material. Statistics is a stand-alone discipline composed of many areas, specialties, and sub-disciplines. For example, the meaning and content of "basic applied statistics" may be interpreted quite differently by different statistics educators. There is a great need for tailored, degree-specific, standardized competency guidelines for teaching statistics to graduate nursing students. The approach presented here may be useful and effective in addressing many challenges that naturally arise in teaching statistics to graduate nursing students. Some of these challenges include the diversity in student background and preparation for statistics coursework, anxiety and fear of the topic, and the daunting task of balancing didactic and clinical coursework in an already full nursing curriculum. The case study described here provides a computer-based approach to meeting the challenges of educating nurses in statistics within a social cognitive constructivist framework. The timeliness of this approach to instruction is well aligned with the focus on patient-centered care and personalized medicine and will serve to address the need for graduate nursing students to learn and understand statistics.
Background for Case Study 2: A Team-Based Learning Total Computer-Based Undergraduate Nursing Statistics Course
The TBL strategy was conceived by Larry Michaelsen in the late 1970s to allow for the benefits of small group learning in large classes (Parmelee, Michaelsen, Cook, & Hudes, 2012). According to Michaelsen, at that time he was a professor of business at the University of Oklahoma, and he developed TBL both to better know what students were thinking during his lecturing and to provide them with opportunities to engage in the real-world problems they would face after graduation (Parmelee et al.). In 2001, the US Department of Education Fund for the Improvement of Postsecondary Education funded TBL promotion through faculty development workshops, symposiums, and the scholarship of teaching and learning (Parmelee et al.). At present, TBL is used in more than 60 US and international health science professional schools at several levels of education: undergraduate, graduate, and continuing education (Parmelee et al.). According to the literature, there are four reasons TBL is rapidly being adopted in higher education. One reason is the increased need for active engagement of students in larger classes while maintaining positive student learning outcomes, which TBL provides (Parmelee, Michaelsen, Cook, & Hudes, 2012). Another reason is that many higher education accrediting agencies are requiring that schools document active student learning, and using the established TBL pedagogical approach makes this possible (Liaison Committee on Medical Education, 2011).
Additionally, students must be equipped with the real-world skills needed to work on interprofessional teams, which TBL reinforces (Interprofessional Education Collaborative Expert Panel, 2011). Lastly, faculty are increasingly frustrated with poor attendance at lectures, and TBL requires students to actively participate in their education (Parmelee et al.).
What Is Team-Based Learning?
Team-based learning is a pedagogical approach designed to scaffold learning through high-performing team interactions and to provide opportunities for significant learning by engaging the teams (Michaelsen, Knight, & Fink, 2002). It gives students opportunities to learn and apply course materials, develop skills needed for working on teams, and foster appreciation for the team approach to solving intellectual tasks (Millis & Cottell, 1998). Team building follows a trajectory of collaboration that includes four stages: forming, storming, norming, and performing (Michaelsen, 2008). These stages allow the team to become an effective unit. The TBL approach, designed to encourage team collaboration and impact team outcomes, shifts the passivity of learning to a more active and constructive process (Grady, 2011). Promoting active participation in learning is valued because it has far-reaching potential and provides students with skills that support life-long learning (Li, An, & Li, 2010). The TBL design was developed for use in on-site classrooms, but the concepts can be adapted to meet the demands of distance education. Team-based learning is an established and structured collaborative team approach to education. There is significant evidence to support the benefits of collaborative learning in all disciplines (Parmalee, 2010). Collaborative problem solving is necessary to ensure 21st century higher education graduates' success. Use of TBL provides collaborative learning opportunities by way of individual and team discovery (Cheng, Liou, Tsai, & Chang, 2014).
In all three stages of TBL, students actively contribute and participate in individual and team learning. Research shows that this type of learning consistently improves student performance (Beatty, Kelley, Metzger, Bellebaum, & McAuley, 2009; Cheng, Liou, Tsai, & Chang, 2014; Chung, Rhee, & Baik, 2009; Grady, 2011; Marz, Plass, & Weiner, 2008). Data demonstrate that weaker students have the greatest overall performance improvement with TBL (Koles, Stolfi, Borges, Nelson, & Parmelee, 2010). The literature supports TBL use; in courses using the TBL strategy, overall student performance is significantly improved (Carmichael, 2009; Cheng, Liou, Tsai, & Chang, 2014; Letassy, Fugate, Medina, Stroup, & Britton, 2008; Persky & Pollack, 2011; Pogge, 2013; Tan et al., 2011; Thomas & Bowen, 2011; Zgheib, Simaan, & Sabra, 2010; Zingone et al., 2010). The TBL pedagogy offers all learners the potential for improved outcomes (Fujikura et al., 2013).
Team-Based Learning Design
Teams
Teams are central to the design of the TBL experience. In TBL design, teams are distinctly different from groups. While both teams and groups have more than two members who interact on a common activity, teams are characterized by high levels of individual commitment to, and trust among, their members (Michaelsen, 1999; Sweet & Michaelsen, 2012). The teams in TBL are purposely formed and managed by the instructor. According to TBL theorists, team membership should be diverse, cohesive, and permanent for the term of the course (Michaelsen & Black, 1994; Michaelsen, Black, & Fink, 1996). To facilitate diversification of the team, assignments should be made considering student ethnicity, gender, and academic abilities (Michaelsen, Fink, & Knight, 1997).
Diversification should promote cohesiveness, as it reduces the chance of subgroups forming based on background factors. Instructors should also avoid including members with previously established relationships on the same team (Michaelsen, 1999). Instructor organization of teams is an essential first step in the implementation of TBL design. The development of the team is essential to the success of TBL. Distance education team development poses challenges, but it may be fostered by applying some essential concepts. Evaluating student accountability is a key strategy to encourage team-building. Michaelsen (2008) recommends encouraging team-building within teams by having students conduct peer assessments of teammates for predetermined percentages of the course grade. The peer-evaluation process reinforces this accountability and should be used to provide constructive feedback. Team-based learning theorists also recommend public posting of team learning assurance assessment scores to encourage competition between teams (Michaelsen, 2008). The underlying premise of TBL design is that no member of a team outperforms the team as a whole (Michaelsen & Black, 1999). The TBL strategy targets learners at different levels of knowledge and understanding (Fujikura et al., 2013). The TBL design is a multiphase approach that requires students to be active participants in learning (Michaelsen, 1999). Student participation in the TBL teaching and learning strategy is encouraged by a grading policy that allocates significant, student-selected percentages to learning assurance assessments, peer evaluations, and team application activities (Michaelsen, 1999).
Stages of TBL
There are three stages that make up the TBL instructional activity sequence (Figure 1).
Figure 1. TBL instructional activity sequence
The stages include (1) individual preparation, (2) readiness assurance assessments, and (3) course concept team application activities (Michaelsen, Sweet, & Parmalee, 2009). When using TBL design, it is recommended that the course be divided into approximately five to seven major instructional units over the course of a semester (Michaelsen, 1993). The three stages of TBL are then repeated for each unit. Stage one of the TBL sequence begins with faculty developing and assigning individual preparation to students. This is an individual student activity, and completion is required prior to participating in the instructional unit (Michaelsen, Sweet, & Parmalee, 2009). Examples of individual preparation include completion of required readings, viewing of podcasts, reviewing presentations prior to the instructional unit, or gathering data/evidence on a given topic. Students are accountable for completing this stage and are evaluated on their preparation in stage two. Stage two in the TBL sequence is readiness assurance assessments. These are performed by the students individually and again as a team. According to Michaelsen, Parmalee, McMahon, and Levine (2008), the instructor diagnoses student understanding and provides immediate and frequent feedback during stage two. Diagnosing student understanding is done using the results of the readiness assurance assessment (Sweet & Michaelsen, 2012). Since TBL was developed for onsite instruction, readiness assurance assessments are recommended to be provided in class. The readiness assurance assessment process was adapted for computer-based learning. Individual readiness assurance assessments were provided asynchronously, and students were given a schedule for submission. The team readiness assurance assessment is the same examination provided to each individual student, but it is now provided to the teams. The team readiness assurance assessment for distance education requires team collaboration. The team must complete the team readiness assurance assessment synchronously, so the team must establish a time to work collaboratively within the established schedule.
The teams are provided immediate feedback through the distance education learning management system. Team readiness assurance assessment grades are determined by the number of tries the team requires to choose the correct response. Using the learning management system, these attempts can be tracked. If the team chooses the correct answer on the first attempt, the team earns full credit. If the team takes several attempts to correctly answer the question, the grade earned reflects the number of attempts. The readiness assurance assessments depend on each student's participation, individually and as a team member, in stage one, individual preparation. The assurance assessment process promotes individual accountability for the materials required for preparation prior to class. Results of the readiness assurance assessments are then used by the faculty to focus the review of course content for the major instructional unit (Michaelsen, Parmalee, McMahon, & Levine, 2008). This is possible with computer-based learning by adding a rationale with specific resources for each readiness assurance assessment topic/question. According to Michaelsen, Sweet, and Parmalee (2009), the entire second stage should take approximately 45-75 minutes. Stage three of the TBL sequence involves application of course concepts through in-class team application activities. The team application activities should account for most of the time allotted for the course, upwards of one to four hours per major instructional unit (Sweet & Michaelsen, 2012). The team application activities require the team to interact and collaborate effectively (Michaelsen, Watson, & Black, 1989). Team application activities require collaborative problem solving within the team, perhaps even more so when the curriculum is delivered computer-based. Ideal team application activities of stage three are designed to assess student teams' mastery of subject matter (Michaelsen & Sweet, 2008). Team application activities foster accountability within and between teams (Sweet & Michaelsen, 2012).
Strong team application activities adhere to the "4 S's," which require that each assignment be significant to the student, be the same assignment for all teams, require a specific choice to be made, and have all teams report simultaneously (Michaelsen, 2008). A significant problem should be an authentic representation of a situation the students would encounter in the professional realm. Answers to the significant problem should be complex and require discussion within the team (Parmelee, Michaelsen, Cook, & Hudes, 2012). Every team should work on the same problem at the same time; ideally, the different teams will each provide alternative answers to the same assignment. The assignment should allow the teams to provide a specific choice, for easy distribution to all teams, rather than lengthy documentation (Parmelee et al.). All teams should distribute their specific choices for the same significant problem at the same time. This poses somewhat of a challenge for distance education and requires creative use of assignments.
Appeals Process
The TBL design provides an opportunity for students to appeal the team readiness assurance assessment if they choose to challenge an answer. The appeals process requires the team to provide a rewritten question or a rationale with references supporting the appeal. Only teams that take the steps to write the appeal should be eligible for credit if the appeal is supported (Parmelee, Michaelsen, Cook, & Hudes, 2012).
Peer Evaluation/Grading Percentages
Student-to-student peer evaluation is also part of the TBL process and encourages accountability. It is recommended that the students, not the faculty, establish the percentages of the final course grade for the individual and team readiness assurance assessments, team application activities, and peer evaluations (Michaelsen, 1993). This can be done by conducting an anonymous poll with several options for the students to choose percentages. Peer evaluation ensures student accountability (Sweet & Michaelsen, 2012). This type of evaluation is easily implemented in computer-based learning. Michaelsen (2008) offers standardized evaluation forms that can easily be adapted for distance education, as individual accountability does not change based on the curriculum delivery method. It is recommended that peer evaluation be done anonymously; however, students should be encouraged to speak directly to their team members when providing feedback. There are several models for conducting peer evaluation, outlined later in the "Getting Started" section.
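Because the component percentages in this grading policy are selected by the students rather than fixed in advance, the final grade calculation has to treat those weights as an input. The short sketch below illustrates one way an instructor might script that calculation; the component names, weights, and scores are hypothetical illustrations, not values prescribed by the TBL literature.

```python
def final_grade(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine TBL component scores (each on a 0-100 scale) using the
    student-selected weights, which must sum to 1.0."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(scores[component] * weights[component] for component in weights)

# Hypothetical weights chosen by class poll, and one student's component scores.
weights = {"individual_assurance": 0.25, "team_assurance": 0.25,
           "team_application": 0.35, "peer_evaluation": 0.15}
scores = {"individual_assurance": 82.0, "team_assurance": 95.0,
          "team_application": 88.0, "peer_evaluation": 90.0}

print(f"Final course grade: {final_grade(scores, weights):.1f}")
```

Whatever split the class votes for, only the weights change; the calculation itself stays the same.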
Getting Started with Team-Based Learning
An initial and integral part of developing a TBL computer-based learning course is team formation. Four principles should be applied to the team formation process: students are not permitted to self-select; determinants of a successful team member are identified; representative diversity, including the success determinants, is ensured in each team; and the team assignment process is made transparent to all students (Parmelee, Michaelsen, Cook, & Hudes, 2012). Students need to be oriented to the process of TBL. Most higher education students are not accustomed to preparing prior to class, as is required with TBL, so this must be explained to them prior to their participation. Orientation can be accomplished using a module that is a sample session. As mentioned previously, peer evaluation is an integral part of the TBL process. There are several recommended methods for peer evaluation. Peer evaluations can be quantitative or qualitative. There are guidelines and tools developed by experts for the evaluation of team members that can be utilized.
A percentage of the TBL course grade is devoted to peer evaluation, and this may be assigned by a team member to a team member or by the faculty. One method is to allow the team members to assign grades to their teammates based on their interpretation of peer contribution. Another method is for faculty to grade the team member who is submitting an evaluation of a teammate on the thoroughness and objectivity of the evaluation. Overall course grade percentages are also integral to the TBL design. Students devote a significant amount of time to preparation, and this must be reflected in the overall course grade. Additionally, each component of the TBL process should carry some weight, as each is essential: the individual and team assurance assessments, the team application activities, and peer evaluation. The grading percentage breakdown can be predetermined by the educator and administration, or it can be determined by the students. Educators must develop TBL modules using backward design (Wiggins & McTighe, 1998). The process of backward design includes three steps in the following order: establish learning goals, develop feedback and assessment activities, and create teaching and learning activities. First, the educator must write clear, specific, and meaningful learning goals using Bloom's taxonomy of expertise and mastery. Once these goals are established, the educator needs to create or find an authentic interprofessional scenario for the team application activity, applying the 4 S's. Finally, the educator prepares the readiness assurance assessment. TBL requires faculty development. A formal program for faculty TBL development and support throughout the process should be established prior to implementation. Workshops should be provided to faculty and administration, either by supporting attendance at local and national TBL conferences or by inviting consultants to campus.
Creating a TBL community on campus, possibly an interprofessional faculty community, is important. Student orientation, as mentioned earlier, is essential. Collection and review of constructive student feedback is also important to the ongoing evaluation of TBL. Lastly, physical space and environment are important to consider when implementing TBL. The TBL design has been implemented in large lecture halls with fixed seating, and some universities have developed classrooms specifically to accommodate TBL teams. Ideally, TBL can be implemented in any setting if the educator is able to circulate around the space and all the students are able to speak and be heard by all.
Why Team-Based Learning?
There is significant data supporting the use of collaborative problem solving in higher education. The TBL process is versatile: it can be adapted for large or small classes, it can be used for entire courses or to cover certain topics blended with lectures, and it can be applied to onsite and computer-based learning. The team formation of TBL strengthens team members' abilities to work in teams, which is especially essential in the health professions. The advance preparation required with TBL assists students in developing the skills to guide their own learning. Team-based learning is a structured, stepwise approach that allows for collaborative problem solving by students in a 21st century technology-rich environment. The use of TBL for computer-based learning is a unique opportunity for educators to develop real-world skills in graduates. There is increasing evidence to support the use of TBL and the academic effectiveness of this collaborative teaching and learning strategy.
Case Study 2: The Effect of a Team-Based Learning Approach on Students' Experiences in an Undergraduate Online Statistics Course
The aim of this case study was to examine the effects of a team-based learning approach on students' experiences in an undergraduate online statistics course, to inform further curriculum development with team-based learning design.
Methods
This was a retrospective observational qualitative design. A grounded theory approach to the analysis of the qualitative data was used to identify themes.
Sample and Setting
The sample (n=38) consisted of the students in an online undergraduate statistics course. All undergraduate nursing students are required to take a statistics course. Students self-selected into online or traditional face-to-face delivery of instruction. Students in the online course were comparable to those in the face-to-face courses in demographic composition. The course was delivered completely online over a fifteen-week semester.
Intervention
A systematic approach was utilized to adapt the undergraduate nursing statistics course to a distance education TBL design course. An expert panel was convened that included faculty with nursing education experience, educators with statistical degrees, and TBL design consultants. The process applied to the development and implementation of TBL pedagogy in the distance education undergraduate nursing statistics course for Registered Nurse to baccalaureate students is outlined in the following section.
Before adapting the traditional distance education undergraduate nursing statistics course to a TBL-designed computer-based learning course, the expert panel asked the course faculty to divide the course into five to seven distinct modules. Each module had a theme and specific content outlined. These modules became the foundation for the course design. Then the expert panel, with the assistance of the course faculty and administration, determined the success determinants for this undergraduate statistics course. The success determinants that were agreed upon included previous formal statistics courses and degrees in statistics or related fields. Teams were also composed of males and females to ensure heterogeneity of students in teams. A formal orientation module was designed for this specific course and for use in all future courses adapted to include TBL pedagogy. The module required all students to log in to the course for a mandatory orientation; the time of the mandatory online orientation was provided prior to registration for the undergraduate nursing statistics course. Once students were officially registered, they received an email with an article attached and instructions to read the article prior to the orientation. The article used is available at: http://www.teambasedlearning.org/Resources/Documents/TBL+Handout+Aug+16print+ready+no+branding.pdf The orientation module followed the TBL process. The module was delivered using online video conferencing software. Students were immediately asked to complete an individual readiness assurance assessment in the learning management system; this was a timed, 10-question multiple-choice quiz based on the assigned article. All students were then informed of their groups, which were randomly selected for this activity. They were provided 20 minutes to complete the same readiness assurance assessment, now as a team. They could see their scores on the team readiness assurance assessment with immediate feedback, as the system provided a correct or incorrect indicator with each submission.
They were provided the opportunity to continue until they chose the correct answer. This was followed by a team application activity. The team application activity was an assignment that required the students to discuss the merits of TBL and design a grading percentage structure using guidelines, provided to them, that gave ranges for each of the graded TBL components. Using all of the grading percentage structures developed by the teams during the orientation team application activity, an anonymous student poll was distributed. The students were then permitted to select a grading percentage scale from the submissions, and the grading percentages were determined by majority vote. The process of adapting the asynchronous undergraduate statistics course curriculum began with backward design. The course had previously been offered as an 8-module course over a 15-week semester to undergraduate nursing students earning a baccalaureate degree in nursing. The expert panel reviewed the former course objectives and adapted them to include language that incorporated the collaborative problem-solving focus of the new curriculum. The second step in backward design is the development of team application activities incorporating the 4 S's. This was accomplished through the efforts of the expert panel, using real-world scenarios and creating activities that had significance to the nursing profession, with specific choices to conclude. A calendar with dates allowing students a small time-frame to meet virtually was drafted, taking into consideration each student's hectic schedule. This time-frame and an online forum shared by all students registered for the course allowed for simultaneous submission. The third and final step in developing a TBL course applying backward design is composing readiness assurance assessments. The course faculty, most familiar with the content and requirements of the undergraduate nursing statistics course, were charged with formulating 10 multiple-choice questions for each module.
These questions were then loaded into the learning management system for students to take, first as individuals and then as virtual teams.
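The orientation poll described above comes down to a simple majority-vote tally over the grading structures the teams proposed. A minimal sketch of that tally is shown below; the option labels and votes are hypothetical, and any polling tool that exports individual responses could feed it.

```python
from collections import Counter

# Hypothetical anonymous poll responses: each student selects one of the
# grading percentage structures proposed by the teams during orientation.
votes = ["Option B", "Option A", "Option B", "Option C", "Option B",
         "Option A", "Option B", "Option C", "Option A", "Option B"]

tally = Counter(votes)
winner, count = tally.most_common(1)[0]
print(f"Grading structure selected by majority vote: {winner} ({count}/{len(votes)} votes)")
```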
Materials
Distance Education Course Adaptation Checklist
1. Review course objectives.
2. Determine measurable course outcomes.
3. Divide course content into five to seven major instructional units.
4. Establish semester schedule with dates for all TBL modules and activities (see the sketch following this checklist).
5. Develop all individual preparation assignments.
6. Create learning assurance assessments using course objectives and measurable outcomes.
7. Adapt team application activities to meet measurable outcomes.
8. Apply standardized TBL peer evaluation tools.
9. Implement evaluation protocol for student learning.
10. Use evaluation data to modify as needed.
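Steps 3 through 5 of this checklist amount to laying out the semester as a set of major instructional units, each with its own preparation materials and assessment dates. One way to capture that plan in a structured, machine-readable form is sketched below; the dates and preparation items are placeholders, and only the first two of the seven unit titles from the course are shown.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InstructionalUnit:
    """One TBL major instructional unit (checklist steps 3-5)."""
    title: str
    individual_prep: list[str]                  # readings, podcasts, presentations
    individual_assessment_due: date             # asynchronous individual assurance assessment
    team_assessment_window: tuple[date, date]   # synchronous team assessment window
    team_application_due: date

# Placeholder schedule for the first two of the seven instructional units.
units = [
    InstructionalUnit("Descriptive statistics",
                      ["Chapter reading", "Podcast 1"],
                      date(2015, 9, 14), (date(2015, 9, 15), date(2015, 9, 18)),
                      date(2015, 9, 25)),
    InstructionalUnit("Inferential statistics",
                      ["Chapter reading", "Podcast 2"],
                      date(2015, 10, 5), (date(2015, 10, 6), date(2015, 10, 9)),
                      date(2015, 10, 16)),
]

for unit in units:
    print(f"{unit.title}: team application due {unit.team_application_due}")
```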
Procedure
Expert Panel Recommendations for Distance Education Course Sequencing
The expert panel for TBL design implementation in distance education established recommendations for replication. These recommendations are presented as sequencing for course events, with specifics on how these recommendations were operationalized for the nursing statistics course (Table 1). The nursing statistics distance education course design followed the checklist and recommendations for sequencing.
Table 1. Recommendations for distance education course sequencing

Recommendation 1: At the beginning of the course, all students are invited to participate in an anonymous online poll to determine the percentage of the course final grade allotted to each graded TBL segment (learning assurance assessments, peer evaluations, and team application activities).
Implementation: Link to an anonymous online poll with three options for students to choose from, determining the percentage of the course final grade allotted to learning assurance assessments, peer evaluations, and team application activities. Option one: learning assurance assessments - 2.5 points individual / 2.5 points team; peer evaluations - 2.5 points; team application activities - 2.5 points. Option two: learning assurance assessments - 1 point individual / 4 points team; peer evaluations - 2.5 points; team application activities - 2.5 points. Option three: learning assurance assessments - 4 points individual / 1 point team; peer evaluations - 1 point; team application activities - 4 points.

Recommendation 2: The course content is divided into five to seven major instructional units.
Implementation: The course content was divided into the following seven major instructional units: descriptive statistics, inferential statistics, hypothesis testing, correlational techniques, research methods, statistics in epidemiology, and statistics in medical decision making.

Recommendation 3: Individual preparation assignments should incorporate multiple delivery modalities to engage all learner preferences.
Implementation: Individual preparation assignments include required text readings, podcasts, and PowerPoint presentations with animation.

Recommendation 4: Asynchronous individual learning assurance assessments are provided with schedules for submission.
Implementation: Individual learning assurance assessments are provided with a schedule for asynchronous submission.

Recommendation 5: Synchronous team learning assurance assessments, the same examination provided to each individual student, are provided to the teams based on the schedule the team determined.
Implementation: Team learning assurance assessments are provided for synchronous, team-determined submission.

Recommendation 6: The teams are provided immediate feedback on team learning assurance assessments through the distance education learning management system. Immediate and frequent instructor feedback is provided in the form of rationale and resources, immediately following final submission of each question on the team learning assurance assessment.
Implementation: The teams are provided immediate feedback on team learning assurance assessments through the distance education learning management system. The immediate and frequent instructor feedback is provided in the form of rationale and resources for the topic/content on the assessment, immediately following final submission of each question.

Recommendation 7: Team learning assurance assessments are graded in the learning management system based on the number of attempts required for a correct response. The team grade incrementally decreases with every wrong choice as indicated by the system.
Implementation: Team learning assurance assessments are set up to be graded in the learning management system based on the number of attempts required for a correct response; teams earn 100% for a correct response on the first attempt, 75% on the second attempt, 50% on the third attempt, and 0% on the fourth attempt (see the scoring sketch following this table).

Recommendation 8: Team application activities are course worksheets required for submission.
Implementation: Seven team application activities, or course worksheets, are required for submission.

Recommendation 9: Team participation should be enforced by having each student's contribution acknowledged.
Implementation: Team participation is enforced by having each student use a different color ink.

Recommendation 10: The peer evaluation process requires each team member to comment on the participation of peers, using a standardized TBL peer evaluation form or a newly developed tool.
Implementation: The peer evaluation process had students use a standardized TBL evaluation form available from Michaelsen & Sweet (2008).
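The attempt-based rule in the table above (100%, 75%, 50%, then 0%) is straightforward to express as a small function if an instructor wants to verify the learning management system's grading or reproduce it outside the LMS. The following sketch implements only that rule and is not tied to any particular LMS API; the example attempt counts are hypothetical.

```python
def team_assurance_score(attempts_to_correct: int) -> float:
    """Score one team learning assurance question from the number of attempts
    the team needed to reach the correct answer (per Table 1, Recommendation 7)."""
    scale = {1: 100.0, 2: 75.0, 3: 50.0}
    return scale.get(attempts_to_correct, 0.0)  # four or more attempts earns 0%

# Example: attempts per question for one team on a ten-question assessment.
attempts = [1, 1, 2, 1, 3, 1, 2, 1, 4, 1]
question_scores = [team_assurance_score(a) for a in attempts]
print(f"Team learning assurance grade: {sum(question_scores) / len(question_scores):.1f}%")
```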
The course was modified over a semester prior to implementation. Students were aware they were enrolled in a TBL-designed course, as this was described in the course description. Faculty members teaching the course were familiar with the TBL design and had experience teaching with it on-site. The course followed the sequencing, and no unforeseen problems were encountered.
Results and Discussion
The majority of students (74%) completed the course evaluation and the narrative responses, which asked them to describe their overall impression of the course, the course structure, and team-based learning. The themes that emerged from the qualitative narratives were: overall satisfaction with team-based learning; enjoyment of the interaction with peers; and high perceptions of their own ability in the course. As is common with any change in curriculum delivery design, there are lessons to be learned. Student buy-in is necessary; therefore, it is important to provide students with (1) the background of the TBL design, (2) the benefits to them as learners, and (3) the expectations required in a TBL course. The peer evaluation process caused some student distress, as students verbalized that they did not feel their evaluations should be reflected in a classmate's final course grade. A solution would be to allow students to develop the peer evaluation tool as a group, with faculty guidance and suggestions. As is consistent in the literature, faculty workload increases with TBL design implementation. Faculty resources and support should be provided to assist with the increased workload associated with this change.
Background for Case Study 3: An Interprofessional Approach to a Nursing and Dental Graduate Student Hybrid Course on Team Science Delivery of Community Care
The delivery of care to at-risk and traditionally underserved communities continues to be an issue of unmet access needs for many in the New York area. This care is not limited to medical care; it also includes nursing, dental, and behavioral health care. Of particular concern is the lack of access of at-risk populations to screening and monitoring of health issues to prevent chronic disease and hospitalization. The populations we serve comprise multiple at-risk groups: undocumented immigrants and migrant workers, working poor families, and poverty-level elderly (Gaines & Kamer, 1994; U.S. Census Bureau, 2011).
Unmet oral and primary health care needs are more prevalent in individuals whose access to health care services is compromised by a shortage of qualified health providers, a lack of resources, and/or a lack of access to multiple healthcare specialties (Allukian, 2008). Additionally, the provision of quality health care requires a complex response from a team of health professionals. These teams are often called interprofessional healthcare delivery teams. Although interdisciplinarity has become a favored model of care delivery, the assumption that interdisciplinary work is intuitive and can be performed without training is short-sighted (Larson, Cohen, Gebbie, Clock, & Saiman, 2011). Interprofessional education requires education in both the workings of the interprofessional care delivery model and the competency cross-training of care delivery to be successful (Charles & Alexander, 2014). Interprofessional education has been instituted in academic medical centers across the United States over the past ten years and is increasing in popularity in response to the health care reform of the past five years (Bowser, Sivahop, & Glicken, 2013). The success of these educational initiatives has been evaluated with measures of clinical competencies, interdisciplinary respect, and valuing of collegiality and practice expertise (Delunas & Rouse, 2014; Larson et al., 2011). Interprofessional education has been demonstrated to be more effective in clinical competency development and practice expertise with the use of simulated patient experiences and computer-assisted programs. Authentic learning has been demonstrated, and participation in activities increased, with the addition of an online self-directed computer-based curriculum to enhance the in-classroom meetings (Clouder, 2008). This mode of curriculum delivery is often called a hybrid model. Interprofessional health care teams educated within a hybrid interprofessional education model will develop cross-disciplinary competencies and mutual respect that support better health outcomes for at-risk patient populations.
Case Study 3: The Effect of a Hybrid Interprofessional Course on Advanced Practice Nurses' and Dental Students' Competencies and Confidence with Health Screenings and Interprofessional Education
Aim: Educators from the Stony Brook University School of Nursing (SON) and School of Dental Medicine (SDM) implemented an interprofessional education (IPE) model that expanded opportunities to engage and educate advanced practice registered nursing (APRN) and dental students to work in interprofessional teams, improve oral-systemic health outcomes, and meet professional education standards.
Methods
This was a prospective observational mixed methods study. A phenomenological approach to the analysis of the lived experience of graduate nursing and dental students in a hybrid interprofessional education course was used to interpret the qualitative findings. Simple descriptive analysis was used to examine the quantitative data.
Sample and Setting
The sample (n=44) consisted of APRN and dental students enrolled in their first year of studies in the nursing and dental school professional programs. Participants were representative of the populations sampled, as enrollment was not optional; the course was required in both programs. The participants were mostly young adult women (68%). The average age was 35 years, with a majority of white participants (80%). The nursing and dental students differed in demographics, with the nursing students comprising more women, being older on average than the dental students, and including more white participants.
Intervention
The model was designed to enhance confidence and credible familiarity with established screening tools for oral-systemic disease. Learning outcomes for the APRN students included becoming conversant and skilled in performing oral cancer screening exams, salivary analysis, denture prostheses evaluation, and caries/periodontal risk assessment. Learning outcomes for the dental students included clinical fluency in the screening and monitoring of hypertension, type 2 diabetes mellitus, and nutritional/hydration status, as well as implementation of a smoking cessation protocol. By targeting the vulnerable elderly community, this initiative was designed to strengthen the utilization of medical and dental screening tools and early referral while expanding the healthcare community's engagement in oral-systemic health issues. Prior to implementation of the IPE model, APRN and dental students completed assessments regarding their perceived readiness and ability to participate in interprofessional team-based care, and had the opportunity to state their expectations about IPE and provide input regarding IPE activities. To obtain foundational knowledge related to oral health and health promotion, APRN and dental students completed online self-directed learning modules that were embedded within each student's program curriculum. Oral health content was delivered through the Smiles for Life program, a free, online, comprehensive oral health curriculum designed specifically for primary care clinicians (Clark et al., 2010). Students completed the following modules: The Relationship of Oral and Systemic Health, Adult Oral Health Promotion and Disease Prevention, The Oral Exam, and Geriatric Oral Health. Online modules related to smoking cessation and motivational interviewing were completed via the Tobacco Recovery Resource Exchange (Professional Development Program et al., n.d.).
Presentations focusing on health promotion and prevention were developed specifically for this program and included instruction on management of diabetic and hypertensive patients, medication reconciliation, and the use of the Fagerstrom Nicotine Dependence Test (Heatherton et al., 1991) and the Mini Nutritional Assessment (Vellas et al., 2006) in clinical practice. Following the self-study modules, APRN and dental students were brought together to facilitate the development of competencies in interprofessional collaborative practice, including interprofessional teamwork; interprofessional communication; roles and responsibilities; and values and ethics for interprofessional collaborative practice (IPEC, 2011). The five-hour session began with interactive strategies to assign students to interprofessional practice teams, followed by team-based learning exercises that included individual and group assessments to evaluate understanding of the online self-study materials and discussion of the application of interprofessional principles in clinical practice. Students then engaged in a series of simulated exercises, including faculty-facilitated skills stations and encounters with standardized patients. During these exercises, students applied their knowledge and skills as they engaged in team-based activities related to oral health (e.g., caries and periodontal risk assessment, salivary analysis, head and neck exams), general health (e.g., mini nutritional assessment, tobacco and alcohol assessment, management of diabetes and hypertension), and attainment of interprofessional competencies. Following these simulated experiences, students once again completed self-assessments regarding their perceived readiness and ability to participate in interprofessional team-based care. Utilizing the knowledge and skills gained during earlier components of the model, the APRN and dental students were again placed in interprofessional practice teams and engaged in collaborative practice at a state veterans home (SVH).
The SDM had a well-established clinical rotation program for dental students at the SVH, and the SON established a similar clinical rotation for APRN students to create opportunities for interprofessional collaboration. During their rotations, APRN students had opportunities to join the dental team and participate in the provision of oral health services, and dental students had opportunities to participate in medical rounds and interprofessional care planning meetings for SVH residents. Students were evaluated by their respective program faculty and assessed on their ability to work collaboratively as part of an interprofessional team. The team-based model culminated in an interprofessional oral cancer screening and health promotion community health fair. Student teams provided health care services and health promotion education to veterans, caregivers, and their families living in the community served by the SVH.
Results and Discussion
To evaluate the success of this program, at the conclusion of their experience APRN and dental students evaluated their ability to learn about each other, from each other, and with each other, and provided feedback about their ability to work effectively in interprofessional care teams. Several themes emerged:
• Recognition that both APRNs and dentists are primary care providers;
• By working together, APRN and dental students can enhance their knowledge, strategies, and approaches to patient care; and
• Interprofessional teams ensure care of the whole patient, which leads to improved patient outcomes.
Evaluation data related to specific aspects of the model are helpful to faculty for revising and refining the model. Student evaluation data are especially useful in assisting faculty to plan professional development activities related to teamwork and collaborative practice and in the continuous quality improvement of courses and teaching methods. Evaluation of the students by faculty of both programs described competence in all areas of clinical and IPE performance. Themes that emerged from the evaluation of the care provided to the community and to the veterans home residents and staff were an increase in comfort level with the provision of care by teams over independent providers and an increase in self-care knowledge. Data regarding the quality of the clinical practice site and experience can inform the development of additional clinical practice environments that facilitate team-based competencies and a culture of collaborative practice. The resources required to provide this level of IPE and interprofessional health care delivery are significant when compared to the traditional model of curriculum delivery. The results of this pilot study support the development of further research on the effect of IPE and interprofessional health care teams on student and community health outcomes.
CONCLUSION
Health care reform continues to be a subject of national debate and concern. As the political agenda and representation change over the next few years, constraints around health care delivery models are also expected to change. However, the needs of patient populations and of the nurses and interprofessional teams that deliver their care will not change. Access to quality care for health maintenance and treatment by well-prepared practitioners will remain a constant. Incorporating advances in technology and computer-based instruction provides nursing educators with the tools needed to meet the increasing demand to educate more nurses while maintaining quality. The model for nursing educational interventions will continue to change and adapt to the world around it in order to meet its populations' needs. Moving forward, the next steps in our research plan include the development and incorporation of psychometrically sound tools to measure the effectiveness of computer-based models of curriculum delivery within our nursing student populations. Though many tools exist for measuring the efficacy of nursing students with computer-based educational interventions, measures of the effectiveness of these interventions on learning outcomes are not as convincing. There are many diverse subpopulations of nursing students in graduate and undergraduate courses of study. These populations each need to be assessed for the effectiveness of curriculum delivery interventions on a pilot scale before any large-scale deployment. The three approaches outlined in the case studies in this chapter are all scientifically and theoretically supported approaches to teaching in other student populations. The daunting task will be to assure validity and reliability in the effectiveness of these interventions within and between our student subpopulations. We hypothesize that this will require adjustment of interventions to be effective for each student phenotype identified. Though this will require multiple iterations of testing, tool development, and redesign of interventions, we expect the end product to be a more robust intervention and increased effectiveness of curriculum delivery for our students.
ACKNOWLEDGMENT
The authors would like to thank Ms. Sarah A. Eckardt, MS, and Ms. Amanda Tischler for their continuing assistance and contributions to this chapter.
REFERENCES

Allukian, M. Jr. (2008). The Neglected Epidemic and the Surgeon General's Report: A Call to Action for Better Oral Health. American Journal of Public Health, 98(Suppl 1), S82-S85. doi:10.2105/AJPH.98.Supplement_1.S82 PMID:18687628

American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. Washington, DC: American Association of Colleges of Nursing.

American Association of Colleges of Nursing. (2010). The Research-Focused Doctoral Program in Nursing: Pathways to Excellence. Washington, DC: American Association of Colleges of Nursing.

American Association of Colleges of Nursing. (2011). The Essentials of Master's Education for Advanced Practice Nursing. Retrieved August 15, 2015, from http://www.aacn.nche.edu/educationresources/MastersEssentials11.pdf

American Association of Colleges of Nursing. (2012). New AACN Data Show an Enrollment Surge in Baccalaureate and Graduate Programs amid Calls for More Highly Educated Nurses. Retrieved August 15, 2015, from http://www.aacn.nche.edu/news/articles/2012/enrollment-data

Banister, G., Bowen-Brady, H. M., & Winfrey, M. E. (2014). Using Career Nurse Mentors to Support Minority Nursing Students and Facilitate their Transition to Practice. Journal of Professional Nursing, 30(4), 317-325. doi:10.1016/j.profnurs.2013.11.001 PMID:25150417

Beatty, S. J., Kelley, K. A., Metzger, A. H., Bellebaum, K. L., & McAuley, J. W. (2009). Team based learning in therapeutics workshop sessions. American Journal of Pharmaceutical Education, 73(6), 100. doi:10.5688/aj7306100 PMID:19885069
Blum, C. A. (2014). Evaluating Preceptor Perception of Support Using Educational Podcasts. International Journal of Nursing Education Scholarship, 11(1), 1-8. doi:10.1515/ijnes-2013-0037 PMID:24615492

Bowser, J., Sivahop, J., & Glicken, A. (2013). Advancing Oral Health in Physician Assistant Education: Evaluation of an Innovative Interprofessional Oral Health Curriculum. Journal of Physician Assistant Education, 24(3), 27-30. doi:10.1097/01367895-201324030-00005 PMID:24261168

Carmichael, J. (2009). Team based learning enhances performance in introductory biology. Journal of College Science Teaching, 38(4), 54-61.

Charles, G., & Alexander, C. (2014). An Introduction to Interprofessional Concepts in Social and Health Care Settings. Relational Child & Youth Care Practice, 27(3), 51-55.

Cheng, C., Liou, S., Tsai, H., & Chang, C. (2014). The effects of team-based learning on learning behaviors in the maternal-child nursing course. Nurse Education Today, 34(1), 25-30. doi:10.1016/j.nedt.2013.03.013 PMID:23618848

Chow, M., Herold, D. K., Choo, T., & Chan, K. (2012). Extending the technology acceptance model to explore the intention to use Second Life for enhancing healthcare education. Computers & Education, 59(4), 1136-1144. doi:10.1016/j.compedu.2012.05.011

Chung, E., Rhee, J., Baik, Y., & A, O.-S. (2009). The effect of team-based learning in medical ethics education. Medical Teacher, 31(11), 1013-1017. doi:10.3109/01421590802590553 PMID:19909042
Equipping Advanced Practice Nurses with Real-World Skills
Clark, M. B., Douglass, A. B., Maier, R., Deutchman, M., Douglass, J. M., Gonsalves, W., … Bowser, J. (2010). Smiles for Life: A National Oral Health Curriculum. 3rd Edition. Society of Teachers of Family Medicine. Retrieved August 15, 2015, from http://www.smilesforlifeoralhealth. com Clouder, D. L. (2008). Technology-enhanced learning: Conquering barriers to interprofessional education. The Clinical Teacher, 5(4), 198–202. doi:10.1111/j.1743-498X.2008.00243.x Costa, S., Cuzzocrea, F., & Nuzacci, A. (2014). Uses of the Internet in Educative Informal Contexts. Implication for Formal Education. Comunicar, 22(43), 163–171. doi:10.3916/C43-2014-16 Delunas, L. R., & Rouse, S. (2014). Nursing and Medical Student Attitudes about Communication and Collaboration Before and After an Interprofessional Education Experience. Nursing Education Perspectives, 35(2), 100–105. doi:10.5480/11716.1 PMID:24783725 Forbes, T. H. III. (2014). Making the Case for the Nurse as the Leader of Care Coordination. Nursing Forum, 49(3), 167–170. doi:10.1111/nuf.12064 PMID:24393064 Fujikura, T., Takeshita, T., Homma, H., Adachi, K., Miyake, K., Kudo, M., & Hirakawa, K. et al. (2013). Team-based learning using an audience response system: A possible new strategy for interactive medical education. Journal of Nippon Medical School, 80(1), 63–69. doi:10.1272/ jnms.80.63 PMID:23470808 Gaines, L., & Kamer, P. M. (1994). The incidence of economic stress in affluent areas: Devising more accurate measures. American Journal of Economics and Sociology, 53(2), 175–185. doi:10.1111/j.1536-7150.1994.tb02584.x
Grady, S. E. (2011). Team-Based Learning in Pharmacotherapeutics. American Journal of Pharmaceutical Education, 75(7), 136. doi:10.5688/ ajpe757136 PMID:21969722 Hayat, M. J., Eckardt, P., Higgins, M., Kim, M., & Schmiege, S. (2013). Teaching Statistics to Nursing Students: An Expert Panel Consensus. The Journal of Nursing Education, 52(6), 330–334. doi:10.3928/01484834-20130430-01 PMID:23621121 Heatherton, T. F., Kozlowski, L. T., Frecker, R. C., & Fagerstrom, K. (1991). The Fagerstrom Test for Nicotine Dependence: A revision of the Fagerstrom Tolerance Questionnaire. British Journal of Addiction, 86(9), 1119– 1127. doi:10.1111/j.1360-0443.1991.tb01879.x PMID:1932883 Interprofessional Education Collaborative Expert Panel (IPEC). (2011). Core competencies for interprofessional collaborative practice. Report of an expert panel. Washington, DC: Interprofessional Education Collaborative. IOM (Institute of Medicine). (2011). The Future of Nursing: Leading Change, Advancing Health. Washington, DC: The National Academies Press. Johnson-Hofer, P., & Karasic, S. (1988). Learning about Computers. Nursing Outlook, 36(6), 293–294. PMID:3186471 Jones, R., Higgs, R., DeAngelis, C., & Prideaux, D. (2001). Changing face of medical curriculum. Lancet, 357(9257), 699–703. doi:10.1016/S01406736(00)04134-9 PMID:11247568 Koles, P. G., Stolfi, A., Borges, N. J., Nelson, S., & Parmelee, D. X. (2010). The impact of team based learning on medical students’ academic performance. Academic Medicine, 85(11), 1739– 1745. doi:10.1097/ACM.0b013e3181f52bed PMID:20881827
185
Equipping Advanced Practice Nurses with Real-World Skills
Larson, E. L., Cohen, B., Gebbie, K., Clock, S., & Saiman, L. (2011). Interdisciplinary research training in a school of nursing. Nursing Outlook, 59(1), 29–36. doi:10.1016/j.outlook.2010.11.002 PMID:21256360 Letassy, N. A., Fugate, S. E., Medina, M. S., Stroup, J. S., & Britton, M. L. (2008). Using team-based learning in an endocrine module taught across two campuses. American Journal of Pharmaceutical Education, 72(5), 103. doi:10.5688/aj7205103 PMID:19214257 Li, L., An, L., & Li, W. (2010). Nursing students self-directed learning. Chinese General Nursing, 8(5), 1205–1206. Liaison Committee on Medical Education. (2011). Accreditation standards. Retrieved August 15, 2015, from www.lcme.org/standard.htm Love, R. L. (1974). Continuing Education Garnished with Computer Assisted Instruction. Journal of Allied Health, 3, 86–93. MacDonald, C. J., Archibald, D., Trumpower, D., Cragg, B., Casimiro, L., & Jelley, W. (2010). Quality standards for interprofessional healthcare education: Designing a toolkit of bilingual assessment instruments. Journal of Research in Interprofessional Practice and Education, 1(3), 1–13. Michaelsen, L. K. (1983). Team learning in large classes. In C. Bouton & R. Y. Garth (Eds.), Learning in Groups. New Directions for Teaching and Learning Series (Vol. 14). San Francisco: Jossey-Bass. Michaelsen, L. K. (1998). Three keys to using learning groups effectively. Teaching Excellence: Toward the Best in the Academy, 9(5), 1997-1998. Michaelsen, L. K. (1999). Myths and methods in successful small group work. National Teaching & Learning Forum, 8(6), 1–4.
186
Michaelsen, L. K., Bauman-Knight, A., & Fink, D. (2003). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing. Michaelsen, L. K., & Black, R. H. (1994). Building learning teams: The key to harnessing the power of small groups in higher education. In S. Kadel & J. Keehner (Eds.), Collaborative learning: A sourcebook for higher education (Vol. 2). State College, PA: National Center for Teaching, Learning and Assessment. Michaelsen, L. K., Black, R. H., & Fink, L. D. (1996). What every faculty developer needs to know about learning groups. In L. Richlin (Ed.), To improve the academy: Resources for faculty, instructional and organizational development (Vol. 15). Stillwater, Oklahoma: New Forums Press. Michaelsen, L. K., Fink, L. D., & Knight, A. (1997). Designing effective group activities: Lessons for classroom teaching and faculty development. In D. DeZure (Ed.), To improve the academy: Resources for faculty, instructional and organizational development (Vol. 17). Stillwater, OK: New Forums Press. Michaelsen, L. K., Knight, A. B., & Fink, L. D. (2002). Team-based learning: A transformative use of small groups. Westport, CT: Greenwood Publishing Group. Michaelsen, L. K., Parmalee, D., McMahon, K., & Levine, R. (Eds.). (2008). Team-based learning for health professions education: A guide to using small groups for improving learning. Sterling, VA: Stylus Publishing. Michaelsen, L. K., & Sweet, M. (2008). Teamwork works. NEA Advocate, 25(6), 1–8. Michaelsen, L. K., Sweet, M., & Parmalee, D. (2009). Team-based learning: Small group learning’s next big step. New Directions for Teaching and Learning, 116, 7–27.
Equipping Advanced Practice Nurses with Real-World Skills
Michaelsen, L. K., Watson, W. E., & Black, R. H. (1989). A realistic test of individual versus group consensus decision making. The Journal of Applied Psychology, 74(5), 834–839. doi:10.1037/0021-9010.74.5.834 Millis, B. J., & Cottell, P. G. Jr. (1998). Cooperative learning for higher education faculty. Phoenix, AZ: Oryx Press. Persky, A. M., & Pollack, G. M. (2011). A modified team-based learning physiology course. American Journal of Pharmaceutical Education, 75(10), 204. doi:10.5688/ajpe7510204 PMID:22345723 Pogge, E. (2013). A team-based learning course on nutrition and lifestyle modification. American Journal of Pharmaceutical Education, 77(5), 103. doi:10.5688/ajpe775103 PMID:23788814 Professional Development Program, Rockefeller College, University at Albany, State University of New York. (n.d.). Tobacco Recovery Resource Exchange. Retrieved August 15, 2015, from http:// www.tobaccorecovery.org/ Schunk, D. (2012). Learning Theories: An Educational Perspective (6th ed.). Boston: Pearson. Scott, P. A., Matthews, A., & Kirwan, M. (2014). What is nursing in the 21st century and what does the 21st century health system require of nursing? Nursing Philosophy, 15(1), 23–34. doi:10.1111/ nup.12032 PMID:24320979 Spetz, J. (2014). How Will Health Reform Affect Demand for RNs? Nursing Economics, 32(1), 42–44. PMID:24689158 Sprayberry, L. D. (2014). Transformation of America’s Health Care System: Implications for Professional Direct-Care Nurses. Medsurg Nursing, 23(1), 61–66. PMID:24707672 Starr, S. S. (2010). Associate degree nursing: Entry into practice -- link to the future. Teaching and Learning in Nursing, 5(3), 129–134. doi:10.1016/j. teln.2009.03.002
Stevens, K. R., & Ovretveit, J. (2013). Improvement research priorities: USA survey and expert consensus. Nursing Research and Practice, 2013, 1–8. doi:10.1155/2013/695729 PMID:24024029 Sweet, M. S., & Michaelsen, L. K. (2012). Team-based learning in the social sciences and humanities: Group work that works to generate critical thinking and engagement. Sterling, VA: Stylus Publishing. Tan, N. C., Kandiah, N., Chan, Y. H., Umapathi, T., Lee, S. H., & Tan, K. (2011). A controlled study of team-based learning for undergraduate clinical neurology education. BMC Medical Education, 11(1), 91. doi:10.1186/1472-6920-11-91 PMID:22035246 Thomas, P. A., & Bowen, C. W. (2011). A controlled trial of team-based learning in an ambulatory medicine clerkship for medical students. Teaching and Learning in Medicine, 23(1), 31–36. doi:10.1080/10401334.2011.5368 88 PMID:21240780 Thompson, D. R., & Darbyshire, P. (2013). Reply... Thompson D.R. & Darbyshire P. (2013) Is academic nursing being sabotaged by its own killer elite? Journal of Advanced Nursing 69(1), 1–3. Journal of Advanced Nursing, 69(5), 1216–1219. doi:10.1111/jan.12123 PMID:23521594 U. S. Census Bureau. (2011). Profile of selected social characteristics: Suffolk County, N.Y. Author. U.S. Department of Health and Human Services, Health Resources and Services Administration, National Center for Health Workforce Analysis. (2014). The Future of the Nursing Workforce: National- and State-Level Projections, 2012-2025. Rockville, MD: Author. Vellas, B., Villars, H., Abellan, G., Soto, M. E., Rolland, Y., Guigoz, Y., & Garry, P. et al. (2006). Overview of the mini nutritional assessment: Its history and challenges. The Journal of Nutrition, Health & Aging, 10(6), 456–465. PMID:17183418
187
Equipping Advanced Practice Nurses with Real-World Skills
Wiener, H., Plass, H., & Marz, R. (2009). Teambased learning in intensive course format for firstyear medical students. Croatian Medical Journal, 50(1), 69–76. doi:10.3325/cmj.2009.50.69 PMID:19260147
Interprofessional Education Collaborative Expert Panel (IPEC). (2011). Core competencies for interprofessional collaborative practice. Report of an expert panel. Washington, DC: Interprofessional Education Collaborative.
Wiggins, G., & McTighe, J. (1998). Understanding by design. AlexandrIa, VA: ASCD.
IOM (Institute of Medicine). (2011). The Future of Nursing: Leading Change, Advancing Health. Washington, DC: The National Academies Press.
Zgheib, N. K., Simaan, J. A., & Sabra, R. (2010). Using team-based learning to teach pharmacology to second year medical students improves student performance. Medical Teacher, 32(2), 130–135. doi:10.3109/01421590903548521 PMID:20163228 Zingone, M. M., Franks, A. S., Guirguis, A. B., George, C. M., Howard-Thompson, A., & Heidel, R. E. (2010). Comparing team based and mixed active learning methods in an ambulatory care elective course. American Journal of Pharmaceutical Education, 74(9), 160. doi:10.5688/aj7409160 PMID:21301594
ADDITIONAL READING American Association of Colleges of Nursing. (2011). The Essentials of Master’s Education for Advanced Practice Nursing. Retrieved August 15, 2015, from http://www.aacn.nche.edu/educationresources/MastersEssentials11.pdf American Association of Colleges of Nursing. (2012). New AACN Data Show an Enrollment Surge in Baccalaureate and Graduate Programs amid Calls for More Highly Educated Nurses. Retrieved August 15, 2015, from http://www.aacn. nche.edu/news/articles/2012/enrollment-data Hayat, M. J., Eckardt, P., Higgins, M., Kim, M., & Schmiege, S. (2013). Teaching Statistics to Nursing Students: An Expert Panel Consensus. The Journal of Nursing Education, 52(6), 330–334. doi:10.3928/01484834-20130430-01
188
Jones, R., Higgs, R., DeAngelis, C., & Prideaux, D. (2001). Changing face of medical curriculum. Lancet, 357(9257), 699–703. MacDonald, C. J., Archibald, D., Trumpower, D., Cragg, B., Casimiro, L., & Jelley, W. (2010). Quality standards for interprofessional healthcare education: Designing a toolkit of bilingual assessment instruments. Journal of Research in Interprofessional Practice and Education, 1(3), 1–13. Michaelsen, L. K. (1983). Team learning in large classes. In C. Bouton & R. Y. Garth (Eds.), Learning in Groups. New Directions for Teaching and Learning Series (Vol. 14). San Francisco: Jossey-Bass. Michaelsen, L. K. (1999). Myths and methods in successful small group work. National Teaching & Learning Forum, 8(6), 1–4. Michaelsen, L. K., Knight, A. B., & Fink, L. D. (2002). Team-based learning: A transformative use of small groups. Westport, CT: Greenwood Publishing Group. Schunk, D. (2012). Learning Theories: An Educational Perspective (6th ed.). Boston: Pearson. Scott, P. A., Matthews, A., & Kirwan, M. (2014). What is nursing in the 21st century and what does the 21st century health system require of nursing? Nursing Philosophy, 15(1), 23–34. doi:10.1111/ nup.12032
Equipping Advanced Practice Nurses with Real-World Skills
Tan, N. C., Kandiah, N., Chan, Y. H., Umapathi, T., Lee, S. H., & Tan, K. (2011). A controlled study of team-based learning for undergraduate clinical neurology education. BMC Medical Education, 11, 91.
Wiener, H., Plass, H., & Marz, R. (2009). Teambased learning in intensive course format for first-year medical students. Croatian Medical Journal, 50(1), 69–76.
189
Section 2
Technology Tools for Learning and Assessing Real-World Skills Chapters in this section deal with the core topic of technology tools and the wide range of applications aimed at learning and assessing real-world skills.
Chapter 8
Simulations for Supporting and Assessing Science Literacy Edys S. Quellmalz WestEd, USA
Barbara C. Buckley WestEd, USA
Matt D. Silberglitt WestEd, USA
Mark T. Loveland WestEd, USA
Daniel G. Brenner WestEd, USA
ABSTRACT Simulations have become core supports for learning in the digital age. For example, economists, mathematicians, and scientists employ simulations to model complex phenomena. Learners, too, are increasingly able to take advantage of simulations to understand complex systems. Simulations can display phenomena that are too large or small, fast or slow, or dangerous for direct classroom investigations. The affordances of simulations extend students’ opportunities to engage in deep, extended problem solving. National and international studies are providing evidence that technologies are enriching curricula, tailoring learning environments, embedding assessment, and providing tools to connect students, teachers, and experts locally and globally. This chapter describes a portfolio of research and development that has examined and documented the roles that simulations can play in assessing and promoting learning, and has developed and validated sets of simulation-based assessments and instructional supplements designed for formative and summative assessment and customized instruction.
INTRODUCTION Digital and networking technologies permeate school, work, personal, and civic activities. They are central, transformative tools for addressing goals and challenges in all walks of life. Conceptualizations of 21st century skills and new literacies go beyond traditional views of academic,
disciplinary learning to emphasize the need to take advantage of the affordances of technologies to foster application of domain knowledge and competencies in real-world contexts, goals, and problems. Research in cognitive science about how people learn has long documented the importance of transferable knowledge and skills and how learning situated in one context must be explicitly
DOI: 10.4018/978-1-4666-9441-5.ch008
scaffolded to promote use in multiple contexts for new problems. Currently, research and development on the affordances of a vast, ever-expanding array of digital and networking technologies are providing evidence of the power of technologies for transforming learning environments and the methods for monitoring and evaluating learning progress. Technologies are revolutionizing the ways that learning can be both promoted and assessed. Interactive technologies such as computer-based learning environments and physical manipulatives enhanced by digital technologies provide teachers with powerful tools to structure and support learning, collaboration, progress monitoring, and formative and summative assessment. These digital tools enable new representations of topics that are difficult to teach and new approaches to individualized learning that support a wider range of learners' needs. Large-scale national and international studies are providing evidence that technologies are truly changing and improving schools by enriching curricula, tailoring learning environments, offering opportunities for embedding assessment within instruction, and providing collaborative tools to connect students, teachers, and experts locally and globally (Quellmalz & Pellegrino, 2009; Quellmalz & Kozma, 2003; Law, Pelgrum, & Plomp, 2008). In this chapter, we will describe projects in WestEd's Science, Technology, Engineering and Math (STEM) program that are capitalizing on the affordances of digital tools to deepen and extend the kinds of science learning highlighted in the Framework for K–12 Science Education and the Next Generation Science Standards (National Research Council [NRC], 2012a, 2012b). These projects draw upon a broad range of recent research to develop and evaluate interactive technologies for learning and assessment. This chapter will describe the principles extracted from work in the learning sciences, model-based reasoning,
multimedia research, universal design for learning (UDL), and evidence-centered design (ECD), and employed in the design and development of these technology tools. We will summarize strategies for successful implementation of these new digital learning tools in current educational settings, as well as studies of the interventions' technical quality and impacts on learning. We will discuss how these interactive technologies support the development of learning progressions and multilevel, balanced assessment systems. We conclude the chapter with a discussion of additional lines of research and development. This chapter is based upon work supported by the US Department of Education (Grant 09-2713-126), the National Science Foundation (Grants 0733345, 1108896, 1221614, and 1420386), and the Institute of Education Sciences, U.S. Department of Education (Grants R305A100069, R305A120047, R305A120390, and R305A130160). Any opinions, findings, and conclusions or recommendations expressed in this chapter are those of the authors and do not necessarily reflect the views of the U.S. Department of Education, the Institute of Education Sciences, or the National Science Foundation.
BACKGROUND The research and development projects in WestEd’s STEM program draw upon theory and findings from cognitive science and multimedia research and emphasize the schematic and strategic knowledge involved in systems thinking and the science practices related to inquiry-based problem-solving for real-world issues. The focus on real-world applications shifts attention from the inert retention of disconnected scientific domain knowledge to understanding the science relevant to environmental and social issues, making informed decisions, and communicating about the issues.
Focus on Significant Knowledge and Skills
In K–12 schooling, frameworks and standards recommend the knowledge and processes central within traditional academic domains and for 21st century skills. These documents lay out goals for what should be taught in K–12 education, recommending development of not just declarative and procedural knowledge, but integrated knowledge structures (schema), strategic use of knowledge, and transfer of knowledge to solve novel problems. Learning sciences research has documented that the mental models of experts can be represented as large, organized, interconnected knowledge structures, called schema, that are used in conjunction with domain-specific problem-solving routines (Bransford, Brown & Cocking, 2000). In the domain of science, models of science systems can serve as schema for organizing knowledge about dynamic system phenomena. Thus, formation of models and development of model-based reasoning is a foundational practice in science. Moreover, the ever-widening horizons enabled by digital tools expand conceptualizations of literacy in science and other academic subjects to the larger context of "new literacies," a term that has emerged in recognition of the expanded ways that knowledge and information can be represented, accessed, processed, shared, and expressed. New literacies require expertise in the use of a range of digital media and information and communication technologies, exercised in academic and applied settings to collaborate, communicate, solve problems, and achieve goals (Quellmalz & Haertel, 2008).
Design of Learning Environments
Numerous national reports summarize key research findings about how to design effective learning environments and assessments for academic domains and 21st century skills (Bransford, Cocking, & Glaser, 2000; Pellegrino & Hilton,
2013). For example, the reports How People Learn and Applying Cognitive Science to Education distill decades of learning research that informs strategies for supporting deep learning.
Representations of Science Phenomena The use of physical, conceptual, and mathematical models has greatly benefitted scientific discovery. Models and simulations have profoundly changed the nature of inquiry in mathematics and science— for scientists, as well as for students (Nersessian, 2008). For example, economists, mathematicians, and scientists employ simulations to model alternative outcomes of complex systems. Multimedia learning researchers have examined the effects of pictorial and verbal stimuli in static, animated, and dynamic formats, as well as the effects of active versus passive learning enabled by degrees of learner control (Clark & Mayer, 2011; Mayer, 2005; Lowe & Schnotz, 2008). Mayer’s Cambridge Handbook of Multimedia Learning (2005) and Clark and Mayer’s recently updated book, eLearning and the Science of Instruction summarize multimedia research and offer principles for multimedia design (Clark & Mayer 2011). The majority of multimedia design principles address how to focus students’ attention and minimize extraneous cognitive processing. Research addresses how to guide attention by making the most important information salient and omitting irrelevant representations (cf., Betrancourt, 2005; Clark & Mayer, 2011). Studies also recommend that complex simulations should be carefully focused to foster desired learner outcomes. Rather than realistically portraying every detail of systems, it is more important to ensure that the most relevant parts are easily discernible (cf., Lee, Plass, & Homer, 2006; van Merrienboer & Kester, 2005). Extensive research has been conducted on external forms of stimulus representations. Research on the perceptual correspondence of models to the
natural systems they represent (e.g., cells, circuits, ecosystems) suggests features to consider in designing science learning environments. Research on models' physical similarity to natural systems and the ways in which system interrelationships are depicted through conventional physical and symbolic forms and signaled or highlighted can inform the design of science learning and assessment activities. The use of visual cues such as text consistency, color, and arrows can help students map between representations and gain deeper conceptual understandings, increasing the "readability" of dynamic visualizations (cf., Ainsworth, 2008; Kriz & Hegarty, 2007; Lowe & Schnotz, 2008). In a review of principles in multimedia learning, Betrancourt (2005) noted that multimedia representations have evolved from sequential static text and picture frames to increasingly sophisticated visualizations. Animations are considered particularly useful for providing visualizations of dynamic phenomena that are not easily observable in real space and time scales, e.g., plate tectonics, the circulatory system, and animal movement (Betrancourt, 2005; Kühl, Scheiter, Gerjets, & Edelmann, 2011). Dynamic representations are well suited for portraying changes in temporal and spatial scale and for depicting multiple viewpoints. For example, to represent changes in spatial scale, visual call-outs are frequently used for magnification. Cross-sectional views, cutaway views, and exploded views are used in both static and animated depictions of dynamic events. Color can cue key features of complex scenes, the ordering of events, and the categorization of structures so that learners can extract relevant information. Signaling in complex animations may include giving cues such as "there will be three steps" and directly instructing students to reason through the components of systems to increase comprehension (Hegarty, 2004; Tversky et al., 2008). A growing body of research is developing principles for organizing and displaying information that will help focus learner attention (Ware, 2004).
User Control User control refers to the degree of control the user can exert while interacting with representations. User control may allow students to pause, rewind, and replay dynamic visualizations, and manipulate features and sequences. Controlling the pace of presentation can increase the likelihood that students will learn from and understand the display (cf., Lowe & Schnotz, 2008; Schwartz & Heiser, 2006). Digital media can also allow learners to explore, manipulate, and display the results of investigations of dynamic representations. Animations become interactive simulations if learners can manipulate parameters as they generate and test hypotheses, thereby taking advantage of technological capabilities suited to conducting scientific inquiry. Simulations can provide technology enhancements for science instruction by representing dynamic science systems “in action,” making invisible phenomena observable and enabling manipulations of these models for active investigations of authentic problems (Gobert & Clement, 1999). For example, Rieber, Tzeng, and Tribble (2004) found that students given graphical feedback with short explanations during a simulation on laws of motion far outperformed those given only textual information. Plass, Homer, and Hayward (2009) found that manipulation of the content of a visualization, not just the timing and pacing, can improve learning outcomes compared to static materials.
Universal Design Building on work by Rose and Meyer (2000), CAST (2008) developed a framework for Universal Design for Learning (UDL) recommending three kinds of flexibility: (1) representing information in multiple formats and media, (2) providing multiple pathways for students’ action and expression, and (3) providing multiple ways to engage students’
interest and motivation. Digital learning and assessment environments can present information in more than one modality (e.g., auditory and visual, static and dynamic), allow simultaneous presentation of multiple representations (e.g., scenes and graphs), and vary simple and complex versions of phenomena and models. Multiple pathways for expression may include interactivity, hints and worked examples, and multiple response formats (drawing, writing, dragging and dropping). Universal Design for Computer-Based Testing (UD-CBT) further specified how digital technologies can create tests that more accurately assess students with a diverse range of physical, sensory, and cognitive abilities and challenges through the use of accommodations (Harns, Burling, Hanna, & Dolan, 2006; Burling et al., 2006). Accommodations are defined as changes in format, response, setting, timing, or scheduling that do not alter in any significant way the constructs the test measures or the comparability of scores (Phillips, 1993). UD-CBT has been found to level the playing field for English language learners (ELL) and students with disabilities (Wang, 2005; Case, Brooks, Wang, & Young, 2005). Tools already built into students' computers can allow multiple representations (text, video, audio), multiple media, highlighters, and zoom magnification (Twing & Dolan, 2008; Case, 2008).
Model-Based Learning Researchers in model-based learning suggest that learners’ mental models of science phenomena are formed, used, evaluated, and revised as they interact with phenomena in situ and with conceptual models, representations (including text), and simulations (Gobert & Buckley, 2000; Buckley, 2012; Clement & Rea-Ramirez, 2008). For example, cycles of model-based reasoning help learners build deeper conceptual understandings of core scientific principles and systems, interpret patterns in data, and formulate general models to
explain phenomena (Stewart et al., 2005; Lehrer et al., 2001). A highly significant finding of cognitive research is that learners who internalize schema of complex system organization—structures, interactions, and emergent behaviors—can transfer this heuristic understanding across science systems (e.g., Goldstone, 2006; Goldstone & Wilensky, 2008).
Simulations for Science Learning
Numerous studies illustrate the benefits of simulations for science learning. Simulations can support the development of deeper understanding and better problem-solving skills in areas such as genetics, environmental science, and physics (Krajcik, Marx, Blumenfeld, Soloway, & Fishman, 2000; Schwartz & Heiser, 2006; Rieber et al., 2004; Buckley et al., 2004; Buckley et al., 2010). Students using simulations tend to rely more on conceptual approaches than on algorithmic approaches or rote facts during problem-solving (Stieff & Wilensky, 2003; White & Frederiksen, 1998), and can make causal connections among the levels of science systems (Hmelo-Silver et al., 2008; Ioannidou et al., 2010). Using dynamic, interactive simulations to make these connections explicit and salient benefits students' learning (Slotta & Chi, 2006). Taking Science to School summarizes research-based recommendations for learning environments, suggesting that knowledge and skills be taught and tested in the context of larger investigations linked to driving questions, rather than teaching and testing individual ideas and skills separately (Duschl, Schweingruber, & Shouse, 2007). Learning theory holds that the environments in which students acquire and demonstrate knowledge should be situated in contexts of use (Simon, 1980; Collins, Brown, & Newman, 1989). Learning environments should involve active problem solving and reasoning. Cycles of feedback and scaffolding should be designed to
promote and monitor learning progress. Cycles of feedback, revision, and reflection are aspects of metacognition critical for students to regulate their own learning (Pashler et al., 2007; White & Frederiksen, 1998). Scientific literacy incorporates the goal that individuals can engage in science-related, real-world issues and ideas as reflective citizens. Interactive technologies can support the development of new literacies through affordances that help students develop collaboration and communication skills as they engage in deep, extended problem solving.
Evidence-Centered Design
Evidence-centered design (ECD) facilitates coherence of assessment and learning environments by linking the targeted knowledge and skills with evidence of proficiency, and with tasks and items to elicit that evidence (Messick, 1994; Mislevy, Almond, & Lucas, 2004; Mislevy & Haertel, 2007). The process begins by specifying a student model of the knowledge and skills to be addressed. Schematic, systems thinking about science phenomena should begin with explication of the kind of mental model that is to be constructed by the learner and for what purpose or application. The ECD design process aligns the student model with an evidence model that specifies which student responses are evidence of targeted knowledge and skills, how student performances will be analyzed, and how they will be reported. The student and evidence models are then aligned with a task model that specifies features of tasks and questions intended to elicit student performances that provide evidence of the targeted knowledge and skills. The WestEd science projects used evidence-centered design to align the science content and practices addressed to scoring and reporting methods, and then to principled design of tasks that elicit evidence of understanding and use of the targeted science knowledge and skills.
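To make this alignment concrete, the sketch below shows one way the three ECD models might be represented in software. It is a minimal illustration of the linkage, not a description of the WestEd projects' tooling; all class names, fields, and the example target are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# Student model: a targeted piece of knowledge or practice.
@dataclass
class KnowledgeTarget:
    target_id: str
    description: str

# Evidence model: what counts as evidence of the target and how it is scored.
@dataclass
class EvidenceRule:
    target_id: str
    observable: str                      # e.g., "arrows drawn in a food web"
    score: Callable[[dict], int]         # maps a raw response to a score

# Task model: features of a task designed to elicit that evidence.
@dataclass
class TaskTemplate:
    target_ids: List[str]
    context: str                         # e.g., "mountain lake ecosystem"
    prompt: str
    response_format: str                 # e.g., "drag-and-drop"

# A coherent assessment links all three: the task elicits observables
# that the evidence rule scores against the student-model target.
target = KnowledgeTarget("eco-interactions", "Matter and energy flow among organisms")
rule = EvidenceRule("eco-interactions", "food web arrows",
                    score=lambda resp: sum(resp.get("correct_arrows", [])))
task = TaskTemplate(["eco-interactions"], "mountain lake ecosystem",
                    "Draw arrows showing the flow of matter and energy.",
                    "drag-and-drop")
```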
WESTED STEM RESEARCH AND DEVELOPMENT
Design Principles for Technology-Enhanced Interactive Learning Environments
WestEd technology-enhanced science projects address topics ranging from middle and high school to graduate courses. The projects address significant science knowledge and practices aligned with national science frameworks and standards. The digital environments are designed according to the principles derived from learning research described above and distilled in Taking Science to School (Duschl, Schweingruber, & Shouse, 2007). The principles include actively engaging students in meaningful, real world problems, cycles of feedback, and scaffolding to promote learning. Findings from multimedia research inform the design of multiple, overlapping representations that cue attention to relevant features of the science phenomena and offer user control of a range of responses and expression. The projects use evidence-centered design (ECD) to structure the alignment of the science content and practices addressed (student models) with the types of instructional and assessment activities (task models) and the forms of evidence that are collected to document and summarize learning (evidence models) (Mislevy, Almond, & Lucas, 2004).
The SimScientists program (simscientists.org) developed suites of simulation-based assessments designed to promote and assess model-based learning in existing middle school science curricula. Each suite is composed of two or three curriculum-embedded modules that the teacher inserts into a unit. A summative simulation benchmark assessment is administered at the end of the unit. These interactive modules feature a simulation environment based on scientific
principles for a model of a science system that is grade-appropriate and specifies core ideas to be applied during problem-driven inquiry activities. The modules are designed as supplements to ongoing curriculum units, to be implemented by the teacher at points in the curriculum sequence when key ideas have been introduced and the teacher judges that students can apply the concepts as they conduct the simulation-based investigations. ChemVLab (chemvlab.org) is a collaboration between Carnegie Mellon University (CMU) and WestEd. This work is based on an existing Java user interface developed at CMU that simulates a chemistry stockroom and workbench for carrying out a wide array of investigations, along with a newly developed Flash-based user interface and programming interface between the Flash and Java components of the system that allow for delivery of structured tasks to students and assessment of their performance within the simulated environment. Designed for integration into high school chemistry lab courses, the activities improve upon the typical paper-based practice problems and provide students with practice that includes practical, simulated exposure to wet-lab work, data collection and interpretation, problem solving, and sense making. The system offers real-time customized feedback to guide student investigations and provides error correction in the application of chemistry concepts. Reports to students and teachers provide ongoing progress monitoring and allow teachers to adjust instruction based on gaps in students’ knowledge and abilities (Davenport, Rafferty, Timms, Yaron, & Karabinos, 2012; Davenport, Rafferty, Yaron, Karabinos, & Timms, 2014). The Voyage to Galapagos project (VTG, voyagetogalapagos.org) has created web-based software to help students “follow” the steps of Darwin through a simulation of the Galapagos Islands, guiding students’ learning about natural selection and evolution. Students are encouraged to explore the islands, take pictures of iguanas, evaluate the animals’ characteristics and behaviors,
and use scientific methodology and analysis to "discover" evolution as they explore the virtual open environment of the Galapagos Islands. The program encourages students to follow the steps of good scientific inquiry, e.g., developing hypotheses, collecting and analyzing data, and drawing conclusions, while revealing basic principles of evolution theory to students. Voyage to Galapagos is investigating the question: How much assistance is the right amount to provide to students as they learn with educational technology? To investigate this, VTG has been developed to provide middle school students with opportunities to do simulated field work, including data collection and analysis during investigation of three key biological principles: variation, function, and adaptation. The goal of the project is to find the right balance between minimum and full support, allowing students to make their own decisions and, at times, mistakes. Learning goals and tasks aligned with NGSS have been used to create an intelligent tutoring system to collect data about student actions, assign probabilities of students having made certain errors, and make decisions about error feedback and hints to provide students. In the sections below, we use the evidence-centered design framework to describe the designs of the WestEd STEM simulation projects.
Student Models The STEM technology-enhanced projects begin with specifications of the knowledge and skills to be fostered and assessed. National science frameworks and standards have been the major sources. For example, the College Board Standards for Science Success, the National Research Council Framework for K–12 Science Education, and the Next Generation Science Standards (NGSS) recommend deeper learning of the fundamental nature and behavior of science systems, along with the practices scientists use to study system dynamics (College Board, 2009; NRC, 2012a, 2012b). The
projects then focus on science knowledge and practices particularly suited to dynamic, interactive modalities and difficult to promote and assess in static formats. The technology affordances permit visual representations of the structure, function, and behaviors of systems "in action" that are typically too big, small, fast, slow, or dangerous for students to experience directly in classrooms. In addition, the technologies allow active investigations that support use of NGSS science and engineering practices. In the sections below we describe the sets of interrelated learning targets that serve as the student models of the projects.
SimScientists Student Models
The overarching design of the SimScientists assessment and instructional modules integrates the frameworks of model-based learning and evidence-centered design (Buckley, 2012; Mislevy, Almond & Lucas, 2004). Incorporating the learning principles described above, design begins with specification of the science knowledge and practices to be addressed. The SimScientists computer-based modules are designed as supplements to ongoing curricula; therefore, they selectively focus on integration of knowledge and application of science practices. The knowledge integration occurs within the organizational frame of an integrated science system model consisting of three tiers: 1) the system components, 2) interactions among components, and 3) the emergent system phenomena. The three-level science system model is intended to help learners form a schema of the organizational structure of all science systems (Bransford, Brown, & Cocking, 2000). The system model framework also serves as the target for the model-based reasoning promoted (Buckley, 2012). The projects reframe content standards identified by NGSS, the American Association for the Advancement of Science (AAAS), and the National Assessment
of Educational Progress (NAEP) science in terms of multilevel science system models that explicate and integrate understanding of the system's components, their interactions, and behaviors that emerge from these interactions (Clement & Rea-Ramirez, 2008; Hmelo-Silver & Pfeffer, 2004; Grotzer, 2003; Perkins & Grotzer, 2000). The projects also reframe science practices in terms of the model-based reasoning needed for students to demonstrate and extend their understanding of the system models through investigations.
The first level of specification for the SimScientists student model is the System Target Model. As shown in Figure 1, SimScientists' ecosystems assessments and instructional modules focus on multiple levels of ecosystem organization, the interactions of components within levels and across levels, and the changes that emerge from those interactions over time. We characterize these levels as components and their roles, interactions between components, and emergent behavior that results from component-component interactions within communities over time. For the middle school grades, the ecosystem levels are represented in terms of food for energy and building blocks for growth and maintenance, organisms and their roles in dyad interactions (producers/consumers, predator/prey) and food webs (diagrams that represent the flow of matter and energy through ecosystems). The population changes that emerge from interactions among organisms and with abiotic factors in the environment are represented in models that include both the organisms and graphs of populations. The model levels described above (components, interactions, and emergent behavior) are ubiquitous in science systems ranging in size from molecules to biospheres. The core ideas focus on understanding ecosystem components, interactions, and population behaviors and the science practices for studying ecosystems' dynamic phenomena.
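The three-tier system model lends itself to a compact computational representation. The sketch below illustrates the idea with an invented lake food web and deliberately crude population dynamics; the organisms, rates, and update rule are assumptions for illustration and are not drawn from the SimScientists simulations.

```python
# Tier 1: components and their roles.
components = {
    "algae":   {"role": "producer"},
    "daphnia": {"role": "consumer"},
    "trout":   {"role": "consumer"},
}

# Tier 2: interactions (who eats whom); step() hard-codes these dyads for brevity.
interactions = [("daphnia", "algae"), ("trout", "daphnia")]

def step(populations, growth=0.3, predation=0.002):
    """Tier 3: one coarse time step of emergent population change."""
    new = dict(populations)
    new["algae"] += (growth * populations["algae"]
                     - predation * populations["algae"] * populations["daphnia"])
    new["daphnia"] += (0.1 * predation * populations["algae"] * populations["daphnia"]
                       - predation * populations["daphnia"] * populations["trout"])
    new["trout"] += 0.05 * predation * populations["daphnia"] * populations["trout"]
    return {k: max(v, 0.0) for k, v in new.items()}

pops = {"algae": 1000.0, "daphnia": 200.0, "trout": 20.0}
for _ in range(10):      # emergent behavior: population levels changing over time
    pops = step(pops)
```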
Figure 1. Life science ecosystem target model
ChemVLab Student Model
The student model for ChemVLab focuses on conceptual understanding of chemistry. At the submicroscopic level, the student model integrates processes involving atoms and molecules with procedural knowledge such as quantitative problem solving. At the macroscopic level, the student model includes causal models for macroscopic processes based upon understanding the submicroscopic processes.
Voyage to Galapagos Student Model
In VTG, the student model specifies an understanding of evolution and natural selection at three levels:
Level 1 - Variation: Among species of animals, key trait variations are observed across populations.
Level 2 - Biological Function: Observed animal trait variations are tied to biological function.
Level 3 - Adaptation: Environmental factors have an impact on the observed biological functions in the animals.
The levels involve a conceptualization of increasingly complex ideas as students progress through the various levels of the software. Table 1 describes learning goals and tasks aligned with the NGSS that have been used to create an intelligent tutoring system to collect data about student actions, assign probabilities of students having made certain errors, and make decisions about error feedback and hints to provide students.
Task Models
The STEM projects design tasks to elicit evidence that students understand core ideas and can use them in a range of practices to study science systems. Technology supports the design process by allowing development of re-usable templates for
task types for investigating science phenomena. The templates specify key features of representations of system phenomena that are appropriate for the grade level. Multimedia research provides techniques for directing attention to relevant parts of the representations of the science phenomena. The templates also specify the types of responses that students are asked to make. Typically, the templates specify sets of tasks that students will complete as they use science practices to address real world problems. Problems posed for investigation represent iconic problems addressed by scientists studying science phenomena such as observing components of a system, studying interactions, and conducting studies to predict and explain emergent system behaviors. Models for task types deliberately incorporate design principles from learning research that include, among other features, multiple linked representations of system interactions and dynamic phenomena that are difficult to observe and manipulate in classrooms because of the phenomena's interactions at multiple scales, temporal dynamics, and causal mechanisms. Based on recommendations from learning research, learners participate in active inquiry by designing, conducting, and interpreting iterative investigations and explaining conclusions. Scaffolding in the form of feedback and customized coaching guides and reinforces the learning.

Table 1. Alignment of levels within the VTG application to NGSS
Life Science Disciplinary Core Idea: LS4 Biological Evolution: Unity and Diversity. LS4.B Natural Selection: How does genetic variation among organisms affect survival and reproduction? LS4.C Adaptation: How does the environment influence populations of organisms over multiple generations?
Level 1 - Crosscutting Concepts: Patterns. Practices for K–12 Science Classrooms: 4. Analyzing and Interpreting Data.
Level 2 - Crosscutting Concepts: Patterns; Cause and Effect. Practices for K–12 Science Classrooms: 3. Planning and Carrying out Investigations; 4. Analyzing and Interpreting Data; 6. Constructing Explanations.
Level 3 - Crosscutting Concepts: Patterns. Practices for K–12 Science Classrooms: 4. Analyzing and Interpreting Data; 6. Constructing Explanations.
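Before turning to the individual projects, the following hedged sketch illustrates what a re-usable task-model template might look like when instantiated in two contexts. The field names, feedback text, and organism lists are hypothetical and do not reflect the projects' actual authoring formats.

```python
# A generic "draw the food web" task template, instantiated for two settings.
food_web_template = {
    "practice": "Developing and Using Models",
    "prompt": "Draw arrows showing the flow of matter and energy among the organisms.",
    "response_format": "drag-and-drop arrows",
    "feedback": "highlight incorrect arrows; coach toward the food source",
}

def instantiate(template, context, organisms):
    """Reuse the template by binding it to a new ecosystem context."""
    task = dict(template)
    task["context"] = context
    task["organisms"] = organisms
    return task

lake_task = instantiate(food_web_template, "mountain lake",
                        ["algae", "daphnia", "trout"])
grassland_task = instantiate(food_web_template, "Australian grassland",
                             ["grasses", "grasshoppers", "lizards"])
```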
SimScientists Task Models In the SimScientists program, the conceptual framework guiding research and development is grounded in the belief that learners develop understanding and mental models of dynamic phenomena through a variety of routes that depend on the learner’s starting point and interactions with phenomena and representations. These phenomena arise from complex systems of interacting components, which themselves may be complex systems. For example, learning about ecosystems might begin with a simple partial mental model of the ecosystem such as the idea that living creatures have survival needs—food, shelter, ability to avoid predators, etc. The first incomplete mental model of an ecosystem may be one of the organism—what it eats and who eats it. This simple mental model can become more complete and complex when learners consider the competing needs of populations of organisms over time, perhaps by conducting investigations with simulations. So for the development of a model of ecosystems, a learning trajectory could begin with tasks requiring identification of component organisms, adding understanding of their interactions before proceeding to a more complete model of the ecosystem emergent phenomena of changing population levels over time. Science practices that focus on developing and using models, conducting investigations, and interpreting data are particularly relevant to helping students develop, test, and evaluate their
mental models of science systems. Simulations of diverse types can enable students to conduct investigations with complex systems and system models. In SimScientists, a progression of tasks both develops and elicits students' conceptual understandings of the system model and associated science practices. Cognition and multimedia learning research guide the design of the representations of the system components, interactions, and emergent phenomena, in addition to ways that cueing and learner control guide student interactions with the simulations.
The SimScientists modules include two major types of assessment tasks. Curriculum-embedded modules are designed to foster integration of core ideas and their use in investigations. Each is designed to require one period. As recommended by learning research, the modules present real world problems that are recurring significant problems addressed by scientists in an area. Domain analyses provide one source for problems addressed by scientists. From these, problems are selected that focus on complex systems, in order to help counteract the fragmented understandings occurring among science learners. Examination of research papers published by scientists ensures accuracy and informs development of simulations of these complex systems. The assessment tasks present real world problems, require use of core ideas, and focus on the investigations and reasoning of scientists as students create, observe, evaluate, and revise their models of phenomena. For example, students identify components and interactions, make predictions, design experiments, interpret data, evaluate their predictions, and explain the results and their reasoning, all key science practices.
The embedded modules further incorporate principles derived from learning research by providing opportunities for formative assessment during the sequence of investigations. The simulations provide individualized feedback as students perform a task or respond to questions. The feedback is accompanied by graduated coaching in the form of increasingly more information and, finally, a worked example. For example, within a unit on ecosystems, the teacher inserts the first embedded module after students have learned about different types of organisms in an ecosystem. The module engages students in helping to develop material for an interpretive center to describe a mountain lake ecosystem to visitors, beginning with an animation of organisms in the lake. At the component level of the ecosystem model, students observe what the organisms eat and identify their roles as consumers or producers. At the interaction level of the ecosystem model, students are asked to draw a food web that depicts the flow of energy and matter as organisms interact. The simulation uses affordances of the technology to provide immediate feedback about whether the arrow drawn connects the correct organisms and is in a direction showing the flow of energy and matter from the source. As shown in Figure 2, feedback highlights an incorrect arrow and includes coaching for the student to observe the animation of organisms eating in order to draw the arrow from the food source. If incorrect arrows remain, the following screen would show a correctly drawn worked example and require the student to draw the arrows correctly. This process is formative because the system evaluates a student response, provides feedback on its appropriateness, and offers additional instruction.
Figure 3 presents a formative embedded assessment of investigation practices and science knowledge of ecosystem emergent behaviors represented by changing population levels. The feedback addresses students' predictions about population level change over time, and is accompanied by coaching to analyze the graph in order to match observations to predictions. Figure 4 presents a task asking students to build a model of the circulatory system by dragging and dropping images of organs into the body and lung loops.
Figure 2. Mountain Lake embedded module, Draw Foodweb task
Figure 3. Mountain Lake embedded assessment, Predict Population task for science investigation practices of analyzing ecosystem population level emergent behaviors
Figure 4. Human Body Systems embedded assessment, Build Circulatory System task
The assessment provides feedback and coaching if the organ placements or sequences are incorrect. Each curriculum-embedded simulation-based module intended for formative purposes is followed by an off-line reflection activity designed to adjust instruction based on progress reports that indicate which core ideas and practices need more attention. The reflection activities promote transfer of the core ideas and science practices to new settings.
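The graduated coaching described above can be pictured as a simple loop over a fixed number of tries. The sketch below is an illustrative approximation, not the SimScientists implementation; the coaching messages and helper functions are invented, and the number of tries is recorded only to suggest how progress reports could be informed.

```python
# Graduated coaching: each incorrect try yields progressively more support,
# ending with a worked example. The number of tries feeds progress reporting.
COACHING = [
    "Hint: watch the animation to see what each organism eats.",
    "More help: arrows should point from the food source to the eater.",
    "Worked example: here is the completed food web; now redraw your arrows.",
]

def run_task(check_response, get_response, max_tries=3):
    """Return the number of tries a student needed (illustrative only)."""
    for attempt in range(1, max_tries + 1):
        response = get_response()
        if check_response(response):
            return attempt                      # solved on this try
        print(COACHING[attempt - 1])            # graduated support
    return max_tries + 1                        # still incorrect after coaching

# Example usage with stand-in (hypothetical) response functions:
answers = iter([{"arrows": "wrong"}, {"arrows": "right"}])
tries = run_task(lambda r: r["arrows"] == "right", lambda: next(answers))
```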
The second major type of task model in the SimScientists assessment modules is a simulation-based benchmark assessment administered at the end of the unit. These assessments generate summative reports of student proficiencies on the targeted core ideas and practices. The tasks are parallel to those in the embedded modules, but do not provide feedback and coaching. Again, students' abilities to apply core ideas to a new ecosystem are assessed. Figure 5 shows a benchmark assessment set in an Australian grassland.
Figure 5. Screen shots of foodweb and population investigation tasks in a SimScientists ecosystem benchmark assessment
The real world problem is to restore the grassland after a wildfire. Students must observe the eating behaviors of grassland organisms to identify system components and construct a food web depicting interactions, and then manipulate the numbers of organisms reintroduced into the grassland ecosystem in order to restore a balanced ecosystem (the emergent phenomenon). The two screens are sampled from the food web and population dynamics task sets, illustrating how task model templates can be reused to create parallel tasks set in new contexts. In addition, to broaden participation across a diverse range of students, the SimScientists assessments provide the three most common accommodations allowed in state testing programs: text-to-speech, screen magnification, and segmentation that supports re-entry into tasks when extended time is needed.
ChemVLab Task Models
An overarching goal of the ChemVLab project is to contextualize the procedural knowledge used in chemistry to solve problems and conduct investigations (Davenport et al., 2014). To this end, a series of activities has been developed to address topics common to high school chemistry curricula. The activities are all designed around a common approach: students investigate phenomena at the atomic and macroscale levels and solve problems using the properties of atoms and molecules to make predictions and to explain observations of properties of bulk matter. Through this approach, students gain a deeper understanding of the utility of the chemist's problem-solving "toolbox" for reasoning about the world around them, rather than simply committing to memory a disconnected set of algorithms. Task models designed around concepts in chemistry include atomic, molecular, and bulk features, and a set of investigation "tools" that allow chemists to use observations at one scale for making inferences at another scale. These tools are specific to the domain and may be specific to the concept addressed in the task.
While the ChemVLab portfolio of activities does not cover all of chemistry, the selected problems are common to high school chemistry curricula and address typical applications, while transforming these applications from discrete procedures to sets of contextualized, interrelated tasks. For example, in the Acid-Base activity (Figure 6), students are introduced to the mathematics that relate the concentrations of ions in acidic and basic solutions to the primary logarithmic scale used to characterize their unique properties at the macroscale, the pH scale. The tasks for the student involve mixing acidic and basic solutions in order to change the concentrations of ions and the related pH in a way that reveals the nature of the logarithmic relationship between the two properties. Subsequent tasks then ask students to use their understanding of this relationship to predict the properties of given mixtures, and to explain macroscale phenomena as the result of interactions within systems of interacting ions.
Figure 6. The task that introduces the relationship between concentration and pH
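The logarithmic relationship at the heart of this activity can be summarized in a few lines of code. The following sketch is illustrative only (it is not part of ChemVLab); it assumes a strong monoprotic acid and a strong base and ignores water's autoionization, so the invented mixtures below are only rough approximations of real pH values.

```python
import math

def ph_after_mixing(acid_molarity, acid_ml, base_molarity, base_ml):
    """pH of a mixture of a strong monoprotic acid and a strong base (idealized)."""
    moles_h = acid_molarity * acid_ml / 1000.0       # moles of H+ contributed
    moles_oh = base_molarity * base_ml / 1000.0      # moles of OH- contributed
    volume_l = (acid_ml + base_ml) / 1000.0
    excess = moles_h - moles_oh
    if excess > 0:                                   # acid left over
        return -math.log10(excess / volume_l)
    if excess < 0:                                   # base left over
        return 14.0 + math.log10(-excess / volume_l)
    return 7.0                                       # exactly neutralized (at 25 C)

# A tenfold lower H+ concentration raises pH by one unit: the logarithmic relationship.
print(ph_after_mixing(0.10, 50, 0.0, 50))    # about 1.30
print(ph_after_mixing(0.010, 50, 0.0, 50))   # about 2.30
```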
Figure 6. The task that introduces the relationship between concentration and pH
iguanas found there. They use a camera to photograph a representative sample of iguanas. Back in a virtual lab, they then measure specific animal traits (body length, tail width, claw length, snout length, and color) with a Schemat-o-meter, classify the variation of each trait (e.g., for claw length: very long, long, neutral, short, and very short), and use their data to analyze the geographic distribution of variant populations. Level 2: Biological Function. The students then return to the island to find evidence of iguana functions (e.g., eating, swimming, foraging for food) by viewing videos found on some of the paths they have explored. They then are asked to hypothesize about the biological function of iguana trait variations (e.g., long claws are better for climbing rocks). After
returning to the lab, they are provided with a Trait Tester, an instrument with which they can test animals for relative performance. Level 3: Adaptation. Students are asked to review the island path steps where they found their iguanas and associate an environment with each sample animal. After examining the environments where animals with specific biological functions live, students hypothesize about selective pressures and use the Distribution Chart to plot where animals with different trait variations live in order to draw conclusions about natural selection. Within each level, there are three steps that correspond to:
• Sample collection and hypothesizing.
• Data testing and analysis.
• Synthesizing ideas.
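Read as a specification, this level-and-step organization could be captured in a simple data structure. The sketch below is purely illustrative (the key names and descriptions are ours, not the VTG code); it shows how each level repeats the same three-step cycle of collecting, testing, and synthesizing.

```python
# Hypothetical encoding of the VTG task-model organization described above.
VTG_TASK_MODEL = {
    "Level 1: Variation": {
        "collect_and_hypothesize": "Photograph a representative sample of iguanas",
        "test_and_analyze": "Measure and classify traits with the Schemat-o-meter",
        "synthesize": "Analyze the geographic distribution of variant populations",
    },
    "Level 2: Biological Function": {
        "collect_and_hypothesize": "View videos and hypothesize about trait functions",
        "test_and_analyze": "Test relative performance with the Trait Tester",
        "synthesize": "Relate trait variations to biological functions",
    },
    "Level 3: Adaptation": {
        "collect_and_hypothesize": "Associate an environment with each sampled animal",
        "test_and_analyze": "Plot variant populations on the Distribution Chart",
        "synthesize": "Draw conclusions about selective pressure and natural selection",
    },
}

for level, steps in VTG_TASK_MODEL.items():
    print(level)
    for step, description in steps.items():
        print(f"  {step}: {description}")
```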
Through this cyclical process of repeatedly employing these practices, students have the opportunity to increase their proficiency in scientific inquiry. Across the WestEd STEM projects, the task models use multimedia principles in the design of the representations of the science phenomena and the student interactions. Key features of the task models are the use of multiple representations, an array of technology-enabled cueing mechanisms, and a focus on active investigations that take advantage of the technology capabilities. In addition, the tasks collect learner responses for analysis of learning progress.
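The learner responses collected by the tasks can be thought of as small structured records. The sketch below is a hypothetical schema (the field names are ours, not the schema of any WestEd system) of the kind of response record a learning management system might store for later analysis of learning progress.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ResponseEvent:
    """One logged learner response; fields are illustrative only."""
    student_id: str
    task_id: str
    target: str      # knowledge or practice target the task is aligned to
    attempt: int     # which "try" this response represents
    correct: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

event = ResponseEvent(student_id="s-014", task_id="foodweb-03",
                      target="interactions", attempt=2, correct=True)
print(event)
```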
Evidence Models

A valuable affordance of computer-supported learning environments is their ability to capture, evaluate, and summarize student responses to tasks and questions in problem-based modules. Each of the projects has designed an underlying database, a learning management system (LMS), to gather evidence of the targeted learning. In this section we describe the evidence models of the STEM projects.

SimScientists Evidence Models

The SimScientists embedded assessments generate progress reports based on the level of assistance students needed to complete the tasks. Typically, students have three opportunities to complete tasks and questions correctly. After each “try,” students receive increasingly more coaching, with the last try a worked example. Each of the tasks and questions is aligned with knowledge and practice targets. The progress rubrics use the number of tries to classify student responses into the levels of “Needs Help,” “Progressing,” or “On Track” for each of the knowledge and practice targets. As shown in Figure 7, the progress report for an individual student describes performance for understanding core ideas related to the model levels
of components, interactions, or system behavior. The progress report on the Predator/Prey simulation-based curriculum-embedded assessments documents progress on core ideas related to the emergent level of the system model: population dynamics. Progress reports also reflect students’ application of science practices. The progress reports are provided to individual students. The teacher also receives reports of each individual’s progress and class summaries. The progress reports provide data for teachers to use formatively: to adjust instruction during an off-line reflection activity in the next class period. The reflection activities are designed to provide additional instruction and practice on core ideas and science practices on which progress reports indicate students need additional help or extension. For simulation-based benchmark assessments administered at the end of units, the evidence model incorporates evaluations of student responses into a Bayesian Estimation Network (Bayes’ Net) that then reports the proficiency levels for individual students and for the class on the NGSS core idea targets and science practices. Figure 9 shows a class level report on proficiencies for core ideas within the three model levels (roles [components], interactions, populations [emergent]) and for the science practices. A segment of the benchmark report for individual students is also shown.

ChemVLab Evidence Model

The ChemVLab project developed input variables aligned to the targets in the student model for each task model. The evidence model was then developed as a set of algorithms that use input variables to generate indicators of mastery. These algorithms are tailored to each input variable, and provide for multiple approaches to problem solving. For example, in a task that includes preparing a solution of a given concentration, input variables for the quantities of each substance are meaningless without a comparison included in the algorithm.
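As an illustration of the kind of comparison just described, the sketch below (hypothetical code, not the ChemVLab scoring algorithms) turns raw input quantities into a mastery indicator only after comparing the resulting concentration to the target, which also allows different combinations of quantities to count as correct.

```python
def concentration_indicator(moles_solute: float, liters_water: float,
                            target_molarity: float, tolerance: float = 0.05) -> bool:
    """Illustrative mastery indicator for a solution-preparation task.
    The raw quantities mean nothing on their own; the indicator depends on
    how close the resulting concentration is to the target."""
    if liters_water <= 0:
        return False
    achieved = moles_solute / liters_water
    return abs(achieved - target_molarity) / target_molarity <= tolerance

# Two different sets of quantities can both satisfy the same 0.5 M target:
print(concentration_indicator(0.25, 0.50, target_molarity=0.5))  # True
print(concentration_indicator(0.10, 0.20, target_molarity=0.5))  # True
print(concentration_indicator(0.10, 0.50, target_molarity=0.5))  # False
```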
Figure 7. Student level progress report for a Life Science Ecosystem embedded assessment about food webs
Individual indicators of mastery are aggregated in order to make inferences about students’ abilities with respect to the targets.

Voyage to Galapagos Evidence Model

VTG uses Bayes’ Nets to monitor when a student needs assistance in applying the relevant science practices for each task. When the probability that a student needs help reaches a threshold value, the assistance system switches on and can provide different levels of assistance. VTG has been used for studies in which five levels of assistance are being examined: (1) no support, (2) error flagging only, (3) error flagging and text feedback on errors, (4) error flagging, text feedback on errors, and hints, and (5) preemptive hints with error flagging, error feedback, and hints. The aim of the study is to learn which levels of assistance work best in an exploratory science learning environment.
For example, in Level 1, students are asked to collect a sample of iguanas from the islands that shows the range of variation among the iguana populations. As they undertake the data collection task by exploring the islands, taking photos of iguanas that they see and saving them to their logbook, a Bayesian Network is used to collect data about student actions and assign probabilities of students having acted in such a way that suggests they are struggling with the task. The Bayes’ Net contains a decision node that, when the probability exceeds a threshold value, turns on the assistance. The Bayes’ Net has three top layers that range from the most general to the most specific—the Knowledge, Skills, and Abilities (KSA) Layer, the Error Evaluation Layer, and the Error Diagnosis Layer. The specific nodes at each of these layers have associated error feedback and hints that are triggered when the nodes
Figure 8. Student level progress report for a Life Science Ecosystem embedded assessment about population dynamics
Table 2. Spectrum of Assistance. The basis for the experimental design is a matrix that crosses Frequency of Intervention with Level of Support.

Level of Support ↓ / Frequency of Intervention → | Never | When Struggling | Always
Error Flagging | Condition 1: No support | Condition 2: Flagging errors when struggling | Skipped condition
Error Flagging + Error Feedback | Condition 1: No support | Condition 3: Flagging errors & providing feedback when struggling | Skipped condition
Error Flagging + Error Feedback + Hints | Condition 1: No support | Condition 4: Flagging errors & providing feedback and hints when struggling | Condition 5: Full support, beginning with a preemptive hint, is always provided
Figure 9. Class level report for Ecosystems benchmark assessment
at the associated level reach a certain threshold. Whether a student receives the feedback or hints is configurable according to (a) what experimental condition they are in and (b), in the case of hints, whether they request help. With assistance configured in this way, the study creates the conditions of assistance that are the focus of the experimental design. The designs of the WestEd STEM projects’ student, task, and evidence models merge two critical affordances of simulations. Dynamic visualizations permit use of cueing and multiple representations of science phenomena that may not be directly seen or investigated. In addition, underlying databases can record learner actions and generate immediate reports, feedback, and customized scaffolding. The projects address deep learning in the form of integrated knowledge and
processes ensconced within meaningful problem situations. The interactive capabilities of simulations support active learning and result in better measurement of inquiry skills than that produced by static test formats (Quellmalz et al., 2013).
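The assistance logic described above amounts to a gated decision that combines a probability from the Bayes’ Net, a threshold, the student’s experimental condition, and, for hints, whether help was requested. The sketch below is a simplified illustration of that gating; the threshold value, function name, and exact rules are ours, not the VTG code.

```python
def assistance_to_offer(p_struggling: float, condition: int,
                        hint_requested: bool, threshold: float = 0.7) -> list[str]:
    """Illustrative gating of assistance, loosely following the five study
    conditions: 1 = no support, 2 = error flagging, 3 = + error feedback,
    4 = + hints (conditions 2-4 act only when struggling), 5 = full support."""
    if condition <= 1:
        return []                           # control condition: never intervene
    struggling = p_struggling >= threshold  # the decision node has fired
    if not (struggling or condition == 5):
        return []                           # conditions 2-4 wait for a struggle signal
    support = ["flag errors"]
    if condition >= 3:
        support.append("error feedback")
    if condition >= 4 and (hint_requested or condition == 5):
        support.append("hint")
    if condition == 5:
        support.insert(0, "preemptive hint")
    return support

print(assistance_to_offer(0.85, condition=4, hint_requested=True))
# ['flag errors', 'error feedback', 'hint']
```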
Research and Development Methods

Overview

The WestEd STEM technology-enhanced learning projects employ systematic, iterative design and development processes. Following the design phase, when learning outcomes are specified, tasks are designed, and evidence models are detailed, the projects seek expert reviews of the science content and assessment tasks. Cognitive labs are conducted with individual students to confirm intended construct validity and usability. Classroom
Figure 10. Assistance Provided in VTG. For condition 4, when assistance is available, errors are flagged and students are provided with feedback and hints when they are struggling.
tryouts then proceed from small-scale feasibility testing to pilot and field testing with progressively larger numbers of students and teachers. In this section we summarize the research and development methods for the WestEd STEM projects.
Implementation Studies

The STEM projects have been implemented in a range of classrooms representing the intended student populations. The sections below summarize data about the projects’ use in classrooms.
SimScientists Implementation

In 2010, a large-scale implementation study was conducted to determine whether simulation-based assessments could be delivered in a wide range of settings (Quellmalz, Timms, Silberglitt, & Buckley, 2012). Over 5,000 students participated in the classrooms of 55 teachers in 39 different schools, from 28 school districts in 3 states. Table 3 shows that this sample represents a wide range of student backgrounds, including students with disabilities and English language learners.
Table 3. Total numbers of English language learners (ELL) and students with disabilities (SWD)

SWD* | ELL | FRL | Caucasian | Hispanic | African-American | Asian | Other**
12% | 6% | 34% | 66% | 13% | 11% | 4% | 6%

*11% IEPs; >1% 504 plans. **Multiracial, Native American, Pacific Islander, or unknown.
The implementation study included two suites developed by the SimScientists program: Ecosystems and Force & Motion. Each teacher used one of the two suites. A total of 3,529 students completed the Ecosystems assessments and 1,936 students completed Force & Motion. Each suite included 2–3 simulation-based, curriculum-embedded assessments with feedback and coaching. Teachers participated in a 1½-day professional development workshop prior to using the suites in their classrooms. In addition to familiarizing teachers with the assessments and reflection activities, this workshop focused on two key components of the implementation: curriculum integration and formative use of assessment data. The process of curriculum integration included several steps before, during, and after the PD workshop. This process was supported by facilitators who helped participants understand the prerequisite knowledge required for each embedded assessment so that teachers would schedule the assessments after the core ideas and practices had been addressed in the teachers’ ongoing curricula. Teachers determined the lesson sequence and the precise timing of the embedded assessments. The purpose of curriculum integration was to ensure that embedded assessments would serve as appropriate “checks for understanding” as well as opportunities for integration of knowledge about the components, interactions, and emergent behavior of each science system and active investigation of the dynamic phenomena. Such knowledge integration and active inquiry remain uncommon in traditional modes of instruction. The process of curriculum integration began with a teacher survey, completed prior to the PD workshops. In this survey, teachers indicated the number of days they planned to teach particular aspects of the topic, including science practices and concepts. Teachers were asked to bring their curricula to the workshops and use their states’ standards to bring together alignments of their curricula and the modules, which had been aligned to each state’s standards during design and development.
The teachers then decided at what points in the unit to insert the embedded assessments. After each of the embedded modules, teachers completed follow-up surveys to indicate how closely the implementation resembled their plan, and whether they used progress reports from embedded modules as formative evidence to adjust their instruction. As with any software technology, there were myriad potential pitfalls on the way to implementation. To address the implementation challenges, the SimScientists team devised protocols for troubleshooting, programmed safeguards to protect against data loss, and provided real-time help to teachers using telephone and email help lines. During the use of the simulation-based, curriculum-embedded modules, teachers were given options to have students work one-to-one with computers, work in teams of two or three, or use a hybrid approach in which students each had their own computer but worked side-by-side to support each other in learning. After implementation of the last embedded assessment and reflection activity, teachers administered the benchmark assessment. The benchmark assessment was designed as a summative assessment, which students completed independently and without the assistance of feedback and coaching. Data from complex interactions and problem-solving patterns were interpreted using Bayes’ Nets, and reports were generated that categorized students’ performance on each assessment target in one of four levels: advanced, proficient, basic, or below basic. In the implementation study, the fidelity of implementation was evaluated by the UCLA National Center for Research on Evaluation, Standards, and Student Testing (CRESST). Following observation of the professional development sessions, the evaluation sampled classrooms to observe as students used the embedded and benchmark simulation-based assessments. Teachers completed surveys describing their ongoing curriculum unit and how they used the simulation-based assessments to monitor their students’ progress and adjust their instruction.
A sample of teachers was interviewed about their perceptions of the feasibility and instructional utility of the simulations. In addition, completion rates documented in the Learning Management System corroborated that students were able to complete the simulation-based assessments in a class period. The CRESST evaluation documented that the simulation-based assessments could be implemented across a wide range of schools with diverse populations, science curricula, and infrastructures (Quellmalz et al., 2012). The evaluation findings suggested that participating in the SimScientists program was beneficial to learning and feasible and useful in middle school classrooms.

ChemVLab Implementation

The ChemVLab project has been implemented extensively. In one study, 13 teachers and 1,334 secondary students used four ChemVLab activities. Students completed pre- and posttests as well as the modules. Teachers participated in a 3-hour professional development workshop and completed surveys during the implementation. Researchers conducted classroom observations and collected student demographic information (Davenport et al., 2014). Findings from this research are anticipated in a manuscript currently in draft.

Voyage to Galapagos Implementation

In the early development phase, the project conducted cognitive labs with 12 students to identify usability issues and establish initial construct validity. Initial classroom feasibility testing was then conducted with 7th grade classes (161 students) in two schools. Data were collected in the LMS as students worked through the application. The interactions and communication between the flash-based application, the LMS database, and the Bayes’ Net were demonstrated to operate effectively in providing real-time feedback and assistance
to students. The cognitive labs and classroom observations indicated that students with greater assistance advanced further through the tasks. Pilot studies were conducted in two schools with 258 7th grade students. The software was embedded within the normal classroom lessons and used as a curriculum supplement. Students completed pre- and post tests and had three class periods to use VTG. In addition to using the VTG software, teachers completed two 2-hour professional development sessions that provided guidance for embedding the software in their curriculum. They also participated in interviews following the implementation. Classroom observations and student demographic information along with LMS data and selected case studies were analyzed to validate and refine the Bayes’ Nets that provide assistance for the different research conditions (Brenner, Timms, McLaren, Brown, Weihnacht, Grillo-Hill, et al., 2014).
Technical Quality

Evaluations of the technical quality of the WestEd STEM projects combine qualitative and quantitative methods. Specifications of significant content and skills document alignments to national standards and frameworks. External experts review alignments and grade-level appropriateness of task features. Think alouds and classroom trials provide data on reliability and validity. The sections below summarize the projects’ technical quality studies.

SimScientists Technical Quality

The quality and validity of the SimScientists simulations have been documented for multiple topics, in multiple projects by employing established evaluation methodologies: alignment with national standards for science, expert review of scientific content and task and item quality by the
American Association for the Advancement of Science (AAAS), cognitive analyses of students’ thinking aloud, and analyses of teacher and student data gathered from classroom testing (AERA/APA/NCME, 2014; Pellegrino, 2002; Quellmalz et al., 2012; Quellmalz et al., 2005). Technical quality of the SimScientists assessments was established by standard measures of reliability and by gathering evidence of validity from a variety of sources. Independent, expert reviews of task alignments with science standards, accuracy of science system models, and grade-level appropriateness established initial construct validity of the simulation-based tasks prior to programming. Once programmed versions were developed, researchers administered the assessments to individuals, including both students and teachers, asking examinees to think aloud while completing the tasks. Recordings of the computer screen, together with audio, were reviewed by content experts for further evidence of validity, as well as usability of the interface. Tasks were subsequently revised as needed to improve their validity. To establish the validity of the classifications in the embedded reports, a one-way ANOVA was conducted using scores on the simulation-based benchmark. Standard psychometric analyses were conducted for the summative benchmark assessments. For the Ecosystems and Force & Motion benchmark assessments, which include a variety of dichotomous and polytomous items of various formats, Cronbach’s alpha was 0.76 and 0.73, respectively. To establish the validity of the benchmark scores, correlations were measured between the simulation-based benchmark assessments and a set of traditional multiple-choice items aligned to the same assessment targets and administered to students in tandem with the benchmark assessments. Correlations were moderate (0.57 to 0.64), showing that the two types of assessments measured similar constructs, but the measures were not exactly the same. Further, correlations between the dimensions of science practice and
content were lower within each benchmark (0.70 and 0.80) than within each set of posttest items (0.85 and 0.92), suggesting that the simulation-based benchmark assessments were better for detecting differences between students’ abilities in each dimension (Quellmalz et al., 2012).

ChemVLab Technical Quality

Analyses were conducted using data on student engagement and learning in the ChemVLab activities, including classroom observations, pre- and posttests, logs of students’ interactions with the online activities, and interviews with teachers (Davenport et al., 2012, 2014). Classroom observations recorded that students stayed on task while using the virtual lab, and that discussions between students focused on the content of the activities. Students’ scores improved between pre- and posttest administrations of a measure composed of released items from an American Chemical Society exam and researcher-developed items. Data mining of the log files from students’ interactions and problem-solving processes revealed changes in student behavior over the course of each activity. Evidence included comparisons between parallel tasks, in which students needed fewer attempts to complete later tasks, and were less likely to pursue incorrect lines of investigation in the virtual lab, such as continuing to add water to a solution after the target concentration had been reached. During interviews, teachers indicated that the activities were feasible for classroom use and helpful to improve students’ abilities. Analyses of the reliability and validity of the activities as assessments themselves are currently underway.

Voyage to Galapagos Technical Quality

The critical interactions and communication between the flash-based application, the LMS database, and the Bayes’ Net were demonstrated to operate effectively in providing real-time feedback and assistance to students. Analyses were conducted on data gathered from multiple
classroom feasibility studies in the 7th grade classes in two schools with 260 students. The coded data from the LMS, Bayes’ Net, cognitive labs, and classroom observations revealed that the different experimental conditions could be distinguished and that students with greater assistance advanced further through VTG than those with less assistance. A randomized controlled study in the classrooms of 12 teachers is underway to help understand how much guidance students need as they learn, how to tailor guidance to students’ prior knowledge, and thus how to design software that best supports student learning.
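For reference, the internal-consistency statistic reported above for the SimScientists benchmark assessments, Cronbach’s alpha, can be computed from an item-by-student score matrix as in the minimal sketch below; this is the standard formula applied to made-up scores, not the projects’ analysis code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (students x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Made-up scores for 6 students on 4 dichotomous/polytomous items.
demo = np.array([[1, 2, 1, 0],
                 [2, 2, 1, 1],
                 [0, 1, 0, 0],
                 [2, 3, 1, 1],
                 [1, 1, 0, 0],
                 [2, 3, 1, 1]])
print(round(cronbach_alpha(demo), 2))
```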
Impacts on Learning

SimScientists Impacts

To study whether the simulation-based curriculum-embedded assessments, intended to provide formative assessment and adjusted instruction, had positive impacts on learning, in 2012 a cluster-randomized controlled study was conducted in the classrooms of 26 teachers, with 2,318 students. Each teacher’s classes were randomly assigned to one of two conditions: the treatment condition, which included a suite of simulation-based assessments and off-line classroom reflection activities embedded into a teacher’s regular instruction, a simulation-based benchmark assessment, and a traditional multiple-choice pre- and posttest, or the control condition, which included the same number of days of instruction, with only the
simulation-based benchmark assessment and the pre- and posttests. Effect sizes were determined using a two-level HLM with terms for the nesting of students within classes and classes within teachers. As shown in Table 4, based on ability estimates from posttests composed of traditional multiple-choice items, treatment effects (the effects of the embedded, formative assessments) were small but significant overall for the Ecosystems suite, and within each suite, for inquiry in Ecosystems and for content in Atoms & Molecules. Given that students experienced the simulation-based embedded assessments only two or three times during multi-week units, the effects supported the promise of the active inquiry, individualized feedback and coaching in the simulation-based assessments and the additional reinforcement and adjusted instruction in the subsequent reflection activities for promoting progress, particularly on inquiry practices. Table 5 shows that, based on ability estimates from the simulation-based benchmark assessments, treatment effects were small to moderate and statistically significant overall and for each dimension. These data documented the benefit of formative use of the simulation-based embedded assessments. They also provide evidence that such effects are more likely to be detected by measures that employ similar formats, compared to more traditional tests (Quellmalz, Timms, Buckley, Loveland, & Silberglitt, 2012; Quellmalz, Silberglitt, Timms, Buckley, Loveland, & Brenner, 2012).
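The nested analysis described above can be approximated with off-the-shelf mixed-model tooling. The sketch below is illustrative only, not the study’s analysis code: it fits a model with a fixed treatment effect, a random intercept for teachers, and a variance component for classes nested within teachers, using simulated data and made-up column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: 40 students nested in 8 classes nested in 4 teachers.
rng = np.random.default_rng(0)
teachers = np.repeat([f"t{i}" for i in range(4)], 10)
classes = np.repeat([f"c{i}" for i in range(8)], 5)
treatment = np.repeat([0, 1, 0, 1, 1, 0, 1, 0], 5)
posttest = (0.5 + 0.1 * treatment
            + np.repeat(rng.normal(0, 0.05, 4), 10)   # teacher effects
            + np.repeat(rng.normal(0, 0.05, 8), 5)    # class effects
            + rng.normal(0, 0.10, 40))                # student-level noise
df = pd.DataFrame({"posttest": posttest, "treatment": treatment,
                   "teacher": teachers, "class_id": classes})

# Random intercepts for teachers (groups) and for classes nested within
# teachers (variance component); treatment is the fixed effect of interest.
model = smf.mixedlm("posttest ~ treatment", df, groups=df["teacher"],
                    re_formula="1", vc_formula={"class_id": "0 + C(class_id)"})
print(model.fit().summary())
```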
Table 4. Treatment effects based on posttest ability estimates for ecosystems and atoms & molecules

           Ecosystems                          Atoms & Molecules
           Treatment Effect Size   p value     Treatment Effect Size   p value
Overall    0.061                   0.047       0.075                   0.079
Content    0.057                   0.082       0.106*                  0.019
Inquiry    0.092*                  0.020       0.029                   0.597

* Significant at p < .05
Table 5. Treatment effects based on benchmark ability estimates for ecosystems and atoms & molecules

           Ecosystems                          Atoms & Molecules
           Treatment Effect Size   p value     Treatment Effect Size   p value
Overall    0.286*                  < 0.0001    0.390*                  < 0.001
Content    0.148*                  0.0005      0.327*                  < 0.001
Inquiry    0.297*                  < 0.0001    0.498*                  < 0.001

* Significant at p < .05
English Learners and Students with Disabilities

To evaluate the benefits of simulation-based assessments for English language learners (ELLs) and students with disabilities (SWDs), performance of each focal group was compared to the general population on the simulation-based benchmark assessments and on posttests. Total numbers of each sample are listed for two topics, ecosystems and force & motion, in Table 6 below. Analyses of performance on each assessment found significant differences between performances of each focal group and the general population. Although performances on all assessments were lower for the focal groups, gaps in performance were smaller on the simulation-based benchmark assessments than gaps on the multiple-choice posttests. Figure 11 compares the average percent correct on each assessment for ELLs and SWDs. Also included are comparisons of average scale scores on four administrations of the 8th grade NAEP science assessment (Quellmalz & Silberglitt, 2011). These data provide evidence for the benefits of the simulations for assessment. These benefits include presentation formats with multiple representations and response formats that engage
students in active investigations and on-screen manipulations. These modes provide alternatives to text.
Multilevel Assessment Systems

Science simulations are being included in national and international tests as means for measuring students’ proficiencies in science practices. Although large-scale testing programs have limited time for administration, simulations designed for classroom use can offer opportunities for more extended investigations with individualized feedback and coaching. Reports can be used formatively by teachers for adjusting instruction during curriculum units. WestEd SimScientists projects are developing science assessment system designs for formative and summative purposes at multiple levels of the educational system in which variants of templates for simulation environments can be used in classrooms during and at the end of units, and in district, state, and national assessments—templates for observing phenomena at different scales, building models of a science system, and conducting investigations of the emergent behaviors resulting
Table 6. Total numbers of English language learners (ELLs) and students with disabilities (SWD)

Group | Ecosystems Posttest | Ecosystems Benchmark | Force & Motion Posttest | Force & Motion Benchmark
English learners | 123 | 126 | 50 | 50
Students with disabilities | 183 | 189 | 153 | 153
Figure 11. Gaps between ELL, SWD and the general population
from interactions among system components. By taking advantage of principled specifications for simulation-based assessments, coherent, vertically articulated science assessment systems can be achieved.
In the large-scale implementation study described above, the primary goal was to determine the suitability of simulation-based assessments for a state science assessment system and to describe
models for incorporating them (Quellmalz et al., 2012). The simulation-based assessments consisted of two or three curriculum-embedded assessments with feedback and coaching to be used by the teacher as formative resources to adjust instruction. An end-of-unit benchmark assessment, without feedback and coaching, was designed to serve as a summative measure of the state of students’ proficiencies on specified core ideas and science practices. A six-state Design Panel reviewed the study findings supporting the technical quality, feasibility, and utility of the benchmark assessments and judged that the SimScientists simulation-based assessments could serve as credible components of a state science assessment system. Interviews of state representatives by CRESST (Herman, Dai, Htut, Martinez, & Rivera, 2011) documented positive feedback overall. The state representatives reported that the SimScientists assessments worked well, and that teachers were willing to participate. The state representatives, impressed with teachers’ reactions and the nature of the assessments and associated reflection activities, encouraged development and implementation in additional science topics and in subject areas beyond science, such as mathematics. The six states on the Design Panel collaborated with WestEd to formulate two models for states to use simulation-based science assessments. The models aimed to describe how simulation-based assessments could become part of balanced state assessment systems, at the classroom, district, and state levels, with common designs that would make them mutually reinforcing (Pellegrino et al., 2001; Quellmalz & Moody, 2004). The two models created combinations of simulation-based science assessments that would be coherent with each other, comprehensive in coverage of state science standards, and provide continuity of assessments through multiple forms and occasions.
The two models proposed included using classroom assessment proficiency data to augment state reports and using a sample of simulation-based “signature tasks,” parallel to those in the benchmarks, to administer as part of state or district tests. Figure 12 presents a sample report that could be generated in the “Side-by-Side” model in which data at the state, district, and classroom levels are mutually aligned and complementary. District and classroom assessments can provide increasingly rich sources of information, allowing a fine-grained and more differentiated profile of a classroom, school, or district that includes aggregate information about students at each level of the system. In this “Side-by-Side” model, the unit benchmark assessments can function as multiple measures administered after science units during the school year, providing a continuity of in-depth, topic-specific “interim” or “through-course” measures that are directly linked in time and substance to units on science systems. Figure 13 portrays the “Signature Task” model in which states and districts draw upon specifications and rich simulation environments developed for classroom assessments to create a new, parallel set of tasks. These signature tasks could be administered in a matrix sampling design during the state or district testing to collect data on inquiry practices and integrated knowledge not fully measured by traditional item formats on the state test. For example, the first task in each row shows a signature task for investigating the effect of forces on objects. On the state test, the object is a train. On the classroom assessment, the object is a fire truck. The masses, forces, and results of the investigations vary between the parallel tasks, but the simulation interface and the task structure are otherwise identical. This model assures coherence of task types across different levels of the assessment system. The two models can provide a template for states to
Figure 12. Side-by-Side Model, showing how data reported from unit benchmark assessments can augment information from district and state science reports
Figure 13. Signature Task model, showing how parallel tasks can be developed for state and classroom assessments
begin moving closer to the goal of a system for state science assessment that provides meaningful information drawn from a system of nested assessments collected across levels of the educational system. In two recent SimScientists projects, a life science and a physical science strand of assessment suites are being developed for multiple units within a grade level. Each suite consists of simulation-based, curriculum-embedded assessments for formative use and end-of-unit benchmark assessments for summative evidence. Sets of simulation-based signature tasks are being developed from the template specifications used for the curriculum-embedded and benchmark assessments. End-of-year assessments are being developed for the life science and physical science strands that will consist of sets of simulation-based signature tasks. These studies of the implementation, technical quality, and impacts on learning of the WestEd STEM projects provide evidence of the value of simulations for promoting and assessing science learning. Coupled with a principled approach to the design of simulation-based learning environments, the rigorous development and validation process can serve as a strong model for the design and empirical study of other technology-enhanced projects.
FUTURE DIRECTIONS

The WestEd STEM projects are conducting further research on the impacts of simulations on learning and assessment. Projects are also extending simulation designs into other genres of technology-enhanced learning environments, including 3D simulations and games.
Research Directions

Learning Progressions

Two SimScientists projects are beginning to investigate the affordances of simulations for supporting and assessing the development of students’ understanding of system models of natural and engineered phenomena studied throughout the school year. The Model Progressions project targets middle school students’ understanding of genetics, evolution, and ecosystems as well as their ability to use genetics models to reason about evolution and ecosystems and the interactions among the three topics. The SimScientists Crosscutting Concepts: Progressions in Earth Systems project aims to investigate learning trajectories for three crosscutting concepts (scale, cycles, systems) and will study development of these learning trajectories across three middle school Earth science topics (geosphere, climate, ecosystems). Learning progressions should focus on foundational and generative ideas and practices of the domain, be grounded in research, possess an internal conceptual coherence, and be empirically testable (Corcoran, Mosher, & Rogat, 2009; Duncan & Hmelo-Silver, 2009). Like Songer and colleagues (2009), the SimScientists projects are investigating learning progressions from a disciplinary perspective. The focus is on foundational systems of a domain and science practices that enable learners or scientists to generate and test hypotheses. In model-based learning terms, learning progressions describe pathways by which learners’ mental models of dynamic phenomena become more complex, accurate, and interconnected as they approximate the targeted system model. In classrooms, the path is shaped by the curriculum for the year and the sequence in which topics are addressed.
Simulations provide learners with opportunities to interact with representations of phenomena. SimScientists modules scaffold learners’ interactions with the simulations and provide an instructional pathway for the elaboration of learners’ mental models as well as the development of science practices that further learners’ ability to develop and use simulations to understand complex systems. SimScientists modules enable students to build, test and revise their models. Current projects are exploring how to help learners connect systems across topics in the life and physical sciences, for example, how at the emergent level, organisms’ genetics lead to a variety of traits that interact and evolve within ecosystems. These projects are also exploring how evidence models can detect patterns of learner responses that might characterize learning progressions.
Development Directions

Simulations offer enormous potential for representing significant dynamic phenomena in science, social science, arts, and humanities. The technology can display and overlay phenomena that change in scale, time, and distance. In science, simulations can juxtapose microscopic and macroscopic representations and local, global, and galactic phenomena. In social science, simulations can slide back and forth in time and from place to place. In art and the humanities, simulations can embed visual arts into cultural and historical contexts, and fast-forward performances. To date, the SimScientists simulation-based assessments embedded within a unit have had small but significant impacts on science learning, most importantly on the use of inquiry practices. The logistics of computer availability and teachers’ pacing guides limited the number of periods that teachers could schedule access to computers during a unit. As the SimScientists projects develop strands of the simulation-based assessment suites for additional units, further research can seek evidence of potentially stronger impacts on learning,
particularly improvement in inquiry practices, over multiple units across the school year.
In Touch With Molecules

In the In Touch With Molecules project (molecules.wested.org), collaborators at The Scripps Research Institute and WestEd are using physical models to represent biological structures and to simulate the functions that emerge from interactions among these structures, from individual nucleotides in DNA to viral capsids composed of many hundreds of proteins and the genetic material they encapsulate. This project builds upon the groundbreaking work of Dr. Arthur Olson, who leads the Molecular Graphics Laboratory at Scripps. In his lab, components of the physical models are created with 3D printers, embedded with magnets, and assembled into articulating models with conformational preferences. The lab has also developed augmented reality that merges the physical world with computer graphics, tracking interactions of the physical models through the camera in a mobile device and combining the images on the device’s screen. The In Touch With Molecules project is integrating model use into teaching and learning in a range of contexts, from 9th-grade general biology to graduate courses. Each activity scaffolds the initial interactions with the model, challenges students to make predictions about phenomena that can be simulated with the model, and then scaffolds the process of using the model to test these predictions. The activities also challenge students to make connections between the model and the actual molecules and processes it can be used to represent, and to recognize the model’s affordances. The goal of the learning is to be able to explain how interactions between the components of biological molecules give rise to more complex structures and associated functions. For example, in the DNA model, components give rise to structural properties, such as the helical
shape of a single strand of DNA, and to interactions between structures, such as complementary base pairing that brings two strands together in a double helix. Magnets in the models simulate these interactions: the forces of attraction between the two strands. The simulated forces can be felt when the model is assembled, and again when pulling apart the two strands to simulate the “unzipping” of DNA that precedes replication and gene expression. Through a simulated process of replication, students can gain first-hand experience with the structural and functional consequences of complementarity between DNA strands and the semi-conservative nature of DNA replication. Rather than simply constructing the final, double-stranded model as a puzzle to be assembled in an arbitrary way, the task asks students to consider how each component is added during the process of DNA replication. Table 7 below shows how the task model for DNA replication integrates model construction and use with aspects of the content, including the structure and function of DNA. This task scaffolds the simulation of an important bio-molecular process, and simultaneously prompts students to consider how function arises from structure. The In Touch with Molecules project is developing evidence models of student understanding by exploring the data and generating hypotheses about how interactions in the activities can be interpreted. Evidence of conceptual understanding
is gathered by documenting how students use the models to answer questions and test predictions as they simulate processes. For example, students produce video recordings of the replication process as simulated with the DNA model. In the future, tracking capabilities that augment the video recording could be used to capture students’ interactions with the model. An evidence model could then be employed for interpreting interactions, providing feedback to students and teachers, and monitoring student progress in mastering the relevant targets.
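The complementarity that the physical model makes tangible is simple to state computationally. The short sketch below is illustrative only; it builds the complementary strand one nucleotide at a time, mirroring the step-by-step construction in the replication task shown in Table 7, and notes the semi-conservative pairing of old and new strands.

```python
# Watson-Crick base-pairing rules: A pairs with T, C pairs with G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(template: str) -> str:
    """Build the complementary strand one nucleotide at a time,
    as in the step-by-step model-construction task."""
    new_strand = []
    for base in template:
        new_strand.append(PAIRS[base])  # each added base is dictated by the template
    return "".join(new_strand)

parental_a = "ATGCC"
parental_b = complementary_strand(parental_a)  # the other parental strand
print(parental_b)  # TACGG

# Semi-conservative replication: each daughter helix keeps one parental strand
# paired with a newly synthesized complement.
daughters = [(parental_a, complementary_strand(parental_a)),
             (parental_b, complementary_strand(parental_b))]
print(daughters)
```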
SimScientists Games

The rapidly growing field of educational games is a particularly promising and logical direction for extending the design of simulation-based learning environments. Scientifically principled simulations can provide models and laboratories for investigating systems in the natural and designed world. System models can become digital environments for “serious” games that address the Next Generation Science Standards. Games are seen as a promising strategy for immersing students in the excitement of doing science. Educational games can offer a sharp contrast to the prevailing activity structure in U.S. science classrooms characterized as “motivate, inform, and assess,” treating science as a “final form” of solved problems and theories to be transmitted
Table 7. Task model for DNA replication

Step in Model Construction: Add one nucleotide to begin the complementary strand.
Integrating Task: How did you know which base to add? Explain how features of the model support your answer.

Step in Model Construction: Add a second nucleotide to continue forming the complementary strand.
Integrating Task: The hydrogen bonds in the new base pair form first, before the covalent bond in the backbone. Explain why the bonds form in this order. (Hint: what would happen if they didn’t?)

Step in Model Construction: Add a third nucleotide to the complementary strand.
Integrating Task: Will the template strand and complementary strand of DNA be identical? Explain how features of the model support your answer.

Step in Model Construction: Add a fourth nucleotide to the complementary strand.
Integrating Task: Each strand of DNA is considered to be a polymer. What is the monomer of DNA? Explain how features of the model support your answer.

Step in Model Construction: Add the fifth nucleotide to the complementary strand.
Integrating Task: Bases can pair in other ways than by the base-pairing rules you learned in class. How do incorrect pairs affect the structure of DNA? Use the model to find at least two ways.
(Duschl, Schweingruber, & Shouse, 2007; Linn & Eylon, 2006). Such static transmission of science not only fails to promote deep science learning, but also squelches students’ interest in the study of science. Reports by the NRC and others have summarized the potential of games to enhance motivation, conceptual understanding, and science process skills, but noted that much more research is needed on game design and learning impacts (Honey & Hilton, 2011; Martinez-Garza, Clark, & Nelson, 2013; Quellmalz et al., 2009). Games are renowned for their appeal, but also for their dearth of focus on educationally significant, deep knowledge, strategic problem solving, and research-based mechanisms to promote or assess academic learning. To date, evidence of valued science learning is patchy, but studies are emerging about the benefits of games for science learning (Clark, Tanner-Smith, & Killingsworth, 2014; Honey & Hilton, 2011; Quellmalz, Timms, & Schneider, 2009). Research from cognitive learning, model-based reasoning, achievement motivation, and evidence-centered assessment design can be merged with the conventions of game design to produce activities that make learning effective and fun. The SimScientists simulation-based supplementary curriculum-embedded assessments could be employed to conduct further research and development on how a new genre of cognitively principled science learning games can promote, assess, reinforce, and extend deep science learning and also harness gameplay to motivate and engage. To foster learning, game features would include a focus on clear learning goals, compelling narrative quests, a balance of challenge and scaffolding with just-in-time feedback, hints, and explanation, adaptive problems, visual concrete and idealized representations, and user control (Clark, Nelson, Sengupta, & D’Angelo, 2009; Moreno & Mayer, 2005; Salen & Zimmerman, 2003; Squire, 2006). Within a game, students would take the role of empowered actors who must actively apply content knowledge and science practices to achieve
a goal (Barab, Gresalfi, & Ingram-Goble, 2010). The games would provide adaptive levels of difficulty that challenge and engage students without interrupting the flow of play (Shute, Rieber, & Van Eck, 2011; Gee, 2007), and the scaffolding and engagement needed for students to engage in important science practices called for in the NGSS (Clark et al., 2012; Kafai, Quintero, & Felton, 2010; Squire & Jan, 2006; Steinkuehler & Duncan, 2008). In addition to outcomes for science concepts and practices, games would promote and assess 21st century skills such as collaboration; the game platforms could allow massive multiplayer games that promote collaborative problem solving.
CONCLUSION

This chapter describes how research can inform the design of simulations that model science systems with the aim of promoting understanding of core ideas about systems in the natural and designed world along with the application of science and engineering practices to study and learn about these systems. The lines of research inform the design and linking of specified knowledge and skills, tasks for representing science phenomena and for eliciting observations of students’ understanding of core ideas and practices, and alignments of goals and tasks to elicit and evaluate evidence of learning, and to report it. Design principles derived from research and best practice inform designs of simulation-based environments to promote and assess science learning, along with research methods for evaluating the quality and validity of simulation projects. The findings from empirical work in schools demonstrate the projects’ technical quality and their impacts on learning. When guided by findings from learning research, technology-enhanced environments using simulations can fundamentally transform science education, and provide future directions for research and development.
REFERENCES Ainsworth, S. (2008). The educational value of multiple-representations when learning complex scientific concepts. In J. K. Gilbert, M. Reiner, & M. Nakhleh (Eds.), Visualization: Theory and practice in science education (pp. 191–208). Dordrecht: Springer. doi:10.1007/978-1-40205267-5_9 Barab, S. A., Gresalfi, M., & Ingram-Goble, A. (2010). Transformational play using games to position person, content, and context. Educational Researcher, 39(7), 525–536. doi:10.3102/0013189X10386593 Betrancourt, M. (2005). The animation and interactivity principles in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 287–296). New York, NY: Cambridge University Press. doi:10.1017/ CBO9780511816819.019 Bransford, J., Brown, A., & Cocking, R. (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. Brenner, D. G., Timms, M., McLaren, B. M., Brown, D. H., Weihnacht, D., Grillo-Hill, A., . . . Li, L. (2014). Exploring the Assistance Dilemma in a Simulated Inquiry Learning Environment for Evolution Theory. Paper presented at the 2014 Annual Meeting of the American Education Research Association, Philadelphia, PA. Buckley, B., Gobert, J., Kindfield, A., Horwitz, P., Tinker, R., Gerlits, B., & Willett, J. et al. (2004). Model-based teaching and learning with BioLogica™: What do they learn? How do they learn? How do we know? Journal of Science Education and Technology, 13(1), 23–41. doi:10.1023/B:JOST.0000019636.06814.e3 Buckley, B. C. (2012). Model-based teaching. In M. Norbert (Ed.), Encyclopedia of the sciences of learning (pp. 2312–2315). New York: Springer.
Case, B. J. (2008). Accommodations to improve instruction and assessment. In R. C. Johnson & R. E. Mitchell (Eds.), Testing deaf students in an age of accountability. Washington, DC: Gallaudet Research Institute. Case, B. J., Brooks, T., Wang, S., & Young, M. (2005). Administration mode comparability study for Stanford diagnostic Reading and Mathematics tests. San Antonio, TX: Harcourt Assessment, Inc. CAST. (2008). Universal design for learning guidelines version 1.0. Wakefield, MA: Author. Clark, D., Nelson, B., Sengupta, P., & D’Angelo, C. (2009, October). Rethinking science learning through digital games and simulations: Genres, examples, and evidence. Paper presented at Learning science: Computer games, simulations, and education: Workshop conducted from the National Academy of Sciences, Washington, DC. Clark, D., Tanner-Smith, E., & Killingsworth, S. (2014). Digital Games, Design and Learning: A Systematic Review and Meta-Analysis (Executive Summary). Menlo Park, CA: SRI International. Clark, D. B., Martinez-Garza, M. M., Biswas, G., Luecht, R. M., & Sengupta, P. (2012). Driving assessment of students’ explanations in game dialog using computer-adaptive testing and hidden Markov modeling. In D. Ifenthaler, D. Eseryel, & G. Xun (Eds.), Assessment in game-based learning: Foundations, innovations, and perspectives (pp. 173–199). Springer New York. doi:10.1007/9781-4614-3546-4_10 Clark, R. C., & Mayer, R. E. (2011). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning. Wiley.com. Clement, J., & Rea-Ramirez, M. A. (2008). Model based learning and instruction in science. Dordrecht: Springer. doi:10.1007/978-1-4020-6494-4
College Board. (2009). Science: College Boards standards for college success. Retrieved from http://professionals.collegeboard.com/profdownload/cbscs-science-standards-2009.pdf Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). London: Routledge. Corcoran, T., Mosher, F. A., & Rogat, A. (2009). Learning progressions in science: An evidencebased approach to reform. CPRE Research Report# RR-63. Philadelphia: Consortium for Policy Research in Education. Davenport, J. L., Rafferty, A., Timms, M. J., Yaron, D., & Karabinos, M. (2012). ChemVLab+: Evaluating a Virtual Lab Tutor for High School Chemistry. The Proceedings of the 2012 International Conference of the Learning Sciences, (pp. 381–385). Academic Press. Davenport, J. L., Rafferty, A., Yaron, D., Karabinos, M., & Timms, M. (April, 2014). ChemVLab+: Simulation-based Lab activities to support chemistry learning. Paper presented at the 2014 Annual Meeting of the American Educational Research Association, Philadelphia, PA. Duncan, R. G., & Hmelo‐Silver, C. E. (2009). Learning progressions: Aligning curriculum, instruction, and assessment. Journal of Research in Science Teaching, 46(6), 606–609. doi:10.1002/ tea.20316 Duschl, R. A., Schweingruber, H. A., & Shouse, A. W. (2007). Taking science to school: Learning and teaching science in grades k–8. Washington, DC: The National Academies Press. Gee, J. P. (2007). What video games have to teach us about learning and literacy (2nd ed.). Palgrave Macmillan.
Gobert, J. D., & Buckley, B. C. (2000). Introduction to model-based teaching and learning in science education. International Journal of Science Education, 22(9), 891–894. doi:10.1080/095006900416839 Gobert, J. D., & Clement, J. J. (1999). Effects of student-generated diagrams versus studentgenerated summaries on conceptual understanding of causal and dynamic knowledge in plate tectonics. Journal of Research in Science Teaching, 36(1), 39–53. doi:10.1002/(SICI)10982736(199901)36:13.0.CO;2-I Goldstone, R. L. (2006). The complex systems see-change in education. Journal of the Learning Sciences, 15(1), 35–43. doi:10.1207/ s15327809jls1501_5 Goldstone, R. L., & Wilensky, U. (2008). Promoting transfer through complex systems principles. Journal of the Learning Sciences, 17(4), 465–516. doi:10.1080/10508400802394898 Grotzer, T. A. (2003). Learning to understand the forms of causality implicit in scientifically accepted explanations. Studies in Science Education, 39(1), 1–74. doi:10.1080/03057260308560195 Hegarty, M. (2004). Dynamic visualizations and learning: Getting to the difficult questions. Learning and Instruction, 14(3), 343–351. doi:10.1016/j. learninstruc.2004.06.007 Herman, J., Dai, Y., Htut, A. M., Martinez, M., & Rivera, N. (2011). Evaluation of the enhanced assessment grants (EAG) SimScientists program: Site visit findings. Los Angeles: CRESST. Hmelo-Silver, C. E., Jordan, R., Liu, L., Gray, S., Demeter, M., Rugaber, S., & Goel, A. et al. (2008). Focusing on Function: Thinking below the Surface of Complex Natural Systems. Science Scope, 31(9), 27–35.
Honey, M. A., & Hilton, M. (Eds.). (2011). Learning science through computer games and simulations. Washington, DC: National Academies Press. Ioannidou, A., Repenning, A., Webb, D., Keyser, D., Luhn, L., & Daetwyler, C. (2010). Mr. Vetro: A Collective Simulation for teaching health science. International Journal of ComputerSupported Collaborative Learning, 5(2), 141–166. doi:10.1007/s11412-010-9082-8 Kafai, Y. M., Quintero, M., & Felton, D. (2010). Investigating the ‘why’ in Whypox: Casual and systematic explorations of a virtual epidemic. Games and Culture, 5(1), 116–135. doi:10.1177/1555412009351265 Krajcik, J., Marx, R., Blumenfeld, P., Soloway, E., & Fishman, B. (2000, April). Inquiry based science supported by technology: Achievement among urban middle school students. Paper presented at the 2000 Annual Meeting of the American Educational Research Association, New Orleans, LA. Kriz, S., & Hegarty, M. (2007). Top-down and bottom-up influences on learning from animations. International Journal of Human-Computer Studies, 65(11), 911–930. doi:10.1016/j. ijhcs.2007.06.005 Kühl, T., Scheiter, K., Gerjets, P., & Edelmann, J. (2011). The influence of text modality on learning with static and dynamic visualizations. Computers in Human Behavior, 27(1), 29–35. doi:10.1016/j. chb.2010.05.008
Lehrer, R., Schauble, L., Strom, D., & Pligge, M. (2001). Similarity of form and substance: Modeling material kind. In S. M. Carver & D. Klahr (Eds.), Cognition and instruction: Twentyfive years of progress (pp. 39–74). Mahwah, NJ: Lawrence Earlbaum Associates. Linn, M. C., & Eylon, B. S. (2006). Science education: Integrating views of learning and instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (pp. 511–544). New York, NY: Routledge. Lowe, R., & Schnotz, W. (Eds.). (2008). Learning with animation: Research implications for design. New York, NY: Cambridge University Press. Martinez-Garza, M. M., Clark, D., & Nelson, B. (2013). Advances in Assessment of Students’ Intuitive Understanding of Physics through Gameplay Data. [IJGCMS]. International Journal of Gaming and Computer-Mediated Simulations, 5(4), 1–16. doi:10.4018/ijgcms.2013100101 Mayer, R. E. (2005). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New York, NY: Cambridge University Press. doi:10.1017/CBO9780511816819.004 Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13–23. doi:10.3102/0013189X023002013
Law, N., Pelgrum, W., & Plomp, T. (2008). Pedagogy and ICT use in schools around the world: Findings from the IEA SITES 2006 study. New York: Springer. doi:10.1007/978-1-4020-8928-2
Mislevy, R. J., Almond, R. G., & Lukas, J. (2004). A brief introduction to evidence-centered design. CSE Technical Report. Los Angeles: The National Center for Research on Evaluation, Standards, Student Testing (CRESST), Center for Studies in Education, UCLA.
Lee, H., Plass, J. L., & Homer, B. D. (2006). Optimizing cognitive load for learning from computer-based science simulations. Journal of Educational Psychology, 98(4), 902–913. doi:10.1037/0022-0663.98.4.902
Mislevy, R. J., & Haertel, G. D. (2006). Implications of evidence‐centered design for educational testing. Educational Measurement: Issues and Practice, 25(4), 6–20. doi:10.1111/j.17453992.2006.00075.x
225
Simulations for Supporting and Assessing Science Literacy
Moreno, R., & Mayer, R. E. (2005). Role of guidance, reflection, and interactivity in an agentbased multimedia game. Journal of Educational Psychology, 97(1), 117–128. doi:10.1037/00220663.97.1.117
Perkins, D. N., & Grotzer, T. A. (2000). Models and moves: Focusing on dimensions of causal complexity to achieve deeper scientific understanding. Retrieved from http://files.eric.ed.gov/ fulltext/ED441698.pdf
National Research Council (NRC). (2012a). A framework for k–12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: National Academies Press.
Phillips, S. E. (1993). Legal implications of highstakes assessment: What states should know. Retrieved from http://files.eric.ed.gov/fulltext/ ED370985.pdf
National Research Council (NRC). (2012b). Next generation science standards. Available at: http://www.nextgenscience.org/next-generationscience-standards
Plass, J. L., Homer, B. D., & Hayward, E. O. (2009). Design factors for educationally effective animations and simulations. Journal of Computing in Higher Education, 21(1), 31–61. doi:10.1007/ s12528-009-9011-x
Nersessian, N. J. (2008). Creating scientific concepts. Cambridge, MA: MIT Press. Pashler, H., Bain, P. M., Bottge, B. A., Graesser, A., Koedinger, K., McDaniel, M., & Metcalfe, J. (2007). Organizing instruction and study to improve student learning. Retrieved from http://ies. ed.gov/ncee/wwc/pdf/practice_guides/20072004. pdf Pellegrino, J. W. (2002, February). Understanding how students learn and inferring what they know: Implications for the design of curriculum, instruction and assessment. In M. J. Smith (Ed.), NSF k–12 mathematics and science curriculum and implementation centers conference proceedings (pp. 76–92). Washington, DC: National Science Foundation and American Geological Institute. Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press. Pellegrino, J. W., & Hilton, M. L. (Eds.). (2013). Education for life and work: Developing transferable knowledge and skills in the 21st century. National Academies Press.
226
Quellmalz, E., & Kozma, R. (2003). Designing assessments of learning with technology. Assessment in Education: Principles, Policy & Practice, 10(3), 389–407. doi:10.1080/0969594032000148208 Quellmalz, E., Timms, M., & Buckley, B. (2005). Using science simulations to support powerful formative assessments of complex science learning. Paper from the annual meeting of the American Educational Research Association, San Diego, CA. Quellmalz, E. S., Davenport, J. L., Timms, M. J., DeBoer, G., Jordan, K., Huang, K., & Buckley, B. (2013). Next-generation environments for assessing and promoting complex science learning. Journal of Educational Psychology, 105(4), 1100–1114. doi:10.1037/a0032220 Quellmalz, E. S., & Haertel, G. D. (2008). Assessing new literacies in science and mathematics. In J. Coiro, M. Knobel, C. Lankshear, & D. J. Leu (Eds.), Handbook of research on new literacies (pp. 941–972). Mahwah, NJ: Lawrence Erlbaum Associates.
Simulations for Supporting and Assessing Science Literacy
Quellmalz, E. S., & Moody, M. (2004). Models for multi-level state science assessment systems. Paper commissioned by the National Research Council Committee on Test Design for K-12 Science Achievement.
Rieber, L. P., Tzeng, S. C., & Tribble, K. (2004). Discovery learning, representation, and explanation within a computer-based simulation: Finding the right mix. Learning and Instruction, 14(3), 307–323. doi:10.1016/j.learninstruc.2004.06.008
Quellmalz, E. S., & Pellegrino, J. W. (2009). Technology and testing. Science, 323(5910), 75–79. doi:10.1126/science.1168046 PMID:19119222
Rose, D., & Meyer, A. (2000). Universal Design for Learning. Journal of Special Education Technology, 15(1), 67–70.
Quellmalz, E. S., Timms, M. J., Buckley, B. C., Davenport, J., Loveland, M., & Silberglitt, M. D. (2012). 21st century dynamic assessment. In M. Mayrath, J. Clarke-Midura, & D. H. Robinson (Eds.), Technology-based assessments for 21st century skills: Theoretical and practical implications from modern research. Charlotte, NC: Information Age.
Salen, K., & Zimmerman, E. (2003, November). This is not a game: Play in cultural environments. In DiGRA ’03 – Proceedings of the 2003 DiGRA International Conference: Level Up. Retrieved from http://www.digra.org/digital-library/ forums/2-level-up
Quellmalz, E. S., Timms, M. J., & Schneider, S. A. (2009). Assessment of student learning in science simulations and games. Washington, DC: National Research Council. Quellmalz, E. S., Timms, M. J., Silberglitt, M. D., & Buckley, B. C. (2012). Science assessments for all: Integrating science simulations into balanced state science assessment systems. Journal of Research in Science Teaching, 49(3), 363–393. doi:10.1002/tea.21005 Quellmalz, S., & Timms, B. (2012). Multilevel assessments of science systems: Final report. Redwood City, CA: WestEd. Quellmalz, Timms, Buckley, Loveland, & Silberglitt. (2012). Calipers II: Using simulations to assess complex science learning: Final report. Redwood City, CA: WestEd. Quellmalz & Silberglitt (2011, February) Integrating simulation-based science assessments into balanced state science assessment systems: Findings and implications. Workshop from the 2011 Meeting of the Technical Issues in Large-Scale Assessment, State Collaboratives on Assessment and Student Standards, Atlanta, GA.
Schwartz, D. L., & Heiser, J. (2006). Spatial representations and imagery in learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences. New York, NY: Cambridge University Press. Shute, V. J., Rieber, L., & Van Eck, R. (2011). Games... and... learning. In R. Reiser & J. Dempsey (Eds.), Trends and issues in instructional design and technology (3rd ed.; pp. 321–332). Upper Saddle River, NJ: Pearson Education. Slotta, J. D., & Chi, M. T. H. (2006). The impact of ontology training on conceptual change: Helping students understand the challenging topics in science. Cognition and Instruction, 24(2), 261–289. doi:10.1207/s1532690xci2402_3 Songer, N. B., Kelcey, B., & Gotwals, A. W. (2009). How and when does complex reasoning occur? Empirically driven development of a learning progression focused on complex reasoning about biodiversity. Journal of Research in Science Teaching, 46(6), 610–631. doi:10.1002/tea.20313 Squire, K. (2006). From content to context: Videogames as designed experience. Educational Researcher, 35(8), 19–29. doi:10.3102/0013189X035008019
227
Simulations for Supporting and Assessing Science Literacy
Squire, K. D., & Jan, M. (2007). Mad City Mystery: Developing scientific argumentation skills with a place-based augmented reality game on handheld computers. Journal of Science Education and Technology, 16(1), 5–29. doi:10.1007/ s10956-006-9037-z Steinkuehler, C., & Duncan, S. (2008). Scientific habits of mind in virtual worlds. Journal of Science Education and Technology, 17(6), 530–543. doi:10.1007/s10956-008-9120-8 Stewart, J., Cartier, J. L., & Passmore, C. M. (2005). Developing understanding through modelbased inquiry. In M. S. Donovan & J. D. Bransford (Eds.), How students learn (pp. 515–565). Washington, DC: National Research Council. Stieff, M., & Wilensky, U. (2003). Connected chemistry—incorporating interactive simulations into the chemistry classroom. Journal of Science Education and Technology, 12(3), 285–302. doi:10.1023/A:1025085023936 Tversky, B., Heiser, J., Mackenzie, R., Lozano, S., & Morrison, J. (2008). Enriching animations. In R. K. Lowe & W. Schnotz (Eds.), Learning with animation: Research implications for design (pp. 263–285). New York, NY: Cambridge University Press. Van Merrienboer, J. J., & Kester, L. (2005). The four-component instructional design model: Multimedia principles in environments for complex learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 71–93). New York, NY: Cambridge University Press. doi:10.1017/CBO9780511816819.006 Wang, S. (2005). Online or paper: Does delivery affect results? Administration mode comparability study for Stanford diagnostic Reading and Mathematics tests. San Antonio, TX: Harcourt Assessment.
228
Ware, C. (2004). Information visualization: Perception for design. San Francisco: Morgan Kaufmann. White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. Artificial Intelligence, 42(1), 99–157. doi:10.1016/00043702(90)90095-H White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16(1), 3–118. doi:10.1207/s1532690xci1601_2
KEY TERMS AND DEFINITIONS

Dynamic: Phenomena changing in time and scale.

Evidence-Centered Design: Specifications of assessment design in terms of knowledge and skills to be assessed (student model), tasks to elicit observations of the knowledge and skills (task model), and evaluations of student responses (evidence model).

Model-Based Learning: Framework characterizing learners' formation, use, evaluation, and revision of their mental models of phenomena as learners interact with phenomena in situ and with conceptual models, representations (including text), and simulations of phenomena.

Multilevel Assessment Systems: Coherent, articulated assessment systems from the classroom to district, to state to national levels based on common specifications of learning standards and task models.

Multimedia: Representations of phenomena and means of expression employing a variety of static, active, and interactive modalities such as pictures, graphics, text, animations, and simulations.
Representations: Static, active, and interactive renderings of phenomena.

SimScientists: Program of research and development projects at WestEd studying the capabilities of simulations for promoting and assessing science learning.
Universal Design for Learning: Methods for offering alternative means for representing information in multiple formats and media, providing multiple pathways for students’ action and expression, and multiple ways to engage students’ interest and motivation.
Chapter 9
Using the Collegiate Learning Assessment to Address the College-to-Career Space

Doris Zahner, CAE, USA
Roger W. Benjamin, CAE, USA
Zachary Kornhauser, CAE, USA
Raffaela Wolf, CAE, USA
Jeffrey T. Steedle, Pearson, USA
ABSTRACT

Issues in higher education, such as the rising cost of education, career readiness, and increases in the achievement gap, have led to a movement toward accountability in higher education. This chapter addresses the issues related to career readiness by highlighting an assessment tool, the Collegiate Learning Assessment (CLA), through two case studies. The first examines the college-to-career space by comparing different alternatives for predicting college success as measured by college GPA. The second addresses an identified market failure of highly qualified college graduates being overlooked for employment due to a matching problem. The chapter concludes with a proposal for a solution to this problem, namely a matching system.
INTRODUCTION

This chapter is intended to serve multiple purposes, initially focusing on documenting some of the more salient issues facing the American higher-education system, including the rising cost of education, inequality in the opportunities available to students following graduation, increases in
the achievement gap, and the resulting movement toward accountability in higher education. Next, the chapter presents tools that have been developed to meet these challenges, including the Collegiate Learning Assessment (CLA). The CLA has been demonstrated to be a psychometrically valid test, and research utilizing the CLA has shown it to be a useful predictor, at the institutional level, of
DOI: 10.4018/978-1-4666-9441-5.ch009
students' college grade point averages (GPAs). An additional tool presented here is CLA+, which has been shown to be reliable at the student level as well and assesses competencies not measured by the original CLA. The authors then present research comparing the CLA's and the SAT's relative capacities to predict college sophomore and senior GPAs. Finally, the chapter closes by providing information regarding extensions of the use of CLA+. Specifically, the authors identify a market failure and present a research argument that illustrates how a standardized assessment, such as CLA+, may be used to anchor education policy surrounding career readiness and employability (Benjamin, 2012, 2014). The research portrays the matching problem students have in finding jobs appropriate for the skill levels they have achieved in college, and provides recommendations for how this matching problem might be solved.
ISSUES IN THE AMERICAN HIGHER-EDUCATION SYSTEM AND THE MOVEMENT TOWARD ACCOUNTABILITY

A number of changes have affected the higher-education landscape in the 21st century, but perhaps none is as significant as the movement toward greater accountability. The history of higher education in America has been one in which an exceptional amount of faith has been placed in these institutions to educate the newest generation of America's young adults. This faith, however, has allowed institutions to operate without feeling pressure to be accountable for the education they provide to their students (American Association of State Colleges and Universities, 2006). Recently, however, greater attention has been placed on a number of issues affecting higher education, leading to an increased focus on accountability.
One prime issue affecting higher education is the rising cost of a college education. College prices were relatively stable during the 1970s, but increases in tuition and fees began to exceed rises in the consumer price index during the 1980s, causing much public concern about college affordability. Prices increased more rapidly during the earlier part of the 1990s, as costs of attending both public and private institutions rose between 10% and 14% per year (U.S. Department of Education, 2004). For the 2011-2012 academic year, the average cost of attending a private four-year institution stood at about $33,000. By comparison, the average cost of attending a private four-year institution in 1980 was just $13,000, after adjusting for inflation. The problem is not limited to private institutions; public institutions currently charge an average of about $14,000 a year, which far exceeds the average yearly price of $6,500 in 1980 (U.S. Department of Education, 2013). Staggering as these numbers may be, there does not seem to be an end to the rise in costs in the foreseeable future. In fact, current projections indicate that by 2020 four years at a top-tier school will cost $328,000, and that by 2028, it will cost $598,000 (Taylor, 2011).

Another important issue is the question of how much students are learning in college. The National Association of Adult Literacy asserts that, between 1992 and 2003, average prose literacy (the ability to understand narrative texts, such as newspaper articles) decreased among those holding a bachelor's degree or higher, as did average document literacy (the ability to understand practical information, such as instructions for taking medicine) (National Association of Adult Literacy, 2004). One consequence of this decline in literacy is that employers are increasingly complaining that American college graduates are not prepared for the workplace and lack the skill sets necessary for successful employment and continued career development (U.S. Department of Education, 2006).
Another issue facing the higher-education system is the achievement gap, or inequality in educational outcomes. The number of students who entered the higher-education system in 2012 was 17.7 million, a 48% increase from 1990, when total enrollment was 12 million (Kena et al., 2014). In addition, educational attainment as a whole increased for all college students from 23 to 34 percent between 1990 and 2013. Nevertheless, the educational gap between white and minority Americans widened during this time. For example, between 1990 and 2013, educational attainment among white students increased from 26 to 40 percent, while the increase was smaller for African-American (13 to 20 percent) and Hispanic (8 to 16 percent) students. Differing explanations have been offered as to why the gap in educational attainment by race has not attenuated. One explanation is that the college readiness of minority students is typically lower than that of white students. For instance, according to one calculation, 40% of white students who graduated high school were deemed college ready, as compared to 23% of African-American students and 20% of Hispanic students (Kena et al., 2014). A second factor is the financial aid system, which leaves an average unmet need of 56% for white students, 75% for black/African-American students, 68% for Hispanic students, and 70% for Asian students (Long & Riley, 2007).

A final issue presented here is the effect that institutional selectivity has on employment outcomes. Pascarella and Terenzini (2005) determined that more selective institutions have a significant positive effect on student earnings. More specifically, the authors found that attending a college with an average SAT score that is 100 points higher is associated with a 2% to 4% increase in earnings. Others, however, have offered less conservative estimates, including Kane (1998), who found that the increase in earnings is between 3% and 7%. It should also be noted that this effect is non-linear and that the most elite institutions at the top of the selectivity distribution have the greatest positive
impact on earnings (Pascarella & Terenzini, 2005). Another factor complicating this issue is that it is not necessarily clear if institutional characteristics are responsible for higher salaries among students who attend these more selective institutions or if higher salaries are related to the earning capacity of students. Without a reliable way of distinguishing between students, employers may simply choose students who represent the strong 'brand' of elite institutions (Dale & Krueger, 1999).

The challenges detailed above represent just some of the many problems that the U.S. higher-education system is currently facing. These problems are not recent, although they have attracted an increasing amount of attention, particularly in the last five years (Liu, Bridgeman, & Adler, 2012). These concerns gained steam when former Secretary of Education Margaret Spellings organized a commission of educational leaders and researchers in order to address the many problems currently affecting the higher-education system, while also providing suggestions for the future. The Spellings Commission, as it became known, documented a number of issues affecting the American higher-education system, including the rising costs of higher education, the complex financial aid system, and issues relating to access to higher education, specifically among low-income or minority students (U.S. Department of Education, 2006). After detailing this litany of issues, some of which were longstanding, the commission addressed one issue that had not received much attention in the public sphere:

Compounding all of these difficulties is a lack of clear, reliable information about the cost and quality of postsecondary institutions, along with a remarkable absence of accountability mechanisms to ensure that colleges succeed in educating students. The result is that students, parents, and policymakers are often left scratching their heads over the answers to basic questions, from the true cost of private colleges (where most students don't pay the official sticker price) to which institutions do a better job than others not only of graduating
students but of teaching them what they need to learn. (U.S. Department of Education, 2006, p. vii)

Not surprisingly, the Spellings Commission set into motion a number of other initiatives that strongly urged colleges to measure student learning. One such initiative is the Voluntary System of Accountability (VSA), which was proposed by the American Association of State Colleges and Universities (AASCU) and the Association of Public and Land-grant Universities. "The overriding purpose of the VSA is to evaluate core educational outcomes in higher education and improve public understanding about the functions and operations of public colleges and universities" (Liu, 2011). In order to maintain institutional accountability, the VSA called for the use of three tests: the ETS Proficiency Profile, the Collegiate Assessment of Academic Proficiency (CAAP), and the Collegiate Learning Assessment (CLA).
CAREER READINESS AND EMPLOYABILITY: INSTITUTIONAL OBLIGATIONS

Given this focus on accountability, one of the main responsibilities of institutions is to prepare college graduates to enter the workforce. Career readiness or employability is often defined by confusing and contradictory statements. Some definitions focus on specific knowledge, skills, or abilities that pertain to a particular job, requiring individuals to know or to be able to do a concrete task, such as typing 60 words per minute. Others outline broader "soft skills," such as collaboration and communication. Career readiness, however, is more holistic than either of these definitions (Achieve, 2014). Research has indicated that employers across all industries anticipate requiring higher levels of education and skills across all sectors of the job market, but particularly in what is called "middle-skills" jobs, those which require post-secondary education. What is also becoming evident is that
employers are requiring higher-education degrees for jobs that have traditionally not had a minimum education requirement (e.g., administrative or secretarial positions; SHRM & Achieve, 2012). As colleges and universities face greater pressure to ensure that their students possess essential career skills, it is increasingly important for institutions to measure and assess their students. Studies of career readiness routinely show that employers are primarily concerned about critical-thinking, problem-solving, and communication skills (Hart Research Associates, 2009, 2013; NACE, 2013). These are higher-order skills, which are difficult to infer from transcripts because of the grade inflation problem that academia has been experiencing over the past two decades (Eiszler, 2002; Johnson, 2003; Mansfield, 2001; Sabot & Wakeman-Linn, 1991).
Career Readiness as a Construct and Improving Career Readiness

Career readiness is an intricate construct that includes, but is not limited to, having the knowledge, skills, and ability to secure employment; having the ability to transfer within an organization; and having the ability or skills to obtain employment at another organization (Hillage & Pollard, 1998). In fact, career readiness can be classified into two categories: specific content knowledge (e.g., as reflected by academic performance) and non-content-specific "soft skills" (e.g., communication; Bhaerman & Spill, 1988). Non-academic soft skills, such as communication, can be transferred across multiple domains (Chamorro-Premuzic, Arteche, Bremner, Greven, & Furnham, 2010). Thus, regardless of what a student's college major was, if he or she possesses, for example, strong communication skills, research suggests that this individual is more career ready and, therefore, likely to be employable (Finch, Nadeau, & O'Reilly, 2012; Hart Research Associates, 2013; Nickson, Warhurst, Commander, Hurrell, & Cullen, 2012).
In addition to communication, analytic-reasoning and problem-solving skills have also been identified as essential non-content-specific skills necessary for career readiness and employability (Fallows & Steven, 2000; Hart Research Associates, 2009, 2013; Reid & Anderson, 2012; Stiwne & Jungert, 2010; Wellman, 2010). Analytic-reasoning and problem-solving skills are very broad domains. However, a few examples of analytic reasoning include the ability to extract pertinent information from various types of sources, identify assumptions in arguments, make justifiable inferences, and evaluate the reliability of information. Problem-solving skills can include drawing a reasoned conclusion, deciding on a course of action for a given challenge, and considering alternative solutions to a problem (Newell & Simon, 1972). Therefore, it appears that career readiness is a multi-faceted construct encompassing a range of specific skills from, at one end of the spectrum, content knowledge skills, such as being able to program a computer, to, at the other end, "soft skills," such as communication and collaboration (Finch, Hamilton, Baldwin, & Zehner, 2013; Finch et al., 2012). Since there is not universal agreement on what constitutes "career readiness," there is not necessarily a set of agreed-upon practices for what can be done to improve career readiness amongst college students. Nevertheless, research and policy have implicated certain factors in improving career readiness, including early education, as scholars believe that strengthening early learning can place children on a trajectory for future success (ACT, 2013). A second factor is increasing social and emotional learning, as evidence indicates that social and emotional learning competency is related to higher-order thinking skills, employability, and life skills (Dymnicki, Sambolt, & Kidron, 2013). Social and emotional learning can be promoted through character education, prevention interventions, and the systematic adoption of educational standards and the implementation of programs that target these standards (Dymnicki et al.,
2013). Adoption of such standards will clearly communicate expected levels of performance "so that everyone knows 'how good is good enough'" (ACT, 2014, p. 16). Third, the implementation of a high-quality assessment system is seen as another method through which career readiness might be improved. These assessment systems should be longitudinal in nature so that students can be monitored over time, and should continuously inform stakeholders of students' progress (ACT, 2014).
Measuring College Success

College readiness can be defined as the preparation a student needs in order to succeed in courses offered in a baccalaureate institution, without remediation (Conley, 2007). A variety of criteria have been proposed as measures of success in college, including persistence into the second year of college, graduation within a specified time (often six years), exemption from remediation courses, freshman-year grades, grades in specific courses such as algebra and writing composition, and overall grade point average. In order to determine if students are college ready, assessments are used in a manner such that students who score above a cut score are considered ready to succeed in college (Camara, 2013). Measuring college success is important because, in order to be successful in careers, students must be prepared for the workforce. Greater emphasis and more attention have been given to college success and career readiness because employers now, more than ever, expect college graduates to possess written-communication, critical-thinking, and problem-solving skills (Hart Research Associates, 2006, 2009, 2013) that fulfill the changing demands of available jobs (Autor, Levy, & Murname, 2003). Careers in the 21st century are evolving to include positions that never existed before, such as a social-media manager or a chief listening officer (Casserly, 2012). The knowledge and skills required to perform these types of jobs
are not only taught inside the classroom, but can be acquired through activities and learning, both inside and outside the classroom. The educational community has also begun to emphasize these 21st-century skills in addition to knowledge in specific content domains (Arum & Roksa, 2011; Porter, McMaken, Hwang, & Yang, 2011; Silva, 2008; Wagner, 2008) in order to foster the development of critical-thinking, problem-solving, communication, collaboration, creativity, and innovation skills (Porter et al., 2011) so as to better prepare their students for careers in the 21st century. Indeed, nearly 80% of member institutions in the Association of American Colleges and Universities have a list of general learning outcomes for all students regardless of their academic programs (Hart Research Associates, 2009). Educators and researchers have long engaged in discourse on how to best define and effectively measure college success (McPherson & Schapiro, 2008). Possibilities include tests of proficiency in the use of higher-order thinking skills (Jerald, 2009; Silva, 2008) and reported learning outcomes, such as graduation and GPA (ACT, 2009; Atkinson & Geiser, 2009; Silva, 2008; Zwick & Sklar, 2005). There are several well-established predictors of college GPA, most notably high school grade point average (HSGPA) (Atkinson & Geiser, 2009) and SAT or ACT scores (ACT, 2009; Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008; Rothstein, 2004). HSGPA is recognized as the best single predictor of first-year college GPA, accounting for approximately 30% of the variance in first-year college GPA (Atkinson, 2001; Kobrin et al., 2008). The utility of HSGPA as a predictor of first-year college GPA persists despite differences in grading standards across high schools (Zwick & Himelfarb, 2011). One likely explanation is that HSGPA is based on repeated sampling of performance over time and across many different academic settings. Another possible explanation is that both HSGPA and college GPA are based on similar kinds of academic evaluations (e.g., quizzes, term
papers, labs, class participation, exams), so prior performance on these types of tasks will likely be predictive of later performance on the same task types (Geiser & Santelices, 2007). To address concerns about differences in grading standards across high schools, college admissions offices commonly consider standardized admissions test scores (SAT or ACT), in addition to HSGPA, as indicators of college readiness (and, therefore, as predictors of college success). In combination, HSGPA and scores from such tests account for 37.2% of the variance in first-year college GPA (Kobrin et al., 2008). In prediction studies, first-year college GPA is frequently used as the criterion measure of college success. However, college GPA from later years must also be examined. For example, the SAT, which has been established as a good predictor of first-year college GPA, is less effective in predicting senior-year college GPA (Atkinson & Geiser, 2009). In fact, research indicates that the best predictor of senior-year GPA is HSGPA in combination with the SAT Writing subject test, which accounted for approximately 30% of the variance in senior-year college GPA (Atkinson & Geiser, 2009). These findings catalyzed the development of a new (and soon to be retired) form of the SAT, which consists of writing, critical reading, and mathematics sections. Despite the addition of the writing section, the new SAT has not been found to be statistically superior in predicting college success (Atkinson & Geiser, 2009; Geiser & Santelices, 2007). So what might an alternative predictor of college GPA be? Performance-based assessments, such as the CLA, are assessments of higher-order thinking and writing skills (Klein, Benjamin, Shavelson, & Bolus, 2007), which are necessary for success in college and the 21st-century workplace (Silva, 2008). On its own, the CLA has been shown to predict first-year college success, accounting for 17% of the variance in first-year college GPA (Arum, Roksa, & Velez, 2008). In an effort to better understand the prediction of
college success, the study reported in the following section examines the relative efficacy of various combinations of HSGPA, SAT, and CLA as predictors of college GPA.
STUDY #1: COMPARING ALTERNATIVES IN THE PREDICTION OF COLLEGE SUCCESS

The authors evaluated multiple indicators of college readiness as predictors of college success, as measured by college GPA. Specifically, the prediction of college GPA was examined mid-way through and at the end of participants' college careers using established predictors of first-year college GPA: high school GPA (Atkinson & Geiser, 2009) and college entrance exam scores (ACT, 2009; Kobrin et al., 2008). The authors' research adds to the knowledge of college success by examining the utility of an open-ended, performance-based assessment of critical-thinking and written-communication skills as an additional predictor. This type of performance-based assessment may improve the accuracy of the prediction of college GPA, since HSGPA and college entrance exams may not capture these higher-order skills and also do not account for the elapsed time between college entrance and graduation. The performance-based assessment used in this study, the CLA, consisted of two different types of constructed-response (essay) tests, a performance task and an analytic writing task. The data come from a five-year longitudinal study, funded by the Lumina Foundation, examining gains in critical-thinking and written-communication skills during college (Klein, Steedle, & Kugelmass, 2009).
Participants

Participants were recruited as entering freshmen in the fall semester of 2005 from 50 colleges and universities, each of which enrolled approximately 300 students, to take the CLA and answer a short
demographic survey. Subsequent testing occurred towards the end of the spring 2007 semester, as the participants were completing their sophomore year, and again near the end of the spring 2009 semester, as the participants were completing their senior year. The sampled institutions consisted of small liberal arts colleges as well as large research institutions, both public and private, from all regions of the United States. A number of historically Black- and Hispanic-serving institutions were part of this sample. The participants were demographically diverse, with 18% Black and 8% Hispanic participants. Thirty-seven percent of the participants were male, and 21% of the participants reported that English was not the primary language spoken at home. They represented all of the fields of study available at their schools. A total of 9,167 freshmen completed testing in fall 2005. Of these students, 3,137 (34%) tested again during spring 2007, and 1,330 (13%) completed all three phases. Attrition was due mostly to institutions, rather than individual students, dropping out of the study, although some schools may have dropped out due to difficulty recruiting participants. On average, an institution that participated in all three phases of the study lost about one-third of its participants.
CLA Instrument Development

Task and item development is an iterative process. A team of researchers and writers generates ideas for Performance Task (PT) storylines and then develops and revises the prompts and PT documents. For analytic writing tasks, multiple prompts are generated, revised, and pre-piloted, and those prompts that elicit good critical-thinking and written-communication responses during pre-piloting are further revised and submitted for more extensive piloting. For the PTs, developers craft documents, known as a "document library," to present information in multiple formats (e.g., tables, figures, news articles). A list of the intended content from each
document is tracked to ensure that the documents clearly convey that content and that no additional and unintentional information is embedded in any of the documents. After several rounds of revision, the most promising tasks are selected for pre-piloting. Task developers examine student responses to identify what pieces of information are unintentionally unclear in the PT documents or analytic writing prompts, and what pieces of information are inadvertently included in the documents that should be removed. After revision and additional pre-piloting, the tasks that best elicit the intended types and ranges of student responses are selected for full piloting. During piloting, students complete both an operational task and one of the pilot tasks. At this point, draft scoring procedures are revised and tested when grading the pilot responses, and final revisions are made to the tasks to ensure that the task is eliciting the types of responses intended.
Administration

For this study, each participant took one PT and one analytic writing task, which consisted of two sections, "Make-an-Argument" and "Critique-an-Argument". The prompts within each task type were randomly assigned, and participants were never given the same prompt in subsequent testing sessions. A total of six PTs and eight analytic writing tasks (four "Make an Argument" and four "Critique an Argument") were used in this study. In the PT, participants were asked to draft a document, such as a letter or a memo, to address a real-world problem (see Mitigation of Test Bias in International, Cross-national Assessments of Higher-Order Thinking Skills). Participants had a total of 90 minutes to analyze and evaluate the information in the documents, synthesize and organize that information, draw conclusions, and craft a written response. Next, participants completed the analytic writing prompts, consisting of two sections. First, participants were allotted 45 minutes for
the "Make-an-Argument" section in which they were required to take a position in response to an argumentative statement and create a persuasive argument in support of that position. For instance, "Government funding would be better spent on preventing crime than dealing with criminals after the fact." Following this, participants had 30 minutes for the "Critique-an-Argument" section, which required them to identify and describe logical flaws in the assumptions and claims of a given argument. For example:

Butter has now been replaced by margarine in Happy Pancake House restaurants throughout the southwestern United States. Only about two percent of customers have complained, indicating that 98 people out of 100 are happy with the change. Furthermore, many servers have reported that a number of customers who still ask for butter do not complain when they are given margarine instead. Clearly, either these customers cannot distinguish margarine from butter, or they use the term "butter" to refer to either butter or margarine. Thus, to avoid the expense of purchasing butter, the Happy Pancake House should extend this cost-saving change to its restaurants in the southeast and northeast as well.

Tasks were timed separately and administered by computer under proctored conditions at each school during a multi-week testing window. Participants completed the PT before they were administered the analytic writing prompts.
Scoring

PT responses were scored by trained human scorers, and analytic writing task responses were scored using an automated scoring engine (Elliot, 2011; Klein, 2008). The automated scoring engines were developed using a broad sample of responses scored by multiple human scorers trained in the use of the established rubrics for the CLA. All responses were assigned raw total
scores that holistically reflected critical-thinking and written-communication skills. Raw scores were placed on a common scale to adjust for differences in task difficulty. This was achieved by converting the raw scores for a particular task to a score distribution with the same mean and standard deviation as the SAT total scores of the population of freshmen who took that assessment. The seniors’ raw scores for that task were converted to scale scores using the same formulas used with freshmen, so that any differences in answer quality between classes would not be obscured by the scaling process. A participant’s CLA total scale score was the weighted sum of his or her PT (weighted at .50), “Make-an-Argument” (weighted at .25), and “Critique-an-Argument” (weighted at .25) scale scores. Participants’ CLA total scores were used in the analyses for this study.
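To make the scaling and weighting arithmetic concrete, the sketch below (Python; the raw scores, the reference mean of 1200, and the SD of 180 are illustrative assumptions, not CAE's actual values) fits a linear transformation to the freshman raw-score distribution for a task, applies that same transformation to seniors, and then forms the weighted CLA total described above.

```python
import numpy as np

def fit_task_scaling(freshman_raw, ref_mean, ref_sd):
    """Fit the linear transform that gives freshman raw scores for one task the
    same mean and standard deviation as the freshman SAT reference distribution."""
    mu, sigma = np.mean(freshman_raw), np.std(freshman_raw)
    slope = ref_sd / sigma
    intercept = ref_mean - slope * mu
    return slope, intercept

def to_scale(raw_scores, slope, intercept):
    """Apply the freshman-derived transform to any group (freshmen or seniors),
    so class differences in answer quality are preserved rather than rescaled away."""
    return slope * np.asarray(raw_scores, dtype=float) + intercept

def cla_total(pt, make_arg, critique_arg):
    """CLA total scale score: PT weighted .50, each analytic writing section .25."""
    return 0.50 * pt + 0.25 * make_arg + 0.25 * critique_arg

# Illustrative use with hypothetical raw scores and reference values
freshman_pt_raw = [12, 18, 15, 22, 9, 17]
slope, intercept = fit_task_scaling(freshman_pt_raw, ref_mean=1200, ref_sd=180)
senior_pt_scaled = to_scale([20, 14], slope, intercept)
```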
Analysis

Two sets of simple and multiple regression analyses were conducted using participants' HSGPA, SAT/ACT, and freshman CLA scores as predictors of college GPA. The first set of analyses was conducted using college GPAs of participants at the end of their sophomore year. The second set of analyses was conducted using college GPAs of participants at the end of their senior year. Participants' HSGPAs were converted to a 4.0 scale, and ACT scores were converted to the SAT scale using an established concordance table (ACT, 2008).
Additionally, the results of the study may differ from previous research because, unlike previous studies in which there was a single analysis using student data from many schools, the regression analyses were first conducted within schools and then results were aggregated across schools. As a result, lower validity coefficients might be expected because the range of scores tends to be more restricted within schools. The analyses were conducted within schools in order to account for the differences in grading standards between schools. Furthermore, admissions officers may be interested in the efficacy of these predictors for a specific school rather than across many schools.
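As a rough sketch of this two-step procedure (Python; the column names school, hsgpa, sat, cla, and gpa are hypothetical, and the study's actual estimation details may differ), a separate regression is fit within each school that has at least 25 participants, and the resulting multiple correlations are then averaged, unweighted or weighted by sample size, across schools.

```python
import numpy as np
import pandas as pd

def within_school_multiple_r(df, predictors, outcome="gpa", min_n=25):
    """Regress GPA on the predictors separately within each school and return the
    multiple correlation R (and N) for every school meeting the minimum sample size."""
    rs, ns = {}, {}
    for school, grp in df.groupby("school"):
        if len(grp) < min_n:
            continue
        X = np.column_stack([np.ones(len(grp))] + [grp[p].to_numpy(float) for p in predictors])
        y = grp[outcome].to_numpy(float)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rs[school] = np.corrcoef(X @ beta, y)[0, 1]
        ns[school] = len(grp)
    return pd.Series(rs, name="R"), pd.Series(ns, name="N")

# Aggregation as in the "Mean" rows of Tables 1-2 and the weighted rows of Table 3:
# r, n = within_school_multiple_r(data, ["hsgpa", "sat", "cla"])
# mean_r = r.mean()                              # simple average across schools
# weighted_var = np.average(r**2, weights=n)     # sample-size-weighted variance explained
```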
Results

Predicting Sophomore GPA

Table 1 presents the simple and multiple correlations between participants' end-of-sophomore-year college GPA and all possible combinations of HSGPA, SAT, and CLA. The average correlations are reported in the last row of the table. All three predictors are individually and collectively positively correlated with end-of-sophomore-year college GPA. The efficacy of the individual predictors varied dramatically across schools, ranging from .03 to .68 for HSGPA, .02 to .65 for SAT, and .03 to .56 for the CLA. The range of predictive validity of the combinations of predictors was also large across the participating institutions. For example, all three predictors, in combination, were much more strongly correlated with end-of-sophomore-year college GPA at school 15 (.74) than at school 16 (.24). This difference is potentially a reflection of institutions' admissions policies (e.g., restriction of range on predictors). When looking across all schools, at the end of students' sophomore year in college, it appears that HSGPA is the single best predictor of college GPA, accounting for approximately 24% of the variance.
Table 1. Simple and multiple correlations between end-of-sophomore-year college GPA and HSGPA, SAT, and CLA

School   N     HSGPA   SAT     CLA     HSGPA & SAT   HSGPA & CLA   SAT & CLA   HSGPA, SAT, & CLA
1        126   0.53    0.36    0.30    0.54          0.54          0.39        0.55
2        140   0.39    0.44    0.31    0.48          0.43          0.45        0.49
3        74    0.56    0.34    0.34    0.60          0.59          0.39        0.61
4        82    0.46    0.46    0.31    0.56          0.48          0.47        0.56
5        70    0.56    0.38    0.35    0.61          0.60          0.43        0.63
6        157   0.44    0.38    0.31    0.50          0.48          0.42        0.51
8        162   0.24    0.36    0.25    0.39          0.33          0.37        0.40
10       51    0.56    0.46    0.29    0.63          0.57          0.49        0.64
11       66    0.44    0.46    0.40    0.54          0.52          0.52        0.57
12       138   0.62    0.38    0.43    0.64          0.67          0.49        0.67
13       145   0.54    0.65    0.40    0.71          0.58          0.65        0.71
14       140   0.35    0.22    0.03    0.37          0.35          0.22        0.37
15       117   0.63    0.59    0.48    0.71          0.68          0.65        0.74
16       92    0.03    0.02    0.23    0.04          0.24          0.24        0.24
17       76    0.49    0.26    0.32    0.51          0.52          0.34        0.52
18       40    0.58    0.41    0.56    0.62          0.70          0.59        0.71
19       84    0.46    0.38    0.33    0.53          0.52          0.44        0.56
21       201   0.30    0.31    0.30    0.37          0.41          0.37        0.43
23       48    0.60    0.32    0.27    0.60          0.60          0.36        0.60
24       116   0.66    0.43    0.52    0.66          0.69          0.56        0.69
25       37    0.49    0.62    0.48    0.67          0.59          0.65        0.68
26       65    0.52    0.58    0.44    0.63          0.58          0.61        0.65
27       64    0.68    0.44    0.42    0.71          0.71          0.51        0.72
28       66    0.56    0.43    0.43    0.65          0.63          0.57        0.70
29       142   0.41    0.53    0.27    0.58          0.45          0.53        0.58
Mean           0.49    0.42    0.36    0.56          0.54          0.48        0.58
When SAT is added to the prediction, the variance explained increases to 31%, whereas the combination of HSGPA and CLA accounts for 29% of the variance. (It should be noted that the reported variances are the squared simple and multiple correlations reported in Table 1.) Thus, at the end of sophomore year, HSGPA and SAT are slightly better at predicting college GPA than HSGPA and CLA.
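As a quick check of that arithmetic, a minimal sketch using only the mean correlations from the bottom row of Table 1:

```python
# Mean correlations with end-of-sophomore-year GPA (last row of Table 1)
mean_r = {"HSGPA": 0.49, "HSGPA & SAT": 0.56, "HSGPA & CLA": 0.54}

for label, r in mean_r.items():
    print(f"{label}: r = {r:.2f}, variance explained = {100 * r ** 2:.0f}%")
# HSGPA: 24%; HSGPA & SAT: 31%; HSGPA & CLA: 29%
```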
Predicting Senior GPA

Table 2 shows the simple and multiple correlations between participants' end-of-senior-year college GPA and all possible combinations of HSGPA, SAT, and CLA.
Table 2. Simple and multiple correlations between end-of-senior-year college GPA and HSGPA, SAT, and CLA

School   N     HSGPA   SAT     CLA     HSGPA & SAT   HSGPA & CLA   SAT & CLA   HSGPA, SAT, & CLA
1        114   0.51    0.32    0.20    0.52          0.53          0.32        0.53
2        60    0.28    0.35    0.42    0.37          0.46          0.45        0.46
3        65    0.46    0.35    0.42    0.52          0.56          0.45        0.56
4        48    0.26    0.33    0.34    0.36          0.38          0.37        0.38
5        73    0.50    0.42    0.14    0.59          0.59          0.42        0.59
6        70    0.45    0.30    0.40    0.47          0.51          0.43        0.51
7        70    0.20    0.41    0.42    0.42          0.52          0.52        0.52
8        56    0.20    0.27    0.26    0.30          0.35          0.32        0.35
9        60    0.18    0.32    0.30    0.35          0.40          0.37        0.40
10       50    0.49    0.51    0.34    0.62          0.63          0.55        0.63
11       49    0.55    0.42    0.49    0.59          0.63          0.55        0.63
12       99    0.66    0.45    0.44    0.68          0.71          0.54        0.71
13       83    0.65    0.45    0.38    0.67          0.68          0.47        0.68
14       97    0.26    0.22    0.03    0.29          0.30          0.22        0.30
15       109   0.59    0.58    0.49    0.68          0.71          0.65        0.71
17       57    0.46    0.22    0.20    0.49          0.49          0.25        0.49
18       29    0.82    0.25    0.57    0.82          0.88          0.61        0.88
20       67    0.59    0.47    0.42    0.62          0.63          0.51        0.63
21       110   0.15    0.22    0.29    0.24          0.34          0.31        0.34
22       29    0.56    0.57    0.12    0.62          0.62          0.57        0.62
23       53    0.75    0.42    0.44    0.75          0.75          0.51        0.75
26       50    0.39    0.57    0.45    0.58          0.62          0.61        0.62
27       45    0.51    0.53    0.33    0.63          0.63          0.53        0.63
29       87    0.49    0.51    0.31    0.58          0.59          0.51        0.59
Mean           0.46    0.39    0.34    0.53          0.56          0.46        0.56
As with the analyses of sophomore GPA, all three predictors were also found to be positively correlated with end-of-senior-year college GPA. Correlations ranged from .15 to .82 for HSGPA, .22 to .58 for SAT, and .03 to .57 for the CLA. Once again, large differences were observed in the predictive validity of HSGPA, SAT, and CLA across the schools. For example, HSGPA was most strongly correlated with senior-year college GPA for many schools in this sample, but for some schools (e.g., school 7), the SAT and the CLA had much stronger correlations.
When looking across all schools, at the end of students' senior year in college, HSGPA is still the single best predictor of college GPA, accounting for approximately 20.8% of the variance. HSGPA and SAT together account for 28.3% of the variance, and CLA and HSGPA account for 31.5%. (It should be noted that the reported variances are the squared simple and multiple correlations reported in Table 2.) For seniors, although the difference is small, HSGPA and CLA are now
better predictors of senior-year GPA than HSGPA alone or HSGPA and SAT.

Table 3 shows the average and weighted average (based on sample size) amount of variance in sophomore- and senior-year GPA that is accounted for by HSGPA, SAT, and CLA. As expected, when measurements are separated in time, the validity of each individual predictor drops slightly between the end of sophomore and senior years. However, there is an increase in the amount of variance accounted for by the model using HSGPA and CLA. The weighted average variance accounted for by the model using HSGPA and CLA increased from 26.8% at the end of sophomore year to 30.1% at the end of senior year. The amount of variance accounted for in predicting college GPA for all other single predictors and combinations of predictors dropped during this time. Results also show that the weighted average variance of senior-year GPA increased from 27% to 30.1% when CLA was added to the model containing HSGPA and SAT. However, when SAT was added to the model including HSGPA and CLA, the variance accounted for stayed exactly the same at 30.1%. This indicates that there is some variance in CLA scores that is not accounted for by HSGPA and SAT, and it suggests that the CLA is accounting for some feature of students' academic preparedness for college not captured by HSGPA and SAT in the prediction of senior-year cumulative GPA.

Discussion
This study examined the relative utility of various predictors of college success as measured by students' sophomore- and senior-year college GPAs. The variables used in this prediction study included HSGPA, SAT, and CLA scores. As expected, HSGPA was found to be the best single predictor of college success (Atkinson & Geiser, 2009), accounting for 21.4% of the variance in sophomore-year GPA and 20.0% of the variance in senior-year GPA. It is unclear why HSGPA is the best predictor of college GPA. Some argue that the prediction of college GPA from HSGPA is due to "method covariance," since student performance in both high school and college is assessed in a large number of courses taken over a period of several years and their performance is based on similar kinds of academic evaluations (e.g., quizzes, term papers, labs, exams) (Geiser & Santelices, 2007). It could also be due to latent traits such as motivation or ambition, where highly motivated or ambitious students will do well, and students with low motivation and ambition will perform poorly regardless of the setting. Despite its predictive efficacy, HSGPA should not be used in isolation when predicting college GPA because standardized tests, such as the SAT, ACT, and CLA, can improve the prediction significantly.
Table 3. Mean percent variance of sophomore- and senior-year college GPA accounted for by HSGPA, SAT, and CLA

Year                             HSGPA   SAT     CLA     HSGPA & SAT   HSGPA & CLA   SAT & CLA   HSGPA, SAT, & CLA
Sophomore
  Mean                           23.7    17.6    12.7    31.2          29.5          22.8        33.5
  Weighted Mean                  21.4    16.8    11.4    28.7          26.8          21.4        30.6
Senior
  Mean                           20.8    15.4    11.6    28.3          31.5          21.1        31.5
  Weighted Mean                  20.0    15.1    11.0    27.0          30.1          20.0        30.1
Difference Senior – Sophomore
  Mean                           -2.9    -2.1    -1.0    -2.9           2.0          -1.7        -2.1
  Weighted Mean                  -1.5    -1.8    -0.3    -1.7           3.3          -1.5        -0.4
Results from this study revealed that the best prediction of college GPA was obtained using the combination of HSGPA and a standardized test, which corroborates previous predictive validity research (ACT, 2009; Kobrin et al., 2008; Rothstein, 2004). While most previous research utilized end-of-freshman-year college GPA as the measure of college success, this research examined the prediction of GPAs of sophomores and graduating seniors. The most notable finding from this study is that the CLA and HSGPA together provided the best prediction of senior-year college GPA. Moreover, the amount of variance accounted for by this model increased between sophomore and senior years in college. This could be due in part to using students' cumulative GPA in the analysis. Previous research has shown that variance for cumulative GPA declines over time whereas it increases sharply for non-cumulative GPA (Geiser & Santelices, 2007). An analysis of CAE's data yielded similar findings. For sophomores, the standard deviation of cumulative GPA was .53. By senior year, the standard deviation for cumulative GPA decreased to .46. As a result, the increase in the proportion of total variance accounted for by the model may be partially due to this decrease of variance between sophomore- and senior-year college GPA since there is less total variance in senior-year college GPA. However, the amount of variance for all single predictors and other combinations of predictors dropped during this period, which is contrary to results from previous research showing that the predictive validity of a model containing HSGPA and standardized test scores improved after freshman year (Geiser & Santelices, 2007). One can hypothesize that HSGPA and college entrance exams, such as the SAT, assess knowledge of domain-specific content, such as algebra and literature. They are not assessments specifically aimed at measuring critical-thinking and written-communication skills, which is what the CLA strives to be. Therefore, the CLA and SAT appear to capture different aspects of students' abilities. These higher-order skills are the types of
21st-century skills that are necessary for college and the next generation of employees (Autor et al., 2003). Institutions that have curricula aimed specifically at improving these higher-order skills may be effective, although further research is necessary to confirm this. It should be noted that despite the overall trend of HSGPA and CLA being predictive of college GPA at the individual school level, different combinations of predictors, including indicators of college readiness not used in this study, may be more effective in predicting college GPA. Thus, it is recommended that schools conduct analyses to identify the effective predictors of college GPA within their institution. Future studies could seek to examine differences in the predictive validity of the CLA based upon varying demographics (e.g., sex, race/ethnicity) and increasing sample sizes. The results from this study underscore the apparent value of open-ended performance-based assessments as indicators of college readiness and, therefore, as predictors of college success. In light of the demand for 21st-century skills and the focus on college success, HSGPA and traditional college entrance exams should not be the only variables used in the prediction of college success; there is clearly room for another measure. Indeed, a strong case can be made for open-ended performance-based assessments that measure the higher-order skills and knowledge that are important in determining college and career success.
STUDY #2: THE CLA+: REMEDYING AN IDENTIFIED MARKET FAILURE

Background and Rationale

Hiring is a way in which employers shape labor market outcomes and may be influenced by a number of factors, including the cultural matching between the applicant and the employer, preexisting social ties the applicant might have to the job, interviewee performance, and the skills
possessed by the applicant. Rivera (2012) understands cultural matching to be the “shared tastes, experiences, leisure pursuits, and self-presentation styles (Bourdieu 1984) between employers and job candidates” (p. 1000). Rivera found that cultural similarity was a highly salient factor in hiring decisions, as similarities between people serve “as a powerful emotional glue that facilitates trust and comfort, generates feelings of excitement, and bonds individuals together” (p. 1001). In addition, Fernandez and Weinberg (1997) found that having a personal contact within an organization leads to a greater likelihood of being hired. The authors believe this is because referrals provide a quick and inexpensive applicant pool and because referred applicants have, in effect, already been screened by the current employees who referred them. In regard to interviewee performance, research suggests that multiple characteristics could affect hiring, including impression management, social skills, self-monitoring, and interpersonal presentation (Huffcutt, 2011). In terms of skills, Bills (2003) argues that applicants who have been schooled extensively or who possess exceptional merit from their schooling are more likely to be hired, since they are seen to possess general skills that will be transferable to the workplace. Most relevant to the current study are the employment outcomes of students who have attended elite institutions. As discussed earlier, there is evidence of a difference in earnings between students attending elite institutions and their counterparts who attend less prestigious schools, even when other factors, such as students’ academic ability levels, are taken into account. Further, the impact of this effect seems to be greatest for students attending the most elite institutions. Finally, evidence suggests that the earnings differences enjoyed by students attending more prestigious schools have increased over time (Pascarella & Terenzini, 2005). It also should be mentioned that the effect of college selectivity on earnings may depend on the type of career students pursue. For instance,
Thomas (2003) found that the selectivity of the institution had the greatest impact for graduates working in the private sector. Further, the academic major that students pursue also appears to play a role in future earnings. The author also notes that while the effects of graduating from a selective school on earnings are evident, the real payoff for attending a selective institution may come from the subsequent advanced degree programs to which these students have an easier time gaining admission. Why this happens, however, is not immediately clear. A very interesting look into the hiring practices of elite professional service employers is provided by Rivera (2011). According to the author’s research, firms identify between 10 and 20 “target” schools from which they will accept applications. From this pool, these firms further narrow their list by targeting five or so “core” schools where they will hold interviews and/or solicit applications. As Rivera notes, the list of “target” and “core” schools primarily depends on the prestige of the school but also might depend on where the hiring firm is located. In regard to résumés, firms prefer to accept them from their “target” or “core” schools, although they do accept them from “nontargeted” schools as well. However, résumés from the “nontargeted” schools appear to land in a different pile than those from a preferred school. One recruiter noted, when speaking about résumés sent from a “nontargeted” school: It pretty much goes into a black hole. . . . Look, I have a specific day I need to go in and look at... the Brown candidates, you know the Yale candidates. I don’t have a reason necessarily to go into what we call the “best of the rest” folder unless I’ve run out of everything else (Rivera, 2011, p. 76). Regarding the reason why hiring at these firms is restricted to graduates of elite schools, Rivera notes that firms do not necessarily believe the curricula at these schools are better, but that attending a prestigious institution is reflective of
underlying intelligence. These firms place a lot of faith in the ability of selective institutions to admit only the best and the brightest students. As another interviewee explained, “The top schools are more selective, they’re reputed to be top schools because they do draw a more select student body who tend to be smarter and more able” (Rivera, 2011, p. 79). According to these firms, failure to attend one of these schools was seen as a warning sign. The relationship between college selectivity and future earnings can be understood as a market failure. The concept of market failure refers to the condition of too much “noise” between the buyer and seller, which can result in a breakdown in the ability to create an effective market between the two parties. In the case of the college-to-career space, college graduation is regarded as a requirement for success in the United States, yet not all post-secondary education institutions are equal. Further, only a small percentage of college students are fortunate enough to have attended these elite institutions, and those students who win enrollment to these colleges tend to have the advantage of significant financial and social support from early childhood through high school. But what if there are many more students who graduate with skills appropriate for high value-added jobs than can be accommodated by the selective colleges? (According to Barron’s selectivity index, the selective colleges can only accommodate 940,000 out of the 10,800,000 students enrolled in four-year colleges.) Less-selective colleges produce many graduates who achieve distinction in their careers. However, these graduates face a branding problem. Since these colleges do not have reliable tools that make the case for their strongest graduating seniors, employers may never discover these students. In sum, the market failure between graduating seniors from less-selective colleges and employers is blocking hundreds of thousands of students from obtaining employment appropriate for the high-level skills they have attained. Too
many students do not get to interview for jobs that they have the skills for because employers are unaware of this demographic of students. This is negative for the students in question, their institutions, and employers. The rich diversity of American post-secondary education is correctly cited as a unique strength. However, just as the US made significant changes in admissions requirements for college applicants in the aftermath of World War II by creating admissions tests, such as the SAT and ACT, to complement the high school GPA, the country now faces the need to create a test to complement the graduating college seniors’ GPA. Below is an examination of using CLA+, an updated version of the CLA, to address this market failure.
The CLA+: Leveling the Playing Field

CLA+ is the new and improved version of the CLA. In addition to the PT, which measures analysis and problem-solving, writing effectiveness, and writing mechanics, CLA+ has a set of selected-response items (Appendix) that measure scientific and quantitative reasoning, critical reading and evaluation, and the ability to critique an argument, all regarded as essential skills by employers (Finch et al., 2013; Hart Research Associates, 2013). Unlike the CLA, which is a reliable instrument (Benjamin et al., 2012; Klein et al., 2007; Klein, Freedman, Shavelson, & Bolus, 2008; Klein, Liu, et al., 2009; Klein, Zahner, Benjamin, Bolus, & Steedle, 2013) of an institution’s value added to its students (Steedle, 2009, 2012) on critical-thinking and written-communication skills, the CLA+ is an assessment that is valid and reliable for these constructs at both the institutional and student levels (Zahner, 2013). These non-content-specific skills are independent of academic disciplines and are teachable. They are thought to be particularly important skills in today’s “Knowledge Economy,” where one can quickly find too many facts via “Google.” The question then becomes whether the student can
access, structure, and use the correct information, not just whether he or she can remember the content; these skills are highly prized by faculty and college leaders who, like their K-12 counterparts, are moving toward “deeper” learning (Benjamin, 2014). Figure 1 indicates that 68% of the students fall within 1 standard deviation of the mean CLA+ score, 1113 (standard deviation = 156), with the top 25% of the students scoring at least 1224. The question is, however, how many of the students scoring in the top 10%, 25%, and 50% are in selective versus less-selective colleges. Table 4, using CLA+ data, sets the context for whether a market failure exists. Over the past 30-plus years, the number of students in the 143 selective colleges has grown by 171,000. Over that same period, the number of students attending less-selective colleges has grown by over 4,200,000. That is, the largest growth in four-year college attendance is in the less-selective colleges. The list of selective colleges is based on Barron’s selectivity college index (Barron’s, 2014), the same index used by Hoxby and Avery (2012). Barron’s selectivity index is a composite that includes several factors such as SAT or ACT score, high school class rank, average GPA, and percent of students accepted. Based on these factors, Barron’s ranks schools as “most competitive,” “highly competitive,” “very competitive,” “competitive,” “less competitive,” “noncompetitive,” and “special,” which are those with specialized programs of study. As of 2011, 6% of schools were classified as “most competitive,” 7% as “highly competitive,” 18% as “very competitive,” 46% as “competitive,” 13% as “less competitive,” and 5%
Figure 1. Frequency distribution of senior CLA+ scores from spring 2014, n = 15,168
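As a rough check on the distribution described above, a normal approximation with the stated mean and standard deviation reproduces the 68% band by construction and puts the top-25% cutoff near 1218, close to the reported 1224; the small gap simply reflects that the empirical score distribution is not exactly normal. A minimal sketch, assuming normality:

```python
# Back-of-the-envelope check of the Figure 1 description under a normal
# approximation; the actual CLA+ score distribution need not be exactly normal.
from scipy.stats import norm

mean, sd = 1113, 156  # senior CLA+ mean and standard deviation, spring 2014

within_one_sd = norm.cdf(1) - norm.cdf(-1)
print(f"Share within 1 SD of the mean: {within_one_sd:.1%}")        # ~68.3%

for top_share in (0.10, 0.25, 0.50):
    cutoff = norm.ppf(1 - top_share, loc=mean, scale=sd)
    print(f"Approximate cutoff for the top {top_share:.0%}: {cutoff:.0f}")
```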
Table 4. Proportion of selective vs. less-selective institutions

               | Colleges & Universities, 1980 | Colleges & Universities, 2012 | Student Enrollment, 1980 | Student Enrollment, 2012
Selective      | 143 (5%)                      | 143 (5%)                      | 762,248 (12%)            | 940,771 (9%)
Less Selective | 3,014 (95%)                   | 3,014 (95%)                   | 5,584,841 (88%)          | 9,823,718 (91%)
All            | 3,157 (100%)                  | 3,157 (100%)                  | 6,347,089 (100%)         | 10,764,489 (100%)
as “non-competitive” (Hess & Hochleitner, 2012). Of course, any division of colleges and universities into selective and less-selective categories can be challenged as Barron’s selectivity index might not capture recent trends. Using the percentages of students above 1400, above 1300, and above 1200 (Table 5), Table 6 shows that selective colleges produce a higher percentage (24%) of students above 1400 than the less-selective colleges (6%). However, there are almost twice as many high-ability students graduating from less-selective colleges above the 1400 level. Figure 2 shows that the proportion of high-ability students in the less-selective colleges grows for students testing above 1300 and 1200 and illustrates the significant market failure that exists in this space. It becomes very apparent that there is a large group of students who are being overlooked by potential employers due to their college “selectivity” (Rivera, 2011). Also, the less-selective colleges have large percentages of
low-income (Pell grant) and moderate-income students from diverse backgrounds. Table 7 provides further evidence of the market-failure challenge. Notably, there has been a significant change in the distribution of race and ethnicity from 1980 to 2012 when considering institutional selectivity. For example, the selective colleges are now more diverse. The main point, however, is that the increasingly small proportion of total college enrollment made up by the selective colleges reinforces the market-failure thesis, warranting a new look by many employers at students from the less-selective college group.
SOLUTIONS AND RECOMMENDATIONS

The goal is to develop tools that allow for the demonstration of skills that are important for
Table 5. CLA+ performance for all participating seniors (A: actual CLA+ performance of exiting seniors at CLA+ institutions)

            | Selective Institutions | Less-Selective Institutions | All
Above 1400* | 395 (24%)              | 2,631 (6%)                  | 3,026 (7%)
Above 1300  | 841 (52%)              | 8,001 (18%)                 | 8,842 (20%)
Above 1200  | 1,284 (79%)            | 17,956 (41%)                | 19,240 (43%)
All         | 1,627 (100%)           | 43,352 (100%)               | 44,979 (100%)

*These scale points are based on the CLA+ scale, which, like the “old” and upcoming SAT, starts from 400.
Table 6. Projected national CLA+ performance (B: bachelor’s degree recipients nationally, 2011-12, and projected CLA+ performance)

           | Selective Institutions | Less-Selective Institutions | All
Above 1400 | 53,307 (24%)           | 95,295 (6%)                 | 148,602 (8%)
Above 1300 | 113,497 (52%)          | 289,796 (18%)               | 403,293 (23%)
Above 1200 | 173,282 (79%)          | 650,365 (41%)               | 823,648 (46%)
All        | 219,572 (100%)         | 1,570,207 (100%)            | 1,789,779 (100%)

Note: The total national percentages differ somewhat from the percentages of students at all CLA+ institutions scoring at given levels, due to a slight underrepresentation of selective colleges taking the CLA+.
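The projection in Table 6 is straightforward arithmetic: the share of CLA+ seniors above each cutoff (Table 5) is applied to the number of bachelor’s degree recipients in each sector. A minimal sketch using the rounded percentages shown in Table 5, so the results land near, rather than exactly on, the published figures:

```python
# Sketch of the projection behind Table 6: the share of CLA+ seniors scoring
# above each cutoff (Table 5) is applied to national bachelor's-degree counts.
# The rounded shares used here make the results close to, but not identical
# with, the published figures.
recipients = {"selective": 219_572, "less_selective": 1_570_207}
share_above = {  # rounded shares from Table 5
    1400: {"selective": 0.24, "less_selective": 0.06},
    1300: {"selective": 0.52, "less_selective": 0.18},
    1200: {"selective": 0.79, "less_selective": 0.41},
}

for cutoff, shares in share_above.items():
    projected = {grp: round(recipients[grp] * p) for grp, p in shares.items()}
    print(f"Above {cutoff}: selective {projected['selective']:,}, "
          f"less selective {projected['less_selective']:,}, "
          f"all {sum(projected.values()):,}")
```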
Figure 2. Projected national CLA+ performance
Table 7. Distribution of student race and ethnicity (%), by institutional selectivity, 1980 and 2012

Students                    | Selective, 1980 | Less Selective, 1980 | All, 1980 | Selective, 2012 | Less Selective, 2012 | All, 2012
% Non-Hispanic White        | 86              | 81                   | 81        | 58              | 56                   | 56
% Hispanic                  | 3               | 5                    | 5         | 9               | 13                   | 13
% Black or African American | 5               | 10                   | 9         | 5               | 13                   | 13
% Asian or Pacific Islander | 4               | 2                    | 2         | 13              | 5                    | 6
% Other                     | 3               | 3                    | 3         | 15              | 12                   | 13
both employers and students and to make that information accessible. The authors have developed one such process, a matching system, which is being piloted to see if it can reduce the market failure. In this matching system, each student receives a score report that indicates his or her level of mastery of the skills measured by CLA+. Qualifying students (those with proficient or advanced mastery levels) may claim a certified badge through a secure vault. Students will be able to store their score reports with online transcript service providers and place their CLA+ scores on employment boards.
Recently, CAE contracted with Brazen Careerist, a leading provider of virtual hiring solutions, to host a virtual career fair for high-achieving CLA+ participants. Brazen Careerist and CAE co-hosted a webinar for interested employers in March that was attended by over 100 human resources professionals. Both Brazen Careerist and CAE leveraged their network contacts to secure virtual career fair participants. Employers were able to purchase a “booth” for up to three participants for a nominal fee of a few hundred dollars. High-achieving students (defined as those having achieved the “Advanced” level of mastery on CLA+) were invited via email,
and the registration included a basic profile that candidates were asked to fill out. The event took place over a 3-hour period, with representatives from 6 different companies. Several companies sent more than one representative to interview candidates. Each candidate was able to sign up for a queue and, once inside a chat room, had a fixed amount of time (8 minutes) to interact with the corporate representative. Hiring officials were able to view the basic profiles while they engaged in one-on-one chats. After the session with a candidate was complete, the hiring official entered a score on a 1-5 scale for the quality of the candidate just interviewed. Twenty-five percent of invitees (115 out of 461) clicked to register for the event, 97 completed full registrations, and 44% of those students actually attended the event. A total of 66 conversations took place during the event, and 21% of candidates were rated above average or excellent by hiring officials.

Figure 3. Matching system
The steps outlined in Figure 3 illustrate a path for graduating seniors to identify themselves as eligible candidates for employers. Seniors can take CLA+, elect to add their CLA+ result to their transcript or résumé, secure a verified electronic badge, and send their results to prospective employers via job boards. If students have CLA+ scores that qualify, they can, in addition, attend a virtual career fair, all of which improves the odds that high-ability students in less-selective colleges will obtain a good job and start a promising career.
FUTURE RESEARCH DIRECTIONS

Once the initial program of reliability and validity studies of CLA+ is completed, it will be time to put in place a long-range research program. Does CLA+ have disparate impacts on minorities? Are there gender effects? The
premise of CLA+ is that the skills measured are important for the workplace and that graduating college seniors with high test scores experience less unemployment than students who do less well on CLA+. Is that, in fact, the case? The authors have put in place a long-range study to gather evidence about the role of critical-thinking skills in the workplace. Finally, can so-called soft skills, such as entrepreneurship and collaboration, be measured at the levels of scientific reliability required by measurement scientists? These and many other questions form a program of research for measurement scientists to further investigate the reliability and validity of CLA+. Again, this is the case with all standardized tests that are designed to have a practical role in assessing teaching and learning for formative and summative purposes. Also, how might the matching system be institutionalized in a manner that links colleges, employers, and graduating seniors? This is a significant, long-range challenge. Further, if measurement scientists incorporate entrepreneurship and collaboration into the CLA+ protocol, can those skills be added to the matching system? Finally, can measurement scientists work with discipline-based researchers to develop performance-based protocols that have the same focus as CLA+ but measure core arts and science disciplines and professional fields, such as business- and health-related fields? These and other questions for future research will help to expand the current knowledge of assessment of higher-order, critical-thinking skills.
CONCLUSION

This chapter offered insight into the movement toward accountability in higher education as a result of the multiple issues facing higher education today, such as the rising cost of education, questions regarding how much students are learning,
and increases in the achievement gap. One main concern for both institutions of higher education and employers is students’ college success leading to career readiness. As illustrated in the first study presented by the authors, the CLA, an assessment of critical-thinking and written-communication skills, is a tool for predicting college success as measured by cumulative college GPA. Further, the CLA, in conjunction with HSGPA, was shown to be useful in predicting graduating seniors’ cumulative college GPA above and beyond other assessments such as the SAT and ACT. Next, the authors examined the market failure in the college-to-career space by asserting that employers are not hiring highly qualified individuals due to factors such as college brand name (Rivera, 2011). One viable solution to this problem is the use of CLA+, an improved version of the CLA, which is designed to measure critical-thinking and written-communication skills and which is reliable and valid at the individual student level. A second study was presented by the authors to highlight how a standardized assessment, like CLA+, may be used to anchor education policy surrounding career readiness and employability. This study portrays the matching problem students have in finding jobs appropriate for the skill levels they have achieved in college, and recommendations are provided for how this matching problem might be solved. Linking standardized assessments to policy and practice in colleges and universities will require time and resources. However, it will pay off in the continuous improvement of teaching and learning. It will also make accountability systems practical. And, most importantly, it promises to provide individual students with information about the skill levels they possess, skills that will impact their lifelong employment chances. Scientifically based assessment and education policy and practice can be combined with positive effects.
REFERENCES Achieve. (2014). What is college and career readiness. The Future Ready Project. Retrieved from http://www.futurereadyproject.org/sites/frp/files/ College_and_Career_Readiness.pdf ACT. (2008). ACT-SAT concordance. Compare ACT & SAT Scores. Retrieved, from http://www. act.org/aap/concordance/ ACT. (2009). National overview: Measuring college and career readiness - The class of 2009. Iowa City, IA: ACT. Retrieved from http://www. act.org/newsroom/data/2009/pdf/output/NationalOverview.pdf ACT. (2013). College and career readiness: The importance of early learning. Retrieved from: http://www.act.org/research/policymakers/pdf/ ImportanceofEarlyLearning.pdf ACT. (2014). The condition of college and career readiness. Retrieved from: http://www.act.org/ research/policymakers/cccr14/pdf/CCCR14NationalReadinessRpt.pdf American Association of State Colleges and Universities. (2006, Spring). Value-added assessment: accountability’s new frontier. Perspectives. Retrieved from http://www.aascu.org/uploadedFiles/ AASCU/Content/Root/PolicyAndAdvocacy/ PolicyPublications/06_perspectives%281%29.pdf Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press. Arum, R., Roksa, J., & Velez, M. (2008). Learning to reason and communicate in college: Initial report of findings from the CLA longitudinal study. Retrieved from Social Science Research Council website: http://www.ssrc.org/workspace/uploads/ docs/CLA_Report.pdf
Atkinson, R. (2001). Standardized tests and access to American universities. The 2001 Robert H. Atwell Distinguished Lecture. American Council on Education. Retrieved from https://escholarship. org/uc/item/6182126z Atkinson, R. C., & Geiser, S. (2009). Reflections on a century of college admissions tests (CSHE.4.09) Retrieved from Center for Studies in Higher Education website: http://www.cshe.berkeley. edu/sites/default/files/shared/publications/docs/ ROPS-AtkinsonGeiser-Tests-04-15-09.pdf Autor, D. H., Levy, F., & Murname, R. J. (2003). The skill content of recent technological change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279–1333. doi:10.1162/003355303322552801 Barron’s. (2014). Barron’s profiles of American colleges (Vol. 31). Hauppauge, NY: Barron’s Educational Series. Benjamin, R. (2013). The principles and logic of competency testing in higher education. In S. Blomeke, O. Zlatkin-Troitschanskaia, C. Kuhn, & J. Fege (Eds.), Modeling and measuring competencies in higher education: Tasks and challenges (pp. 127–136). Rotterdam: Sense Publishers. doi:10.1007/978-94-6091-867-4_9 Benjamin, R. (2014). Two questions about critical thinking tests in higher education. Change: The Magazine of Higher Learning, 46(2), 32–39. do i:10.1080/00091383.2014.897179 Benjamin, R., Klein, S., Steedle, J. T., Zahner, D., Elliot, S., & Patterson, J. A. (2012). The case for generic skills and performance assessment in the United States and international settings. CAE – Occasional Paper. Retrieved from: http:// www.collegiatelearningassessment.org/files/ The_Case_for_Generic_Skills_and_Performance_Assessment_in_the_United_States_and_ International_Settings.pdf
Bhaerman, R., & Spill, R. (1988). A dialogue on employability skills: How can they be taught? Journal of Career Development, 15(1), 41–52. doi:10.1177/089484538801500105 Bills, D. B. (2003). Credentials, signals, and screens: Explaining the relationship between schooling and job assignment. Review of Educational Research, 73(4), 441–449. doi:10.3102/00346543073004441 Camara, W. (2013). Defining and measuring college and career readiness: A validation framework. Educational Measurement: Issues and Practice, 32(4), 16–27. doi:10.1111/emip.12016 Casserly, M. (2012, May 11). 10 jobs that didn’t exist 10 years ago. Forbes. Retrieved from: http://www. forbes.com/sites/meghancasserly/2012/05/11/10jobs-that-didnt-exist-10-years-ago/ Chamorro-Premuzic, T., Arteche, A., Bremner, A. J., Greven, C., & Furnham, A. (2010). Soft skills in higher education: Importance and improvement ratings as a function of individual differences in academic performance. Educational Psychology: An International Journal of Experimental Educational Psychology, 30(2), 221–241. doi:10.1080/01443410903560278 Conley, D. T. (2007). Redefining college readiness. Eugene, OR: Educational Policy Improvement Center. Retrieved from http://www.aypf. org/documents/RedefiningCollegeReadiness.pdf Dale, S. B., & Krueger, A. B. (1999). Estimating the payoff to attending a more selective college: An application of selection on observable and unobservables. (NBER Working Paper No. 7322). Cambridge, MA: National Bureau of Economic Research. doi: 10.3386/w7322
Dymnicki, A., Sambolt, M., & Kidron, Y. (2013). Improving college and career readiness by incorporating social and emotional learning. Washington, DC: American Institutes for Research College & Career Readiness and Success Center. Retrieved from http://www.ccrscenter. org/sites/default/files/Improving%20College%20 and%20Career%20Readiness%20by%20Incorporating%20Social%20and%20Emotional%20 Learning_0.pdf Eiszler, C. F. (2002). College students’ evaluations of teaching and grade inflation. Research in Higher Education, 43(4), 483–501. doi:10.1023/A:1015579817194 Elliot, S. (2011). Computer-assisted scoring for Performance tasks for the CLA and CWRA. New York, NY: Council for Aid to Education. Fallows, S., & Steven, C. (2000). Building employability skills into the higher education curriculum: A university-wide initiative. Education + Training, 42(2), 75–82. doi:10.1108/00400910010331620 Fernandez, R. M., & Weinberg, N. (1997). Sifting and sorting: Personal contacts and hiring in a retail bank. American Sociological Review, 62(6), 883–902. doi:10.2307/2657345 Finch, D. J., Hamilton, L. K., Baldwin, R., & Zehner, M. (2013). An exploratory study of factors affecting undergraduate employability. Education + Training, 55(7), 681–704. doi:10.1108/ET-072012-0077 Finch, D. J., Nadeau, J., & O’Reilly, N. (2012). The future of marketing education: A practitioner’s perspective. Journal of Marketing Education, 35(1), 233–258. doi:10.1177/0273475312465091
Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs standardized tests as indicators of four-year college outcomes. Berkeley: Center for Studies in Higher Education, University of California. Retrieved from http://files.eric.ed.gov/fulltext/ ED502858.pdf Hart Research Associates. (2006). How should colleges prepare students to succeed in today’s global economy? - Based on surveys among employers and recent college graduates. Washington, DC: American Association of Colleges and Universities. Retrieved from https://www. aacu.org/sites/default/files/files/LEAP/2013_EmployerSurvey.pdf Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education - A survey among members of the association of American colleges and universities. Washington, DC: American Association of Colleges and Universities. Retrieved from http://www.aacu.org/sites/default/files/files/ LEAP/2009MemberSurvey_Part1.pdf Hart Research Associates. (2013). It takes more than a major: Employer priorities for college learning and student success. Washington, DC: American Association of Colleges and Universities. Retrieved from https://www.aacu.org/sites/ default/files/files/LEAP/2013_EmployerSurvey. pdf Hess, H., & Hochleitner, T. (2012). College rankings inflation: Are you overpaying for prestige? Retrieved from American Enterprise Institute: http://www.aei.org/publication/college-rankingsinflation-are-you-overpaying-for-prestige
Hillage, J., & Pollard, E. (1998). Employability: Developing a framework for policy analysis (ResearchBrief No.85). London: Department for Education and Employment. Retrieved from http://webarchive.nationalarchives.gov. uk/20130401151715/http://www.education.gov. uk/publications/eOrderingDownload/RB85.pdf Hoxby, C. M., & Avery, C. (2012). The missing “one-offs”: The hidden supply of high-achieving, low income students. Cambridge, MA: National Bureau of Economic Research. doi:10.3386/ w18586 Huffcutt, A. (2011). An empirical review of the employment interview construct literature. International Journal of Selection and Assessment, 19(1), 62–81. doi:10.1111/j.1468-2389.2010.00535.x Jerald, C. D. (2009). Defining a 21st century education. Alexandria, VA: The Center for Public Education. Retrieved from http://www.cfsd16.org/ public/_century/pdf/Defininga21stCenturyEducation_Jerald_2009.pdf Johnson, V. E. (2003). Grade inflation: A crisis in college education. Ann Arbor, MI: Springer. Retrieved from http://www.springer.com/ education+%26+language/book/978-0-38700125-8 Kane, T. (Ed.). (1998). Racial and ethnic preferences in college admission. Washington, DC: The Brookings Institution. Retrieved from http:// www.brookings.edu/research/papers/1996/11/ race-kane Kena, G., Aud, S., Johnson, F., Wang, X., Zhang, J., Rathbun, A., & Kristapovich, P. (2014). The condition of education 2014 (NCES Report 2014-084). Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubs2014/2014083.pdf
Klein, S. (2008). Characteristics of hand and machine-assigned scores to college students’ answers to open-ended tasks. In D. Nolan & T. Speed (Eds.), Probability and statistics: Essays in honor of David A. Freedman (Vol. 2, pp. 76–89). Beachwood, OH: Institute of Mathematical Statistics. doi:10.1214/193940307000000392 Klein, S., Benjamin, R., Shavelson, R., & Bolus, R. (2007). The collegiate learning assessment: Facts and fantasies. Evaluation Review, 31(5), 415–439. doi:10.1177/0193841X07303318 PMID:17761805 Klein, S., Freedman, D., Shavelson, R., & Bolus, R. (2008). Assessing school effectiveness. Evaluation Review, 32(6), 511–525. doi:10.1177/0193841X08325948 PMID:18981333 Klein, S., Liu, O. L., Sconing, J., Bolus, R., Bridgeman, B., Kugelmass, H., …Steedle, J. (2009). Test validity study (TVS) report. U.S. Department of Education, Fund for the Improvement of Postsecondary Education. Retrieved from http://cae.org/ images/uploads/pdf/13_Test_Validity_Study_Report.pdf Klein, S., Steedle, J., & Kugelmass, H. (2009). CLA Lumina longitudinal study: Summary of procedures and findings. New York, NY: Council for Aid to Education. Retrieved from http://cae. org/images/uploads/pdf/12_CLA_Lumina_Longitudinal_Study_Summary_Findings.pdf Klein, S., Zahner, D., Benjamin, R., Bolus, R., & Steedle, J. (2013). Observations on AHELO’s generic skills strand methodology and findings. New York, NY: Council for Aid to Education. Retrieved from http://www.sheeo.org/sites/default/ files/project-files/OBSERVATIONS_FINAL.pdf
Kobrin, J. L., Patterson, B. F., Shaw, E. J., Mattern, K. D., & Barbuti, S. M. (2008). Validity of the SAT for predicting first-year college grade point average. New York, NY: The College Board. Retrieved from https://professionals.collegeboard.com/ profdownload/Validity_of_the_SAT_for_Predicting_First_Year_College_Grade_Point_Average. pdf Liu, O. L. (2011). Value-added assessment in higher education: A comparison of two methods. Higher Education, 61(4), 445–461. doi:10.1007/ s10734-010-9340-8 Liu, O. L., Bridgeman, B., & Adler, R. M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352–362. doi:10.3102/0013189X12459679 Long, B. T., & Riley, E. (2007). Financial aid: A broken bridge to college access? Harvard Educational Review, 77(1), 39–63. doi:10.17763/ haer.77.1.765h8777686r7357 Mansfield, H. C. (2001). Grade inflation: It’s time to face the facts. The Chronicle of Higher Education, 47(30). Retrieved from http://chronicle.com/ article/Grade-Inflation-It-s-Time-to/9332 McPherson, M. S., & Schapiro, M. O. (Eds.). (2008). College success: What it means and how to make it happen. New York, NY: The College Board. Retrieved from https://net.educause.edu/ ir/library/pdf/ff0911s.pdf National Association of Adult Literacy. (2004). Average prose, document and quantitative literacy scores of adults: 1992 and 2003. Retrieved from http://nces.ed.gov/naal/kf_demographics.asp National Association of Colleges and Employers. (2013). Job outlook: The candidate skills/ qualities employers want. Retrieved from https:// www.naceweb.org/s10022013/job-outlook-skillsquality.aspx
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall. Nickson, D., Warhurst, C., Commander, J., Hurrell, S. A., & Cullen, A. M. (2012). Soft skills and employability: Evidence from UK retail. Economic and Industrial Democracy, 33(1), 65–84. doi:10.1177/0143831X11427589 Pascarella, E. T., & Terenzini, P. T. (2005). How college affects students: A third decade of research. San Francisco, CA: Jossey-Bass. Porter, A., McMaken, J., Hwang, J., & Yang, R. (2011). Common core standards: The new US intended curriculum. Educational Researcher, 40(3), 103–116. doi:10.3102/0013189X11405038 Reid, J. R., & Anderson, P. R. (2012). Critical thinking in the business classroom. Journal of Education for Business, 87(1), 52–59. doi:10.10 80/08832323.2011.557103 Rivera, L. A. (2011). Ivies, extracurriculars, and exclusion: Elite employers’ use of educational credentials. Research in Social Stratification and Mobility, 29(1), 71–90. doi:10.1016/j. rssm.2010.12.001 Rivera, L. A. (2012). Hiring as cultural matching: The case of elite professional service firms. American Sociological Review, 77(6), 999–1022. doi:10.1177/0003122412463213 Rothstein, J. (2004). College performance predictions and the SAT. Journal of Econometrics, 121(12), 297–317. doi:10.1016/j.jeconom.2003.10.003 Sabot, R., & Wakeman-Linn, J. (1991). Grade inflation and course choice. The Journal of Economic Perspectives, 5(1), 159–170. doi:10.1257/ jep.5.1.159
Shaw, E. J., & Mattern, K. D. (2009). Examining the accuracy of self-reported high school grade point average. New York, NY: The College Board. Retrieved from https://research.collegeboard.org/ sites/default/files/publications/2012/7/researchreport-2009-5-examining-accuracy-self-reportedhigh-school-grade-point-average.pdf Silva, E. (2008). Measuring skills for the 21st century. Washington, DC: Education Sector at American Institutes for Research. doi:10.1177/003172170909000905 Society for Human Resource Management & Achieve. (2012). The future of the U.S. workforce. Washington, DC: Achieve. Retrieved from http:// www.achieve.org/future-us-workforce Steedle, J. T. (2009). Advancing institutional value-added score estimation. New York, NY: Council for Aid to Education. Retrieved from http://cae.org/images/uploads/pdf/04_Improving_the_Reliability_and_Interpretability_of_Value-Added_Scores_for_Post-Secondary_Institutional_Assessment_Programs.pdf Steedle, J. T. (2012). Selecting value-added models for postsecondary institutional assessment. Assessment & Evaluation in Higher Education, 37(6), 637–652. doi:10.1080/02602938.2011.560720 Stiwne, E. E., & Jungert, T. (2010). Engineering students’ experiences of transition from study to work. Journal of Education and Work, 23(5), 417–437. doi:10.1080/13639080.2010.515967 Taylor, M. C. (2011). Crash course; U.S. universities unprepared to meet the challenges of the 21st century. The Buffalo News. Retrieved from http://www.buffalonews.com/article/20110220/ OPINION02/302209877
Thomas, S. L. (2003). Longer-term economic effects of college selectivity and control. Research in Higher Education, 44(3), 263–299. doi:10.1023/A:1023058330965 United States Department of Education. (2004). Paying for college: Changes between 1990 and 2000 for full-time dependent undergraduates (NCES Publiacton 2004-075). Washington, DC: United States Department of Education. Retrieved from http://nces.ed.gov/pubs2004/2004075.pdf United States Department of Education. (2006). A test of leadership: Charting the future of US higher education. Washington, DC: United States Department of Education. Retrieved from https:// www2.ed.gov/about/bdscomm/list/hiedfuture/ reports/pre-pub-report.pdf United States Department of Education. (2013). Total tuition, room and board rates charged for full-time students in degree-granting institutions. Retrieved from http://nces.ed.gov/fastfacts/display.asp?id=76 Wagner, T. (2008). The global achievement gap: Why even our best schools don’t teach the new survival skills our children need--and what we can do about it. New York, NY: Basic Books. Wellman, N. (2010). The employability attributes required of new marketing graduates. Marketing Intelligence & Planning, 28(7), 908–930. doi:10.1108/02634501011086490 Zahner, D. (2013). Reliability and validity of the CLA. New York, NY: CAE. Retrieved from http:// cae.org/images/uploads/pdf/Reliability_and_Validity_of_CLA_Plus.pdf Zwick, R., & Himelfarb, I. (2011). The effect of high school socioeconomic status on the predictive validity of SAT scores and high school grade-point average. Journal of Educational Measurement, 48(2), 101–121. doi:10.1111/j.17453984.2011.00136.x
Zwick, R., & Sklar, J. (2005). Predicting college grades and degree completion using high school grades and SAT scores: The role of student ethnicity and first language. American Educational Research Journal, 42(3), 439–464. doi:10.3102/00028312042003439
ADDITIONAL READING Allen, J., Robbins, S. B., Casillas, A., & Oh, I.-S. (2008). Third-year college retention and transfer: Effects of academic performance, motivation, and social connectedness. Research in Higher Education, 49(7), 647–664. doi:10.1007/s11162008-9098-3 Allen, J., Robbins, S. B., & Sawyer, R. (2009). Can measuring psychosocial factors promote college success? Applied Measurement in Education, 23(1), 1–22. doi:10.1080/08957340903423503 Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press. Benjamin, R., Klein, S., Steedle, J. T., Zahner, D., Elliot, S., & Patterson, J. A. (2012). The case for generic skills and performance assessment in the United States and international settings. CAE – Occasional Paper. Retrieved from: http:// www.collegiatelearningassessment.org/files/ The_Case_for_Generic_Skills_and_Performance_Assessment_in_the_United_States_and_ International_Settings.pdf Bridgeman, B. (1991). Essays and multiple-choice tests as predictors of college freshman GPA. Research in Higher Education, 32(3), 319–332. doi:10.1007/BF00992895
Camara, W., & Quenemoen, R. (2012). Defining and measuring college and career readiness and informing the development of performance level descriptors (PLDs). Retrieved from http://www. parcconline.org/sites/parcc/files/PARCCCCRpaperv141-8-12CamaraandQuenemoen.pdf Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47(1), 1–32. doi:10.1007/s11162-005-8150-9 Cromwell, A. M., & Larsen, K. (2013). College readiness indicators. Pearson Bulletin, (Issue No. 25). Retrieved from http://images. pearsonassessments.com/images/tmrs/TMRSRIN_Bulletin_25CRIndicators_051413.pdf Gore, P. A. (2006). Academic self-efficacy as a predictor of college outcomes: Two incremental validity studies. Journal of Career Assessment, 14(1), 92–115. doi:10.1177/1069072705281367 Harackiewicz, J. M., Barron, K. E., Tauer, J. M., & Elliot, A. J. (2002). Predicting success in college: A longitudinal study of achievement goals and ability measures as predictors of interest and performance from freshman year through graduation. Journal of Educational Psychology, 94(3), 562–575. doi:10.1037/0022-0663.94.3.562 James, E., Alsalam, N., Conaty, J. C., & To, D. (1989). College quality and future earnings: Where should you send your child to college? The American Economic Review, 72(2), 247. Retrieved from http://www.jstor.org/discover/10.2307/1827 765?sid=21105506659633&uid=2&uid=4&uid =70&uid=3739560&uid=2134&uid=3739256 Klein, S. (2002). Direct assessment of cumulative student learning. Peer Review, 4(2/3), 2628. Retrieved from http://nsse.indiana.edu/pdf/ research_papers/testing_linkages.pdf
Klein, S. (2008). Characteristics of hand and machine-assigned scores to college students’ answers to open-ended tasks. In D. Nolan & T. Speed (Eds.), Probability and Statistics: Essays in Honor of David A. Freedman (Vol. 2, pp. 76–89). Beachwood, OH: Institute of Mathematical Statistics. doi:10.1214/193940307000000392 Klein, S., Shavelson, R., & Benjamin, R. (2007, February 8, 2007). Setting the record straight, Inside Higher Ed. Retrieved from https://www. insidehighered.com/views/2007/02/08/benjamin Klein, S., Zahner, D., Benjamin, R., Bolus, R., & Steedle, J. (2013). Observations on AHELO’s generic skills strand methodology and findings. New York, NY: Council for Aid to Education; Retrieved from http://www.sheeo.org/sites/default/ files/project-files/OBSERVATIONS_FINAL.pdf Kuh, G., & Ikenberry, S. (2009) More than you think, and less than we need: Learning outcomes assessment in higher education. National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment. org/documents/fullreportrevised-L.pdf Liu, O. L. (2011). Outcomes assessment in higher education: Challenges and future research in the context of voluntary system of accountability. Educational Measurement: Issues and Practice, 30(3), 2–9. doi:10.1111/j.1745-3992.2011.00206.x Markle, R., & Robbins, S. (2013). A holistic view of course placement decisions—Avoiding the HS GPA trap. Princeton, NJ: Educational Testing Service; Retrieved from https://www.ets.org/s/ successnavigator/pdf/holistic_view_course_ placement_decisions.pdf
Mattern, K. D., Xiong, X., & Shaw, E. J. (2009). The relationship between AP exam performance and college outcomes. New York, NY: The College Board; Retrieved from https://research.collegeboard.org/sites/default/files/publications/2012/7/ researchreport-2009-relationship-between-apexam-performance-college-outcomes.pdf Noble, J., & Sawyer, R. (2002). Predicting different levels of academic success in college using high school GPA and ACT composite score. (ACT Research Report 2002-4). Iowa City, IA: ACT, Inc.; Retrieved from http://www.valees.org/documents/ACT_grades_predictors_of_success.pdf Porter, A. C., & Polikoff, M. S. (2011). Measuring academic readiness for college. Educational Policy, 26(3), 394–417. doi:10.1177/0895904811400410 Robbins, S. B., Allen, J., Casillas, A., Peterson, C. H., & Le, H. (2006). Unraveling the differential effects of motivational and skills, social, and selfmanagement measures from traditional predictors of college outcomes. Journal of Educational Psychology, 98(3), 598–616. doi:10.1037/00220663.98.3.598 Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261–288. doi:10.1037/0033-2909.130.2.261 PMID:14979772 Shavelson, R. J., Klein, S., & Benjamin, R. (2009, October 16). The Limitations of Portfolios. Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2009/10/16/shavelson Steedle, J. T., Kugelmass, H., & Nemeth, A. (2010). What do they measure? Comparing three learning outcomes assessments. Change: The Magazine of Higher Learning, 42(4), 33–37. doi:10.1080/000 91383.2010.490491
Thomas, S. L. (2003). Longer-term economic effects of college selectivity and control. Research in Higher Education, 44(3), 263–299. doi:10.1023/A:1023058330965
KEY TERMS AND DEFINITIONS

21st Century Skills: Advanced cognitive skills, such as critical thinking, which are considered to be important for success in the workplace.
Career Readiness: Possession of the skills necessary to secure employment and be successful in a work environment.
CLA+: A postsecondary assessment that is comprised of a performance task and selected-response questions.
Higher-Order Skills: Advanced cognitive skills, such as critical thinking, that go beyond the recall of factual knowledge.
Institutional Selectivity: The difficulty of achieving admission to an institution, as defined by the percent of applicants admitted.
Performance Task: Component of the Collegiate Learning Assessment that requires examinees to articulate an argument based on a prompt and a library of documents.
Performance-Based Assessments: Exams that are based on applying advanced cognitive skills that go beyond the recall of factual knowledge.
Predictor: The accuracy with which one variable (e.g., SAT score) forecasts the score on a second variable (e.g., college GPA).
Value-Added Exams: Measures which estimate the impact that institutions have on students’ learning outcomes.
Variance: A statistical term used to explain how well one variable (e.g., CLA score) adds to the predictability of a second variable (e.g., GPA).
APPENDIX: SAMPLE CLA+ SELECTED RESPONSE QUESTIONS

Fueling the Future

In a quest to solve the energy problems of the 21st century—that is, to find sustainable and renewable sources of energy that are less destructive to the environment yet economical enough to have mass appeal—scientists throughout the world are experimenting with innovative forms of fuel production. While oil is still the most common source of fuel, there is a finite amount of it, and new alternatives will become necessary to sustain the supply of energy that we are accustomed to.

Corn-based ethanol, the most common alternative to traditional fossil fuels (primarily coal, petroleum, and natural gas), is mixed into gasoline in small quantities, and it now accounts for about 10% of the fuel supply from sources within the United States. Because corn is grown on farmland, it is subject to price fluctuations based on supply and demand of the crop, as well as disruptions resulting from naturally occurring events, such as droughts and floods. At present, nearly 40% of the corn grown in the United States is used for fuel, and the demand for corn-based ethanol is rising. To meet this demand, wetlands, grasslands, and forests are all being converted into farmland with the sole intention of growing corn for more ethanol production. Corn grown for ethanol has become a more valuable commodity for farmers than crops grown for food, and this has negatively affected consumers worldwide, as shown by the increasing price of food over time.

Another alternative that has gained attention in recent years is the harvesting of biofuel from algae. Biodiesel, a type of biofuel, is produced by extracting oil from algae, much like the process involved in creating vegetable oils from corn or soybeans. Ethanol can also be created by fermenting algae. Algae biofuel has some unique benefits that separate it from other fossil fuel alternatives. To begin with, while all fuels create carbon dioxide when they are burned, algae have the ability to recapture and use that carbon dioxide during photosynthesis while they are growing. In this regard, the advantage is
Figure 4. Food and oil price indices (based on information found at www.fao.org and www.indexmundi. com)
enormous. The process of growing algae actually absorbs more carbon dioxide than is released into the atmosphere when it is burned for fuel. Most manufacturing processes strive for “carbon neutrality”—or the balance between carbon emissions and depletion corresponding to a net carbon output of zero. Even better, algae-based biofuel can be described as “carbon negative.” Other forms of biofuel can make similar claims. For example, ethanol from corn also eliminates carbon dioxide in the atmosphere through photosynthesis. Unlike corn, however, algae grow in water, usually in man-made ponds built on land not used for crops. Additionally, algae do not require fresh water. Instead, algae can be grown in salt water, and in some cases even sewage water and other waste material. The most promising aspect of algae biofuel stems from its yield. When compared to other biofuel producers, algae’s fuel yield per harvested acre is over 500 times greater than corn-based ethanol. The following chart compares commonly used biofuel crops on several important factors.

1. Which of the following negatively affects algae biofuel’s ability to be a “carbon negative” energy source?
a. It takes 3000 liters of water to create one liter of biofuel from algae, which is highly inefficient and wasteful of resources.
b. The process of extracting biofuel from algae requires more energy than is generated by burning the biofuel itself.
c. The construction of facilities needed to extract algae biofuel would initially require the use of fossil fuels for energy.
d. Algae biofuel is about 25 years away from being commercially viable, by which point there will be more efficient alternative energy sources.

2. The graph shows that food and oil prices increase and decrease together. Which of the following is the most plausible explanation for this phenomenon?
a. As the price of food increases due to supply and demand, the cost of oil also rises because less land is available for planting corn.
b. Food and oil suppliers dictate the prices of their goods. Therefore, the prices of food and oil rise as consumers can afford to pay more for commodities.
c. The prices of oil and food are simultaneously affected by global conditions, such as natural disasters, weather, famine, and political unrest.
d. Farmers plant more corn for ethanol when the price of oil increases. The price of food then rises because less food-yielding crops are being produced.

Table 8. Comparison of biofuel crops (based on information found at: algaefuel.org and c1gas2org.wpengine.netdna-cdn.com)
Product                 | Oil Yield (Gallons/Acre) | Harmful Gas Emissions | Use of Water to Grow Crop | Fertilizer Needed to Grow Crop | Energy Used to Extract Fuel from Crop
Ethanol from Corn       | 18                       | high                  | high                      | high                           | high
Biodiesel from Soybeans | 48                       | medium                | high                      | low-medium                     | medium-low
Biodiesel from Canola   | 127                      | medium                | high                      | medium                         | medium-low
Biodiesel from Algae    | 10,000                   | negative              | medium                    | low                            | high
3. What additional information could be added to the table for evaluating the efficiency and viability of algae biofuel compared to other sources of biofuel?
a. The average amount of money farmers earn per acre for each biofuel source.
b. The costs associated with the extraction of energy from each biofuel source.
c. The taxes collected by the government on the sale of each biofuel crop.
d. The level of financial support each type of biofuel has received from investors.

4. Which of the following could plausibly occur if algae become a highly efficient and cost effective source of biofuel?
a. The price of food would fall because more farmland could be used to produce food rather than corn harvested for ethanol.
b. The supply of fresh water would be reduced because of the demands of harvesting algae for biofuel.
c. The cost of fuel would rise as the world’s markets become flooded with alternative sources of energy.
d. The amount of carbon in the air would increase because more fuels will be burned due to lower costs.
Chapter 10
Rich-Media Interactive Simulations: Lessons Learned

Suzanne Tsacoumis
HumRRO, USA
ABSTRACT

High-fidelity measures have proven to be powerful tools for measuring a broad range of competencies, and their validity is well documented. However, their high-touch nature is often a deterrent to their use due to the cost and time required to develop and implement them. In addition, given the increased reliance on technology to screen and evaluate job candidates, organizations are continuing to search for more efficient ways to gather the information they need about one’s capabilities. This chapter describes how innovative, interactive rich-media simulations that incorporate branching technology have been used in several real-world applications. The main focus is on describing the nature of these assessments and highlighting potential solutions to the unique measurement challenges associated with these types of assessments.
INTRODUCTION

For over half a century, high-fidelity assessments have proven to be powerful tools for measuring a broad range of knowledge, skills, abilities, and competencies in both the workplace and educational settings. High-fidelity tools are measures that mirror or closely simulate a particular activity or group of activities. For example, these types of assessments include work samples such as driving a bus, running statistical analyses to answer a question, or taking photographs. They also include measures that do not necessarily
replicate the exact activity but simulate it. If an important job activity is to analyze information about a project and then make recommendations on how to proceed, the assessment could create a fictitious project similar to one that would be completed on the job and the test taker could be asked to review the materials and make suggestions on next steps. As another example, a student may be asked to plan an approach to working with classmates to complete an assignment. The validity of high-fidelity assessments is well documented (e.g., Tsacoumis, 2007; Arthur, Day, McNelly, & Edens, 2003; Schmidt & Hunter,
1998; Klimoski & Brickner, 1987; Moses, 1977; Bray & Grant, 1966), and they tend to be well received given their perceived relevance and face validity. However, they typically are resource- and time-intensive to implement since they involve live role players and because they require the evaluators to observe each test taker as he or she participates in the assessment. Given this, organizations often reserve the use of high-fidelity assessments only to evaluate candidates for their most senior or critical positions. In addition, businesses are continuing to search for more efficient ways to gather the information they need about one’s capabilities. Organizations have become increasingly reliant on technology-based solutions to evaluate students, teachers, and job candidates. In fact, the use of computers to administer traditional multiple-choice tests is now commonplace, and there is a growing trend to create and implement tests with multimedia components that use sound, video, animation, or some combination, along with text. The computer-based counterpart to live high-fidelity simulations is “rich-media” assessments, which involve animation or video and allow the test taker to “interact” with the simulation and dictate how the assessment proceeds or unfolds. High-end interactive simulations have been used for training, such as pilot simulators, but in order for computer-based versions of high-fidelity simulations to gain traction in more traditional processes to assess job candidates and students, the technology must be easily accessible and the measurement challenges need to be addressed. In truth, the technological tools are available to develop and implement interactive computer-based simulations; however, the actual use of these tools for high-stakes processes, such as personnel selection, is still in its infancy. The objective of this chapter is to describe the lessons learned from developing and implementing several rich-media interactive simulations for promotional and developmental purposes in an organizational context, rather than an educational
one. That said, the lessons generalize to any arena focused on creating accurate measures of a variety of skills and abilities. The ultimate goal is to determine how the benefits offered by technology can be used to master the complexities associated with effectively measuring one's competencies with enough precision and confidence to make personnel decisions without the use of live evaluators.
BACKGROUND

Organizations, educational institutions, and federal, state, and local governmental agencies use a variety of methods to assess the knowledge, capabilities, and competencies of job applicants, current employees, students, and faculty members, among others. The common forms of these measures are multiple-choice tests and interviews. The former tends to focus on assessing one's knowledge, aptitude, achievement, interests, or personality, whereas the latter typically measures "soft skills" such as relating with others, conflict management, planning and organizing, and oral communication. The basic format for the interview has withstood the test of time. Variations tend to revolve around the degree of structure associated with the questions and evaluation criteria. In contrast, there have been several modifications to the traditional multiple-choice format, such as using a Likert scale to capture how much the respondent agrees with a particular statement. Another variation that has grown in popularity, particularly in the employment arena, is the situational judgment test. This type of assessment involves presenting different scenarios, usually in paragraph form, and offering various viable response options to each scenario. Test takers may indicate the effectiveness of each choice, they may select the option that reflects what they would do in that situation, or they may choose the best and worst responses. There is a significant body of literature that describes the nature and use of
these types of tests, particularly in an employment context (e.g., McDaniel, Hartman, Whetzel, & Grubb, 2007). On the other end of the testing spectrum, there are assessments that are exact replicas of a relevant and important task or group of tasks, such as performance assessments or job samples (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). As a few examples, a job applicant may be asked to repair a broken piece of equipment, write code for a short computer program, or operate a backhoe. These measures are ideal for capturing one's procedural knowledge and are commonly used, as appropriate, as tests in the hiring process and as part of credentialing processes for a broad range of organizations. In addition, Lane and Stone (2006) describe how the education community has increasingly embraced performance assessments as valuable tools for directly measuring reasoning as well as the final resulting product. A variation on performance assessments or work samples is a "simulation," which involves having the test taker complete tasks that mirror the activities of interest, although they are not exact replicas. These measures are used in circumstances when it is impossible to have the individual complete an exact slice of the job. Simulations reflect the relevant tasks and provide a means to assess both declarative and procedural knowledge (Thornton & Mueller-Hanson, 2004) along with a broad range of competencies, including those that are difficult to measure without directly observing them, such as judgment and problem solving, conflict management, planning and organizing, adaptability and resilience, decisiveness, oral communication, relating with others, teamwork, and persuasiveness. Simulations that are developed based on critical tasks enable an assessment of the critical knowledge, skills, and abilities linked to those tasks (Goldstein, Zedeck, & Schneider, 1992; Schmitt & Ostroff, 1986).
The fundamental premise associated with simulations is that observing an individual perform under conditions very similar to the job is an ideal way to predict actual job performance. This "behavioral consistency" model was proposed by Wernimont and Campbell (1968), who advocated for meaningful and realistic samples of job behavior by trying to achieve a point-to-point correspondence between predictor and criterion measures. There is extensive literature suggesting that simulations are effective for evaluating complex performance, particularly in the supervisory arena (Tsacoumis, 2007; Thornton & Byham, 1982). They are well accepted by candidates (Rynes & Connerley, 1993; Thornton, 1992), they have a high degree of face validity (Tsacoumis, 2007; Cascio & Phillips, 1979; Schmidt, Greenthal, Hunter, Berner, & Seaton, 1977; Wernimont & Campbell, 1968), and they are harder to fake (Thornton & Mueller-Hanson, 2004). Job simulations also provide an equal opportunity for candidates to perform and be evaluated on the same job-relevant activities. According to a 2009 U.S. Merit Systems Protection Board report (Job Simulations: Trying Out for a Federal Job; Washington, DC), simulation-oriented assessments offer several advantages over other assessment methods: they have higher validity than many other types of measures, they fit the job better, and they offer a high degree of fairness. Another benefit is that simulations provide an ideal approach to dealing with the problem of construct irrelevance. For instance, a written test of one's knowledge may also provide some indication of reading ability and/or writing ability rather than strictly a measure of the knowledge domain. In contrast, if the application of that knowledge is displayed by sharing information orally, then a simulation could more accurately measure that knowledge base since it avoids inclusion of features irrelevant to the standard being assessed. Common job simulations include role-play exercises, oral presentations, in-basket exercises, and leaderless group discussions (Tsacoumis, 2007; Thornton & Byham, 1982). If the tasks
involve interaction between individuals or members of a group, role-play exercises or leaderless group discussion exercises are often developed and structured around the specific content suggested by the tasks. In a role play, the test taker is provided background information about the organization and the specific situation that must be addressed, and then he or she interacts with a role player to address that situation. For example, the test taker may assume the role of a supervisor who is asked to help an employee with some issues associated with a project or deal with some performance problems. As another example, we may want to evaluate applicants for a sales position by asking them to try to sell something as part of the selection process. They would read information about the product to be sold and about the potential customer, and then interact with a role player who acts as that customer. This type of "role playing" offers an opportunity to observe the candidate's behavior in a realistic "sales" situation and, in turn, evaluate that individual on a range of critical competencies. Typically, there is at least one other evaluator (in addition to the role player) observing the candidate's behavior during the assessment. Role plays are ideal for measuring constructs such as relating with others, conflict management, oral communication, judgment and problem solving, planning and organizing, leading others, and decisiveness. Oral presentation exercises require the test taker to review information about a particular situation and then make a presentation in response to that information. In some instances, the goal may be focused on evaluating one's ability to communicate orally in a more formal setting and to think on one's feet when posed with questions. In this case, an example scenario may be to simulate a situation where the test taker is presenting information about the organization's mission and vision to a local civic association. An alternative approach to this type of exercise may be to ask the test taker to conduct an analysis of a project and then present
his or her evaluation of that effort and recommendations for moving forward. Clearly, this latter example would provide the opportunity to measure competencies such as decisiveness, planning and organizing, and judgment and problem solving, in addition to oral communication. For a leaderless group discussion, job candidates are given a task that they have to address by jointly working together. For instance, they may each assume the role of a supervisor and they may be tasked to meet together to jointly decide how to distribute a pool of money among several employees in the form of bonuses. Their assignment in this assessment is to evaluate the individuals based on the information provided. It is likely they are assigned one person for whom they should advocate during the group discussion. The interactions among the candidates during this simulation are observed by a panel of assessors who evaluate the participants on abilities such as relating with others, behavior flexibility, persuasiveness, leading others, and oral communication. In-basket exercises are often useful for simulating activities that revolve around scheduling, doing paperwork, and writing memos. During these simulations, test takers are given materials that mirror the types of information that typically comes across the job incumbent’s desk. Candidates are asked to indicate how they would respond to each situation, and their responses are evaluated by at least two assessors. Some of the common abilities measured by in-basket exercises include planning and organizing, judgment and problem solving, decisiveness, written communication, and knowledge of administrative procedures. One major drawback to live simulations is that, in contrast to traditional assessments with closed-ended responses, they are resource intensive to administer and score. Test takers typically prepare for the assessment by reading background materials that explain the scenario in depth. They then complete the simulation by interacting with role players and/or assessors. At best, this means there is a 1-to-1 test taker-to-evaluator ratio which
is vastly different, and in turn more time consuming, than the group administration and scoring afforded by traditional closed-ended tests. That said, many organizations consider the detailed, behavioral information gained from such intensive tools sufficiently valuable that it outweighs the cost and time to implement them. There is no doubt that simulations offer unique insight into one’s core competencies, which helps provide a more complete understanding of an individual’s capabilities to perform. This information is often critical when identifying the right person to serve in a supervisory, managerial, or executive capacity. These tools also provide a rich source of behavioral information that easily lends itself to individualized feedback of strengths and developmental needs. As such, many organizations use simulations to help develop their pool of future leaders. Often simulations are combined into a method called an assessment center. This type of process includes several standardized measures that provide multiple opportunities for behavioral evaluations by a number of observers (i.e., assessors; [International Task Force on Assessment Center Guidelines, 2014]). The basic premise of an assessment center is to offer a variety of measures which, when combined, provide a more complete picture of an individual than any single assessment can offer. This methodology is grounded in the work of German psychologists in the early 1900s who stressed the importance of evaluating an individual’s total personality and abilities in order to effectively assess leadership potential (Thornton & Byham, 1982; Moses, 1977). This is achieved by combining a number of different assessments while ensuring some overlap in what the tests measure to obtain a more complete coverage of the targeted constructs. These early psychologists also asserted that the assessment should mirror natural, everyday situations. Assessors observe the participants’ behaviors, and their resulting judgments are pooled to yield a comprehensive evaluation of each candidate’s standing with regard to the targeted competencies.
Simulations may be administered independently and not part of an assessment center; however, it is more common to see them assembled together in an effort to obtain a more holistic perspective of one’s capabilities. This latter approach also lends itself to stronger content validity evidence since together the measures cover more of the domain space (i.e., relevant knowledge, skills, and abilities). The simulations that comprise assessment centers are typically built following a content-oriented development process. This involves relying on job analytic information to identify critical tasks to simulate and important knowledge, skills, and abilities (or competencies) to assess. The ultimate goal is to develop exercises that simulate typical job situations so that the participants elicit behaviors associated with the targeted competencies. In addition to the content validity evidence, research on assessment centers has consistently demonstrated that overall scores are valid predictors of future job performance (Tsacoumis, 2007; Arthur et al., 2003; Schmidt & Hunter, 1998; Tziner, Ronen, & Hacohen, 1993; Gaugler, Rosenthal, Thornton, & Bentson, 1987). Simulations and the assessment center methodology are very powerful measurement tools that have withstood the test of time. However, as mentioned, since the implementation of these measures requires that each participant interacts with role players and that assessors observe all participants’ behaviors, they are time consuming to administer and score. Given this, they tend to be quite costly, causing some to question their ultimate utility. Even though simulations provide unique information about one’s capabilities beyond what can be captured in traditional closed-ended measures, economic realities and shrinking budgets have had a significant impact on their popularity. As an example, the author has a long-term client who used four job simulations—packaged into a full-day assessment center—to evaluate candidates on their leadership competencies required as a first-line supervisor. The resulting assessment center scores were used to select
the individual most qualified to be promoted to specific vacancies. They used this process for a decade until ultimately the executives mandated a change that would significantly reduce the costs to evaluate candidates applying for promotion while still measuring the relevant competencies. Some organizations reduce the costs associated with administering live simulations by using some basic technological solutions to facilitate the administration and/or scoring processes. For example, in some cases, role plays are conducted by using the telephone or by using some type of video-conferencing technology rather than having the test taker interact with the role player in person. Alternatively, some companies record the role-play responses and send the videos to assessors to score remotely. In terms of in-basket exercises, one cost-saving approach involves presenting all the information electronically and having the test taker type their course of action for each item into a response box in the system. Their responses are then electronically sent to assessors, who provide their scores remotely. Others have created multiple-choice in-baskets that do not rely on any role players or assessors to implement or score. In other instances, organizations are explicitly seeking to move to a completely computer-based system for both administration and scoring as an alternative to more traditional methods for measuring soft skills for selection, promotion, and developmental processes. More specifically, they are pressing for engaging online assessments of supervisory or leadership competencies that include an automated scoring process in situations where, in the past, they likely would have sought live job simulations. One viable solution is to explore the feasibility of using rich media to create such measures. Rich media simulations have been used for years to help children learn various concepts. For example, Knowledge Adventure has a video game series called Jump Start in which children interact with a 3D virtual world that helps the user learn various concepts
(see http://www.jumpstart.com/). As another example, EcoMUVE helps middle school students learn about ecosystems while interacting in a virtual environment (http://ecomuve.gse.harvard.edu/). Sophisticated computer-based simulations have also been used to train pilots to fly and to teach law enforcement officers the principles associated with use of force. All of these are prime examples of using rich media to help teach and reinforce procedural knowledge. The challenge at hand is creating computer-based simulations that allow one to effectively measure soft skills in a reliable and valid manner so they can be used in high-stakes, decision-making situations. This was the exact challenge that the author and her colleagues needed to address. As noted above, she had a client who needed an effective substitute for its promotional assessment center. Simultaneously, other clients were seeking an engaging computer-based alternative for evaluating leadership competencies and for offering targeted developmental feedback on those abilities and knowledge areas. In contrast to the trending option of computer-based delivery that captures open-ended responses that are then scored by live assessors, these clients wanted the entire process to be self-contained. That is, they wanted to remove the human scorers entirely and greatly reduce the personnel needed to administer the process from multiple role players and assessors to one administrator or proctor for a group of test takers. The author had seen a few demonstrations of rich media assessments used for training purposes, but none went as far as trying to simulate a live interaction or address the complicated measurement challenges associated with this type of simulation with enough precision that the resulting score could be used to make personnel decisions. Thankfully, the author was aware of existing software that could be harnessed to create interactive assessments that allow branching in a way that could be comparable to the paths people may take in a live role play or other types of live simulations. This, coupled with a strong foundation in
the principles for creating valid assessments for high-stakes employment scenarios, served as the basis for addressing the clients’ needs. The ultimate goal was to develop simulations that effectively measure the targeted competencies with enough accuracy to support high-stakes decisions while providing an engaging, realistic test experience that did not involve any live assessors or scorers.
Rich Media Simulations

A high-fidelity simulation is an imitation of a real-world situation in which participants take action based on the scenario and available information, just as they would in real life. As previously noted, common simulations used in the employment testing arena are live role-play exercises, oral presentations, and in-basket exercises. During these assessments, candidates interact with evaluators and provide open-ended responses to work-like situations. For example, there may be a situation where an employee is having problems with a project. In this case, the test taker, who assumes the role of the employee's supervisor, reads information about the organization, the project, and the employee. Then, the test taker meets with an assessor, who plays the role of the employee, to discuss the project and to share his or her concerns. As another example, if the job requires oral presentation skills, then the test taker may be asked to read some background materials about a work-related problem and present his or her recommendations on how to proceed. As previously described, in-basket exercises involve describing how one would handle specific issues that are similar to those the job incumbent may receive via email, phone, or in writing. For the purposes of this chapter, a rich media simulation (also referred to as a virtual simulation) is a computer-based, high-fidelity assessment that uses animation or video to mimic realistic scenarios. It includes various points of interaction to obtain input from a test taker in order to assess critical knowledge and abilities (also
commonly known as competencies). Given the client-specific challenges, the focus is on developing self-contained assessments (i.e., computer-based delivery and scoring) that are created following professional and legal guidelines associated with building valid tests (Equal Employment Opportunity Commission [EEOC], 1978; AERA, APA, & NCME, 2014; Society for Industrial and Organizational Psychology [SIOP], 2003) and that meet the criteria established for using the resulting test scores to make promotion decisions. Let us consider the same situation where an employee is having a problem on a particular project. The scenario could begin with the employee (e.g., an animated character) entering the supervisor's (i.e., the test taker's) office and proceeding to share the issues associated with the project. In its simplest form, the test taker could then be presented with a set of response options and asked to: (1) select what should be done, (2) select what he or she would do, (3) select the best and worst responses, or (4) rate the effectiveness of each response. A short vignette like this could represent one item on a rich media situational judgment test. Recall that situational judgment items present the test taker with a scenario and then ask for some type of judgment about possible responses to that situation. Historically, situational judgment items are text-based, although with the growing prevalence of rich media solutions, there is a trend toward presenting the scenario using animation or video. There is extensive literature documenting the validity of situational judgment tests for measuring a broad range of soft skills in an employment context (McDaniel et al., 2007). Although the author is a proponent of this testing format, since test takers are simply responding to a specific situation, rather than proceeding through a scenario that develops based on their responses, rich media situational judgment items do not fall into the category of a high-fidelity simulation for the purposes of this discussion. Going back to the initial scenario where an employee enters the supervisor's office to share
concerns about a project, this could proceed beyond a simple situational judgment item. Instead, the test taker could then be presented with a broad range of response options, such as asking the employee questions; asking for documents about the project; or talking with coworkers, the client, his or her boss, or others. That is, the test taker could proceed just as he or she would if presented with this situation in real life. To help visualize how this may appear during the actual assessment, Figure 1 captures a screenshot of a virtual simulation demonstration created by the Human Resources Research Organization (HumRRO). This is a situation where the test taker is “conversing” with the male character and then given an option (among others) to try to obtain more information by asking the office assistant to try to reach another coworker. This image shows that assistant entering the test taker’s office to respond to this request.
In terms of the assessment as a whole, the scenario continues to unfold based on the participant's responses. Therefore, if the test taker requests to see relevant materials, then that information is presented and is accessible at all times during the assessment. If the individual wants to talk with another employee, then that employee "arrives" in the supervisor's office (i.e., appears on the screen). At various points throughout the scenario, the test taker sees a list of possible responses to the current situation and is asked different questions, such as how effective each response is; how soon he or she would take this action, if ever; and what he or she would do next. The responses are then compiled and fed into the overall evaluation of key competencies. In the end, an overall assessment score, as well as scores for each targeted competency, could be computed. In this type of interactive assessment, test takers are offered numerous opportunities to determine how to proceed, and the scenario unfolds based on the selected responses.
Figure 1. Rich media simulation screenshot (from HumRRO demo www.humrro.org/simdemo.html)
The inclusion of branching increases the realism and a respondent's engagement while simultaneously allowing the focal competencies to be assessed with increased fidelity. Granted, branching also means that not all test takers will take identical paths. In turn, they may not all be presented with the same pieces of information, interactions, or sets of questions. Consider the flowchart presented in Figure 2. This assessment starts with Gary and then Matthew entering the test taker's office. At the conclusion of the interaction with Matthew, the participant is given the choice to interact with Jeffrey, Susan, Tina and Doug, or the former project director. Regardless of their choice, all test takers eventually interact with Jeffrey. If they do not choose to talk with Jeffrey, then he appears in the test taker's office, forcing that interaction.
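The forced core interaction described above can be pictured as a small graph of scenes, some optional and some designated as core. The following sketch, written in Python purely for illustration, uses the character names from Figure 2; the data structure and function are assumptions made for this example and do not represent the software actually used to build these assessments.

# Simplified, hypothetical sketch of a branching scenario: optional
# interactions are offered as choices, while "core" scenes are forced
# into the path if the test taker never selects them.

scenes = {
    "Matthew":       {"core": True},
    "Jeffrey":       {"core": True},    # holds information everyone must respond to
    "Susan":         {"core": False},   # non-core paths carry no new information
    "Tina and Doug": {"core": False},
}

def completed_path(chosen_order):
    """Return the sequence actually experienced: the test taker's choices,
    followed by any core scenes that were never chosen (forced visits)."""
    path = [scene for scene in chosen_order if scene in scenes]
    forced = [scene for scene, meta in scenes.items()
              if meta["core"] and scene not in path]
    return path + forced

# A test taker who never chooses Jeffrey still ends up interacting with him.
print(completed_path(["Matthew", "Susan"]))   # ['Matthew', 'Susan', 'Jeffrey']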
To ensure sufficient commonality among the test takers, everyone interacts with a core set of individuals (e.g., animated characters) and responds to a common set of questions. Referring back to the flowchart, one can readily identify the core interactions. These scenes along this path and their associated questions are, by far, the large majority of available “scoring” opportunities in the assessment. If a participant does not select the option that involves the core interactions, then the assessment forces those interactions by presenting the core character and the corresponding information. For example, let us say that the character “Jeffrey” has information that is critical to the situation being addressed in the assessment and that it is imperative for the test taker to be evaluated on how he or she responds to that information. In real life, one would have the choice to talk with Jeffrey or not. That same choice is offered in the virtual
Figure 2. Sample branching
simulation, so a test taker may choose all the other available options except speaking with Jeffrey. In that case, Jeffrey would appear (e.g., arrive in the test taker's office) to share the information needed to proceed with the assessment and to score the test taker on the targeted competencies. This is similar to a real-life situation where Jeffrey would seek out the decision maker to communicate the information even if the decision maker did not explicitly seek out Jeffrey. The most common rich media simulation the author has used is the virtual role play (VRP). Similar to a traditional role play, the VRP evokes targeted competencies through realistic, multifaceted scenarios that an incumbent might encounter on the job. However, where a traditional role play involves live actors and assessor-based scoring, the VRP comprises animated characters, a variety of closed-ended questions (e.g., "rate the effectiveness of the following sets of responses," "how soon would you do each of the following?"), and branching capabilities that allow users to determine who they meet with, what they do next, and what type of information to review. Thus, the VRP integrates many of the benefits of traditional role plays while allowing for testing in an online environment, immediate scoring and feedback reporting, and on-demand administration. Another type of virtual simulation is the rich media in-basket assessment. Like other job simulations, the basic context places the participant in an environment that mirrors the target job; however, the items are more discrete than those in a VRP. In the VRP, the test taker is delving into one complex situation, whereas in the in-basket, the test taker is responding to a number of different issues. Each individual item or problem is presented electronically (e.g., via email), through a "phone call," or by a person (i.e., animated character) walking into the "participant's" office. For each item, test takers respond to questions or prompts such as "How soon would you do each action?", "How much of a problem is each of the following issues?", "How effective is each response?", or "Rank order the list of possible actions in terms of priority." It is also feasible to present forms and ask the test taker to click on the sections of the forms where there are errors. Even though the topics addressed by the items in an in-basket are more independent of one another than the questions in a VRP, it is likely there will be some branching in a virtual in-basket that reflects choices a test taker makes. For example, the test taker may be asked if he or she would approve a particular form. If the answer is yes, then the test taker moves on to the next item. If the test taker would not approve the form, then he or she is asked to provide information explaining the rationale for not approving it.
effective is each response?,” or “Rank order the list of possible actions in terms of priority.” It is also feasible to present forms and ask the test taker to click on the sections of the forms where there are errors. Even though the topics addressed by the items in an in-basket are more independent from one another than the questions in a VRP, it is likely there will be some branching in a virtual in-basket that reflects choices a test taker makes. For example, the test taker may be asked if he or she would approve a particular form. If the answer is yes, then the test taker moves on to the next item. If the test taker would not approve the form, then he or she is asked to provide information explaining the rationale for not approving it. During the past few years, the author and her colleagues at HumRRO have developed a suite of unique and custom rich media simulations for a number of clients. As a concrete example, in one of these VRPs. test takers assume the role of an experienced human resource professional, which is the position of interest. They are faced with a situation where the organization just completed a merger and, as a result, they need to research and recommend several new human resources policies. During the simulation, the participant is given the opportunity to review relevant materials about different options and to talk with human resource professionals from the two companies that merged, as well as other coworkers. In the end, they must provide their recommendations associated with the new policies. In another VRP, the participant assumes the role of a new supervisor and is tasked with dealing with a stalled project. During the course of the simulation, the test taker interacts with the current project manager as well as others who are assigned to the project. As the scenario unfolds, it becomes clear that there are several differing opinions and personnel conflicts that need to be addressed, in addition to uncertainty about how to actually accomplish the project requirements. The test taker is asked to address all of these issues.
The virtual in-baskets that have been developed by the HumRRO staff include items such as requests from employees (e.g., to take time off or to participate in training), invitations to speak at community meetings, complaints from the general public, questions about how to handle a violation of procedures (e.g., taking sensitive material out of the office), and forms to review for accuracy. As can be noted, these types of items are very similar to those found in live in-basket exercises; however, in the animated versions the issues are presented in a variety of manners, such as via email, a phone call, or a visitor to the office. Following a content-oriented development process, the test developers ensure the simulations reflect the specific client-based context and relevant job activities. Some are being used to select candidates for promotion to supervisory positions. Others are being used as developmental tools for people seeking to improve their supervisory, managerial, or executive-level leadership competencies. In all cases, we have developed multiple rich media, interactive assessments as a means to obtain a variety of indicators of one’s capabilities. In some instances the test batteries include a VRP, in-basket, and situational judgment test, whereas in other cases they contain a VRP and situational judgment test. As noted later in the future directions section of this chapter, the author believes that due to the nature of the rich media delivery format, the distinctions among these different types of virtual assessments will be lost. Instead, the features of each individual method (e.g., VRP, in-basket, situational judgment test) will be blended into a single assessment that provides a more holistic test experience that more accurately reflects the target job. Given the focus on simulating realistic and job-related situations, rich media assessments are ideal for training, career development, and selection/promotion processes. However, to ensure they are effective, it is critical that they are grounded in data that reflect the job requirements. This is particularly important for measures used
to make selection or promotion decisions. The literature associated with developing valid live simulations, coupled with the technical, professional, and legal guidelines and standards, serves as the key foundation for the development of the rich media assessments such as the VRPs and animated in-baskets (EEOC, 1978; AERA, APA, & NCME, 2014; SIOP, 2003). Specifically, we follow the basic principles for content validation, which involves developing measures that reflect the content and are representative of the target job (Tsacoumis, 2007). This is accomplished by relying on current job analysis data and input from experienced subject matter experts, as follows:

• Identify the important job tasks.
• Identify the important knowledge, skills, and abilities/competencies, and, if the measure will be used for selection or promotion, ensure the competencies are required at job entry.
• Review linkages between knowledge, skills, abilities, and tasks.
• Identify important tasks that can be simulated.
• Develop the content of the assessment based on a subset of those tasks.
• Identify subject matter experts who are very knowledgeable about the job/position.
• Work with those subject matter experts on the content and scoring criteria.
• Refer back to the linkages between knowledge, skills, abilities, and tasks throughout the development process.
• Ensure the assessment content reflects the target position.
• Ensure the test taker has ample opportunities to demonstrate behaviors relevant to the critical knowledge, skills, and abilities/competencies.
• Demonstrate linkages between knowledge, skills, and abilities/competencies and each assessment.
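One simple way to operationalize the linkage reviews listed above is to tabulate which scored response options map to which competencies and to flag any targeted competency that has too few measurement opportunities. The sketch below is a generic illustration only; the option identifiers, competency labels, and the threshold of five options per competency are assumptions invented for this example.

# Illustrative check that each targeted competency is tied to enough
# scored response options before the assessment is finalized.

option_to_competency = {
    "opt_01": "Judgment and Problem Solving",
    "opt_02": "Planning and Organizing",
    "opt_03": "Judgment and Problem Solving",
    "opt_04": "Decisiveness",
    # ... one entry per retained, retranslated response option
}

MIN_OPTIONS_PER_COMPETENCY = 5   # assumed threshold for illustration

counts = {}
for competency in option_to_competency.values():
    counts[competency] = counts.get(competency, 0) + 1

for competency, n in sorted(counts.items()):
    if n < MIN_OPTIONS_PER_COMPETENCY:
        print(f"Warning: only {n} scored option(s) target '{competency}'")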
Ultimately, our goal is to develop assessments that focus on the important aspects of the job, correspond structurally to the job, and possess good measurement properties. Test developers begin by selecting a realistic, job-related scenario or by identifying a number of relevant issues or problems to serve as the foundation of the simulation. This is accomplished by referring to current job analysis information to identify important tasks performed on the job that can be used as the foundation for the vignettes (Tsacoumis, 2014). For example, the job analysis results may reveal the following critical job tasks for a general supervisory position:

• Describe to a subordinate how to get the information needed to proceed on a project.
• Resolve conflicts among employees.
• Brief supervisor on recommendations about how to proceed on a project.
• Provide performance feedback to a subordinate.
• Review the merit of a proposal for a new project to determine if it warrants implementation.
• Authorize requests for professional development activities according to office guidelines.
• Review subordinates' work products to ensure accuracy, completeness, and compliance with organizational procedural requirements.
Accordingly, we could develop a scenario where the supervisor’s boss is interested in a status report on a project, and at the same time the employee managing that project is overwhelmed and is having difficulty dealing with others working on that project. This assessment would likely simulate the first three tasks listed above, and possibly even the fourth. Another assessment could focus on a proposal for a new project that would require collecting information from a variety of
sources and then ultimately meeting with the boss to make recommendations about how to proceed. If the scenario is packaged as a VRP that involves branching and various alternative paths, we have learned that it is beneficial to create a flowchart like the one shown above in Figure 2 to depict the interactions and the choices (i.e., potential paths) the test taker will have after each interaction or decision point. A flowchart also is an easy way to ensure that each participant will complete the core interactions, which are required for reliable and valid scoring. The next step involves preparing the storyboard or script. This includes information about the organization, context, and characters, as well as the general storyline for the VRP or the specific issues or topics to be addressed in the in-basket. Then, for each simulated interaction, we generate response options targeted to a particular competency. Ideally, we work with subject matter experts to create these options. The goal is to offer a range of realistic responses that are reasonably comprehensive, to increase the likelihood that test takers would select one of those choices if faced with the situation in real life. When developing simulations, we are careful to avoid thinking linearly. We have learned that it is important not to assume that all participants will respond in a similar manner. In the same spirit, we have also learned that a developer should not think just in terms of how he or she would approach the situation. Taken together, we work hard to consider all possible response options and to generate paths that reflect a variety of those choices that can be simulated. By doing this, developers undoubtedly introduce branching. The goal is to offer just enough different paths to support the sense of realism without introducing unwarranted complexity. To facilitate the scoring process, developers ensure that all information is conveyed during the core interactions and that those paths not experienced by all test takers are
fairly innocuous. For example, the individuals on these non-core paths may not be available or may not have any relevant information to share. Therefore, the sense of realism is preserved by offering realistic options in response to the scenario, while the potential differences in test taker experiences are controlled by ensuring these paths do not provide any substantive information. Once the choices for responding are identified, it is critical that the question posed after each set of responses helps provide information associated with the targeted competency and captures the behaviors the test developers want to elicit. An engaging simulation is worthless from a measurement perspective unless we can be certain that the inferences one can make from the resulting score are meaningful. Some potential prompts or questions that can be asked after a set of response options include:

• Rate the effectiveness of each response option.
  1 = Highly ineffective
  4 = Moderately effective
  7 = Highly effective
• How soon should the following actions be taken (if at all)?
  1 = This action should not be taken
  2 = No rush to take this action
  3 = Do soon after acting on top priorities
  4 = Top priority; do this immediately
• How critical is it to obtain the following information?
  1 = Not critical
  4 = Somewhat critical
  7 = Extremely critical
• How much of a problem does each of the following issues present for the organization?
  1 = Not much of a problem
  4 = Somewhat of a problem
  7 = Significant problem
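Although these prompts differ in wording, each shares the same underlying structure: a question stem, a set of scale anchors, and, for scoring purposes, a keyed value later supplied by subject matter experts. A minimal Python sketch of that structure follows; the anchor labels mirror the examples above, while the option text, keyed value, and competency label are invented purely for illustration.

# Hypothetical structure for a scored prompt attached to one response option.

prompt_types = {
    "effectiveness": {
        "anchors": {1: "Highly ineffective",
                    4: "Moderately effective",
                    7: "Highly effective"},
    },
    "urgency": {
        "anchors": {1: "This action should not be taken",
                    2: "No rush to take this action",
                    3: "Do soon after acting on top priorities",
                    4: "Top priority; do this immediately"},
    },
}

scored_question = {
    "option_text": "Ask the project manager for a written status summary.",  # invented
    "prompt_type": "effectiveness",
    "keyed_response": 6,   # SME judgment of the option's actual effectiveness (invented)
    "competency": "Planning and Organizing",
}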
The best way to determine which question(s) to ask is to consider exactly what one wants to learn about the test taker. This is more of a challenge given the closed-ended nature of these types of rich media assessments since ultimately, the goal is to try to ascertain what the test taker is thinking and to determine what the person would actually do if faced with the same situation in real life. Clearly, the open-ended response format associated with live simulations is an ideal means to accomplish this; however, it is much trickier when the test taker is offered response options in an entirely closed-ended assessment. Our logic is that if the right questions are posed for a well-crafted set of response options, then we can determine what a person is thinking and how she or he would respond in reality. As suggested above, the response options are initially written to target a specific competency. To ensure the choices do in fact measure the intended constructs, the author and her team conduct a retranslation activity that involves explicitly assigning a competency to each option, without knowing which competency was originally targeted. If a majority of the raters cannot agree on the competency elicited by a particular response option, then either the wording is modified so it is clearer what competency is tapped, or the option is dropped. The final assignments are used to compute competency level scores. The basic scoring premise involves combining responses to all options associated with a competency to provide information about one’s standing on that competency. To accomplish this, we compare the test taker’s answers to some “standard.” More specifically, we do this by using a modified version of a simple deviation score that compares a test taker’s answer to the keyed response provided by subject matter experts. First, we collect judgments of the ideal response to each scored question from subject matter experts. For example, if the question is to indicate the effectiveness of each
course of action, we ask subject matter experts to indicate the actual effectiveness. Their judgments become the keyed responses that are then used to compute a “distance” score for each response option, that is, how close the test taker’s answer is to the keyed response provided by the subject matter experts. The closer one’s answer is to the keyed response, the higher the score that person will receive on the relevant competency. To illustrate this, if the effectiveness rating provided by the subject matter experts of a response option is a 4 (on a 7-point scale) and if the test taker also rates the effectiveness a 4, then that person scores the best he or she can receive on that item since there is no difference in ratings. A rating of 1 or 7 would be the worst response on this particular item since both are three points away from the keyed response. The distance scores for all options associated with a particular competency are then combined to provide a competency level score. Given the use of branching, there may be a few instances where some test takers have more datapoints for a given competency than other test takers. As noted above, branching is used primarily to introduce realism into the scenario; most of the non-core interactions should be insignificant and none should contain new information. Therefore, if there are any scored questions associated with the non-core interactions, they should be considered “icing on the cake.” There should be a sufficient number of other items associated with that competency to provide a reliable indication of a person’s standing on that construct. In addition to competency-level scores, the distance scores for all responses across all competencies are combined to generate an overall assessment score. When doing this, it is important to keep in mind that the literature demonstrates that distance-based scoring is susceptible to coaching (Cullen, Sackett, & Lievens, 2006). Additional research has also suggested that this type of scoring
increases the potential for subgroup differences (McDaniel, Psotka, Legree, Yost, & Weekley, 2011). Given this, we have adopted a variant of the simple distance-based metric, which involves standardizing test taker responses (within-persons, across item responses) and keyed responses prior to calculating the distance score (McDaniel et al., 2011). To summarize, a rich media simulation is an assessment that takes advantage of animation or live video, along with branching technology, to present the test material in a manner that simulates how the scenario may unfold in a real-life situation. All test takers interact with a core set of individuals and stimulus materials and respond to a core set of questions. This helps ensure consistency and standardization among the test takers. Competency and overall assessment scores are computed by calculating the difference between the test taker’s response and the keyed response assigned by subject matter experts. Because the author’s foray into the world of rich media simulations was primarily driven by the needs of several clients, it is important to mention that the goals stated by the various stakeholders were met. These virtual simulations yielded a significant reduction in overall costs as well as in testing time when compared to live simulations or an assessment center (McBeth & Tsacoumis, 2014). Also, given that these measures are completely closed-ended, there is no need for live assessors or scorers, which allows for on-demand testing and addresses the concerns associated with resource constraints. These new assessments also significantly reduce the personnel resources needed to administer the measures since one individual can serve as the testing proctor during a group administration. That said, we are still treading in new territory with new measurement challenges, which warrants additional research and discussion to help us fully understand the true nature of these measures.
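A minimal sketch of the scoring logic described above, assuming NumPy and invented ratings: the simple distance score rewards answers that fall close to the subject matter experts' key, normalized here by the worst possible gap for each keyed value, and the variant after McDaniel et al. (2011) standardizes the test taker's responses (within person, across items) and the keyed responses before the distances are computed. The normalization choice and the example numbers are assumptions for illustration only; in practice, the keyed vector would come from pooled subject matter expert judgments.

import numpy as np

# Invented example: keyed (SME) ratings and one test taker's ratings for the
# response options linked to a single competency, all on a 7-point scale.
keyed     = np.array([4, 6, 2, 7, 5], dtype=float)
responses = np.array([4, 5, 1, 6, 7], dtype=float)

# Simple distance score: the closer to the key, the higher the item score.
worst_gap = np.maximum(keyed - 1, 7 - keyed)           # largest possible gap per item
item_scores = 1.0 - np.abs(responses - keyed) / worst_gap
competency_score = item_scores.mean()

# Variant intended to reduce coaching effects and subgroup differences:
# standardize responses and key before computing distances.
def standardize(x):
    return (x - x.mean()) / x.std()

z_gap = np.abs(standardize(responses) - standardize(keyed))
standardized_competency_score = -z_gap.mean()          # smaller gaps = higher score

print(round(competency_score, 3), round(standardized_competency_score, 3))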
Issues, Controversies, and Problems

Rich media simulations offer an innovative and engaging approach to presenting relevant information as a means to evaluate test takers' competencies based on how they address the situations. However, there is little research demonstrating the effectiveness of these measures. As with any assessment, the main issue is ascertaining whether the tool, in fact, measures the intended constructs and, in turn, whether overall scores are related to the criterion of interest (e.g., job performance). A content-oriented development approach helps to ensure that the resulting assessment mirrors important tasks and provides coverage of the targeted competencies. However, since the response format involves answering questions about various options, as opposed to free response, one could argue that content validity evidence is not sufficient. Typically, in the employment testing arena, test developers will rely on content validity as their sole piece of validity evidence only if the manner in which the test taker is demonstrating the relevant knowledge is comparable to the way an incumbent demonstrates that knowledge on the job. Since employees tend to respond in a free, open-ended manner to work-related situations, rather than selecting from a fixed set of options, organizations tend to want additional validity evidence to support their assessments. Given the design and development steps advocated in this chapter, it is very likely that these measures will demonstrate strong concurrent and predictive validity evidence similar to the long history of validity for live simulations. That said, this is an outstanding question. Relatedly, although there is a clear and rigorous methodology for computing competency-based scores, the construct validity of these measures is unproven. In fact, the assessment center literature often demonstrates higher correlations among dimensions or competencies within an assessment than those found between the same competency across assessments (Lance, 2008). Therefore, despite
the strong criterion-related validity evidence associated with assessment centers, their construct validity is still questioned. It is possible that future research associated with these rich media assessments will yield similar results. When thinking about the question of validity, it is also important to consider the nature of the response options and the impact each could have on the interpretation of the test scores. These assessments simulate real interactions; however, it is impossible to know exactly how test takers read each response option. That is, what intonation, such as pitch, emotion, or attitude, do they assign to the option? What inferences are they making about how the option would be carried out? Consider the following scenario: You are Chris, a new supervisor, and it is your first day on the job. Robert enters your office and tells you that the president of the company is upset that a project is behind schedule. He informs you that the individual responsible for the project is out of the office and not reachable. The test taker then receives the following options and is asked to rate the effectiveness of each response on a 7-point scale (1 = highly ineffective; 7 = highly effective):

1. Ask Robert for more information about the project.
2. Ask your assistant to try to locate the project manager.
3. Ask Robert why he's involved.
4. Tell Robert that the project manager probably has everything under control.
5. Tell Robert you will take care of it from here.

These responses may seem straightforward at first glance, but it is possible for different people to interpret the tone of these responses differently. If this occurs, it is conceivable, even likely, that those different interpretations may provide information about different competencies. For example, consider the third option: Ask Robert why he's involved. One
person may read this as something spoken in a pleasant or even neutral tone, simply asking Robert about his role. In contrast, another person may read it as something stated somewhat harshly, almost accusatory, questioning why Robert is even commenting on something that is none of his business. These two people have totally different interpretations of the same response option, which, understandably, could lead to completely different effectiveness ratings. Let us take that scenario a bit further. At this point, Chris, the supervisor, is talking to Mary, who has completed some work on the project. The test taker is presented with several response options, one of which is to ask Mary how she would proceed. It seems very reasonable to query the effectiveness of this type of response; however, it is unclear to what competency the answer is linked. Asking Mary for her opinion could reflect teamwork and a participatory management style and therefore may be perceived as highly effective. On the other hand, asking Mary her thoughts about how to proceed in a situation that needs immediate action could connote that the supervisor (i.e., the test taker) is not decisive. Therefore, a high effectiveness rating may reflect negatively on one's decisiveness. Historically, one of the benefits of simulations is their ability to offer competency-level information, particularly for the softer skills. The nuances presented above do not tend to be an issue with live simulations since the context, intonation, and tone of the discussion are clear from the interactions between the test taker and the role player. In terms of the rich media counterparts of these measures, the potential for different interpretations of the response choices has serious implications for whether the automated tools can offer the same type of benefit as the live assessments. Another potential issue with the rich media simulations as defined in this chapter is how to deal with the thorny measurement challenges associated
with combining different item types. Recall that in an attempt to get a better understanding of one's thoughts and thought processes, the author recommends posing different types of questions based on what competencies the developers are trying to assess. Consider the following potential types of items:

• Effectiveness ratings on a Likert scale.
• Rank ordering of options.
• Checklist and hot spot forms (click on the section of the form that contains an error).
• Multiple choice (e.g., yes, no, not sure).
• Categorization of options by level of priority.
Although conceptually it is appealing to present different item types as a means to more closely simulate a real situation, measurement experts can readily detect the issue with combining scores from items with different scales. In fact, there is a long history of research regarding options for weighting different items, test components, and scales and the implications for reliability and validity of the resulting composite scores (Rudner, 2001; Wainer, 1976; Wang & Stanley, 1970). It is possible that simply combining scores across items may result in unintended weighting of items if there are large item variance differences (Oswald, Putka, & Ock, 2014). Measurement challenges could also be introduced as a result of branching since respondents complete a slightly different set of items. More specifically, it is feasible that in this situation the reliability of the assessment may decrease since differences in item difficulty can impact how the test takers are rank ordered (Putka, 2014). Of course, as noted above, this can be controlled by ensuring that those interactions not experienced by all test takers do not offer any new information. Even when developers strictly adhere to industry best practices and standards to develop
rich media simulations, it is critical to pay close attention to these measurement issues in order to have confidence in the competency-level and overall scores derived from these measures.
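To see why mixed item formats complicate aggregation, consider the toy example below, which assumes NumPy and invented ratings: summing raw scores lets the wider-range item dominate the composite, whereas a simple linear rescaling to a common metric before combining, the kind of remedy discussed in the next section, preserves the intended, equal weighting. The scale choices and numbers are assumptions for illustration only.

import numpy as np

# Invented example: two item types on different scales for five test takers.
effectiveness_item = np.array([2, 4, 6, 7, 3], dtype=float)   # 1-7 scale
urgency_item       = np.array([1, 2, 2, 3, 4], dtype=float)   # 1-4 scale

# Summing raw scores implicitly weights the item with the larger variance.
raw_composite = effectiveness_item + urgency_item

# Rescaling each item type to a common 0-1 metric (a simple linear
# transformation) before combining keeps the intended, equal weighting.
def rescale(x, low, high):
    return (x - low) / (high - low)

common_composite = (rescale(effectiveness_item, 1, 7) +
                    rescale(urgency_item, 1, 4))

print(np.var(effectiveness_item), np.var(urgency_item))   # unequal raw variances
print(raw_composite)
print(common_composite)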
SOLUTIONS AND RECOMMENDATIONS

The core issue associated with rich media simulations is a question of validity, particularly given their novelty and the lack of research documenting the psychometric properties of these measures. One needs assurances about the competencies or constructs being measured and evidence that the assessment is related to the targeted criterion, which, for the author and her clients, is job performance. To address the issue of validity evidence, the author recommends adhering to a content-oriented development process as described earlier in this chapter, followed by a criterion-related validation study. Given the criticality of the question of validity, it is important to ensure the solutions to this issue are entirely clear. First, content validity evidence is built by relying on relevant job information to identify the important tasks and the critical knowledge, skills, and abilities or competencies needed to perform those tasks. Then, the developers should work with subject matter experts to generate a scenario or short vignettes that reflect a group of important tasks with the intent of measuring the competencies that are linked to, or in other words, are required to perform those tasks. This approach ensures the relevance of the stimulus material or context of the assessment and increases the likelihood that the simulation will provide information relevant to the targeted competencies, thus offering some validity evidence. To help bolster the relevance of the assessment, the next step is to work with subject matter experts
to identify realistic response options that represent a range of effectiveness. Then, developers should conduct a retranslation activity to determine which competency each option reflects. Any option that cannot clearly be categorized should be eliminated. As noted above, this activity often identifies differences in interpretations of the response choices, which may lead to either revising the statement or dropping it entirely. It is not unusual during the course of this process for an option to be assigned to more than one competency. That said, from a measurement perspective, this is troubling since ideally the competencies would be independent of one another. In these instances, the first course of action is to discuss the interpretation of the response, as well as revisit the definitions of the competencies. In most cases, this discussion leads either to a revised judgment by some of the evaluators regarding which competency is targeted or to a modification of the response option so it is more clearly associated with a particular construct. In a few cases, the response option is dropped entirely. The goal is to have clean, independent measures of each competency. If the response options reflect more than one construct, then one cannot be certain what a test taker's response to the options conveys. These processes provide some evidence of the content validity of the assessment. However, since test takers answer closed-ended questions rather than provide free responses as one would do in reality if faced with the situation, some experts would question whether this type of validity evidence is appropriate or sufficient. In addition, given the novelty of these types of rich media simulations and the absence of research documenting the properties of these measures, developers should seriously consider conducting a criterion-related validity study. The author and her colleagues are in the process of collecting this type of evidence using a concurrent research design for several interactive, rich media test batteries. Job incumbents, who have been in the targeted
position for a minimum of one year, serve as the study participants. Concurrently, researchers are collecting performance evaluations from the supervisor of each study participant. Specifically, supervisors rate the test taker on each targeted competency using rating scales developed exclusively for research purposes. One study has been completed and, although the sample size was small (n = 160), the uncorrected criterion-related validity coefficient for a battery of three rich media assessments was .45. Due to the proprietary nature of the information, the author cannot release additional information. Nevertheless, these results are clearly promising, and not necessarily surprising. As previously noted, there is a long, rich history of strong concurrent and predictive validity evidence for live simulations (e.g., Gaugler et al., 1987; Thornton & Byham, 1982). Since the rich media counterparts to the live simulations are also based on job-related information and because the methodology we use to develop these rich media assessments mirrors the steps for developing live exercises, such as role plays and in-baskets, one could expect strong criterion-related validity evidence. We have also collected some information about the potential construct validity associated with these virtual measures. The author and her colleagues conducted additional analyses to help inform whether the competency-level scores truly reflect levels of those capabilities. Intercorrelations among six competencies measured in one battery displayed some overlap; however, they were not redundant with one another (correlations ranged from .01 to .53). The highest correlations were associated with competencies that may share some commonalities (e.g., Leading Others and Relating to Others). This pattern offers some evidence of construct validity, although it is not overwhelmingly definitive. The assessment center literature has questioned the construct validity
of competency-level scores, so it is possible the research associated with rich media simulations will be comparable. Our desire to more accurately understand what the test taker is thinking forces us to address the thorny measurement considerations introduced by the need to combine scores from questions that have different response formats to create competency-level and overall assessment scores. One way to address this is to convert the different types of items to a common metric. For example, developers could use a simple linear transformation to map the responses to a common 7-point scale. The branching capabilities of many rich media simulations also present some unique challenges. Although it is possible to ensure all test takers complete a core set of questions, by design test takers may approach the situation differently and, in fact, not all respondents may complete the same full set of items. That said, developers should build in branching only when it makes sense to do so. There should be enough to provide a sense of realism, but not so much that it is difficult to ensure that everyone is exposed to a sufficient number of core questions. This balance can be achieved by offering realistic options for how to proceed but then having those choices either lead to a common place or to a “dead end” (e.g., the person you want to speak to does not have any relevant information). One can also control the core set of questions by forcing a visit from a key player, a phone call, or a new email message. None of this is to suggest that we should avoid branching; in fact, branching capability is one of the beauties of this type of assessment. It is what allows rich media simulations to be more engaging and interactive so they more closely mirror live simulations and, in turn, reality. The issue at hand is how to deal with branching from a measurement perspective. In this situation, test developers are
faced with an ill-structured measurement design where items are not fully nested within or fully crossed with test takers. In response, the author and her colleagues compute reliability by applying the methodology explicitly designed to address reliability estimation for ill-structured designs (Putka, Le, McCloy, & Diaz, 2008). The resulting coefficient, G(q,k), is an estimate of internal consistency among items comprising the overall assessment score that is similar to coefficient alpha yet accounts for the fact that not all test takers completed all items due to branching. Given the above considerations, there are some clear guidelines when developing interactive rich media simulations. Test developers should follow industry best practices by adopting a content-oriented development approach, conducting a pilot test, and then modifying the scenarios and response options, pruning response options if necessary based on study results. Next, they should conduct a criterion-related validity study and drop additional response options if warranted based on those results. If the situation calls for establishing a cut-score, then the validity study and operational data can help inform the standard for that cut point. In general, researchers can use these data to estimate the accuracy with which correct pass/fail decisions would be made if the cutoff score were set at various points on the overall assessment score continuum. Then, they can meet with the relevant stakeholders to select the cutoff score that best meets the organization’s needs. The author has direct experience developing a valid tool for a high-stakes program in a very litigious environment. The process demonstrated strong concurrent criterion-related validity evidence. In addition, test participants liked the interactive nature of the process and acknowledged the realism of the situations they were asked to address (McBeth & Tsacoumis, 2014). The measurement challenges can be tackled; however, it is imperative to collect additional information so we
can learn more about the measurement properties and more about precisely what is being measured by these simulations.
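To make the two measurement steps above concrete, the sketch below illustrates rescaling items with different response formats to a common 7-point metric and then examining how accurately pass/fail decisions would be made at several candidate cut scores. This is a minimal illustration, not the author's proprietary scoring system; the item formats, score ranges, names, and data are hypothetical.

```python
# A minimal sketch (hypothetical items and data), not an operational scoring system.

def to_seven_point(score: float, low: float, high: float) -> float:
    """Linearly rescale a raw item score from its native range onto a 1-7 scale."""
    return 1.0 + 6.0 * (score - low) / (high - low)

# Native score ranges for three illustrative item formats.
ITEM_RANGES = {
    "effectiveness_rating": (1, 5),   # 5-point effectiveness rating
    "how_soon_ranking": (1, 4),       # rank-order response
    "action_choice": (0, 1),          # keyed / not-keyed choice
}

def overall_score(person: dict) -> float:
    """Unit-weighted composite of the rescaled item scores."""
    rescaled = [to_seven_point(v, *ITEM_RANGES[item]) for item, v in person.items()]
    return sum(rescaled) / len(rescaled)

def decision_accuracy(scores, successful, cutoff):
    """Proportion of test takers whose pass/fail status at this cutoff matches a
    dichotomized criterion (e.g., successful job performance)."""
    hits = sum((s >= cutoff) == ok for s, ok in zip(scores, successful))
    return hits / len(scores)

if __name__ == "__main__":
    sample = [
        {"effectiveness_rating": 4, "how_soon_ranking": 1, "action_choice": 1},
        {"effectiveness_rating": 2, "how_soon_ranking": 3, "action_choice": 0},
        {"effectiveness_rating": 5, "how_soon_ranking": 2, "action_choice": 1},
    ]
    successful = [True, False, True]        # hypothetical criterion data
    scores = [overall_score(p) for p in sample]
    for cutoff in (3.0, 4.0, 5.0):          # candidate cut scores
        print(cutoff, round(decision_accuracy(scores, successful, cutoff), 2))
```

In practice the candidate cutoffs would be reviewed with stakeholders, as described above, rather than chosen purely on classification accuracy.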
FUTURE RESEARCH DIRECTIONS For over 60 years, high-fidelity measures, such as those used in assessment centers, have proven to be powerful measures of soft skills and leadership competencies, and their validity is well documented. They remain popular in many circles, particularly for organizations interested in offering constructive, targeted, and valuable feedback to help their employees develop into future leaders. However, given the resources required to administer live job simulations, coupled with the prevalence of computer-based testing, there is a growing demand for the development of efficient assessment methods that accomplish more than evaluating cognitive abilities, personality, or interests. The use of rich media technology along with software that can accommodate branching and automated scoring offers a promising alternative to live assessments such as role plays and in-basket exercises. The discussion in this chapter describes how versions of these live simulations can be created, delivered, and scored using interactive computer-based software. Although the simulations are presented as discrete assessments (e.g., virtual role play, animated in-basket), there is little that differentiates these tests from one another. The distinctions associated with the live counterparts reflect the delivery and response formats, but those differences do not translate to the computer-based versions of the simulations. Given this, it seems logical that future rich media simulations are more likely to contain a broad range of item types and questions that cover short distinct situations, as well as more in-depth issues that need to be addressed. The author anticipates that in the future, rather than having a battery
with a virtual role play, animated in-basket, and possibly a situational judgment test, we will be creating an inclusive “day-in-the-life” assessment. This measure will reflect the true nature of the job as a whole by asking test takers to handle numerous problems, address phone calls and emails as they come in “real time,” talk to visitors who enter the office, and decide how to manage one’s time to address everything. Of course, to ensure standardization and fairness, the assessment will still have to be crafted very carefully so that all participants complete a core set of items. The future for interactive, rich media simulations is to continue down this path. We should keep developing these types of assessments for a broad range of situations: employment testing for different occupations, educational testing for students at all levels, and certification and licensure testing for credentialing bodies. That said, ultimately their success is dependent on additional research that demonstrates the validity of these measures, including their relationship to relevant criteria (e.g., job performance) and a clear understanding about the meaning of the resulting competency-based scores (i.e., construct validity). In addition, we need to continue to take a close look at the scales used in the questions, the methods of combining scores, the relationship among the different scale scores, the reliability of the scale, along with countless other psychometric questions that arise as we delve into evaluating the utility and benefits of a new assessment method. At this point, practice is getting ahead of the research literature. Even in instances where relevant data have been collected, limits on the extent to which organizations allow those data to be made public and long publication cycles suggest that it may be some time before there is a strong research base associated with rich media simulations (Gibbons, 2013). Those interested in these measures need to move as swiftly as possible to publicize their work and results.
CONCLUSION The move toward interactive rich media simulations is a true game changer. The work the author referenced throughout this chapter is a significant step in demonstrating to the testing industry the power of well-developed interactive online simulations. These measures reflect an innovative approach to collecting multiple indicators of one’s competencies and capturing information about one’s capabilities in an engaging and efficient manner. All facets of the simulations combine to provide a sense of realism for test takers and come as close to a live simulation as is feasible via computer. There is no doubt that high-fidelity simulations offer an excellent means for assessing one’s competency level. These measures include performance assessment in both the educational and employment contexts, as well as job simulations used as developmental tools or measures to hire or select people for promotion. However, their dependence on live role players and evaluators is often a deterrent to their use given the cost and time required to implement them. In addition, given the increased reliance on technology to evaluate one’s skills, organizations, stakeholders, and other policy makers continue to search for more efficient ways to gather information about one’s capabilities. Interactive rich media simulations offer a viable alternative. High-end computer-based simulations have been used for training and assessing procedural knowledge for several occupations (e.g., pilots, law enforcement officers) and even the more scalable versions have been available to help users learn various concepts (e.g., software to evaluate children’s knowledge of some basic math, reading, or science concepts). This chapter takes the principles of interactive simulations a step further by illustrating how they can be used to assess softer (i.e., non-cognitive) skills with sufficient rigor that high-stakes personnel deci-
sions can be made based on the resulting scores. Although additional research is needed to fully understand these new, innovative assessments, the initial psychometric properties and validity results offer strong support for these measures.
REFERENCES American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. Arthur, W., Day, E. A., McNelly, T. L., & Edens, P. S. (2003). A meta-analysis of the criterionrelated validity of assessment center dimensions. Personnel Psychology, 56(1), 125–153. doi:10.1111/j.1744-6570.2003.tb00146.x Bray, D. W., & Grant, D. L. (1966). The assessment center in the measurement of potential for business development. Psychological Monographs, 80(17), 1–27. doi:10.1037/h0093895 PMID:5970218 Cascio, W. F., & Phillips, N. F. (1979). Performance testing: A rose among thorns? Personnel Psychology, 32(4), 751–766. doi:10.1111/j.1744-6570.1979.tb02345.x Cullen, M. J., Sackett, P. R., & Lievens, F. (2006). Threats to the operational use of situational judgment tests in the college admission process. International Journal of Selection and Assessment, 14(2), 142–155. doi:10.1111/j.14682389.2006.00340.x Equal Employment Opportunity Commission. (1978, August 25). Uniform Guidelines on Employee Selection Procedures. Federal Register, 44, 38290–38315.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C. III, & Bentson, C. (1987). Meta-analysis of assessment center validity. The Journal of Applied Psychology, 72(3), 493–511. doi:10.1037/00219010.72.3.493 Gibbons, A. M. (2013, March). Research evidence and AC 2.0: What we know and what we don’t. Presentation at the 33rd Annual Assessment Centre Study Group Conference, Stellenbosch, South Africa. Goldstein, I. L., Zedeck, S., & Schneider, B. (1992). An exploration of the job analysis-content validity process. In N. Schmitt & W. C. Borman (Eds.), Personnel Selection. San Francisco, CA: Jossey-Bass. International Task Force on Assessment Center Guidelines. (2014). Guidelines and ethical considerations for Assessment Center Operations (6th ed.). Retrieved from http://www.assessmentcenters.org/Assessmentcenters/media/2014/International-AC-Guidelines-6th-Edition-2014.pdf Klimoski, R., & Brickner, M. (1987). Why do assessment centers work? The puzzle of assessment center validity. Personnel Psychology, 40(2), 243–260. doi:10.1111/j.1744-6570.1987. tb00603.x Lance, C. E. (2008). Why assessment centers do not work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(1), 84–97. doi:10.1111/ j.1754-9434.2007.00017.x Lane, S., & Stone, C. A. (2006). Performance Assessment. In R. L. Brennan (Ed.), Educational Measurement (4th ed.). Westport, CT: American Council on Education/Praeger.
McBeth, R., & Tsacoumis, S. (2014, October). Going fully automated: A case study. Presentation at the 38th International Congress on Assessment Center Methods, Alexandria, VA.
McDaniel, M. A., Hartman, N. S., Whetzel, D. L., & Grubb, W. L. III. (2007). Situational judgment tests, response instructions and validity: A meta-analysis. Personnel Psychology, 60(1), 63–91. doi:10.1111/j.1744-6570.2007.00065.x
McDaniel, M. A., Psotka, J., Legree, P. J., Yost, A. P., & Weekley, J. A. (2011). Toward an understanding of situational judgment item validity and group differences. The Journal of Applied Psychology, 96(2), 327–336. doi:10.1037/a0021983 PMID:21261409
Moses, J. L. (1977). The assessment center method. In J. L. Moses & W. C. Byham (Eds.), Applying the assessment center method (pp. 3–11). New York: Pergamon Press. doi:10.1016/B978-0-08-019581-0.50006-2
Oswald, F. L., Putka, D. J., & Ock, J. (2014). Weight a minute, what you see in a weighted composite is probably not what you get. In C. E. Lance & R. J. Vandenberg (Eds.), More statistical and methodological myths and urban legends. New York: Taylor & Francis.
Putka, D. J. (2014, March). Transitioning from traditional ACs to automated simulations: Insights for practice and science. Invited keynote presentation at the 34th Annual Assessment Centre Study Group (ACSG) Conference, Stellenbosch, South Africa.
Putka, D. J., Le, H., McCloy, R. A., & Diaz, T. (2008). Ill-structured measurement designs in organizational research: Implications for estimating interrater reliability. The Journal of Applied Psychology, 93(5), 959–981. doi:10.1037/0021-9010.93.5.959 PMID:18808219
Rudner, L. M. (2001). Informed test component weighting. Educational Measurement: Issues and Practice, 20(1), 16–19. doi:10.1111/j.1745-3992.2001.tb00054.x
Rynes, S. L., & Connerley, M. L. (1993). Applicant reactions to alternative selection procedures. Journal of Business and Psychology, 7(3), 251–277. doi:10.1007/BF01015754
Schmidt, F. L., Greenthal, A. L., Hunter, J. E., Berner, J. G., & Seaton, F. W. (1977). Job sample versus paper-and-pencil trades and technical tests: Adverse impact and examinee attitudes. Personnel Psychology, 30(2), 187–197. doi:10.1111/j.1744-6570.1977.tb02088.x Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. doi:10.1037/00332909.124.2.262 Schmitt, N., & Ostroff, C. (1986). Operationalizing the “behavioral consistency” approach: Selection test development based on a content-oriented approach. Personnel Psychology, 39(1), 91–108. doi:10.1111/j.1744-6570.1986.tb00576.x Society for Industrial and Organizational Psychology. (2003). Principles for the validation and use of personnel selection procedures. Bowling Green, OH: Author. Thornton, G. C. III. (1992). Assessment centers in human resource management. Reading, MA: Addison Wesley. Thornton, G. C., & Byham, W. C. (1982). Assessment centers and managerial performance. New York: Academic Press. Thornton, G. C. III, & Mueller-Hanson, R. A. (2004). Developing organizational simulations: A guide for practitioners and students. Mahwah, NJ: Lawrence Erlbaum Associates.
Tsacoumis, S. (2007). Assessment centers. In D. L. Whetzel & G. R. Wheaton (Eds.), Applied measurement: Industrial psychology in human resources management (pp. 259–292). Mahwah, NJ: Lawrence Erlbaum. Tsacoumis, S. (2014, October) Do role plays need the human touch? Presentation at the 38th International Congress on Assessment Center Methods, Alexandria, VA. Tziner, A., Ronen, S., & Hacohen, D. (1993). A four-year validation study of an assessment center in a financial corporation. Journal of Organizational Behavior, 14(3), 225–237. doi:10.1002/ job.4030140303 U.S. Merit Systems Protection Board. (2009). Job simulations: Trying out for a federal job. Washington, DC: Author. Wainer, H. (1976). Estimating coefficients in linear models: It don’t make no nevermind. Psychological Bulletin, 83(2), 213–217. doi:10.1037/00332909.83.2.213 Wang, M. W., & Stanley, J. C. (1970). Differential weighting: A review of methods and empirical studies. Review of Educational Research, 40(5), 663–705. doi:10.3102/00346543040005663 Wernimont, P. F., & Campbell, J. P. (1968). Signs, samples, and criteria. The Journal of Applied Psychology, 52(5), 372–376. doi:10.1037/h0026244 PMID:5681116
KEY TERMS AND DEFINITIONS
Distance Score: The difference between the test taker’s answer and the keyed response.
High-Fidelity Assessment: A test that mirrors or closely simulates a real-world situation in which test takers take action based on the scenario and available information just as they would in real life.
High-Stakes Test: An assessment used to make important decisions about the test taker, such as one used to determine whom to accept into a college or one used to hire or promote within an organization.
Rich Media Simulation: An assessment that uses animation or live video along with branching technology to present the test material in a manner that simulates how the scenario may unfold in real life by allowing the test taker to dictate how the assessment proceeds.
Simulation: An assessment that mirrors the activities of interest, although not as an exact replica. These measures are used in circumstances when it is impossible to have the individual complete an exact slice of the job.
Stimulus Material: Test material that provides information about the context of the scenario for the test taker, such as a description of the organization, the employees in the office, or details about the projects, issues, or concerns.
Virtual Role Play: A rich media simulation that presents a realistic, multifaceted scenario that an individual might encounter on the job and that is designed to evoke targeted competencies. Test takers select who they meet with, what they do next, and what type of information to review, and they are scored on their responses to a series of closed-ended questions throughout the simulation (e.g., “Rate the effectiveness of the following sets of responses,” “How soon would you do each of the following?”).
Chapter 11
An Approach to Design-Based Implementation Research to Inform Development of EdSphere®:
A Brief History about the Evolution of One Personalized Learning Platform
Carl W. Swartz, MetaMetrics, USA & University of North Carolina, USA
Sean T. Hanlon, MetaMetrics, USA
E. Lee Childress, Corinth School District, USA
A. Jackson Stenner, MetaMetrics, USA & University of North Carolina, USA
ABSTRACT Fulfilling the promise of educational technology as one mechanism to promote college and career readiness compels educators, researchers, and technologists to pursue innovative lines of collaborative investigations. These lines of mutual inquiry benefit from adopting and adapting principles rooted in design-based implementation research (DBIR) approaches. The purposes of this chapter are to: (a) provide the research foundation on which a personalized learning platform was developed, (b) present the evolution of EdSphere, a personalized learning platform that resulted from a deep and long-term collaboration among classroom teachers, school and district administrators, educational researchers, and technologists, and (c) describe a need for development of innovative technologies that promote college and career readiness among our earliest readers.
INTRODUCTION Around the world, it is widely accepted that a quality education is one of the primary levers for increasing the percentage of educated citizens who may successfully participate in the workforce, with untold economic and societal benefits to countries that make investments in education (Murnane & Willett, 2011, Chapter 1; Stewart, 2012; Zhao, 2012). Rapid changes in the nature of work in the 21st Century have prompted policy-makers and educators to implement a range of initiatives
with the urgency demanded by the significant gap between our children’s current trajectories and the college and career readiness (CCR) expectations described in the Common Core State Standards (CCSS) (National Governors Association Center [NGA] for Best Practices & Council of Chief State School Officers [CCSSO], 2010; Phillips & Wong, 2012; Williamson, 2008; Williamson, Fitzgerald, & Stenner, 2013), 21st Century Skills (Partnership for 21st Century Skills, 2009), and the Common European Framework of Reference (CEFR, 2001; North, 2014). Many, if not all, current educational initiatives incorporate innovations necessary to ameliorate persistent challenges posed by an achievement gap between expectations and the current readiness demanded in post-secondary institutions and the workplace. Technology-based innovations are increasingly being promoted in federal and state reports and by educational organizations (for- and not-for-profit) as key components of most, if not all, efforts to enhance student college and career readiness (Aldunate & Nussbaum, 2013; Kim, Kim, Lee, Spector, & DeMeester, 2013; U.S. Department of Education, 2010, 2013). In part, proponents of technology-based solutions, with 24-7-365 access, posit benefits such as: (a) lengthening the school day, (b) extending the school year, and (c) providing greater personalization of academic learning time (Calkins & Vogt, 2013; Childress, 2013; U.S. Department of Education, 2010, 2012, 2013; U.S. Programs, Bill and Melinda Gates Foundation, 2012). It is, however, possible that increasingly sophisticated technologies will disrupt widely accepted notions about where, when, and for how long a student learns along the path towards college and career readiness. Additionally, innovations that disrupt the status quo with no commensurate increase in student readiness for college and career will serve only to decrease investment, increase frustration, and ultimately lead to abandonment of these technologies. Growing pains are already being felt due to insufficient
resources being devoted to restructuring the use of time during the school day, enhancing wireless bandwidth, improving security, maintaining hardware, providing professional development, and offering on-going technical assistance. Implementing disruptive technologies requires providing time for teachers to plan their use and to refine original implementation plans using qualitative and quantitative data to inform decisions. The benefits and frustrations related to daily use of technology-at-scale are being felt at a time when educators are spending considerable amounts of professional development and classroom instructional time implementing other, non-technology-based strategies purported to support student progress towards college and career readiness. Until recently, research investigating the impact of technology on enhancing teacher effectiveness and student growth towards CCR has provided equivocal evidence of effectiveness. Such results are attributable, in part, to the speed at which software is developed and then deployed in classrooms. However, the emphasis on, and requirement for, rapid improvement in test scores and an organization’s need for a return on investment have contributed to a lack of long-term collaborations among key stakeholders. This lack of collaboration impedes testing the efficacy of technologies and models of professional development that promote teacher effectiveness and student growth. Currently, the adoption of educational technologies is all too similar to the process that has governed textbook adoptions for decades. The expense in time and currency is too great and the risks too grave for this antiquated textbook adoption model to govern the adoption of next-generation learning resources. A growing body of evidence points to design principles and approaches that ensure technology integrates the active ingredients that promote CCR among students (Dai, 2012; Hanlon, 2013; Hanlon, Greene, Swartz, & Stenner, 2015;
Penuel, Fishman, Cheng, & Sebelli, 2011; U.S. Department of Education, 2010, 2013; Swartz et al., 2011, 2012; Williamson, Tendulkar, Hanlon, & Swartz, 2012). Design-based implementation research (DBIR) efforts involve long-term collaborations among educators, students, technologists, researchers (Dai, 2012; Penuel, Fishman, Cheng, & Sebelli, 2011). Also key, is funding from foundations (e.g., the Bill and Melinda Gates Foundation) and state or federal government agencies (e.g., Institute of Education Sciences) to allow better articulation of persistent questions, rapid iterative cycles of platform and application development, data collection and analytic strategies to inform the development and deployment of technologies designed to advance teacher effectiveness and student learning. Long-term collaborations that engage in such efforts improve the likelihood that the resulting technologies will earn educators’ trust and motivate students to want to use them (U.S. Department of Education, 2010, 2013). Such results may, in large part, be attributable to extensive evidence of collaboration and input from educators as well as efficacy results from a program of research rather than a single, or limited number of studies. The chapter presents a brief history about how theories of reading, writing, instruction and DBIR helped to guide the development and scaling of EdSphere® (Hanlon, et al., 2015) to its current level of sophistication and classroom utility. EdSphere® is one example of a personalized learning platform designed to address persistent challenges to promoting CCR; evolving technology-based solutions to meet those challenges; and on-going classroom-based implementation research efforts used to inform the design of prototypes to more sophisticated technologies. This DBIR process guided the development of EdSphere to its current state (and contributes to future editions) as a personalized learning platform that, around the world:
1. Educators trust enough to encourage student use each day,
2. Students use daily to increase their literacy ability and content knowledge,
3. Researchers use to investigate questions about the nature and role of technology in promoting reading and writing ability, and
4. Education organizations license functionality to integrate into their proprietary applications for the purpose of creating more personalized learning applications.
Due to space limitations, only the current version of EdSphere® as a platform will be presented, with illustrations and research foundations for the development of the applications on the platform (see Recent History 2012-2015 and Table 1 in this chapter). EdSphere® (2012-2015) evolved from Online Reader-Writer® (1999-2005), MyWritingWeb® (2006-2009), MyReadingWeb® (2007-2009), and Learning Oasis® (2009-2012).
A Research Foundation for Addressing Persistent Literacy Challenges Ultimately, overcoming persistent challenges in education means translating experimental, quasi-experimental, or qualitative research designs, methods, and results into a form and language that consistently impacts classrooms. Developing technology-based solutions to be tested, validated, and scaled in classrooms places a heavy burden on this translation process. Technologies that, in theory, are designed to personalize learning for each student are especially at risk of taxing DBIR efforts. EdSphere® minimized this risk by adopting three pillars to personalizing learning through technology, then elaborating and executing on these ideas using an approach to DBIR. The following three sections describe the research foundation that informed the design principles that have guided the development of multiple editions of a personalized learning platform, from its most simple to its most complex form. The Lexile Framework® for Reading, The Lexile Framework® for Writing, and research supporting the active ingredients of deliberate practice have provided the conceptual framework and language that guided thousands of conversations among educators, researchers, and technologists over the past decade.

Table 1. Evolution of key features relevant to personalized learning from MyWritingWeb® to EdSphere® 2.0

Feature | MyWritingWeb® (2006-2009), MyReadingWeb® (2007-2009), Learning Oasis® (2009-2012) | EdSphere® 1.0 – 2.0
Lifecycle | 2006 – 2012 | 2012 – 2015
Platform | Required download and installation | Web-based; Device-independent; Modularized into applications
Texts Available | 25,000 static articles | Millions of articles, updated daily; Dedicated e-libraries
Reading Item Type(s) | Auto-generated semantic cloze | Auto-generated semantic cloze; Auto-generated content cloze
Possible Semantic Cloze Terms | 25,000 | 192,000
Potential Content Words to Create Cloze Terms | None | 13,000
Reader-Text Match | Only targeted | Targeted, Challenge, Grade-based CCR
Writing Formative Assessment Item Type | 30 minutes | 15, 30, 50, 70, 110 minutes
Growth Algorithm | Bayesian | Gaussian Kernel
Forecasting | Within-individual | Within-individual
Reporting of Usage and Growth | Limited to Usage, Raw Data | Comprehensive individual and sub-group, group data dashboards: Usage, growth, forecasting
Modalities | Eyes | Eyes and ears
Text-to-Speech | English Only | English, Cantonese, Korean, Polish, Portuguese, Mandarin, Spanish, Russian, Arabic
Promotion of Close Reading and Strategic Writing | None | Text-based literacy strategies, note-taking while reading, outlines, short- and long-constructed responses, opportunities to receive digital feedback from teachers
Other Scaffolds | Dictionary | Dictionary, Thesaurus, Content and Challenge General and Academic Vocabulary
The Lexile Framework® for Reading Conceptually, reading is a process in which information from text and ability of the reader act together to produce meaning (Anderson, Hiebert,
Scott, & Wilkinson, 1985). More specifically, the Lexile Framework® for Reading is a causal model relating reader ability, text complexity, and comprehension (Stenner, Fisher, Stone, & Burdick, 2013). A Lexile measure, denoted by a numeric value followed by a trailing L (e.g., 1000L), can be either a measure of reader ability or a measure of text complexity. The unit of the Lexile scale is defined as 1/1000th of the difference in difficulty between a sample of basal primer texts and Grolier’s Encyclopedia (1986) (Stenner, Burdick, Sanford, & Burdick, 2006). This equal interval scale for both reader and text ranges from below 0L, or Beginning Reader (BR), to more than 1800L. The higher the Lexile measure, the more reading ability a student possesses or the more complex a 287
piece of text. The Lexile Framework® for Reading allows for monitoring status and growth in reader ability, forecasting the level of comprehension a reader will have with a specific text, and most importantly, matching readers to text to promote literacy and content knowledge. The theoretical estimate of text complexity results from conceptualizing a book, article, or other piece of professionally-edited text as a test that is calibrated to the Lexile Scale (BR-more than 1800L). To create the text-as-a-test, the Lexile Reading Analyzer slices the whole text into approximately 125 word slices. The analyzer recognizes sentence ending punctuation to end the slice. This results in slight variation in the actual number of words per slice. No slice is shorter than 125 words. Each slice is then analyzed and the log mean sentence length and the mean log word frequency for each slice is calculated. These are the numerical proxy variables for syntactic complexity and semantic difficulty, respectively. The first proxy is the common logarithm of the ratio of the number of words in the text slice to the number of sentence endings. The second is the mean of the common logarithm frequencies of words in the text slice, which were obtained from a 500 million word corpus. The numerical proxy variables for each slice are inserted into the Lexile specification equation for text complexity with beta weights for each variable and a constant. The result is a theoretical estimate of text complexity for each slice. These estimates for all text slices are then entered as fixed text complexity parameters in a modified version of the Rasch (1960) model. That is, theoretical estimates of text complexity for each slice act as the item calibrations for a text-as-a-test analysis. The estimate of text complexity for a given text is estimated from the requisite ability needed to produce 75% success rate as if each slice in the text was a test item. Increased standards illuminated the need for a renewed focus on the importance of matching readers-to-text (NGA & CCSSO, 2010; COE,
2001; North, 2011; Williamson, Fitzgerald, & Stenner, 2013). The Lexile Framework® for Reading offers one scientific approach to measuring both reading ability and text complexity (see Nelson, Perfetti, Liben, & Liben, 2011). The Lexile Framework® posits that reading is a latent trait that influences a reader’s chance of success in comprehending professionally edited text (Williamson, 2008) and text complexity is a quantitative factor that represents the complexity of a text (NGA & CCSSO, 2010). Research suggests that students graduating high school must be able to comprehend text with a complexity of 1200L-1400L to be considered college and career ready (NGA, & CCSSO, 2010; Williamson, Fitzgerald, & Stenner, 2013). For purposes of on-going formative measurement and matching reader-to-text, the Lexile Scale for Reading is unique in its capacity to place readers and text on the same scale. At scale, this capacity means that 100s-of-millions of articles (ranging from Ranger Rick and Highlights for Children to The Economist and Science) can be accessed and targeted to each student’s reading ability. Updated measures of reader ability results from converting their count correct on auto-generated cloze items into an updated Lexile Reader measure after each article is read. This process is referred to as learning-embedded formative assessment. Research results provide evidence that, reader ability and vocabulary, and possibly their writing ability, may improve through wide-reading of high interest articles or articles to reinforce and extend content being taught in the classroom (Calderon, et al., 2010; Cunningham, 2010; Hiebert & Kamil, 2010; Kamil & Hiebert, 2010; Marzano, 2004; Marzano & Pickering, 2005; Marzano & Sims, 2013; Nagy, 2010; Prose, 2006, Chapter 1; Stahl, 2010; Sternberg, 1987).
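The slicing-and-proxy procedure described above can be sketched in a few lines of code. This is only a schematic illustration of the general approach, not the Lexile Reading Analyzer: the tiny word-frequency table, the default log frequency, and the constant and beta weights are invented placeholders (the actual 500-million-word corpus and specification equation are proprietary), and the final step of entering the slice estimates as fixed calibrations in a Rasch model is noted only in a comment.

```python
# Schematic sketch of the slice-and-proxy analysis; not the Lexile Reading Analyzer.
# WORD_LOG_FREQ, DEFAULT_LOG_FREQ, and the coefficients B0, B1, B2 are placeholders.
import math
import re

WORD_LOG_FREQ = {"the": 6.8, "reading": 4.9, "comprehension": 3.7}  # toy frequency table
DEFAULT_LOG_FREQ = 3.0            # assumed log frequency for words not in the toy table
B0, B1, B2 = -500.0, 600.0, -300.0   # hypothetical constant and beta weights

def slice_text(text, target_words=125):
    """Cut text into slices of roughly 125 words, each ending at a sentence boundary."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    out, chunk, count = [], [], 0
    for sentence in sentences:
        chunk.append(sentence)
        count += len(sentence.split())
        if count >= target_words:
            out.append(chunk)
            chunk, count = [], 0
    if chunk and out:
        out[-1].extend(chunk)       # avoid a trailing slice shorter than the target
    elif chunk:
        out.append(chunk)
    return out

def slice_complexity(sentences):
    """Theoretical complexity estimate for one slice from its two proxy variables."""
    words = [w.strip(".,;:!?\"'()").lower() for s in sentences for w in s.split()]
    log_mean_sentence_length = math.log10(len(words) / len(sentences))      # syntactic proxy
    mean_log_word_freq = sum(WORD_LOG_FREQ.get(w, DEFAULT_LOG_FREQ)
                             for w in words) / len(words)                   # semantic proxy
    return B0 + B1 * log_mean_sentence_length + B2 * mean_log_word_freq

# In the framework described above, the per-slice estimates would then serve as fixed
# item calibrations in a Rasch analysis, with the text's complexity taken as the
# ability associated with a 75% success rate across slices.
```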
The Lexile Framework® for Writing With few exceptions, writing ability has not been expressed on a developmental scale (for
An Approach to Design-Based Implementation Research to Inform Development
one such exception, see Attali & Powers, 2008). Thus, researchers, policy-makers, and educators know little about the qualitative and quantitative changes experienced by a developing writer from kindergarten through college, how these changes can be described by individual growth trajectories, or how reading and writing development co-vary. Understanding these changes may lead to advances in assessment that allow for monitoring growth, forecasting performance on high-stakes assessments, and personalizing instructional activities to promote growth. Common Core State Standards also described an urgent need to improve students’ writing ability (NGA & CCSSO, 2010). The Lexile Framework® for Writing is a scientific approach to measuring students writing given the (a) semantic and syntactic features of text scored using an autoessay scoring engine, (b) qualitative features of text scored using a rubric, and (c) convention ability based on performance on an editing task (Burdick, et al., 2013a, 2013b; Swartz & SanfordMoore, 2008). The semantic and syntactic features of text are analyzed by an auto-essay scoring engine (Lexile Writing Analyzer [LWA]) to estimate ability on a developmental scale (Shermis & Hamner, 2012). A comprehensive study of auto-essay scoring engines (n=9) suggested that the Lexile Writing Analyzer is as effective at estimating writing ability as the other eight scoring engines (Shermis & Hamner, 2012). The Lexile Writing Analyzer has the added benefits of being punctuation, genre, and topic independent and in incorporating a developmental scale (Burdick, et al., 2013a). These benefits allow students to write to a teacher-assigned prompt or one chosen by the student for four different periods or writing times (i.e., 15 minutes, 30 minutes, 50 Minutes, 70 minutes, 110 minutes). The results from the auto-scoring engine are integrated with selfjudgments using the rubric from any given essay. These data points can form the basis for discussions with peers, teachers, and parents.
The Lexile Framework® for Writing is comprised of three components: (1) the psychometric foundation on which the developmental scale of writer ability may be illustrated; (2) an extension of the estimation of text difficulty of professionally authored text to student writing samples; and (3) a classroom-based writing evaluation system that integrates the Lexile Writing Scale and LWA into activities designed to monitor growth in writer ability, convention ability, and device fluency (Swartz & SanfordMoore, 2008). The first component is the psychometric foundation on which the developmental scale of writer ability is based. The scale was constructed from a study in which approximately 800 students in grades 4, 6, 8, 10, and 12 each wrote responses to six release prompts from the National Assessment of Educational Progress (NAEP) (Burdick, 2013a). Three of the six prompts were spiraled between each adjacent grade. This design allowed four independent raters from a pool of 19 to score the writing quality of each essay. Each rater was calibrated to scoring papers using a NAEP-like rubric. A developmental scale for writer ability resulted from this design and FACETS analyses in which rater severity and prompt difficulty were accounted for in creating writer ability measures. Lexile Writer measures range from beginning writer (BW) to more than 1500W. An important property of the Lexile Writing Scale and Lexile Writer measures is the ability to monitor growth in writer ability plus status as a writer and response to instruction (RTI). A Lexile Writer measure is an estimate of a student’s ability to express language in writing based on factors related to semantic complexity (the level of the words written) and syntactic sophistication (how the words are combined). Like Lexile Reader measures, students receive Lexile Writer measures from standardized assessments or from interim assessments designed to monitor progress during and across school years.
The second component of the Lexile Framework® for Writing extends the estimation of text difficulty of professionally authored text to student writing samples. A key component of the Lexile Framework® for Writing is the Lexile Writing Analyzer (LWA). This auto-essay scoring engine estimates the complexity of professionally authored and edited text by analyzing two key predictors: (a) semantic difficulty (as indexed by mean log word frequency) and (b) syntactic complexity (mean log sentence length). The LWA is an automatic essay scoring engine that evaluates the semantic difficulty and syntactic complexity of a student’s first draft response to a prompt. The LWA is a prompt, genre, and punctuation independent scoring engine, which means that the analyzer does not need to be trained to score papers. The correlation between the average of four human raters using a 6-point holistic scoring rubric and the Lexile writer measures equaled .78 (artifact-corrected correlation=.95). The LWA can only be used to estimate writer ability from responses in a digital format (i.e., text or .txt). The LWA may be used to estimate writer ability from handwritten responses (i.e., autograph) to a prompt after transcription of autographs. The process includes the following steps: (a) the autographs are collected and image files are created; (b) the image files of the autographs are then transcribed, with no corrections for punctuation, spelling, grammar, and capitalization, and then saved as a text file; and (c) the text files are submitted in a batch to the LWA. The third component of the Lexile Framework® for Writing is integration of these tools into technology-based writing-feedback systems. Such systems monitor growth in writer ability by providing students in grades 2-12 with multiple opportunities (at least once per week) to respond to prompts written to a wide variety of topics including math, science, and social studies. The responses are scored two ways. First, the Lexile Writing Analyzer estimates
writer ability by evaluating the words used and they are combined. Second, the writer and his or her teacher evaluate the quality of the response using state-specific scoring rubrics. Once scored, the writer can write subsequent drafts based on the feedback. Thus, the system is faithful to one interpretation of the writing process. However, writer measures are based on the first draft unaffected by teacher or peer feedback. Research evidence suggests that students’ writing ability should improve by increasing the amount of time students spend engaged in deliberate practice of writing with feedback (Graham, Harris, & Hebert, 2011; Graham, & Hebert, 2010, Graham, & Perin, D., 2007 Prose, 2006; Schunk & Swartz, 1993a,b).
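The "artifact-corrected" correlation reported for the Lexile Writing Analyzer is the kind of value produced by Spearman's correction for attenuation, which divides the observed correlation by the square root of the product of the two measures' reliabilities. The reliability values below are assumed purely for illustration; the chapter does not report the values actually used.

```python
# Illustrative only: Spearman's correction for attenuation.
# The two reliability values are assumptions made for this example.
observed_r = 0.78
rater_composite_reliability = 0.85   # assumed reliability of the 4-rater average
lwa_reliability = 0.80               # assumed reliability of the LWA scores

corrected_r = observed_r / (rater_composite_reliability * lwa_reliability) ** 0.5
print(round(corrected_r, 2))         # approximately 0.95 with these assumed values
```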
Deliberate Practice Allington (1977) first argued that the amount of time spent reading should influence improvement of reader and writer ability over time. Subsequently, other researchers have supported the importance of daily practice on the development of literacy (Allington, 1980, 1983, 1984a; Anderson, Wilson, & Fielding, 1988; Cunningham & Stanovich, 1998; Gambrell, 1984; Hiebert, 1983; OECD, 2013; Stanovich, 2000; Stanovich, West, Cunningham, Cipielewski, & Siddiqui, 1996). The National Reading Panel Report (U.S. Department of Health and Human Services, 2000) argued that research into the relationship between the amount of reading and the growth of reading ability was inconclusive, citing a lack of compelling empirical evidence due to methodological limitations. In particular, researchers have struggled to quantify the amount of reading completed and to estimate changes in student reading ability over time. Allington (2009) continued to believe that reading volume certainly influenced reading development. Allington, however, observed a lack of evidence related to the type of practice required to enhance reading ability. Ericsson, Krampe, and Tesch-Romer (1993) proposed that
An Approach to Design-Based Implementation Research to Inform Development
it is not the amount of practice so much as the amount of deliberate practice that differentiates experts from novices. Deliberate practice requires activity that is specifically designed to challenge the learner and improve performance. Practice designed to be deliberate can be characterized by five principles: (1) targeted activity that is designed to appropriately challenge the learner; (2) real-time corrective feedback that provides an indicator of performance; (3) distributed practice over a long period of time; (4) intensive practice that does not require the learner to concentrate beyond his or her limits; and (5) self-directed practice when a teacher or coach is unavailable (Ericsson, 1996a, 1996b, 2002, 2004, 2006a, 2006b; Ericsson et. al, 1993). Research investigating evolution from noviceto-expert provided evidence to support the notion that deliberate practice fostered growth in a variety of domains: chess (e.g., Charness, Krampe, & Mayer, 1996; Charness, Tuffiash, Krampe, Reingold, & Vasyukova, 2005; Gobet & Charness, 2006), decision-making in a crisis (e.g., McKinney & Davis, 2004a,), study habits (e.g., Plant, Ericsson, Hill, & Asberg, 2005), mathematical computation (e.g., Butterworth, 2006), professional writing (e.g., Kellogg, 2006), and sports (e.g., Helsen, Starkes, & Hodges, 1998; Hodges & Starkes, 1996; Starkes, Deakins, Allard, Hodges, & Hayes, 1996). The lack of evidence relating growth in reading ability to reading practice could reflect poor understanding and an incomplete capturing of key qualitative and quantitative dimensions of such tasks. Fortunately, technological advances help researchers overcome the limitations of past research around student growth and response to deliberate literacy practice (i.e., failure to describe the nature of the practice with far too few occasions for measuring progress). Technologies, such as EdSphere®, have the potential to revolutionize educational practice and research by immersing students in personalized learning opportunities while providing researchers with detailed, real-
time accounts of student activity during learning (U.S. Department of Education, 2010). Known as trace data (Winne, 1982), these fine-grained accounts of student activity captured digitally during learning provide the information necessary to monitor student reading ability in real-time and to quantify the nature of the reading experience. Deliberate practice offers a set of established theoretical principles that can be incorporated into educational technology; however, educational technology must be specifically designed to foster deliberate practice. The technology must have an underlying scale that allows learners to be matched to appropriately-challenging activities. As students complete activities, they must receive feedback (e.g., visual, auditory) about their performance. Student performance must also be used to provide real-time updates and to ensure appropriate targeting of subsequent activities. The technology must have a systematic way to monitor the intensity of the activity, to ensure that learners are engaged in the activity without becoming fatigued. Finally, educational technology designed to provide deliberate practice, such as EdSphere®, must be accessible year-round so that learners can immerse themselves in daily deliberate literacy practice year-round with little or no direction and guidance from an adult. These principles, along with the Lexile Frameworks® for Reading and Writing, have guided our DBIR technology efforts since 1999-2000 school year.
Summary The principles of deliberate practice are strengthened by embedding psychometrically-sound assessment approaches into learning activities. For example, in EdSphere® students can respond to cloze items while reading, compose short and long constructed responses, correct different kinds of convention errors (i.e., spelling, grammar, punctuation, capitalization) in authentic text, and select words with common meanings from a thesaurus-based activity. Each item encountered
by students can be auto-generated and auto-scored. The results of these learning embedded assessments are especially beneficial when assessment item types are calibrated to a developmental scale. Today, technology provides opportunities for creating platforms with which the principles of deliberate practice are supported by cognitive and learning sciences. Additionally, item response theory may be combined to provide each student with a personalized learning platform that, in part, promotes college and career readiness.
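The sketch below shows, in schematic form, how a count correct on calibrated cloze items can be converted into an updated ability estimate on a developmental scale. It is not EdSphere's algorithm: the item difficulties, the Rasch-style model, the bisection solver, and the linear conversion to a Lexile-like reporting scale are all illustrative assumptions.

```python
# Schematic Rasch-type update; item difficulties, the logit-to-scale conversion,
# and the data are hypothetical.
import math

def expected_score(ability, difficulties):
    """Expected number correct under a simple Rasch model (logit metric)."""
    return sum(1.0 / (1.0 + math.exp(d - ability)) for d in difficulties)

def ability_from_count_correct(count_correct, difficulties, lo=-6.0, hi=6.0):
    """Solve expected score = observed count correct by bisection."""
    target = min(max(count_correct, 0.5), len(difficulties) - 0.5)  # avoid extreme scores
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected_score(mid, difficulties) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

item_difficulties = [-0.8, -0.3, 0.0, 0.4, 0.9, 1.3]   # hypothetical cloze calibrations
theta = ability_from_count_correct(4, item_difficulties)
lexile_like = 500 + 180 * theta    # hypothetical linear conversion to a reporting scale
print(round(theta, 2), round(lexile_like))
```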
Persistent Challenges, Approaches and Solutions Far History (1999-2005) At the end of each class, Mrs. Rhinesmith assigns 5-15 pages of reading to her students in 10th
grade American Literature class. Each day, Mrs. Rhinesmith’ students dutifully sit in class. She thinks, but is never quite sure, the students read the assignment for homework and, if they did, how much of the readings each student understood. How is Mrs. Rhinesmith to know if her teaching strategies and assignments are enhancing her students’ reading ability?
An Approach to Meet the Challenge EdSphere®’s evolution as a personalized learning platform began in 1999 with the limited release of Online Reader-Writer® in a small number of school libraries in a single public school district in Florida and an introductory business class at a four-year university in North Carolina (see Figure 1). Online Reader-Writer® was used by students in Florida for approximately five months, during
Figure 1. EdSphere® application page with four literacy learning activities
the school year. Undergraduate students enrolled at a local university used the application to read only two articles from the Wall Street Journal. The school district’s Director of Library Services and a university professor collaborated with a research team comprised of two psychometricians (one with classroom teaching experience), a technologist, a university professor of statistics, and an educational psychologist with teaching experience in the public schools and at the university level. This original version was designed to help educators meet three persistent classroom challenges: (a) assignment of reading materials appropriate to each student’s reading ability, (b) auditing completion of reading assignments, and (c) inferring understanding or comprehension of text based on percent correct from machine generated cloze items. Online Reader-Writer® was a minimally-viable product with three basic features: • •
•
25,000 digital articles from periodicals such as Ranger Rick, Highlights for Children, and Time Magazine, An auto-item generation and scoring engine that clozed out target words and three distractors (see Bormouth, 1966, 1968a for validity of cloze item type for measuring reader ability), and A text box that allowed students to compose a summary for each article read, which was not scored.
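The second feature above, an engine that blanks out a target word and offers the key plus three distractors, can be illustrated with a toy version of the idea. The sentence, target, and distractors below are invented; the production engine generated its items automatically from authentic articles and a much larger pool of possible cloze terms (see Table 1).

```python
# Toy illustration of a semantic cloze item; not the production item engine.
import random

def make_cloze_item(sentence, target, distractors, rng=random.Random(7)):
    """Blank out the target word and shuffle the key among the distractors."""
    options = [target] + list(distractors)
    rng.shuffle(options)
    return {
        "stem": sentence.replace(target, "_____", 1),
        "options": options,
        "key": target,
    }

def score(item, chosen):
    """1 if the selected option is the keyed term, else 0."""
    return int(chosen == item["key"])

item = make_cloze_item(
    "The committee reached a consensus after a long debate.",
    target="consensus",
    distractors=["festival", "molecule", "harvest"],   # hypothetical semantic foils
)
print(item["stem"], item["options"], score(item, "consensus"))
```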
Educators asked students to read articles at their reading ability level. Teachers could log-in to their gradebooks to learn how much time students spent reading and their percent correct on the auto-generated semantic cloze items. Surveys completed by teachers, anecdotal evidence from interviews and conversations with educators and students suggested that Online Reader-Writer® was easy-to-use, engaging, and provided essential information about performance. Usage and performance data generated by students suggested
that the auto-generated cloze item type had the potential to provide reader measures useful for matching students to text and monitoring growth.
Results Online Reader-Writer® provided early evidence that technology could help educators overcome persistent challenges posed when making reading assignments. This early success provided the impetus to expand the trial into a public middle and high school in two different school districts in North Carolina. This phase was designed to continue feasibility trials, but more importantly, to develop and conduct the psychometric research necessary to link the auto-generated semantic cloze item type to the Lexile Scale for Reading. Achieving this link would unlock the potential future of learning-embedded assessments for monitoring growth in reader ability while students are reading authentic text chosen by them because of interest or because the text is about content being taught in the classroom.
Near History (2005-2012) A school district superintendent commented “. . . it is not enough for my students to be among the best in reading and writing across the state, the Southeast, or the South. It is probably not enough for them to be among the best in the country. My students will have to compete with students from Japan, China, India, Singapore, and Europe . . .”
An Approach to Meet the Challenge The book The World is Flat (Friedman, 2005) presented the 21st Century as a time period when traditional boundaries between countries and economies were being transformed and intertwined with technology. Educators in the United States started to look beyond their own classrooms,
schools, and districts to the educational systems of other countries where graduating students possessed the mathematical, scientific, and literacy skills needed to meet more rigorous demands of an information and technology-based workplace (Stewart, 2012; Wagner, 2012, Zhao, 2012). Students in the United States scored below their same-age peers on most international measures of science, mathematics, and literacy, such as Programme for International Student Assessment and Trends in Mathematics and Science Survey (Stewart, 2012). Could technology provide educators and students with tools to close the CCR gap between students in the United States and their peers who achieved at higher levels? Based on promising, but early results from use of Online Reader-Writer®, the DBIR collaborative moved from early partners in Florida and North Carolina to the inclusion of a school district in north-central Mississippi. The DBIR collaborative model was (and still is today) comprised of: • • • • •
The school district superintendent, The elementary, middle school, junior high and high schools, At least one classroom teacher from each school (elementary, middle, and high school), Students enrolled in grades 2-12 districtwide, and A multi-disciplinary team of psychometricians, statisticians, learning theorists, and technologists.
The most significant change in the process involved moving away from Online ReaderWriter® to a platform that only used the Lexile Framework® for Writing. To achieve this goal, classroom teachers in the district participated in the implementation of MyWritingWeb® during the 2005-2006 school year (Hanlon, Swartz, Burdick, & Stenner, 2006). MyWritingWeb® was built on the research foundation provided by The Lexile Framework®
for Writing (Burdick et al., 2013a, 2013b), comprehensive summaries of research on writing instruction (Graham & Hebert, 2010; Graham & Perin, 2007), and in response to the urgent need to enhance student writing ability (College Entrance Examination Board, 2003, 2004, 2005). The results from a survey of chief executive officers from the top 150 companies and directors of human resource departments in the public and private sectors provided strong evidence that promotion in the workplace was strongly related to an individual’s ability to use the written medium to communicate cogent and cohesive arguments (College Entrance Examination Board, 2005). Many consider writing to reflect: (a) student understanding of content; (b) insights about the world around them and planets light years away; and (c) the ability to think creatively as well as critically about problems from yesterday, today, and tomorrow. The collaborative spent 2005-2006 engaged in the initial deployment of MyWritingWeb® and rapid iteration of auto-essay scoring engines and user interfaces designed to enhance writer ability. The collaborative also developed and refined the model of professional development and on-going technical assistance designed to support classroom teachers. Educators implementing MyWritingWeb® provided on-going feedback to the collaborative team overseeing the project. Communication occurred during face-to-face visits every 4-6 weeks during the school year, through conference calls and virtual demonstrations designed to provide on-going professional development and technical assistance, and by email. Bugs in MyWritingWeb® were fixed and research questions were re-cast as data were analyzed. Each member of the collaborative filtered information given their domain of expertise and experience (e.g., the classroom, principal and central office, research-development organization). Classroom teachers and principals provided the applied science and artistry in support of using technological innovations in diverse classrooms. Researchers contributed flexible strategies for (re)
formulating questions, on-going data collection to monitor quality of implementation, and new models for monitoring growth. Technologists provided the skills and knowledge to develop robust prototypes on which rapid iteration could occur, followed by redeployment of an updated version of the application. The collaborative used the language of the Lexile Frameworks and of deliberate practice to guide this iterative process. One of the most important outcomes from this time period was the increased trust among team members and in the use of the design-based implementation research approach. In 2007, MyReadingWeb® (Hanlon, Swartz, Burdick, & Stenner, 2007), a re-designed Online Reader-Writer®, was deployed district-wide in grades 2-12. For the first time, students engaged in deliberate practice of their reading and writing ability (see Table 1). This version was identical to the previous edition with the exceptions of a new user interface, use of a Bayesian method for modeling reader growth, and removal of the summary writing functionality. Online Reader-Writer® reported only the percent of cloze items correctly answered while reading an article. Now, students, teachers, administrators, and researchers could monitor growth in reader and writer ability in response to usage. Unfortunately, student use of MyWritingWeb® (with all of its science and the time dedicated to implementation) was significantly diminished during the 2007-2008 school year compared to the previous year. MyReadingWeb® cannibalized the time that had been dedicated to improving writing, because addressing lagging reading scores became the priority. Feedback from educators and students suggested that a unified literacy learning platform was needed, in part because of the inconvenience of requiring students to log into two different applications, but more importantly, because of the opportunity to emphasize the cognitive processes reading and writing share in order to promote mutual growth
(Fitzgerald & Shanahan, 2000; Berninger, Abbott, Abbott, Graham, & Richards, 2002; Shanahan, 1998; Tierney & Shanahan, 1992).
Results
Although inconvenient, educators provided their students with access to two separate reading and writing platforms from 2006-2009. During this same time period, technologists on the team dedicated themselves to architecting and testing LearningOasis® (Hanlon, Swartz, Burdick, & Stenner, 2008), a personalized literacy learning platform that unified reading and writing. Educators provided key information useful for developing student workflows, the first edition of electronic gradebooks, and growth reports. For researchers, this was their introduction to a nascent era of analyzing big data. Team members used the millions of auto-generated semantic cloze items and hundreds of thousands of written words to begin preliminary research investigating: (a) the efficacy of the relationship between time spent engaged in reading and writing activities and growth in literacy (Hanlon, 2013; Hanlon, Greene, Swartz, & Stenner, 2015; Swartz, Hanlon, Tendulkar, & Williamson, 2015; Swartz et al., 2011, 2012; Swartz, Emerson, Kennedy, & Hanlon, 2015); (b) innovative techniques beyond Bayesian approaches to modeling and forecasting growth (Lattanzio, Burdick, & Stenner, 2012; Williamson, Tendulkar, Hanlon, & Swartz, 2012); (c) the validity of the Lexile Framework® for Reading (Nelson, Perfetti, Liben, & Liben, 2011; Swartz et al., 2014; Stenner, Fisher, Stone, & Burdick, 2013); and (d) the validity of the Lexile Framework® for Writing (Burdick et al., 2013a, 2013b; Shermis & Hamner, 2012; Swartz & Sanford-Moore, 2008). The results of each area of research were shared with and informed by input and comments from educators participating in the collaborative, external educational organizations (e.g., Student
Achievement Partners), and external research groups (e.g., an independent Research and Technical Advisory Committee). Both the feedback and the results informed the development of LearningOasis®.
Summary
Educators’ wait of more than two years resulted in the seamless transition of students to LearningOasis®. For the first time, students used a single sign-on to access the complete suite of reading, writing, and vocabulary activities. Each data point was stored in a single secure database which could be accessed at any time to address concerns about usage or answer questions about the measurement and growth of students (see Table 2). It was now possible to monitor status and growth for any student by re-constructing each reading and writing experience from the dataset. Additionally, truly longitudinal datasets based on a daily recording of learning became possible. For example, students who graduated high school in June of 2014 started using MyWritingWeb® as 5th graders during the Fall of 2006 and first used MyReadingWeb® at the start of their 6th grade year, at the beginning of the 2007-2008 school year.
Recent History (2012-2015)
"Each student, pre-kindergarten through 12th grade, now has a personal device because of a one-to-one initiative. Students access EdSphere® before, during, and after school to immerse themselves in well-targeted reading and writing activities. Educators, students, administrators, researchers, and technologists can access more data in real-time. Results from this 'big data' are used to understand where our students are, how much they have grown, and where they will end up if they continue on the same growth trajectory. Now, the challenge is to turn this data back on technology and to inform instruction with only so many minutes available during a school day." (a district administrator)
An Approach to Meet the Challenge
Since 2006, district leadership and classroom teachers have come to understand that personalized learning has a role to play in promoting students’ CCR through deliberate practice. The efficacy of various versions of EdSphere® provided one piece of an empirical foundation for the district to invest in a one-to-one device for each student in the district (see Hanlon, 2013; Hanlon, Greene, Swartz, & Stenner, 2015; Swartz, Hanlon, Tendulkar, & Williamson, 2015; Swartz et al., 2011, 2012; Swartz, Emerson, Kennedy, & Hanlon, 2015). The results and the district’s investment were, in large part, the result of DBIR efforts that allowed educators to implement disruptive innovations over a long period with educational researchers and technologists who adapted their ways of working to better meet the needs of educators and students.
Table 2. Data collected by LearningOasis® and EdSphere®
• Student information: first and last name; district affiliation; unique identifier; current grade; classes being taken; record of all readings; record of all writings; record of all paragraph edits; record of all concept clue sets; record of all modules; current reader ability; current writer ability; historical growth in reading; historical growth in writing; forecast of reader ability
• For each reading encounter: article title; article Lexile; article length; publication; author; publication date; time spent reading; items (foils, display order); response latency; notes taken; scaffold usage
• For each writing encounter: prompt; time spent writing; student and teacher rubric scores; teacher comments; response
• For each paragraph edit encounter: passage edited; time spent editing; capitalization item results; grammar item results; punctuation item results; spelling item results
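To make the reading-encounter portion of this data model concrete, a single record might be represented roughly as follows; the class and field names are illustrative assumptions, not the platform's actual schema:

from dataclasses import dataclass, field

@dataclass
class ReadingEncounter:
    student_id: str
    article_title: str
    article_lexile: int          # text complexity of the article
    article_length: int          # length in words
    publication: str
    author: str
    publication_date: str
    time_spent_reading: float    # minutes
    items: list = field(default_factory=list)       # per item: target, foils, display order, response, latency
    notes_taken: str = ""
    scaffolds_used: list = field(default_factory=list)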
EdSphere® (see Table 1 and Figure 1) is our current personalized learning platform. Each learning activity is embedded with computer-generated items and scored with at least one computer-based auto-scoring engine. These activities are flexible enough that students may select activities based on their interests ("I love reading about basketball!") or on topics being taught in the classroom ("I need to read about natural disasters then write a paper for science class."). The Reader application provides students with text targeted to their individual reading ability and interest. The Timed Writing application provides a mechanism by which students can practice and reflect on their own writing and writer ability by receiving immediate feedback from an automated scoring engine, self-scoring and teacher-scoring using a rubric, as well as teacher comments. The Concept Clue application is a vocabulary activity that blends assessment and instruction to facilitate vocabulary development. The Paragraph Edit application is designed to measure and improve students’ convention knowledge. The results from each activity contribute to the targeting of the next literacy activity. A Gaussian kernel technique is used to model individually-centered growth trajectories in response to deliberate practice and to forecast growth towards college and career readiness. Together, educators and students may use these data to make informed decisions about progress in response to deliberate practice and about the next steps that will better promote college and career readiness.
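One minimal sketch of what a kernel-based, individually-centered growth model can look like follows; the bandwidth, dates, and measures are assumptions for illustration rather than the platform's actual implementation:

import math

def kernel_growth_estimate(days, measures, day0, bandwidth=60.0):
    # Gaussian-kernel-weighted local linear fit of ability measures against time,
    # evaluated at day0; evaluating beyond the last observation gives a simple
    # forecast of the student's trajectory.
    weights = [math.exp(-0.5 * ((d - day0) / bandwidth) ** 2) for d in days]
    total = sum(weights)
    mean_d = sum(w * d for w, d in zip(weights, days)) / total
    mean_m = sum(w * m for w, m in zip(weights, measures)) / total
    cov = sum(w * (d - mean_d) * (m - mean_m) for w, d, m in zip(weights, days, measures))
    var = sum(w * (d - mean_d) ** 2 for w, d in zip(weights, days))
    slope = cov / var if var > 0 else 0.0
    return mean_m + slope * (day0 - mean_d)

# One student's reader measures across a school year, then a forecast one year out
days = [0, 30, 60, 120, 180, 240]
lexiles = [820, 840, 835, 880, 905, 930]
print(round(kernel_growth_estimate(days, lexiles, 365)))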
Reader Application
Students access millions of high-quality, professionally authored digital texts by clicking on the Reader App (see Figure 2). Digital articles are drawn from periodicals that range from Highlights
for Children, Boys’ Life, and Girls’ Life, to Sports Illustrated, Newsweek, and Discovery, to Science, The Economist, and Scientific American. A large amount of high-quality informational text is required to immerse students in daily deliberate practice of wide reading over a long period of time. Research results suggest that wide reading is one active ingredient for enhancing not only reading ability, but also vocabulary and writing ability (Prose, 2006). Students may use one of three ways to search for text targeted to their ability: (1) click on suggested topics; (2) click the "Surprise Me" icon; or (3) type search terms into "Find a Book or Article" (see Figure 2). In the example below, a reader with an estimated ability of 1069L typed "climate change" in the search term field. More than 13,300 articles, ordered by relevance to the search terms and targeted to the student’s reading ability, were returned. At this point, students scan the abstracts to find the most relevant or interesting article to read. Students can easily revise the search until they find an article to suit their need. Students may click on the article that interests them or matches the goals of a teacher assignment. Once an article is selected, a set of auto-generated semantic and academic vocabulary cloze items is presented in the passage (in less than one second). In the example below, a student selected the article, A Low-Carbon Future Starts Here (see upper right hand corner of Figure 3). The auto-generated semantic cloze engine created 19 cloze items in the article below (see upper right hand corner). Students click on each cloze item in order while reading. Once an item is clicked, the target word appears at the bottom of the screen with three foils (see Figure 3). The foils are selected based on their difficulty and part of speech. For example, the student clicked on the second cloze and the four words appear at the bottom of the page. Students then click on the word that best completes the cloze. The student receives immediate feedback because the answer is auto-scored (see Figure 3,
Figure 2. Reader search results—A student’s keyword search for articles about climate change; results include publication titles, publication dates, and/or page length
item 1 with a check mark beside "billed"). The correct answer is inserted into the cloze if a student selects an incorrect response. The goal is to read-to-learn, so the system will not let an incorrect selection undermine comprehension. Three learning scaffolds are integrated into the application to facilitate student comprehension. First, suggested strategies are presented to students at any point during the reading process. Selected strategies are designed to enhance stu-
dents’ self-regulation of reading informational text. These strategies have been shown to enhance self-efficacy for reading and writing performance (Biancarosa & Snow, 2006; Schunk & Rice, 1991). Second, students may click on any word (except for the four choices to complete the cloze) to learn its meaning (see Figure 3, where "catastrophe" was selected). Students may also use a thesaurus to learn words with a meaning similar to the selected word (see Figure 3). Finally, a text-to-speech engine has been integrated into EdSphere®. This allows
Figure 3. Reader experience—Auto-generated cloze items, auto-scoring, and select supports
students to have a word or phrase read to them by a high-quality male or female voice. In the article below, a student selected the first sentence in the second paragraph to be read aloud. The words being read are highlighted in a different color.
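A minimal sketch of how foils matched on part of speech and difficulty might be selected and scored is shown below; the data structures and selection rule are assumptions for illustration, not the cloze engine's actual logic:

import random

def build_cloze_items(article_words, target_positions, word_bank):
    # word_bank entries look like {"word": ..., "pos": ..., "difficulty": ...};
    # foils are drawn from words with the same part of speech and the closest
    # difficulty to the target, then shuffled together with the target.
    lookup = {entry["word"]: entry for entry in word_bank}
    items = []
    for position in target_positions:
        target = article_words[position]
        info = lookup.get(target)
        if info is None:
            continue
        candidates = [e for e in word_bank
                      if e["word"] != target and e["pos"] == info["pos"]]
        candidates.sort(key=lambda e: abs(e["difficulty"] - info["difficulty"]))
        choices = [e["word"] for e in candidates[:3]] + [target]
        random.shuffle(choices)
        items.append({"position": position, "target": target, "choices": choices})
    return items

def score_cloze_response(item, selected_word):
    # Auto-score the selection; the correct word is always displayed so an
    # incorrect choice does not undermine comprehension.
    return {"correct": selected_word == item["target"], "display": item["target"]}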
Timed Writing Application
Students can click on this application when composing in response to reading one or more articles in the Reader App or in response to any other writing assignment (see Figure 4). Students may write for five different time periods (15 minutes, 30 minutes, 45 minutes, 70 minutes, or 110 minutes). Students reflect on their own writing while selecting the rubric rating that best represents their personal
judgment of the writing quality (see Figure 5). They click on the star that best describes their judgment. In the example below, students use a six-point raw-score rating scale to self-rate each of six dimensions of the essay and so determine its qualitative aspects. The Lexile Writing Analyzer reports a quantitative measure of writing ability as a three- to four-digit number with a trailing "W" (e.g., 400W), or as "BW" for a student whose Lexile writer measure is below 0W.
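A small sketch of this reporting convention, assuming the measure is already available as a number on the writer scale (the function name is illustrative):

def format_writer_measure(measure):
    # Below 0W the report shows "BW"; otherwise the rounded measure with a trailing "W".
    return "BW" if measure < 0 else "{}W".format(round(measure))

print(format_writer_measure(412.4))   # "412W"
print(format_writer_measure(-35))     # "BW"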
Paragraph Edit
Research suggests that explicit instruction about conventions (i.e., capitalization, grammar, punctuation, spelling) can lead to improved commu-
Figure 4. Students compose essays to either existing system prompts or prompts selected by educators. Students are able to choose how long to compose.
nication skills and comprehension (Mochizuki & Ortega, 2008; Shiotsu & Weir, 2007). The CCSS (NGA & CCSSO, 2010) emphasized the importance of convention knowledge in the Language Progressive Skills section of the standards. Through the Paragraph Edit, EdSphere® provides students with an activity specifically designed to improve convention knowledge (see Figure 6). Students play the role of editor as they correct professionally-edited text targeted to their writing level that has been corrupted with capitalization, grammar, punctuation, and spelling errors. The activity blends assessment and instruction by providing three chances to correct each corruption. The first “pass” is purely assessment, allowing EdSphere® to monitor change in student conven-
tion ability over time. For the errors that are not corrected during the assessment pass, the student is given two additional attempts with progressively increasing instruction (i.e., highlighting the parts of the passage with errors and error-specific direct instruction) (Figure 6). At the conclusion of the activity, the student has recreated the professionally edited text.
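The three-pass flow described above might be organized roughly as follows; get_student_edit stands in for the user interface, and the function and scaffold names are illustrative assumptions rather than the application's actual design:

def run_paragraph_edit(corrupted, original, get_student_edit):
    # corrupted and original are parallel lists of tokens; positions where they
    # differ are the introduced capitalization, grammar, punctuation, or spelling errors.
    error_positions = [i for i, (c, o) in enumerate(zip(corrupted, original)) if c != o]
    results = {"assessment": {}, "instruction": {}}
    scaffolds = [None, "highlight_error", "direct_instruction"]
    remaining = list(error_positions)
    for attempt, scaffold in enumerate(scaffolds):
        still_wrong = []
        for i in remaining:
            answer = get_student_edit(i, scaffold)
            correct = (answer == original[i])
            if attempt == 0:
                results["assessment"][i] = correct   # first pass is pure assessment
            else:
                results["instruction"][i] = correct
            if correct:
                corrupted[i] = answer
            else:
                still_wrong.append(i)
        remaining = still_wrong
    for i in remaining:                # the activity ends with the restored text
        corrupted[i] = original[i]
    return results, corrupted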
Concept Clue
Vocabulary is a critical component of literacy development. Over a century of research suggests that vocabulary knowledge is one of the best indicators of verbal ability (Sternberg, 1987; Terman, 1916) and that explicitly teaching vocabulary can
Figure 5. Writing feedback screen—Rubric created by the Literacy Design Collaborative to evaluate a response written to an argumentative prompt
improve reading comprehension for both native English speakers and English learners (Beck, Perfetti, & McKeown, 1982; Carlo et al., 2004; Carlo, August, & Snow, 2010). Research suggests that immersing students in a wide variety of language experiences (e.g., reading, writing, listening) facilitates vocabulary acquisition (see Graves, 2006). Grounded in this research, EdSphere® provides learners with a variety of language experiences that are accessible through the eyes (i.e., reading targeted text), through the ears (i.e., text-to-speech support), and through the fingertips (i.e., opportunities to write). EdSphere® also nurtures the development of vocabulary knowledge through a specially designed
activity, Concept Clue (Figure 7). Based on the premise that word knowledge exists as networks of words related to a category or concept (Beck, McKeown, & Kucan, 2002), Concept Clue helps students strengthen the connections and relationships between words. Students are presented with two related concepts and are asked to find the word that shares the relationship. The Concept Clue activity provides three chances for the student to correctly select the answer. Each subsequent chance is accompanied by additional instructional scaffolding (e.g., similar concepts, how the words are related). These items are generated automatically by EdSphere® based on the reading ability and vocabulary profile of the learner.
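One way to sketch the targeting and progressive scaffolding described here is shown below; the item fields and selection rule are illustrative assumptions rather than the generator's actual algorithm:

def select_concept_clue_item(item_pool, reader_ability, known_words):
    # Prefer items whose answer is not yet in the learner's vocabulary profile,
    # then take the item whose difficulty is closest to the reader's ability.
    eligible = [i for i in item_pool if i["answer"] not in known_words] or item_pool
    return min(eligible, key=lambda i: abs(i["difficulty"] - reader_ability))

def administer_concept_clue(item, get_choice):
    # Three chances; each subsequent chance adds one more instructional scaffold
    # (e.g., similar concepts, how the words are related).
    for attempt in range(3):
        hints = item["scaffolds"][:attempt]
        if get_choice(item["concepts"], item["choices"], hints) == item["answer"]:
            return {"correct": True, "attempts": attempt + 1}
    return {"correct": False, "attempts": 3}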
Figure 6. Paragraph edit feedback (third attempt): Students are given three chances to correct the capitalization, grammar, punctuation, and spelling errors introduced into professionally authored text targeted to their writing ability. On the last pass, depicted here, a student is presented with direct and explicit instruction related to the convention error.
Figure 7. Concept clue activity: Students are immersed in vocabulary and are asked to find the relations between words. Based on student responses, EdSphere® provides progressive amounts of instructional scaffolding (i.e., related concepts, how the terms are related). Each item is targeted to individual students’ reading ability.
Teacher and Student Reports
EdSphere® collects detailed data about each student activity (e.g., time data, words read, essays written, ability measures for reading and writing over time). To accommodate a variety of users, EdSphere® provides a tiered reporting suite. At the highest level, students and teachers are able to access a reporting dashboard that shows overall usage as well as usage by learning app or individual day. Educators and students are able to filter by date range, and educators are able to view class-level or individual student usage (Figure 8). From this dashboard screen, users are able to view learning app-specific records of activity with greater filtering options (e.g., all articles read in the last week with performance greater than 75%).
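The kind of filtered view mentioned above can be sketched as a simple query over activity records; the record fields are illustrative assumptions, not the reporting suite's actual schema:

from datetime import datetime, timedelta

def articles_read_recently(reading_records, days=7, min_percent_correct=75.0):
    # e.g., "all articles read in the last week with performance greater than 75%"
    cutoff = datetime.now() - timedelta(days=days)
    return [r for r in reading_records
            if r["completed_at"] >= cutoff
            and r["percent_correct"] > min_percent_correct]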
Individual Growth Charts and Forecasting
Because EdSphere® is powered by the Lexile Framework® for Reading and the Lexile Framework® for Writing, estimates of student reading and writing ability can be monitored in real time on a developmental scale (i.e., as students complete activities in EdSphere®, the estimates of ability immediately adjust based on performance). This level of measurement allows EdSphere® to produce student-centered estimates of change over time (i.e., only the data for a particular student are used to estimate change). Students are able to view where they are today, where they have been in the past, and where they are projected to be in the future (see Figure 9). These estimates of
Figure 8. Reporting dashboard—Students and teachers may view the same types of reports (i.e., aggregated dashboard, detailed views); however, teachers are able to view activity by class (or group) while a student can only see his/her own data
Figure 9. Growth towards CCR graphs—Students are able to view a graph plotting their change over time in both reading (upper panel) and writing (lower panel). Change in reading is placed in the context of the college and career grade-based goals defined in the CCSS (2010). Depicted here, a fifth grade student is reading above the grade 5 goal (i.e., on track for college and career readiness) and is forecasted to still be reading above the grade-level goal five years in the future. Growth reports can be aggregated by whole groups or subgroups within a class, grade, school, district, or state.
current ability and forecasted ability are placed in the context of the college and career readiness standards presented in the CCSS (NGA & CCSSO, 2010) (see Figure 9). The DBIR team will devote the next two school years (2014-2015, 2015-2016) to meeting challenges related to (a) creating blended classrooms that use well-developed applications and the data that result from student use to inform instruction (see Figure 10); (b) strengthening the reading-writing connection through enhanced tools designed to promote close reading (see Figure 11); and (c) translating written products from close reading into CCR short- and long-constructed responses (see Figures 12-14). In order to achieve these goals,
researchers, in collaboration with those providing professional development and on-going technical assistance, will need to develop, test, and evaluate the nature and roles of direct instruction that strengthens reading-writing connections, opportunities for peer-to-peer collaboration, and independent work completed on-line. These goals can only be achieved by educators who are open to changing their practice. A key to any future success is creating and providing educators with digital tools for authoring and delivering lessons, units, and courses to students on-line (see Table 3). The collaborative team is already using a lesson and unit authoring tool for educators (Module Creator®) integrated with a personalized student
Figure 10. Adapted from Staker, H., & Horn, M. B. (2012). Classifying K-12 blended learning. Mountain View, CA: Innosight Institute
content learning application (Guided Literacy®) as the research and development technology platform to test ideas that will be integrated into EdSphere® Teach. Module Creator® and Guided Literacy® were developed with funds from the Bill and Melinda Gates Foundation (Swartz, Hanlon, & Stenner, 2011, OPP1020128). Models of innovative professional development, technical assistance, and on-line resources will result from the efforts of 2014-2015. A blend of multi-media digital resources will provide the foundation for creating blended models of professional development for the purpose of enhancing teacher efficacy and the effective use of technology to promote CCR.
The Near Future (2015-2017)
Students enter school at ages 4 and 5 unprepared to engage in learning requisite emergent literacy abilities. Too many adults enter the workforce unprepared because they dropped out of school, or underprepared even though they have a diploma. How can EdSphere® be enhanced so that it can support children and adults who are acquiring emergent literacy skills?
An Approach to Meet the Challenge
Educators at each grade and in each content area are implementing a range of curricula and instruc-
Figure 11. Most recent version of applications available on EdSphere®. Note the addition of Guided Writing, which is designed to provide more feedback and instruction to enhance writing ability. The Timed Writing application is designed to monitor status and growth towards CCR at least four times per year, with students writing at least one narrative, informative, or argumentative piece.
tional strategies demanded by the gap between the literacy skills required in the 21st Century and those skills currently possessed by our nation’s students (Phillips & Wong, 2010, 2012). Classroom teachers in pre-kindergarten through second grade are as involved as upper elementary, middle, and high school educators in implementing literacy initiatives designed to ensure that all students are on a trajectory towards college and career readiness. Early childhood educators’ efforts may be largely attributable to evidence that (a) early acquisition of foundational reading skills is highly predictive of later success in school and (b) most of the
growth in reader ability occurs during the early grades (Berninger, 1994; Blachman, 1997; Hart & Risley, 1992, 1995; Pearson & Duke, 2002; Pressley, Allington, Wharton-McDonald, Collins Block, & Mandel Morrow, 2001; National Institutes of Health and Human Services, 2000; Smith, Turner, & Lattanzio, 2014; Tracey & Mandel Morrow, 2002). The process of ensuring college and career readiness begins in pre-school. The instructional time and strategies dedicated to early literacy by educators and literacy specialists during the early grades may be insufficient to close gaps between
Figure 12. Creating opportunities for close reading and connecting reading to learning new vocabulary and writing. An example of a student reading an article about climate change while taking notes that include: (a) a citation for the article, (b) a listing of key concepts for use in an outline or short-constructed response, and (c) a listing of unknown general and academic terms for review and learning.
each child’s status and the trajectory required for college and career readiness. Growth models based on students attending schools in North Carolina provide a useful example of the average development of readers from four achievement groups (see Figure 15) (Swartz & Williamson, 2014). For purposes of this analysis students were placed into one of four groups based on their achievement at the end of third grade.
Each achievement group’s growth may be interpreted using a normative frame of reference (i.e., in comparison with other achievement groups) and an absolute frame of reference (i.e., in comparison to evolving demands of text complexity). A normative frame of reference allows us to make at least three broad summary statements: (1) A significant gap in reading ability exists at the end of third grade; (2) the achievement gap in
Figure 13. Strengthening the reading-writing connection—Student space for authoring outlines and short- and long-constructed responses. An example of one student beginning a first draft of a written response to a class assignment with an article open as a resource. Various tools are always available for use during the composing process; for example, a multi-language text-to-speech engine gives students the opportunity to have drafts read to them. Each previously composed draft of notes, outlines, and short- and long-constructed responses is available to the student when composing the current draft.
Figure 14. Strengthening the reading-writing connection—An example of one student beginning a first draft with notes from the original article available to him
Table 3. Evolving EdSphere® beyond version 2.0: planned features for EdSphere® 2.5 (2015-2016) and EdSphere® 3.0 (2016-2017)
• Vocabulary Support: Image Dictionary
• Multiple Reading Item Types: Prompted Production Cloze, Cold Production Cloze, Sentence Ending Cloze
• Teacher Authoring Tools: Lesson, Unit, and Course Planning Tools (2.5); feedback on authored content and video uploads for feedback (3.0)
• Written Expression: Guided Writing through Multiple Drafts (2.5); auto-trait scoring engines (in addition to LWA) and auto-feedback based on writing trait profile (3.0)
• Early Literacy: Auto-generated and scored activities designed to enhance the five pillars of reading
• Fluency: Activity designed to enhance fluency (partnership with Spritz)
• Content Item Types: BrainSpark, Concept Cloud
• Achievement System: Badges and achievements earned through attaining proximal and distal usage and growth goals
Figure 15. Estimated intercepts and growth curves from bending the curve back from third grade to pre-kindergarten
reading at the end of third grade persists through grade 8, although the gaps between the achievement groups lessen; and (3) the achievement gaps among the four groups will persist through 12th grade. An absolute frame of reference allows us to make other, sample-independent statements about growth in reading ability in the context of ever-increasing text complexity: (1) Students in the highest achievement level will read significantly more complex text with a predicted higher rate of success compared to the other three achievement groups (i.e., higher comprehension rates); (2) students in the middle two achievement levels will read text with evolving demands on a trajectory that is commensurate with a trajectory towards college and career readiness (although students in Achievement Level II are not forecasted to be prepared for college-level text); and (3) students in the lowest achievement level fall well below a level commensurate with the milestones marking a path towards CCR. The growth models depicted by the four achievement groups also allow for describing students’ average reading ability in pre-school through second grade (i.e., dashed curves) in a fashion that may predict progress during third through eighth grades. Interestingly, the results suggest that students may grow as much, or more, during the first four years of formal schooling as they grow from grades three through 12 (certainly through grade 8), with students in the lowest achievement group predicted to grow the most during formal schooling. Unfortunately, students in the lowest two achievement groups do not grow enough during the first four years of school to place them on a trajectory predictive of CCR, even with a strong focus on early literacy during the early grades. Such results suggest there is considerable need to create and test technology solutions useful for promoting early literacy skills. The collaborative of professionals who will contribute to technology solutions in EdSphere®
will grow to include early childhood and literacy specialists. This has already begun with a review of existing technologies in the schools, how each is used, and the nature of the reports educators use to guide instruction (see Table 3). As well, business leaders and owners, and faculty and staff at community colleges, will be included to more effectively help young adults meet the literacy demands of the workplace.
A Future for EdSphere®
EdSphere® 2.0 (see Figure 11) is a comprehensive personalized literacy learning platform that evolved from Online Reader-Writer®, a simple application. This evolution started in classrooms where educators used an application even though it only audited task completion using a simple percent correct on auto-generated cloze items and optional written summaries. As of today, students in second grade through community college have read millions of words from text targeted to their ability, responded to more than 13 million auto-generated items, written more than 17 million words, and completed hundreds of thousands of paragraph edits and concept clue items. Efficacy research provides results supportive of EdSphere® as a platform comprised of applications that use the active ingredients of deliberate practice to promote CCR. The future is bright for creating, testing, and scaling applications appropriate for enhancing early and adult literacy.
ACKNOWLEDGMENT
The authors wish to thank the hundreds of educators and thousands of students who participated, or who are now participating, in the research efforts described in this chapter. We also want to recognize the efforts of Karin Neuvirth, Juee Tendulkar, Jennifer Houchins, Hal Burdick, Donald
S. Burdick, Steve Lattanzio, Gary L. Williamson, Jakob Wandall, Siri Jordahn, Colin Emerson, and A. J. Kennedy for their contributions to the ongoing development and classroom-based research efforts.
REFERENCES
Aldunate, R., & Nussbaum, M. (2013). Teacher adoption of technology. Computers in Human Behavior, 29(3), 519–524. doi:10.1016/j.chb.2012.10.017 Allington, R. L. (1977). If they don’t read much, how are they ever gonna get good? Journal of Reading, 21, 57–61. Allington, R. L. (1980). Poor readers don’t get much in reading groups. Language Arts, 57(8), 872–877. Allington, R. L. (1983). The reading instruction provided readers of differing abilities. The Elementary School Journal, 83(5), 548–559. doi:10.1086/461333 Allington, R. L. (1984). Content coverage and contextual reading in reading groups. Journal of Reading Behavior, 16(1), 85–96. Allington, R. L. (2009). If they don’t read much…30 years later. In E. H. Hiebert (Ed.), Reading more, reading better (pp. 30–54). New York: Guilford Press. Anderson, R. C., Hiebert, E. H., Scott, J. A., & Wilkinson, I. A. G. (1985). Becoming a nation of readers. U.S. Department of Education, Office of Educational Research and Improvement (ED). Washington, DC: U.S. Government Printing Office.
Anderson, R. C., Wilson, P. T., & Fielding, L. C. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 23(3), 285–303. doi:10.1598/ RRQ.23.3.2 Attali, Y., & Powers, D. (2008). A developmental writing scale (Report No. ETS RR-08-19). Princeton, NJ: ETS. Beck, I. L., McKeown, M. G., & Kucan, L. (2002). Bringing words to life: Robust vocabulary instruction. New York, NY: The Guilford Press. Beck, I. L., Perfetti, C. A., & McKeown, M. G. (1982). Effects of long-term vocabulary instruction on lexical access and reading comprehension. Journal of Educational Psychology, 74(4), 506–521. doi:10.1037/0022-0663.74.4.506 Berninger, V. W. (1994). Reading and writing acquisition: A developmental neuropsychological perspective. Boulder, CO: Westview Press. Berninger, V. W., Abbott, R. D., Abbott, S. P., Graham, S., & Richards, T. (2002). Writing and reading: Connections between the language and by hand. Journal of Learning Disabilities, 35(1), 39–56. doi:10.1177/002221940203500104 PMID:15490899 Biancarosa, C., & Snow, C. E. (2006). Reading next: A vision for action and research in middle and high school literacy: A report to Carnegie Corporation of New York (2nd ed.). Washington, DC: Alliance for Excellent Education. Blachman, B. (1997). Foundations of reading acquisition and dyslexia: Implications for Early Intervention. Mahway, NJ: Lawrence Erlbaum Associates.
Bormuth, J. R. (1966). Readability: A new approach. Reading Research Quarterly, 1(3), 79–132. doi:10.2307/747021 Bormuth, J. R. (1968a). Cloze test readability: Criterion reference scores. Journal of Educational Measurement, 5(3), 189–196. doi:10.1111/j.1745-3984.1968.tb00625.x Burdick, H., Swartz, C. W., Stenner, A. J., Fitzgerald, J., Burdick, D., & Hanlon, S. T. (2013a). Measuring students’ writing ability on a computer-analytic developmental scale: An exploratory validity study. Literacy Research and Instruction, 52(4), 255–280. doi:10.1080/19388 071.2013.812165 Burdick, H., Swartz, C. W., Stenner, A. J., Fitzgerald, J., Burdick, D., & Hanlon, S. T. (2013b). Technological assessment of composing: Response to reviewers. Literacy Research and Instruction, 52(4), 184–187. Butterworth, B. (2006). Mathematical expertise. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 553–568). Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511816796.032 Calderon, M., August, D., Slavin, R., Duran, D., Madden, N., & Cheng, A. (2010). Bringing words to life in classrooms with English-language learners. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 115–136). New York, NY: Routledge. Calkins, A., & Vogt, K. (2013). Next Generation Learning: The Pathway to Possibility. Washington, DC: EduCause.
Carlo, M. S., August, D., McGlaughlin, B., Snow, C. E., Dressler, C., Lippman, D. N., & White, C. E. et al. (2004). Closing the gap: Addressing the vocabulary needs of English-language learners in bilingual and mainstream classes. Reading Research Quarterly, 39, 188–215. Carlo, M. S., August, D., & Snow, C. E. (2010). Sustained vocabulary-learning strategy instruction for English-language learners. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 137–153). New York, NY: Routledge. Charness, W. G., Krampe, R. T., & Mayer, U. (1996). The role of practice and coaching in entrepreneurial skill domains: An international comparison of life-span chess skill acquisition. In K. A. Ericsson (Ed.), The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games (pp. 51–80). Mahwah, NJ: Erlbaum. Charness, W. G., Tuffiash, M. I., Krampe, R., Reingold, E., & Vasyukova, E. (2005). The role of deliberate practice in chess expertise. Applied Cognitive Psychology, 19(2), 151–165. doi:10.1002/acp.1106 Childress, S. (2013, December 19). Re: Shared attributes of schools implementing personalized learning [Web blog post]. Retrieved from http:// nextgenstacey.com/2013/12/19/shared-attributesof-schools-implementing-personalized-learning/ Childress, S. (2014, January 1). Re: Personalized learning will go mainstream [Web blog post]. Retrieved from https://www.edsurge.com/n/201401-01-stacey-childress-personalized-learningwill-go-mainstream
College Entrance Examination Board, The National Commission on Writing in America’s Schools and Colleges. (2003). The neglected “R”: The need for a writing revolution. Retrieved from http://www.collegeboard.com College Entrance Examination Board, The National Commission on Writing in America’s Schools and Colleges. (2004). Writing: A ticket to work . . . Or a ticket out, a survey of business leaders. Retrieved from http://www.collegeboard.com College Entrance Examination Board, The National Commission on Writing in America’s Schools and Colleges. (2005). Writing: A powerful message from state government. Retrieved from http://www.collegeboard.com Council of Europe. Language, Policy Unit. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Retrieved from http://www.coe.int/t/dg4/linguistic/ Cadre1_en.asp Cunningham, A. E. (2010). Vocabulary growth through independent reading and reading aloud to children. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 45–68). New York, NY: Routledge. Cunningham, A. E., & Stanovich, K. E. (1998). The impact of print exposure on word recognition. In J. Metsala & L. Ehri (Eds.), Word recognition in beginning literacy (pp. 235–262). Mahwah, NJ: Erlbaum. Dai, D. Y. (2012). From smart person to smart design: Cultivating intellectual potential and promoting intellectual growth through Design Research. In D. Y. Dai (Ed.), Design Research on Learning and Thinking in Educational Settings: Enhancing Intellectual Growth and Functioning. New York, NY: Routledge.
Ericsson, K. A. (1996a). The acquisition of expert performance: An introduction to some of the issues. In K. A. Ericsson (Ed.), The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games (pp. 1–50). Mahwah, NJ: Erlbaum. Ericsson, K. A. (Ed.). (1996b). The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games. Mahwah, NJ: Erlbaum. Ericsson, K. A. (2002). Attaining excellence through deliberate practice: Insights from the study of expert performance. In M. Ferrari (Ed.), The pursuit of excellence in education (pp. 21–55). Hillsdale, NJ: Erlbaum. doi:10.1002/9780470690048. ch1 Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 10(Supplement), S70– S81. doi:10.1097/00001888-200410001-00022 PMID:15383395 Ericsson, K. A. (2006a). Introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 3–19). Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511816796.001 Ericsson, K. A. (2006b). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 683–703). Cambridge, UK: Cambridge University Press. doi:10.1017/ CBO9780511816796.038
Ericsson, K. A., Krampe, R. T., & Tesch-Romer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406. doi:10.1037/0033295X.100.3.363 Fitzgerald, J., & Shanahan, T. (2000). Reading and writing relations and their development. Educational Psychologist, 35(1), 39–50. doi:10.1207/ S15326985EP3501_5 Friedman, T. (2005). The world is flat: A brief history of the 21st Century. New York, NY: Farrar, Straus, and Giroux. Gambrell, L. B. (1984). How much time do children spend reading during reading instruction? In J. A. Niles, & L. A. Harris (Eds.), Changing perspectives on research in reading/language processing and instruction (pp. 127-135). Rochester, NY: National Reading Conference. Gobet, F., & Charness, N. (2006). Expertise in chess. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 523–538). Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511816796.030 Graham, S., Harris, K., & Hebert, M. (2011). Informing writing: The benefits of formative assessment. A Carnegie Corporation Time to Act report. Washington, DC: Alliance for Excellent Education. Graham, S., & Hebert, M. (2010). Writing to read: Evidence for how writing improves reading. — A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education. Graham, S., & Perin, D. (2007). Writing next: Effective strategies to improve writing of adolescents in middle and high schools — A report to Carnegie Corporation of New York. Washington, DC: Alliance for Excellent Education.
Graves, M. F. (2006). The vocabulary book. New York, NY: Teachers College Columbia University. Hanlon, S. T. (2013). The relationship between deliberate practice and reading ability (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses databases (AAT 3562741). Hanlon, S. T., Greene, J. A., Swartz, C. W., & Stenner, A. J. (2015). The relationship between deliberate practice and reading ability (manuscript in preparation). Hanlon, S. T., Neuvirth, K., Tendulkar, J., Houchins, J., Swartz, C. S., & Stenner, A. J. (2015). EdSphere®. Retrieved from http://www. EdSphere.com Hanlon, S. T., Swartz, C. S., Burdick, H., & Stenner, A. J. (2006). MyWritingWeb®. Retrieved from http://www.mywritingweb.com Hanlon, S. T., Swartz, C. S., Burdick, H., & Stenner, A. J. (2007). MyReadingWeb®. Retrieved from http://www.myreadingweb.com Hanlon, S. T., Swartz, C. S., Burdick, H., & Stenner, A. J. (2008). LearningOasis®. Retrieved from http://www.alearningoasis.com Hart, B., & Risley, T. R. (1992). American parenting of language-learning children: Persisting differences in family-child interactions observed in natural home environments. Developmental Psychology, 28(6), 1096–1105. doi:10.1037/00121649.28.6.1096 Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experiences of young American children. Baltimore, MD: Paul H. Brookes. Helsen, W. F., Starkes, J. L., & Hodges, N. J. (1998). Team sports and the theory of deliberate practice. Journal of Sport & Exercise Psychology, 20, 12–34.
Hiebert, E. H. (1983). An examination of ability grouping for reading instruction. Reading Research Quarterly, 18(2), 231–255. doi:10.1598/ RRQ.18.2.5 Hiebert, E. H., & Kamil, M. L. (2010). Teaching and learning vocabulary: Bringing research to practice. New York, NY: Routledge. Hodges, N. J., & Starkes, J. L. (1996). Wrestling with the nature of expertise: A sport specific test of Ericsson, Krampe, and Tesch-Romer’s (1993) “Deliberate Practice”. International Journal of Sport Psychology, 27, 400–424. Kamil, M. L., & Hiebert, E. H. (2010). Teaching and learning vocabulary: Perspectives and persistent issues. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 1–23). New York, NY: Routledge. Kellogg, R. T. (2006). Professional writing experience. In K. A. Ericsson, N. Charness, P. Feltovich, & R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 389–402). Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511816796.022 Kim, C. M., Kim, M. K., Lee, C., Spector, J. M., & DeMeester, K. (2013). Teacher beliefs and technology integration. Teaching and Teacher Education, 29, 76–85. doi:10.1016/j.tate.2012.08.005 Lattanzio, S. M., Burdick, D. S., & Stenner, A. J. (2012). The ensemble Rasch model. Durham, NC: MetaMetrics Paper Series. Marzano, R. J. (2004). Building background knowledge for academic achievement. Alexandria, VA: ASCD. Marzano, R. J., & Pickering, D. J. (2005). Building academic vocabulary: Teacher’s manual. Alexandria, VA: ASCD.
Marzano, R. J., & Sims, J. A. (2013). Vocabulary for the Common Core. Bloomington, IN: Marzano Research Laboratory. McKinney, E. H., & Davis, K. (2004). Effects of deliberate practice on crisis decision performance. Human Factors, 45(3), 436–444. doi:10.1518/ hfes.45.3.436.27251 PMID:14702994 Mochizuki, N., & Ortega, L. (2008). Balancing communication and grammar in beginning-level foreign language classrooms: A study of guided planning and relativization. Language Teaching Research, 12(1), 11–37. doi:10.1177/1362168807084492 Murnane, R. J., & Willett, J. B. (2011). Methods matter: Improving causal inference in educational and social science research. New York, NY: Oxford University Press. Nagy, W. (2010). Why vocabulary instruction needs to be long-term and comprehensive. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 27–44). New York, NY: Routledge. National Governors Association Center for Best Practices & Council for Chief State School Officers. (2010). Common Core State Standards for English language arts & literacy in history/ social, science, and technical subjects, Appendix A. Washington, DC: Author. Retrieved from Common Core State Standards Initiative website: http:// www.corestandards.org/the-standards Nelson, J., Perfetti, Liben, D., & Liben, M. (2011). Measures of text difficulty: Testing the Predictive Value for Grade Levels and Student Performance. Retrieved from http://www.ccsso. org/Documents/2012/Measures%20ofText%20 Difficulty_final.2012.pdf
North, B. (2011). Putting the common European framework of reference to good use. Language Teaching, 47(2), 228–249. doi:10.1017/ S0261444811000206 Organisation of Economic Cooperation and Development. (2013). Time for the U.S. to Reskill What the Survey of Adult Skills Says, OECD Skills Studies. Paris, France: OECD Publishing. Partnership for 21st Century Skills. (n.d.). Framework for the 21st century. Retrieved from http:// www.p21.org/storage/documents/p21-stateimp_ standards.pdf Pearson, P. D., & Duke, N. K. (2002). Comprehension instruction the primary grades. In C. C. Block & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp. 247–258). New York, NY: The Guilford Press. Penuel, W. R., Fishman, B. J., Cheng Hauganm, B., & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher, 40(7), 331–337. doi:10.3102/0013189X11421826 Phillips, V., & Wong, C. (2010). Tying together the Common Core of standards, instruction, and assessments. Kappan, 91(3), 37–42. doi:10.1177/003172171009100511 Phillips, V., & Wong, C. (2012). Teaching to the Common Core by design, not accident. Kappan, 93(7), 31–37. doi:10.1177/003172171209300708 Plant, E. A., Ericsson, K. A., Hill, L., & Asberg, K. (2005). Why study time does not predict grade point average across college students: Implications of deliberate practice for academic performance. Contemporary Educational Psychology, 30(1), 96–116. doi:10.1016/j.cedpsych.2004.06.001
Pressley, M., Allington, R. L., Wharton-McDonald, R., Collins Block, C., & Mandel Morrow, L. (2001). Learning to read: Lessons from exemplary first-grade classrooms. New York, NY: The Guilford Press. Programs, U. S., & the Bill and Melinda Gates Foundation. (2012). Innovations in education. Retrieved from http://www.gatesfoundation.org/ Prose, F. (2006). Reading like a writer. New York, NY: Harper Collins. Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research. Schunk, D. H., & Rice, J. M. (1991). Learning goals and progress feedback during reading comprehension instruction. Journal of Reading Behavior, 23, 351–364. Schunk, D. H., & Swartz, C. W. (1993a). Goals and progress feedback: Effects on self-efficacy and writing achievement. Contemporary Educational Psychology, 18(3), 337–354. doi:10.1006/ ceps.1993.1024 Schunk, D. H., & Swartz, C. W. (1993b). Writing strategy instruction with gifted students: Effects of goals and feedback on self-efficacy and skills. Roeper Review, 15(4), 225–230. doi:10.1080/02783199309553512 Scott, J. A. (2010). Creating new opportunities to acquire new word meanings from text. In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 69–91). New York, NY: Routledge. Shanahan, T. (1998). The reading-writing relationship: Seven instructional principles. The Reading Teacher, 41(6-9), 636–647.
Shermis, M. D., & Hamner, B. (2012). Contrasting state-of-the-art automated scoring of essays: Analysis. Retrieved from http://www.scoreright. org/NCME_2012_Paper3_29_12.pdf
Stenner, A. J., Burdick, H., Sanford, E. F., & Burdick, D. S. (2006). How accurate are Lexile text measures? Journal of Applied Measurement, 7(3), 307–322. PMID:16807496
Shiotsu, T., & Weir, C. J. (2007). The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Language Testing, 24(1), 99–128. doi:10.1177/0265532207071513
Stenner, A. J., Fisher, W. P., Stone, M. H., & Burdick, D. S. (2013). Causal Rasch models. Frontiers in Psychology, 4, 1–14. doi:10.3389/ fpsyg.2013.00536 PMID:23986726
Smith, M., Turner, J., & Lattanzio, S. (2014). The NC CAP “Road map of Need” supports the need for the Read to Achieve Act. Retrieved from http:// cdn.lexile.com/m/cms_page _media/135/NC%20 CAP%20II_4.pdf Stahl, S. A. (2010). Four problems with teaching word meanings (and what to do to make vocabulary an integral part of instruction). In E. H. Hiebert & M. L. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice (pp. 95–114). New York, NY: Routledge. Staker, H., & Horn, M. B. (2012). Classifying K-12 blended learning. Mountain View, CA: Innosight Institute. Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press. Stanovich, K. E., West, R. F., Cunningham, A. E., Cipielewski, J., & Siddiqui, S. (1996). The role of inadequate print exposure as a determinant of reading comprehension problems. In C. Cornoldi & J. Oakhill (Eds.), Reading comprehension difficulties: Processes and intervention (pp. 15–32). Mahwah, NJ: Erlbaum. Starkes, J. L., Deakins, J., Allard, F., Hodges, N. J., & Hayes, A. (1996). Deliberate practice in sports: What is it anyway? In K. A. Ericsson (Ed.), The road to excellence: The acquisition of expert performance in the arts and sciences, sports, and games (pp. 81–106). Mahwah, NJ: Erlbaum.
Sternberg, R. J. (1987). Most vocabulary is learned from context. In M. G. McKeowon & M. E. Curtis (Eds.), The nature of vocabulary acquisition (pp. 89–105). Hillsdale, NJ: Erlbaum. Stewart, V. (2012). A world-class education: Learning from international models of excellence and innovation. Arlington, VA: ASCD. Swartz, C. W., Burdick, D. S., Hanlon, S. T., Stenner, A. J., Kyngdon, A., Burdick, H., & Smith, M. (2014). Toward a theory relating text complexity, reader ability, and reading comprehension. Journal of Applied Measurement, 16(1), 359–371. PMID:25232670 Swartz, C. W., Emerson, C., Kennedy, A. J., & Hanlon, S. T. (2015). Impact of deliberate practice on reader ability for students Taiwanese students learning to read English as a foreign language. (Manuscript in preparation). Swartz, C. W., Hanlon, S. T., Stenner, A. J. (2011-2013, Grant No. OPP1020128). Literacy by Technology: Technology Solutions Designed to Help Educators Create Literacy Rich Content Area Classrooms. Funds provided by The Bill and Melinda Gates Foundation. Swartz, C. W., Hanlon, S. T., Stenner, A. J., Burdick, H., Burdick, D. S., & Emerson, C. (2011). EdSphere®: Using technology to enhance literacy through deliberate practice. (MetaMetrics Research Brief). Durham, NC: MetaMetrics.
Swartz, C. W., Hanlon, S. T., Tendulkar, J., & Williamson, G. W. (2015). Impact of different amounts of reading on early adolescents’ growth in reading ability. (Manuscript in preparation). Swartz, C. W., & Sanford-Moore, E. (2008). Implications of The Lexile Framework® for Writing for North Carolina’s General Model of Writing Assessment. A MetaMetrics Research Report submitted to the North Carolina Department of Public Instruction. Swartz, C. W., Stenner, A. J., Hanlon, S. T., Burdick, H., Burdick, D. S., & Kuehne, K. W. (2012). From novice to expert: Applying research principles to promote literacy in the classroom. (MetaMetrics Research Brief). Durham, NC: MetaMetrics. Swartz, C. W., & Williamson, G. L. (2014). Bending the curve of reader ability estimates from third grade back to pre-kindergarten: Results from an exploratory study of early literacy. Unpublished technical report. Terman, L. M. (1916). The measurement of intelligence. Boston, MA: Houghton Mifflin. doi:10.1037/10014-000 Tierney, R. J., & Shanahan, T. (1992). Research on the reading-writing relationship: Interactions, transactions, and outcomes. In R. Barr, M. I. Kamil, P. B. Mosenthal, & P. D. Pearson (Eds.), Handbook of Reading Research (Vol. 2, pp. 246–280). Mahway, NJ: Lawrence Erlbaum Associates. Tracey, D. H., & Mandel Morrow, L. (2002). Preparing young learners for successful reading comprehension: Laying the foundation. In C. C. Block & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp. 219–233). New York, NY: The Guilford Press. U.S. Department of Education, Office of Educational Technology. (2010). Transforming American education: Learning powered by technology. Washington, DC: Author.
U.S. Department of Education, Office of Educational Technology. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. Washington, DC: Author.
U.S. Department of Education, Office of Educational Technology. (2013). Expanding evidence approaches for learning in a digital world. Washington, DC: Author.
U.S. Department of Health and Human Services, National Institutes of Health, National Institute of Child Health and Human Development. (2000). Teaching children to read: An evidence-based assessment of the scientific literature on reading and its implications for reading instruction. Retrieved from http://www.nichd.nih.gov/publications/pubs/nrp/documents/report.pdf
Wagner, T. (2012). Creating innovators: The making of young people who will change the world. New York, NY: Scribner.
Williamson, G. L. (2008). A Text Readability Continuum for Postsecondary Readiness. Journal of Advanced Academics, 19(4), 602–632.
Williamson, G. L., Fitzgerald, J., & Stenner, A. J. (2013). The Common Core State Standards quantitative text complexity trajectory: Figuring out how much complexity is enough. Educational Researcher, 42(2), 59–69. doi:10.3102/0013189X12466695
Williamson, G. L., Tendulkar, J., Hanlon, S. T., & Swartz, C. W. (2012). Growth in reading ability as a response to using EdSphere®. (MetaMetrics Research Brief). Durham, NC: MetaMetrics.
Winne, P. H. (1982). Minimizing the black box problem to enhance the validity of theories about instructional effects. Instructional Science, 11(1), 13–28. doi:10.1007/BF00120978
Zhao, Y. (2012). World class learners: Educating creative and entrepreneurial students. Thousand Oaks, CA: NAESP, Corwin.
Chapter 12
Computer Agent Technologies in Collaborative Assessments
Yigal Rosen, Harvard University, USA
Maryam Mosharraf, Pearson, USA
ABSTRACT
Often in our daily lives we learn and work in groups. In recognition of the importance of collaborative and problem solving skills, educators are realizing the need for effective and scalable learning and assessment solutions to promote this skillset in educational systems. In the setting of a comprehensive collaborative problem solving assessment, each student should be matched with various types of group members and must apply the skills in varied contexts and tasks. One solution to these assessment demands is to use computer-based (virtual) agents to serve as the collaborators in the interactions with students. The chapter presents the premises and challenges in the use of computer agents in the assessment of collaborative problem solving. Directions for future research are discussed in terms of their implications for large-scale assessment programs.
INTRODUCTION
Collaborative problem solving is recognized as a core competency for college and career readiness. Students emerging from schools into the workforce and public life will be expected to work in teams, cooperate with others, and resolve conflicts in order to solve the kinds of problems required in modern economies. They will further need to be able to use these skills flexibly with various group compositions and environments (Davey, et al., 2015; Griffin, Care, & McGaw, 2012; O’Neil,
& Chuang, 2008; Rosen, & Rimor, 2013; Roseth, et al., 2006). Educational programs in K-12 have focused to a greater extent on the advancement of learning and the assessment of collaborative problem solving as a central construct in theoretical and technological developments in educational research (National Research Council, 2011, 2013; OECD, 2013a; National Assessment Governing Board, 2013; U.S. Department of Education, 2010). Collaborative skills are included within the major practices in the 2014 U.S. National Assessment of Educational Progress (NAEP)
DOI: 10.4018/978-1-4666-9441-5.ch012
Copyright © 2016, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Technology and Engineering Literacy (National Assessment Governing Board, 2013). In the NAEP Technology and Engineering Literacy assessment program, students are expected to show their ability to collaborate effectively with computer-based (virtual) peers and experts and to use appropriate information and communication technologies to collaborate with others on the creation and modification of knowledge products. Similarly, the Israeli national program for adapting the educational system to the 21st century outlines a multi-year effort with the goal of leading the implementation of innovative pedagogy and assessment in schools, including collaboration, communication, and problem solving (Israel Ministry of Education, 2011). Collaborative problem solving is one of the areas that the Organisation for Economic Co-operation and Development (OECD) emphasized for major development in the 2015 Programme for International Student Assessment (PISA), in addition to scientific, mathematics, and reading literacy. Collaborative problem solving refers to problem solving activities that involve collaboration among a group of individuals (O’Neil, Chuang, & Baker, 2010; Zhang, 1998). In the PISA 2015 Framework (OECD, 2013b), collaborative problem solving competency is defined as “the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills, and efforts to reach that solution” (p. 6). This definition treats the competency as a conjoint dimension of collaboration skills and the skills needed to solve a problem. For the PISA assessment, the focus is on individual capacities within collaborative situations. Thus, the effectiveness of collaborative problem solving depends on the ability of group members to collaborate and to prioritize the success of the group over individual successes. At the same time, this ability is still a trait in each of the individual members of the group. Development of a standardized computer-based assessment of
collaborative problem solving skills, specifically for large-scale assessment programs, remains challenging. Unlike some other skills, collaborative problem solving typically requires using complex performance tasks, grounded in varied educational domains, with interaction among students. These factors can affect the level of control that can be applied to ensure accurate assessment of students. In our chapter, an operational definition of collaborative problem solving refers to “the capacity of an individual to effectively engage in a group process whereby two or more agents attempt to solve a problem by sharing knowledge and understanding, organizing the group work and monitoring the progress, taking actions to solve the problem, and providing constructive feedback to group members.”
First, collaborative problem solving requires students to be able to establish and maintain a shared understanding throughout the problem-solving task by responding to requests for information, sending important information to agents about tasks completed, establishing or negotiating shared meanings, verifying what each other knows, and taking actions to repair deficits in shared knowledge. Shared understanding can be viewed as an effect, if the goal is that a group builds the common ground necessary to perform well together, or as a process by which peers perform conceptual change (Dillenbourg, 1999). Collaborative problem solving is a coordinated joint dynamic process that requires periodic communication between group members. Communication is a primary means of constructing a shared understanding or Common Ground (e.g., Clark, 1996; Nelson, 1999). An “optimal collaborative effort” is required of all of the participants in order to achieve adequate performance in a collaborative environment (Dillenbourg & Traum, 2006).
Second, collaboration requires the capability to identify the type of activities that are needed to solve the problem and to follow the appropriate steps to achieve a solution (Mayer & Wittrock, 1996; Roschelle & Teasley, 1995). This process involves exploring and interacting with the problem situation.
It includes understanding both the information initially presented in the problem and any information that is uncovered during interactions with the problem. The accumulated information is selected, organized, and integrated in a fashion that is relevant and helpful to solving the particular problem and that is integrated with prior knowledge. Setting sub-goals, developing a plan to reach the goal state, and executing the plan that was created are also a part of this process. Overcoming the barriers of reaching the problem solution may involve not only cognition, but motivational and affective means (Funke, 2010; Mayer & Wittrock, 2006).
Third, students must be able to help organize the group to solve the problem; consider the talents and resources of group members; understand their own role and the roles of the other agents; follow the rules of engagement for their role; monitor the group organization; reflect on the success of the group organization; and help handle communication breakdowns, conflicts, and obstacles (Rosen & Rimor, 2013). After the group goes through a phase of constructive feedback and sometimes a conflict, which can be associated with negative affect (Barth & Funke, 2010; Salomon & Globerson, 1989), a productive team moves forward with a better solution than if no constructive feedback or conflict had occurred. That is, some amount of “affective dissonance” forces one to sort out facts and converge on better solutions. This is very different from a rapid consensus and “group think,” where the group quickly agrees with other team members instead of investing time in the tasks at a deep level (Rosen & Rimor, 2013; Stewart, Setlock, & Fussell, 2007).
The proposed collaborative problem solving proficiency descriptors in Table 1 are aimed to guide efforts in designing assessment tasks for collaborative problem solving skills. The collaborative problem solving proficiencies in Table 1 can be measured by observing whether the student responds to questions and requests for information, acknowledges other team members’ actions, takes
appropriate actions to solve the problem, works in harmony with the team members, and provides constructive feedback to the team members in order to optimize the collaborative problem solving process. If such acts are not observed, the student exhibits performance below a proficient level. If the student responds but does not initiate questions or requests for information, does not always take appropriate actions or work in a balanced manner with the team members, and provides feedback that represents only basic dimensions of collaboration, then the student meets minimal standards. If the student initiates questions and requests, proactively helps the team in organizing the work, acknowledges contributions made by others, and responds positively to feedback, then the student is above the minimal standard. These interpretations of proactive versus responsive student behaviors during collaboration are in line with the PISA 2015 collaborative problem solving assessment (OECD, 2013).
Team composition plays a significant role in collaborative settings (Davey et al., 2015; Kreijns, Kirschner, & Jochems, 2003; Nelson, 1999; Rosen & Rimor, 2009). Collaborative problem solving performance is compromised to the extent that the division of labor is unintelligent, subgoals are not achieved, the group goals are blocked, and there are communication breakdowns. Collaborative problem solving tasks with high interdependency are very sensitive to group composition. One team member who has low competency can dramatically decrease the performance of the entire team and force other team members to compensate in order to achieve team goals. An overly strong leader can prevent other team members from manifesting their talents. A meaningful collaborative interaction rarely emerges spontaneously, but requires careful structuring of the collaboration to promote constructive interactions. According to Dillenbourg (1999), effective collaboration is characterized by a relatively symmetrical structure. Symmetry of knowledge occurs when all participants have roughly the same level of knowledge,
Table 1. Proposed collaborative problem solving proficiency descriptors for Low, Medium, and High proficiency levels

Low proficiency
• Establishing and maintaining shared understanding: The student responds to or generates information that has little relevance to the task, yet the student contributes when explicitly or repeatedly prompted. Student takes actions that create additional misunderstandings of shared knowledge.
• Organizing the group work and monitoring the progress: The student operates individually, often not in concert with the appropriate role for the task. Student’s actions and communications suggest that the student does not understand the roles of the other team members.
• Taking actions to solve the problem: The student’s actions contribute minimally to achieving group goals. Student performs actions that are inappropriate for the distribution of tasks.
• Providing constructive feedback to group members: The student’s feedback includes minimal to no acknowledgment of other team members’ contribution to the task. The student proposes no pathways for teamwork improvement when prompted. The student pays little to no attention to feedback from other team members.

Medium proficiency
• Establishing and maintaining shared understanding: The student responds to most requests for information. The student does not always proactively take the initiative to overcome difficult barriers in collaboration. Student generates and responds to inquiries to clarify problem goals, problem constraints, and task requirements.
• Organizing the group work and monitoring the progress: The student participates in assigned roles and contributes to the overall strategies for solving the problem. Student acknowledges or confirms roles taken by other group members. Student responds appropriately when asked to complete the student’s role assignment.
• Taking actions to solve the problem: The student selects actions that contribute to achieving group goals and occasionally initiates actions. Student acknowledges completion of actions when prompted. Student participates in modification of actions without initiating the modifications.
• Providing constructive feedback to group members: The student’s feedback acknowledges contributions made by team members. Student responds appropriately when asked to discuss possible improvements to the teamwork. The student pays attention to feedback from other team members but without implementing the improvements proposed by others as appropriate.

High proficiency
• Establishing and maintaining shared understanding: The student proactively takes the initiative in requesting information from others and responds to requests for information. Student detects deficits (gaps or errors) in shared understanding when needed and takes the initiative to perform actions and communication to solve the deficits.
• Organizing the group work and monitoring the progress: The student acts as a responsible team member and proactively takes the initiative to solve difficult barriers in collaboration. Student monitors the actions of others on the team and actively inquires about the tasks and plans to be completed by team members. Student effectively responds to conflicts, changes in the problem situation, and new obstacles to goals.
• Taking actions to solve the problem: Student identifies efficient pathways to goal resolution. Student’s actions fully comply with the planned distribution of roles and tasks. The student initiates unprompted actions. Student inquires about the actions, tasks, and plans to be completed by members of the group to solve the problem when contextually appropriate. Student takes the initiative to identify, propose, describe, or change the tasks when there are changes in the problem or when there are obstacles towards the solution.
• Providing constructive feedback to group members: The student actively acknowledges contributions made by team members. Student detects gaps in teamwork and discusses possible pathways to improvement. The student responds positively to feedback from other team members and takes the initiative to implement the improvements proposed by others as appropriate.
although they may have different perspectives. Symmetry of status involves collaboration among peers rather than interactions involving facilitator relationships. Finally, symmetry of goals involves common group goals rather than individual goals that may conflict. The degrees of interactivity and negotiability are additional indicators of collaboration (Dillenbourg, 1999). For example, trivial, obvious, and unambiguous tasks provide few opportunities to observe negotiation because there is nothing about which to disagree. Among the other factors that may influence student collaborative problem solving performance are gender, race, status, perceived cognitive or collaborative abilities, motivation, and attractiveness (e.g., Chiu, & Khoo, 2003; Loughry, Ohland, & Moore, 2007). Thus, in a standardized assessment situation, it is possible that a student should be matched with various types of group members that will represent different collaboration skills, problem-solving abilities, and psychological characteristics (cognitive abilities, attitudes, motivation, personality), while controlling for other factors that may influence student performance (e.g., asymmetry of roles). This chapter provides a comprehensive overview of collaborative problem solving assessment principles, presents two sample tasks for standardized assessment of collaborative problem solving skills along with the major findings from an international pilot, and discusses implications for development and research on collaborative assessments.
BACKGROUND
Despite widespread collaborative problem solving teaching and learning practices, the methodology for assessing the learning benefits of collaboration continues to rely on educational assessments designed for isolated individuals that work alone. While there have been many advances in the theory and practice of collaborative problem
solving in learning contexts, there has been much less research on the assessment of individuals’ contributions to the processes and outcomes of collaborative tasks. Although measurement of individuals’ competencies may be optimal when accomplished through traditional assessments in individual settings, student skills in collaborative problem solving can be assessed through a number of new assessment approaches. One key consideration in the development of collaborative problem solving assessment is the types of measures used to determine the quality of student performance within and across groups. These measures can include quality of the solutions and the objects generated during the collaboration (Avouris, Dimitracopoulou, & Komis, 2003); analyses of log files, intermediate results, paths to the solutions (Adejumo, Duimering, & Zhong, 2008), team processes, and structure of interactions (O’Neil, Chung, & Brown, 1997); and quality and type of collaborative communication (Cooke et al., 2003; Dorsey et al., 2009; Foltz & Martin, 2008; Graesser, Jeon, & Dufty, 2008). While there are a number of options for measurement, a key challenge is to ensure that the assessment approach can accurately capture the individual and group processes as well as be able to convert the dataset into a quantifiable and meaningful measure of performance. Another challenge in collaborative problem solving assessment concerns the need to synthesize information from individuals and teams along with actions and communication dimensions (Laurillard, 2009; O’Neil, Chen, Wainess, & Shen, 2008; Rimor, Rosen, & Naser, 2010). Communication among the group members is central in the collaborative problem solving assessment, and it is considered a major factor that contributes to the success of collaborative problem solving (Dillenbourg & Traum, 2006; Fiore et al., 2010; Fiore & Schooler, 2004). While communication can be classified as an individual collaboration skill, the output of communication provides a window into the cognitive and social
processes related to all collaborative problem solving skills. Thus, communication among the team members can be assessed to provide measures of these processes. One approach has been to analyze the streams of open-ended communication in collaborative situations. For example, Foltz and Martin (2008) have used semantic and syntactic analyses of team communications in order to score individual and team performance as well as classify individual statements into different collaborative skills, while Erkens and Janssen (2008) have used automatic protocol techniques to code collaboration. Analysis of the content and structure of communication streams can provide measures of shared understanding, progress toward goals, negotiation, consensus, leadership, and quality of solutions generated. However, such analysis approaches require capturing the written or spoken communication stream, having robust models of human dialogs, and then performing fairly intensive computational processing for scoring and classification of the language, such as Latent Semantic Analysis (Landauer, McNamara, Dennis, & Kintsch, 2007). This can be limiting, particularly in large-scale international testing, which requires scoring across multiple languages. Nevertheless, various techniques have been developed to address the challenge of providing a tractable way to communicate in collaborative problem solving assessment contexts. One technique that has been tested is communication through predefined messages (Chung, O’Neil, & Herl, 1999; Hsieh & O’Neil, 2002; O’Neil et al., 1997). In these studies, participants were able to communicate using the predefined messages (i.e., selection of a sentence from a menu) and to successfully complete tasks, such as a simulated negotiation or a knowledge map, and the team processes and outcomes were measurable. Measures of collaborative problem solving processes were computed based on the quantity and type of messages used (i.e., each message was coded a priori as representing adaptability, coordination, decision making,
interpersonal skill, or leadership). The use of messages provides a manageable way of measuring collaborative problem solving skills and allows real-time scoring and reporting. As mentioned above, within large-scale assessment programs such as PISA and NAEP, the focus of measurement of collaboration is on the individual rather than the group. This approach is not surprising because, in most educational systems of accountability, it is the individual who is assessed. The focus on the individual as a unit of analysis in collaborative contexts allows application of traditional psychometric models and direct comparisons. The next section identifies some of the challenges and possible solutions in the assessment of collaborative problem solving that will address two issues: Computer agent technology, and the comparison between the human-to-human and human-to-agent approach in collaborative problem solving assessment.
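Because every predefined message is tagged in advance with a process category, scoring in this approach reduces to tallying the quantity and type of messages each student selects. The sketch below illustrates that idea under stated assumptions: the category names follow the a priori coding summarized above, but the log format, data structures, and function name are hypothetical rather than taken from the cited instruments.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

# A priori process categories for predefined messages, as summarized above.
CATEGORIES = {"adaptability", "coordination", "decision_making",
              "interpersonal_skill", "leadership"}

def score_predefined_messages(
    log: Iterable[Tuple[str, str]]
) -> Dict[str, Dict[str, int]]:
    """Tally each team member's chat turns by pre-coded category.

    `log` is an illustrative format: (student_id, category) pairs, one per
    selected predefined message. Returns per-student counts per category.
    """
    scores: Dict[str, Counter] = {}
    for student_id, category in log:
        if category not in CATEGORIES:
            raise ValueError(f"Unknown message category: {category}")
        scores.setdefault(student_id, Counter())[category] += 1
    return {sid: dict(counts) for sid, counts in scores.items()}

# Example: two students selecting predefined messages during a task.
example_log = [
    ("A", "coordination"), ("B", "decision_making"),
    ("A", "leadership"), ("A", "interpersonal_skill"),
]
print(score_predefined_messages(example_log))
```

Counts of this kind can be reported in real time, which is what makes the predefined-message technique attractive for scoring at scale.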
Computer Agent Technology
Collaboration can take many forms, ranging from two individuals to large teams with predefined roles. For assessment purposes, collaboration can also be performed using simulated agents playing the role of team members, so that team members may be either computers or humans. Thus, a critical distinction is whether all team members are human or some are computer agents. There are advantages and limitations for each method, which are outlined below. The Human-to-Human (H-H) approach provides an authentic human-human interaction that is a highly familiar situation for students. Students may be more engaged and motivated to collaborate with their peers. Additionally, the H-H situation is closer to the collaborative problem solving situations students will encounter in their personal, educational, professional, and civic activities. However, because each human will act independently, the approach can be problematic because of individual differences that can significantly
affect the collaborative problem solving process and its outcome. Therefore, the H-H assessment approach of collaborative problem solving may not provide sufficient opportunity to cover variations in group composition, diversity of perspectives, and different team member characteristics in a controlled manner for accurate assessment of the skills on an individual level. Simulated team members using a preprogrammed profile, actions, and communication can potentially provide the coverage of the full range of collaboration skills with sufficient control. In the Human-to-Agent (H-A) approach, collaborative problem solving skills are measured by pairing each individual student with a computer agent or agents that can be programmed to act as team members with varying characteristics relevant to different collaborative problem solving situations. Collaboration in H-H mode may limit significantly the extent to which collaborative problem solving dimensions, such as shared understanding, are externalized through communication with the partner. The agents in H-A communication can be developed with a full range of capabilities, such as text-to-speech, facial actions, and optionally rudimentary gestures. At its minimal level, conventional communication media, such as text via email, chat, or a graphic organizer with lists of named agents, can be used for H-A purposes. However, there are several limitations in using computer agents in collaborative problem solving tasks. Collaboration in H-A settings deviates from natural human communication delivery. The dynamics of H-H interaction (timing, conditional branching) cannot be perfectly captured with agents, and agents cannot adjust to idiosyncratic characteristics of humans. For example, human collaborators can propose unusual, exceptional solutions; a characteristic of such a process is that it cannot be included in a system that follows an algorithm, such as H-A interaction. Research shows that computer agents have been successfully used for tutoring, collaborative learning, co-construction of knowledge, and
collaborative problem solving (Biswas, Jeong, Kinnebrew, Sulcer, & Roscoe, 2010; Graesser et al., 2008; Millis et al., 2011). A computer agent can be capable of generating goals, performing actions, communicating messages, sensing its environment, adapting to changing environments, and learning (Franklin & Graesser, 1996). One example of computer agent use in education is a teachable agent system called Betty’s Brain (Biswas, Leelawong, Schwartz, & Vye, 2005; Leelawong & Biswas, 2008). In this system, students use a causal map to teach a computer agent concepts and their relationships. Using their agent’s performance as motivation and a guide, students study the available resources so that they can remediate the agent’s knowledge and, in the process, learn the domain material themselves. Operation ARA (Cai et al., 2011; Millis et al., 2011) uses animated pedagogical agents that converse with the student in a game-based environment for helping students learn critical-thinking skills and scientific reasoning within scientific inquiry. The system dynamically adapts the tutorial conversations to the learner’s prior knowledge. These conversations, referred to as “trialogs,” are between the human learner and two computer agents (student and teacher). The student learns vicariously by observing the agents, gets tutored by the teacher agent, and teaches the student agent. The Tactical Language Training System (Johnson & Valente, 2008) provides rapid training in a foreign language and culture through artificial intelligence-enhanced, story-driven gaming; task-oriented spoken language instruction; and intelligent tutoring. Trainees learn skills necessary to carry out a civil affairs mission, where they must enter a town, establish contact with local people, meet the local leader, and arrange for postwar reconstruction. Trainees carry out the mission by speaking with computer agents in a simulated world. The computer agents accompany them through the environment, providing assistance when needed and giving feedback.
Computer agents implement sophisticated tutoring techniques, such as Socratic questioning, modeling, remediation, and scaffolding. For example, AutoTutor (Graesser et al., 2008; Graesser et al., 2004) improves learning of subject matter such as computer literacy and conceptual physics by co-constructing explanations and answers to complex questions. One version of AutoTutor is sensitive to the affective states of the learners in addition to their cognitive states and also responds with emotions designed to facilitate learning. Computer agents display emotions through facial expressions, gesture, and speech intonation (D’Mello & Graesser, 2012). This type of tutoring conversation revolves around a conversational pedagogical agent that scaffolds students to articulate specific expectations as part of a larger ideal answer to a posed question. The system interprets the students’ language by combining Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007), regular expressions (Jurafsky & Martin, 2008), and weighted keyword matching. LSA provides a statistical pattern-matching algorithm that computes the extent to which a student’s verbal input matches an anticipated expectation or misconception of one or two sentences. Regular expressions are used to match the student’s input to a few words, phrases, or combinations of expressions, including the key words associated with the expectation. Comparisons have been made between AutoTutor versions with pure text and versions that vary the presence of the agent’s speech and facial expressions. It was found that the text versions are nearly as effective as a full-blown animated conversational agent (D’Mello, Dowell, & Graesser, 2011). Also, it was found that the typed and spoken input versions yield similar learning gains, but there was a slight advantage for the typed input version because the spoken version has speech recognition errors. The studies concluded that it is the content of what gets said that most matters, not the face or verbal communication medium.
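The expectation-matching machinery described above can be illustrated with a brief, hedged sketch. The regular expression, keyword weights, and threshold below are invented for illustration only; AutoTutor's actual models are far richer and also compute an LSA similarity score against a trained semantic space.

```python
import re

# Illustrative expectation for a physics question; the pattern and weights
# are made-up examples, not AutoTutor's actual models.
expectation = {
    "regex": re.compile(r"force .* (equal|same) .* opposite", re.IGNORECASE),
    "keywords": {"force": 0.4, "equal": 0.3, "opposite": 0.3},
    "threshold": 0.5,
}

def match_expectation(student_input: str, exp: dict) -> float:
    """Score how well a student's verbal input covers one expectation.

    A regex hit counts as full coverage; otherwise weighted keyword matching
    gives partial credit. A real system would add an LSA cosine similarity.
    """
    if exp["regex"].search(student_input):
        return 1.0
    words = set(re.findall(r"[a-z]+", student_input.lower()))
    return sum(weight for kw, weight in exp["keywords"].items() if kw in words)

answer = "The two forces are equal in size but point in opposite directions."
score = match_expectation(answer, expectation)
print(score, "covered" if score >= expectation["threshold"] else "not covered")
```

The design point illustrated here is the one drawn in the studies above: what matters most is whether the content of the student's contribution matches the expected content, not the medium through which it is delivered.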
These observations opened the possibility of programming computer agents to simulate collaboration and communication that is ubiquitously exhibited by humans. AutoTutor and other conversation-based learning environments such as ITSPOKE (Litman et al., 2006), Tactical Language and Culture Training System (Johnson & Valente, 2008), Why-Atlas (VanLehn et al., 2007), Operation ARA (Millis et al., 2011), and iSTART (McNamara, O’Reilly, Rowe, Boonthum, & Levinstein, 2007) have collectively contributed to the development of scalable computer agent technologies in a variety of skills and subject matter, such as critical thinking in science, computer literacy, foreign language, and reading strategies. Computer agents may play different roles in learning and assessment environments, such as virtual peers, teachers, and facilitators (Graesser & McDaniel, 2008; Soland, Hamilton, & Stecher, 2013). However, embedding computer agent technology is a challenging solution in large-scale assessments of collaborative problem solving. Implementing such computer agent-rich environments would raise major challenges in technology, costs, and cultural variations in language and discourse. The proposed solution is minimalist agents that consist of communication via pre-determined chat (OECD, 2013a; Rosen & Tager, 2013). A similar approach of agent-based social communication has already been implemented in the Programme for the International Assessment of Adult Competencies (PIAAC) on problem solving in technology-rich environments (OECD, 2013b).
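At its core, a minimalist agent of this kind can be driven by a small, pre-programmed decision table keyed on the current task state and the phrase the student selects from the pre-determined chat. The sketch below is a hypothetical reduction of that idea; the states, phrase identifiers, and replies are invented for illustration (one reply echoes the advice phrasing quoted later in this chapter) and do not describe the PISA or PIAAC implementations.

```python
# Hypothetical decision table for a minimalist pre-determined chat agent.
# Keys are (task_state, student_phrase_id); values are the agent's reply.
REPLY_TABLE = {
    ("planning", "ask_strategy"): "Let's try to change one condition per trial.",
    ("planning", "propose_option"): "Yes, I agree. Let's try it.",
    ("reviewing", "ask_next_step"): "I don't think that was the best choice. Let's try another one.",
}

DEFAULT_REPLY = "What do you think we should do next?"

def agent_reply(task_state: str, student_phrase_id: str) -> str:
    """Return the agent's scripted reply for the current state and phrase.

    Every student receives the same scripted behavior for the same choices,
    which is what gives the H-A setting its control and standardization.
    """
    return REPLY_TABLE.get((task_state, student_phrase_id), DEFAULT_REPLY)

print(agent_reply("planning", "ask_strategy"))
print(agent_reply("reviewing", "unlisted_phrase"))
```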
Illustration of One Approach: Zoo Quest
In their studies, Rosen and Tager (2013) and Rosen (2014a, 2014b) introduced a standardized collaborative problem solving assessment task administered both in H-H and H-A settings, as well as scalable scoring and process data mining techniques. In this collaborative problem
solving computer-based assessment task, the student was asked to collaborate with a partner (computer-driven agent or a classmate) to find the optimal conditions for an animal at the zoo. The student was able to select different types of food, life environments, and extra features, while both partners were able to see the selections made and communicate through a phrase-chat (selections from 4-5 predefined options). An animal’s life expectancy under the given conditions was presented after each trial of the conditions. The student and the partner were prompted to discuss how to reach better conditions for an animal at the beginning of the task. By the end of the task, the student was asked to rate the partner (1-3 stars) and provide written feedback on the partner’s performance. The task was programmed in such a way that at least two attempts for problem solving and at least one communication act with a partner were required to be able to complete the assessment task. Figures 1-5 show examples of
the progress of a student (Alex) within the Zoo Quest task, while collaborating with a computer agent (Mike). The following information was presented interactively to the student during the task (see Figures 1-5): Episode #1: “It was a normal zoo… But suddenly disaster struck! A rare animal called an Artani was found dead! You and a friend have been tasked with saving the other Artani by finding the most suitable conditions for them within the zoo.” Episode #2: “Collaborate with your partner, help the Artani: In this task you will be in control of the selections made by you and your friend. Work with your partner to determine the best living conditions for Artani. You must change the three elements of Environment (savannah, foliage, rocks, desert, rainforest, aquatic), Food (plants, seeds, vegetables,
Figure 1. Outlining the Zoo Quest collaborative problem solving assessment task
Figure 2. Establishing a shared understanding of the task through a phrase-chat
Figure 3. Collaborative monitoring results
Figure 4. Discussing the problem solving strategy
Figure 5. Reaching the solution of the problem
fruits, meat, fish), and Extras (stones, water, tree house, weed, tire swing) to find the best living conditions. Your friend can help you plan a strategy to improve the conditions for the Artani, before you make your selection.” Episode #3: The major area of the screen allows the partners to view the options available for the Environment, Food, and Extras. Both partners can see what variables were selected. However, the selections of the variables were made by one partner only (i.e., by the student in H-A mode or by one of the students in H-H mode). The ability to try out the variables selected (by clicking on “Go”) was also provided to only the one partner. On the right side of the screen, the partners were able to communicate by using a phrase-chat. The phrases presented at the chat were based on a pre-programmed decision tree and situated in order to allow students to authentically communicate with a partner and to be able to cover the collaborative problem solving measures defined for the task. The computer agent’s phrases were programmed to act with varying characteristics relevant to different collaborative problem solving situations (e.g., agreeing or disagreeing with the student, contributing to solving the problem or proposing misleading strategies, etc.). This approach provided each individual student with similar optimal chances to show his or her collaborative problem solving skills. Each student in the H-A setting (regardless of student’s actions) was provided with the same amount of help and misleading strategies proposed by the agent. While in the H-H mode, the team members were provided with exactly the same set of possible phrases for each collaborative problem solving situation. There was no control over the selections made by the students. In the H-H setting, the chat starts with the “leader” who asks questions and leads conversation. The other person takes the
role of “collaborator” and generally replies to questions asked by the leader. Certain questions or responses, however, can lead to a flipping of this questioning-answering pattern, and so the person initially asking questions may not do so for the entire conversation. Only the leader can submit guesses for the conditions and complete the task, though. As the task progresses, some of the possible replies may be marked as “hidden,” which means they do not show again until the leader has submitted the attempt at solving the problem. If no replies are available, then a secondary list of replies is available – this will be the case when all conditions have been agreed upon, and the focus will change to submitting. If the students have submitted their attempt at solving the problem, then some additional statements may become available to be said. There are various types of sentences that can be used through the phrase-chat. First, accepting statements that indicate agreement with the previous statement, such as “I think we should use ...” and “Yes, I agree.” Both students need to agree on all three variables the first time they submit their attempt at solving the problem, but following this they only need to agree on changing one variable before they can progress. Second, some statements are linked so that if one should be hidden as it has been accepted, then its linked statement should be as well. For example, “What kind of food does he eat?” and “Let’s try … for food” refer to the same subject; if the students have agreed on the kind of food, then there is no need to display these or similar questions again this time around. Last, some questions have options associated with them, such as ideas for food or environment. These are highlighted as “options” and can be selected by the students. In the H-A setting, the student leads the chat that gets an automated response to simulate a two-way chat with a collaborator (i.e., a computer agent). The sentences a student can use are limited to what is relevant at that time and change based on their progress through the task, similar to the H-H setting. Certain questions or replies, however, can
lead to a flipping of these roles during conversation, and so the student may not continue in his or her original role for the entire conversation. If the team fails to improve the results within a few tries, the computer agent provides a student with helpful advice, such as “Let’s try to change one condition per trial,” “I think we can try water,” or “I don’t think that ‘seeds’ is the best choice.” Only the student can submit the guesses for the conditions and complete the task, though. Clicking on “Go” provided the partners with the possibility to see the life expectancy (0-26 years) of the animal under the variables selected and to read textual information regarding the result achieved. At this stage, the partners were allowed to communicate about the result and about ways to reach the optimal solution and then decide whether to keep the selections or try again (i.e., change the variables). Episode #4: “Give feedback to your partner: Having saved the Artani, you need to provide feedback on your partner. Give your partner a star rating (1 to 3) and add written feedback below.” Episode #5: “Partner’s feedback: Thanks for your feedback. You have done a great job on your part! Hope to work with you again sometime.” Collaborative problem solving scores for the assessment task consisted of shared understanding (40 points), problem solving (26 points), monitoring progress (26 points), and providing feedback (8 points). In both H-H and H-A settings, student scores in the first three collaborative problem solving dimensions were generated automatically based on a predefined programmed sequence of possible optimal actions and communication that was embedded into the assessment task. The problem-solving dimension was scored as one point per each year of the animal’s life expectancy that was achieved by selecting the variables. The shared understanding score consisted of a number
of grounding questions that were initiated by a student in appropriate situations (e.g., explaining the reason for a variable selection, questioning “What can we do to reach better conditions for the animal?”) and appropriate responses to the grounding questions made by the partner. The monitoring progress score was created based on communication initiated by the student prior to the submission of the selected variables (e.g., questioning “Are you ready to go ahead with our plan?” before clicking on “Go”) and the statements made by the student based on the life expectancy results that were achieved (e.g., “Should we keep this selection or try again?”). Scoring of student feedback was provided independently by two teachers from participating schools in the United States. The teachers were trained through a one-day workshop to consistently evaluate whether students’ feedback indicated both successful and challenging aspects of working with the partner on the task and acknowledged the contributions the partner made toward reaching a solution. Spelling and grammar issues did not affect students’ scores. Overall, the scoring strategy was discussed with a group of 10 teachers from participating countries in order to achieve consensus on a collaborative problem solving scoring strategy and reduce cultural biases as much as possible. Inter-coder agreement of feedback scoring was 92%. This approach suggests operationalization of the collaborative problem solving assessment through computer-based agents and a semi-structured human-agent setting. In the H-A setting, students will collaborate with computer-based conversational agents representing team members with a range of skills and abilities. The computer agent approach allows the high degree of control and standardization required for measurement of collaborative problem solving skills of each individual student. It further permits placing students in a number of collaborative situations and allows measurement within the time constraints of the test administration. The H-H approach provides more
authentic assessment of collaborative problem solving skills, while reducing significantly the level of control and standardization. In both settings, collaborative problem solving skills should be measured through a number of tasks, where each task represents a phase in the problem solving and collaborative process and can contain several steps. Collaborative problem solving tasks may include: consensus building, writing a joint document, making a presentation, or Jigsaw problems. In each task, the student is expected to work with one or more team members to solve a problem, while the team members should represent different roles, attitudes, and levels of competence in order to vary the collaborative problem solving situation the student is confronted with.
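A hedged sketch of how the automated portion of the Zoo Quest scoring described above might be computed from a logged session is shown below. The dimension maxima (40, 26, 26, and 8 points) and the one-point-per-year problem-solving rule follow the description above, but the session data structure and the per-act point weights for the communication dimensions are illustrative assumptions, not the instrument's actual scoring code.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative weight per logged communication act; the actual point mapping
# for these dimensions is not specified in the description above.
POINTS_PER_ACT = 2

@dataclass
class Session:
    """Hypothetical record of one student's Zoo Quest session."""
    best_life_expectancy: int                              # 0-26 years achieved
    grounding_acts: List[str] = field(default_factory=list)   # shared-understanding acts
    monitoring_acts: List[str] = field(default_factory=list)  # pre-submit / post-result acts
    feedback_rating: int = 0                               # teacher-coded feedback, 0-8

def score_session(s: Session) -> dict:
    """Return the four collaborative problem solving dimension scores."""
    return {
        # Grounding questions initiated plus appropriate responses, capped at 40.
        "shared_understanding": min(len(s.grounding_acts) * POINTS_PER_ACT, 40),
        # One point per year of the animal's life expectancy achieved (max 26).
        "problem_solving": min(s.best_life_expectancy, 26),
        # Monitoring communication before submissions and after results, capped at 26.
        "monitoring_progress": min(len(s.monitoring_acts) * POINTS_PER_ACT, 26),
        # Human-coded feedback score (max 8).
        "providing_feedback": min(s.feedback_rating, 8),
    }

example = Session(best_life_expectancy=22,
                  grounding_acts=["why_this_food", "what_can_we_improve"],
                  monitoring_acts=["ready_to_go", "keep_or_retry"],
                  feedback_rating=6)
print(score_session(example))
```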
Students’ Performance in Collaborative Problem Solving Assessment
A focused study has been conducted to investigate differences in student online collaborative problem solving (CPS) performance in H-A and H-H modes (Rosen, 2014a; Rosen & Foltz, 2014). Study participants included 179 14-year-old students from the United States, Singapore, and Israel. In all, 136 students participated in the H-A group and 43 participated in the H-H group (43 additional students participated in the H-H setting, acting as ‘collaborators’ for the major H-H group). Specifically, in the H-H assessment mode, students were randomly assigned into pairs to work on the CPS task. Because the H-H approach required pairs of students working together in a synchronized manner, the number of pairs was limited. This is due to the characteristics of technology infrastructures in participating schools. The students were informed prior to their participation in the study whether they would collaborate online with a computer agent or a classmate from their school. In the case of the H-H setting, the students were able to see the true name of their partner. Students were exposed to identical collaborative problem solving
assessment tasks and were able to collaborate and communicate by using identical methods and resources. However, in the H-A mode students collaborated with a simulated computer-driven partner, while in the H-H mode students collaborated with another student to solve the problem. The findings showed that students assessed in H-A mode outperformed their peers in H-H mode in their collaborative skills (Rosen, 2014b; Rosen & Foltz, 2014). Collaborative problem solving with a computer agent involved significantly higher levels of shared understanding, progress monitoring, and feedback. The results further showed no significant differences in other student performance measures to solve the problem with a computer agent or a human partner, although on average students in H-A mode applied more attempts to solve the problem, compared to the H-H mode. Interdependency is a central property of tasks that are desired for assessing collaborative problem solving, as opposed to a collection of independent individual problem solvers. A task has higher interdependency to the extent that student A cannot solve a problem without the actions of student B. Although interdependency between the group members was required and observable in the studied collaborative problem solving task, the collaboration in both settings was characterized by asymmetry of roles. A “leader” student in the H-H setting and the student in the H-A setting were in charge of selecting the variables and submitting the solutions in addition to the ability to communicate with the partner. According to Dillenbourg (1999), asymmetry of roles in collaborative tasks could affect each team member’s performance. Thus, a possible explanation for these results is the asymmetry in roles between the “leader” student and the “collaborator” in the H-H setting and the student and the computer agent in the H-A setting. In a more controlled setting (i.e., H-A), the asymmetrical nature of collaboration was not related to the quality of collaborative skills that were observed during the task (Rosen &
Foltz, 2014). While in the H-H setting, in which the human “collaborator” was functioning with no system control over the suggestions that he or she made, the asymmetry in roles was associated with the quality of collaborative skills. A process analysis of the chats and actions of the students from the Rosen and Tager (2013) study showed that in the human-to-agent group the students encountered significantly more conflict situations than in the human-to-human group (Rosen, 2014b). Conflict in this research is operationally defined as a situation in which there are disagreements between the team members on a solution as reflected in communication or actions taken by the team members. Conflicts are essential to enhance collaborative skills; therefore, conflict situations are vital to collaborative problem solving performance (Mitchell, & Nicholas, 2006; Scardamalia, 2002; Stahl, 2006; Uline, Tschannen-Moran, & Perez, 2003; Weinberger, & Fischer, 2006). It includes handling disagreements, obstacles to goals, and potential negative emotions (Barth, & Funke, 2010; Dillenbourg, 1999; Trötschel, et al., 2011). For example, the tendency to avoid disagreements can often lead collaborative groups toward a rapid consensus (Rimor et al., 2010), while students accept the opinions of their group members because it is a way to quickly advance with the task. This does not allow the measurement of a full range of collaborative problem solving skills. Collaborative problem solving assessment tasks may have disagreements between team members, involving highly proficient team members in collaborative problem solving (e.g., initiates ideas, supports and praises other team members) as well as team members with low collaborative skills (e.g., interrupts, comments negatively about work of others). By contrast, with an integrative consensus process, students reach a consensus through an integration of their various perspectives and engage in convergent thinking to optimize the performance. An integrative approach for collaborative problem solving is also aligned with what is envisioned as a highly
skilled student in collaborative problem solving (OECD, 2013b) who is expected to be able to “initiate requests to clarify problem goals, common goals, problem constraints and task requirements when contextually appropriate” (p. 29) as well as to “detect deficits in shared understanding when needed and take the initiative to perform actions and communication to solve the deficits” (p. 29).
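Process analyses of this kind can be made concrete with a short sketch. The code below flags conflict episodes in a coded phrase-chat log under a simplified reading of the operational definition given above; the phrase codes and the one-turn look-back rule are illustrative assumptions rather than the coding scheme used in the cited studies.

```python
from typing import Iterable, List, Tuple

# Phrase codes that signal disagreement with a proposed solution; this set is
# an illustrative assumption, not the coding scheme used in the cited studies.
DISAGREEMENT_CODES = {"disagree", "propose_alternative", "reject_submission"}

def conflict_episodes(chat: Iterable[Tuple[str, str]]) -> List[int]:
    """Return the turn indices at which a conflict situation begins.

    `chat` is a sequence of (speaker, phrase_code) pairs. A conflict is counted
    when a turn disagrees with the immediately preceding turn by a different
    speaker, a simplified reading of the operational definition given above.
    """
    episodes = []
    previous_speaker = None
    for i, (speaker, code) in enumerate(chat):
        if code in DISAGREEMENT_CODES and previous_speaker not in (None, speaker):
            episodes.append(i)
        previous_speaker = speaker
    return episodes

log = [("A", "propose_option"), ("B", "disagree"),
       ("A", "propose_alternative"), ("B", "agree")]
print(conflict_episodes(log))  # -> [1, 2]
```

A count of such episodes per session is one simple way to compare how often students encountered, and had the opportunity to resolve, disagreements in the H-A and H-H settings.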
FUTURE RESEARCH DIRECTIONS
Policymakers, researchers, and educators are engaged in vigorous debate about assessing collaborative problem solving skills on an individual level in valid, reliable, and scalable ways. The challenges of implementing collaborative problem solving in large-scale assessment programs suggest that both H-H and H-A approaches should be explored. Technology offers opportunities for measurement of collaborative problem solving skills in domains and contexts where assessment would otherwise not be possible or would not be scalable. One of the important enhancements brought by technology to educational assessment is the capability to embed computer-based responses and behaviors into the instrument, enabling it to change its state in response to a student’s operations. These can be designed in such a way that the student is exposed to an expected situation and set of interactions, while the student’s interactions as well as the explicit responses are captured and scored automatically. Each mode of collaborative problem solving assessment can be differently effective for different educational purposes. For example, a formative assessment program which has adopted rich training on the communication and collaboration construct for its teachers may consider the H-H approach for collaborative problem solving assessment as a more powerful tool to inform teaching and learning, while H-A may be implemented as a formative scalable tool across a large district or in standardized summative settings. Non-availability
of students with certain collaborative problem solving skill levels in a class may limit the fulfilment of assessment needs, but technology with computer agents can fill the gaps. In many cases, using simulated computer agents instead of relying on peers is not merely a replacement with limitations, but an enhancement of the capabilities that makes independent assessment possible. Due to the lack of empirical research in the field of computer-based assessment of collaborative problem solving skills, it is necessary to conduct small-scale pilot studies in order to inform a more comprehensive approach to collaborative problem solving assessment. Further studies could consider including a representative sample of students in a wider range of ages and backgrounds. Additionally, future research could consider exploring differences in student performance in a wide range of problem-solving complexity and ill-structured tasks that cannot be solved by a single, competent group member. Such tasks require knowledge, information, skills, and strategies that no single individual is likely to possess. When ill-structured tasks are used, all group members are more likely to participate actively, even in groups featuring a range of student ability (Webb, Nemer, Chizhik, & Sugrue, 1998). One question that can be raised is whether to measure the collaborative problem solving skills of the group as a whole or for particular individuals within the group. The collaborative problem solving definition in this chapter follows the direction of PISA 2015 for collaborative problem solving assessment that focuses on measurement of individual skills. This decision has nontrivial consequences for the measurement considerations. In particular, it is necessary to expose each student to multiple tasks with different team compositions, to partners with varying collaborative skills, and to multiple phases within a task that afford a broad array of situations. It is important for there to be challenges and barriers in the tasks so that an assessment can be made of the different collaborative problem solving skills.
With students working in teams, there is no guarantee that a particular student will be teamed up with the right combination to arrive at a sensitive measurement of any individual student’s collaborative problem solving skills. Consequently, there is a need to turn to technology to deliver a systematic assessment environment that can accommodate the dynamics of collaborative problem solving. Various research methodologies and measures developed in previous studies of collaborative problem solving, collaborative learning, and teamwork processes potentially can be adapted to collaborative problem solving assessment (Biswas et al., 2005; Hsieh & O’Neil, 2002; O’Neil & Chuang, 2008; Rosen & Rimor, 2013; Weinberger & Fischer, 2006).
CONCLUSION
Students can demonstrate collaborative problem solving in tasks where assignments are distributed among team members, progress and results are integrated and shared, and products are presented jointly. Task structuring approaches aim to create optimal conditions for collaborative problem solving assessment by designing and scripting the situation before the interaction begins. It may include varying the characteristics of the participants (e.g., the size and composition of the group, or the roles), the availability and characteristics of communication and collaboration tools (e.g., the use of a phrase-chat or graphical tools), and the organization of the task. The collaborative problem solving assessment methods described in this chapter offer one of the few examples today of an approach to direct, large-scale assessment targeting collaborative problem solving skills. Collaborative assessments bring new challenges and considerations for the design of effective assessment approaches because they move the field beyond standard item design tasks. The assessment must incorporate concepts of how humans solve
problems in situations where information must be shared and must incorporate considerations of how to control the collaborative environment in ways sufficient for valid measurement of individual and team skills (Rosen, 2014a, 2014b). The quality and practical feasibility of these measures are not yet fully documented. However, these measures can rely on the abilities of technology to engage students in interaction, to simulate others with whom students can interact, to track students’ ongoing responses, and to draw inferences from those responses. Group composition is one of the important issues in large-scale assessments of collaborative skills (Webb, 1995; Wildman et al., 2012). Overcoming possible bias of differences across groups by using computer agents or other methods becomes even more important within international large-scale assessments where cultural boundaries are crossed. New psychometric methods should be explored for reliable scoring of an individual’s contribution to collaborative processes and solutions, such as stochastic and social network analyses, hidden Markov models, and Bayesian knowledge tracing (Soller, & Stevens, 2008; von Davier, & Halpin, 2013). Current research suggests that by using computer agents in a collaborative problem solving task, students are able to show their collaborative skills at least at the level of that of their peers who collaborate with human partners. Although human-to-agent interaction might not be regarded as equal to human-to-human collaboration, the advancing technology of computer agents makes the use of avatars a viable way to simulate collaboration, and this can offer researchers more control than is available with real human collaboration (OECD, 2013b; Rosen, & Wolf, 2014). However, each approach to assessment of collaboration still involves limitations and challenges that must be considered in the design of the assessments. Further research can continue to establish comprehensive validity evidence and generalization of findings both in H-A and H-H collaborative problem solving settings.
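Of the psychometric methods mentioned above, Bayesian knowledge tracing has a particularly compact core. A minimal sketch of its standard update rule follows; the parameter values are chosen purely for illustration, and applying the model to collaborative process data would require the kind of further development called for above.

```python
def bkt_update(p_known: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.15) -> float:
    """One step of standard Bayesian knowledge tracing.

    `p_known` is the prior probability that the student has the skill; `slip`,
    `guess`, and `learn` are illustrative parameter values. Returns the
    posterior probability after observing one scored act.
    """
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # Account for the chance the skill was learned during this opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3
for observation in (True, True, False, True):
    p = bkt_update(p, observation)
print(round(p, 3))
```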
REFERENCES Adejumo, G., Duimering, R. P., & Zhong, Z. (2008). A balance theory approach to group problem solving. Social Networks, 30(1), 83–99. doi:10.1016/j.socnet.2007.09.001 Avouris, N., Dimitracopoulou, A., & Komis, V. (2003). On analysis of collaborative problem solving: An object-oriented approach. Computers in Human Behavior, 19(2), 147–167. doi:10.1016/S0747-5632(02)00056-0 Barth, C. M., & Funke, J. (2010). Negative affective environments improve complex solving performance. Cognition and Emotion, 24(7), 1259–1268. doi:10.1080/02699930903223766 Biswas, G., Jeong, H., Kinnebrew, J. S., Sulcer, B., & Roscoe, A. R. (2010). Measuring self-regulated learning skills through social interactions in a teachable agent environment. Research and Practice in Technology-Enhanced Learning, 5(2), 123–152. doi:10.1142/S1793206810000839 Biswas, G., Leelawong, K., Schwartz, D., Vye, N., & The Teachable Agents Group at Vanderbilt. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19(3-4), 363–392. doi:10.1080/08839510590910200 Cai, Z., Graesser, A. C., Forsyth, C., Burkett, C., Millis, K., Wallace, P., & Butler, H. et al. (2011). Trialog in ARIES: User input assessment in an intelligent tutoring system. In W. Chen & S. Li (Eds.), Proceedings of the 3rd IEEE International Conference on Intelligent Computing and Intelligent Systems (pp. 429-433). Guangzhou: IEEE Press. Chiu, M. M., & Khoo, L. (2003). Rudeness and status effects during group problem solving: Do they bias evaluations and reduce the likelihood of correct solutions? Journal of Educational Psychology, 95(3), 506–523. doi:10.1037/0022-0663.95.3.506
Chung, G. K. W. K., O’Neil, H. F. Jr, & Herl, H. E. (1999). The use of computer-based collaborative knowledge mapping to measure team processes and team outcomes. Computers in Human Behavior, 15(3-4), 463–494. doi:10.1016/ S0747-5632(99)00032-1 Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press. doi:10.1017/ CBO9780511620539 Cooke, N. J., Kiekel, P. A., Salas, E., Stout, R., Bowers, C., & Cannon-Bowers, J. (2003). Measuring team knowledge: A window to the cognitive underpinnings of team performance. Group Dynamics, 7(3), 179–219. doi:10.1037/10892699.7.3.179 D’Mello, S. K., Dowell, N., & Graesser, A. (2011). Does it really matter whether students’ contributions are spoken versus typed in an intelligent tutoring system with natural language? Journal of Experimental Psychology. Applied, 17(1), 1–17. doi:10.1037/a0022674 PMID:21443377 D’Mello, S. K., & Graesser, A. C. (2012). Dynamics of affective states during complex learning. Learning and Instruction, 22(2), 145–157. doi:10.1016/j.learninstruc.2011.10.001 Davey, T., Ferrara, S., Holland, P., Shavelson, R., Webb, N., & Wise, L. (2015). Psychometric Considerations for the Next Generation Performance Assessment. Washington, DC: Center for K-12 Assessment & Performance Management, Educational Testing Service. Dillenbourg, P. (Ed.). (1999). Collaborative learning: Cognitive and computational approaches. Amsterdam, NL: Pergamon, Elsevier Science. Dillenbourg, P., & Traum, D. (2006). Sharing solutions: Persistence and grounding in multi-modal collaborative problem solving. Journal of the Learning Sciences, 15(1), 121–151. doi:10.1207/ s15327809jls1501_9
Dorsey, D., Russell, S., Keil, C., Campbell, G., van Buskirk, W., & Schuck, P. (2009). Measuring teams in action: Automated performance measurement and feedback in simulation-based training. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 351–381). New York, NY: Routledge. Erkens, G., & Janssen, J. (2008). Automatic coding of online collaboration protocols. International Journal of Computer-Supported Collaborative Learning, 3(4), 447–470. doi:10.1007/s11412008-9052-6 Fiore, S., Rosen, M., Smith-Jentsch, K., Salas, E., Letsky, M., & Warner, N. (2010). Toward an understanding of macrocognition in teams: Predicting process in complex collaborative contexts. The Journal of the Human Factors and Ergonomics Society, 53(2), 203–224. doi:10.1177/0018720810369807 Fiore, S., & Schooler, J. W. (2004). Process mapping and shared cognition: Teamwork and the development of shared problem models. In E. Salas & S. M. Fiore (Eds.), Team cognition: Understanding the factors that drive process and performance (pp. 133–152). Washington, DC: American Psychological Association. doi:10.1037/10690-007 Foltz, P. W., & Martin, M. J. (2008). Automated communication analysis of teams. In E. Salas, G. F. Goodwin, & S. Burke (Eds.), Team effectiveness in complex organizations and systems: Cross-disciplinary perspectives and approaches (pp. 411–431). New York: Routledge. Franklin, S., & Graesser, A. C. (1996). Is it an agent or just a program? A taxonomy for autonomous agents. Proceedings of the Agent Theories, Architectures, and Languages Workshop (pp. 21-35). Berlin: Springer-Verlag.
Funke, J. (2010). Complex problem solving: A case for complex cognition? Cognitive Processing, 11(2), 133–142. doi:10.1007/s10339-009-0345-0 PMID:19902283 Graesser, A. C., Foltz, P., Rosen, Y., Forsyth, C., & Germany, M. (in press). Challenges of assessing collaborative problem solving. In B. Csapo, J. Funke, & A. Schleicher (Eds.), The nature of problem solving. OECD Series. Graesser, A. C., Jeon, M., & Dufty, D. (2008). Agent technologies designed to facilitate interactive knowledge construction. Discourse Processes, 45(4), 298–322. doi:10.1080/01638530802145395 Graesser, A. C., Lu, S., Jackson, G. T., Mitchell, H., Ventura, M., Olney, A., & Louwerse, M. M. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36(2), 180–193. doi:10.3758/BF03195563 PMID:15354683 Graesser, A. C., & McDaniel, B. (2008). Conversational agents can provide formative assessment, constructive learning, and adaptive instruction. In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 85–112). New York, NY: Routledge. Griffin, P., Care, E., & McGaw, B. (2012). The changing role of education and schools. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and teaching 21st century skills (pp. 1–15). Heidelberg: Springer. doi:10.1007/978-94-007-2324-5_1 Hsieh, I.-L., & O’Neil, H. F. Jr. (2002). Types of feedback in a computer-based collaborative problem solving group task. Computers in Human Behavior, 18(6), 699–715. doi:10.1016/S07475632(02)00025-0 Israel Ministry of Education. (2011). Adapting the educational system to the 21st century. Jerusalem, Israel: Ministry of Education.
Johnson, L. W., & Valente, A. (2008). Tactical language and culture training systems: Using artificial intelligence to teach foreign languages and cultures. In M. Goker, & K. Haigh (Eds.), Proceedings of the Twentieth Conference on Innovative Applications of Artificial Intelligence (pp. 1632-1639). Menlo Park, CA: AAAI Press. Jurafsky, D., & Martin, J. (2008). Speech and language processing. Englewood, NJ: Prentice Hall. Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in Computer-Supported Collaborative Learning environments: A review of the research. Computers in Human Behavior, 19(3), 335–353. doi:10.1016/S0747-5632(02)00057-2 Landauer, T., McNamara, D. S., Dennis, S., & Kintsch, W. (2007). Handbook of Latent Semantic Analysis. Mahwah, NJ: Erlbaum. Laurillard, D. (2009). The pedagogical challenges to collaborative technologies. International Journal of Computer-Supported Collaborative Learning, 4(1), 5–20. doi:10.1007/s11412-008-9056-2 Leelawong, K., & Biswas, G. (2008). Designing Learning by Teaching Systems: The Betty’s Brain System. International Journal of Artificial Intelligence in Education, 18(3), 181–208. Litman, D. J., Rose, C. P., Forbes-Riley, K., VanLehn, K., Bhembe, D., & Silliman, S. (2006). Spoken versus typed human and computer dialogue tutoring. International Journal of Artificial Intelligence in Education, 16, 145–170. Loughry, M. L., Ohland, M. W., & Moore, D. D. (2007). Development of a theory-based assessment of team member effectiveness. Educational and Psychological Measurement, 67(3), 505–524. doi:10.1177/0013164406292085
Mayer, R. E., & Wittrock, M. C. (1996). Problem-solving transfer. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 47–62). New York: Macmillan Library Reference USA, Simon & Schuster Macmillan. Mayer, R. E., & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander & P. Winne (Eds.), Handbook of educational psychology (2nd ed.; pp. 287–304). Mahwah, NJ: Lawrence Erlbaum Associates. McNamara, D. S., O'Reilly, T., Rowe, M., Boonthum, C., & Levinstein, I. B. (2007). iSTART: A web-based tutor that teaches self-explanation and metacognitive reading strategies. In D. S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 397–421). Mahwah, NJ: Erlbaum. Millis, K., Forsyth, C., Butler, H., Wallace, P., Graesser, A. C., & Halpern, D. (2011). Operation ARIES! A serious game for teaching scientific inquiry. In M. Ma, A. Oikonomou, & J. Lakhmi (Eds.), Serious games and edutainment applications (pp. 169–195). London: Springer-Verlag. doi:10.1007/978-1-4471-2161-9_10 Mitchell, R., & Nicholas, S. (2006). Knowledge creation in groups: The value of cognitive diversity, transactive memory and open-mindedness norms. Electronic Journal of Knowledge Management, 4(1), 64–74. National Assessment Governing Board. (2013). Technology and engineering literacy framework for the 2014 National Assessment of Educational Progress. Washington, DC: National Assessment Governing Board. National Research Council. (2011). Assessing 21st century skills. Washington, DC: National Academies Press. National Research Council. (2013). New directions in assessing performance of individuals and groups: Workshop summary. Washington, DC: National Academies Press.
Nelson, L. (1999). Collaborative problem-solving. In C. M. Reigeluth (Ed.), Instruction design theories and models (pp. 241–267). Mahwah, NJ: Lawrence Erlbaum Associates. O’Neil, H. F. Jr, Chen, H. H., Wainess, R., & Shen, C. Y. (2008). Assessing problem solving in simulation games. In E. L. Baker, J. Dickieson, W. Wulfeck, & H. F. O’Neil (Eds.), Assessment of problem solving using simulations (pp. 157–176). Mahwah, NJ: Lawrence Erlbaum Associates. O’Neil, H. F. Jr, & Chuang, S. H. (2008). Measuring collaborative problem solving in low-stakes tests. In E. L. Baker, J. Dickieson, W. Wulfeck, & H. F. O’Neil (Eds.), Assessment of problem solving using simulations (pp. 177–199). Mahwah, NJ: Lawrence Erlbaum Associates. O’Neil, H. F. Jr, Chuang, S. H., & Baker, E. L. (2010). Computer-based feedback for computerbased collaborative problem solving. In D. Ifenthaler, P. Pirnay-Dummer, & N. M. Seel (Eds.), Computer-based Diagnostics and Systematic Analysis of Knowledge (pp. 261–279). New York: Springer-Verlag. doi:10.1007/978-1-4419-56620_14 O’Neil, H. F. Jr, Chung, G. K. W. K., & Brown, R. (1997). Use of networked simulations as a context to measure team competencies. In H. F. O’Neil Jr., (Ed.), Workforce readiness: Competencies and assessment (pp. 411–452). Mahwah, NJ: Lawrence Erlbaum Associates. OECD. (2013a). OECD skills outlook 2013: First results from the survey of adult skills. OECD Publishing. OECD. (2013b). PISA 2015 Collaborative Problem Solving framework. OECD Publishing. Rimor, R., Rosen, Y., & Naser, K. (2010). Complexity of social interactions in collaborative learning: The case of online database environment. Interdisciplinary Journal of E-Learning and Learning Objects, 6, 355–365.
Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem-solving. In C. E. O’Malley (Ed.), Computer-supported collaborative learning (pp. 69–97). Berlin: Springer-Verlag. doi:10.1007/9783-642-85098-1_5 Rosen, Y. (2014a). Assessing collaborative problem solving through computer agent technologies. In M. Khosrow-Pour (Ed.), Encyclopedia of information science and technology (3rd ed.; pp. 94-102). Hershey, PA: Information Science Reference, IGI Global. Rosen, Y. (2014b). Comparability of conflict opportunities in human-to-human and humanto-agent online collaborative problem solving. Technology Knowledge and Learning, 19(1-2), 147–174. doi:10.1007/s10758-014-9229-1 Rosen, Y., & Foltz, P. (2014). Assessing collaborative problem solving through automated technologies. Research and Practice in Technology Enhanced Learning, 9(3), 389–410. Rosen, Y., & Rimor, R. (2009). Using collaborative database to enhance students’ knowledge construction. Interdisciplinary Journal of E-Learning and Learning Objects, 5, 187–195. Rosen, Y., & Rimor, R. (2013). Teaching and assessing problem solving in online collaborative environment. In R. Hartshorne, T. Heafner, & T. Petty (Eds.), Teacher education programs and online learning tools: Innovations in teacher preparation (pp. 82-97). Hershey, PA: Information Science Reference, IGI Global. doi:10.4018/9781-4666-1906-7.ch005 Rosen, Y., & Tager, M. (2013). Computer-based assessment of collaborative problem-solving skills: Human-to-agent versus human-to-human approach. Boston, MA: Pearson Education.
Rosen, Y., & Wolf, I. (2014). Learning and Assessing Collaborative Problem Solving Skills. Paper presented at the International Society for Technology in Education (ISTE) conference, Atlanta, GA. Roseth, C. J., Fang, F., Johnson, D. W., & Johnson, R. T. (2006, April). Effects of cooperative learning on middle school students: A meta-analysis. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA. Salomon, G., & Globerson, T. (1989). When teams do not function the way they ought to. International Journal of Educational Research, 13(1), 89–99. doi:10.1016/0883-0355(89)90018-9 Scardamalia, M. (Ed.). (2002). Collective cognitive responsibility for the advancement of knowledge. Chicago, IL: Open Court. Soland, J., Hamilton, L. S., & Stecher, B. M. (2013, November). Measuring 21st century competencies: Guidance for educators. Santa Monica, CA: RAND Corporation. Soller, A., & Stevens, R. (2008). Applications of stochastic analyses for collaborative learning and cognitive assessment. In G. R. Hancock & K. M. Samuelson (Eds.), Advances in latent variable mixture models (pp. 109–111). Charlotte, NC: Information Age Publishing. Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press. Stewart, C. O., Setlock, L. D., & Fussell, S. R. (2007). Conversational argumentation in decision-making: Chinese and U.S. participants in face-to-face and instant-messaging interactions. Discourse Processes, 44(2), 113–139. doi:10.1080/01638530701498994
Trötschel, R., Hüffmeier, J., Loschelder, D. D., Schwartz, K., & Gollwitzer, P. M. (2011). Perspective taking as a means to overcome motivational barriers in negotiations: When putting oneself into the opponent’s shoes helps to walk toward agreements. Journal of Personality and Social Psychology, 101(4), 771–790. doi:10.1037/ a0023801 PMID:21728447 Uline, C., Tschannen-Moran, M., & Perez, L. (2003). Constructive conflict: How controversy can contribute to school improvement. Teachers College Record, 105(5), 782–816. doi:10.1111/1467-9620.00268 U.S. Department of Education. (2010). Transforming American education – Learning powered by technology: National Education Technology Plan 2010. Washington, DC: Office of Educational Technology, U.S. Department of Education. VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31(1), 3–62. doi:10.1080/03640210709336984 PMID:21635287 Von Davier, A. A., & Halpin, P. F. (2013, December). Collaborative problem solving and the assessment of cognitive skills: Psychometric considerations. Research Report ETS RR-13-41. Educational Testing Service. Webb, N. M. (1995). Group collaboration in assessment: Multiple objectives, processes, and outcomes. Educational Evaluation and Policy Analysis, 17(2), 239–261. doi:10.3102/01623737017002239 Webb, N. M., Nemer, K. M., Chizhik, A. W., & Sugrue, B. (1998). Equity issues in collaborative group assessment: Group composition and performance. American Educational Research Journal, 35(4), 607–651. doi:10.3102/00028312035004607
Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46(1), 71–95. doi:10.1016/j.compedu.2005.04.003 Wildman, J. L., Shuffler, M. L., Lazzara, E. H., Fiore, S. M., Burke, C. S., Salas, E., & Garven, S. (2012). Trust development in swift starting action teams: A multilevel framework. Group & Organization Management, 37(2), 138–170. doi:10.1177/1059601111434202 Zhang, J. (1998). A distributed representation approach to group problem solving. Journal of the American Society for Information Science, 49(9), 801–809. doi:10.1002/(SICI)10974571(199807)49:93.0.CO;2-Q
ADDITIONAL READING Avery Gomez, E., Wu, D., & Passerini, K. (2010). Computer-supported team-based learning: The impact of motivation, enjoyment and team contributions on learning outcomes. Computers & Education, 55(1), 378–390. doi:10.1016/j. compedu.2010.02.003 Baker, M., & Lund, K. (1997). Promoting reflective interactions in a CSCL environment. Journal of Computer Assisted Learning, 13(3), 175–193. doi:10.1046/j.1365-2729.1997.00019.x Barron, B. (2000). Achieving coordination in collaborative problem-solving groups. Journal of the Learning Sciences, 9(4), 403–436. doi:10.1207/ S15327809JLS0904_2 Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307–359. doi:10.1207/S15327809JLS1203_1
Baylor, A. L., & Kim, Y. (2005). Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education, 15, 95–115. Brannick, M. T., Prince, A., Prince, C., & Salas, E. (1995). The measurement of team process. Human Factors, 37(3), 641– 651. doi:10.1518/001872095779049372 PMID:11536716 Brannick, M. T., & Prince, C. (1997). An overview of team performance measurement. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory methods and applications (pp. 3–16). Mahwah, NJ: Lawrence Erlbaum Associates. Brodbeck, F. C., & Greitemeyer, T. (2000). Effects of individual versus mixed individual and group experience in rule induction on group member learning and group performance. Journal of Experimental Social Psychology, 36(6), 621–648. doi:10.1006/jesp.2000.1423 Cohen, E. G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64(1), 1–35. doi:10.3102/00346543064001001 Collazos, C. A., Guerrero, L. A., Pino, J. A., Renzi, S., Klobas, J., Ortega, M., & Bravo, C. et al. (2007). Evaluating Collaborative Learning Processes using System-based Measurement. Journal of Educational Technology & Society, 10(3), 257–274. Crowston, K., Rubleske, J., & Howison, J. (2006). Coordination theory: A ten-year retrospective. In P. Zhang & D. Galletta (Eds.), Human-computer interaction in management information systems (pp. 120–138). Armonk, NY: M.E. Sharpe.
De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated information processing in group judgment and decision making. Personality and Social Psychology Review, 12(1), 22–49. doi:10.1177/1088868307304092 PMID:18453471 Dehler, J., Bodemer, D., Buder, J., & Hesse, F. W. (2011). Guiding knowledge communication in CSCL via group knowledge awareness. Computers in Human Behavior, 27(3), 1068–1078. doi:10.1016/j.chb.2010.05.018 Dönmez, P., Rosé, C. P., Stegmann, K., Weinberger, A., & Fischer, F. (2005). Supporting CSCL with automatic corpus analysis technology. In T. Koschmann, D. Suthers, & T. W. Chan (Eds.), Proceedings of the International Conference on Computer Supported Collaborative Learning – CSCL 2005 (pp. 125–134). Taipei, Taiwan: Erlbaum. doi:10.3115/1149293.1149310 Dyer, J. L. (2004), The measurement of individual and unit expertise. In J. W. Ness, V. Tepe, & D. R. Ritzer (Eds.), The Science and Simulation of Human Performance (Advances in Human Performance and Cognitive Engineering Research, Volume 5) (pp.11-124). Emerald Group Publishing Limited. doi:10.1016/S1479-3601(04)05001-5 Fall, R., Webb, N., & Chudowsky, N. (1997). Group discussion and large-scale language arts assessment: Effects on students’ comprehension. CSE Technical Report 445. Los Angeles, CRESST. Fischer, F., Kollar, I., Mandl, H., & Haake, J. (Eds.). (2007). Scripting computer-supported collaborative learning: Cognitive, computational, and educational perspectives. New York, NY: Springer. doi:10.1007/978-0-387-36949-5
Graesser, A. C., Lu, S., Jackson, G. T., Mitchell, H., Ventura, M., Olney, A., & Louwerse, M. M. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36(2), 180–193. doi:10.3758/BF03195563 PMID:15354683 Hmelo-Silver, C. E., Chinn, C. A., O’Donnell, A. M., & Chan, C. (Eds.). (2013). The International handbook of collaborative learning. New York, NY: Routledge. Janetzko, D., & Fischer, F. (2003). Analyzing sequential data in computer-supported collaborative learning. Journal of Educational Computing Research, 28(4), 341–353. doi:10.2190/805XVG4A-DNND-9NTC Jarvenoja, H., & Jarvela, S. (2009). Emotion control in collaborative learning situations – Do students regulate emotions evoked from social challenges? The British Journal of Educational Psychology, 79(3), 463–481. doi:10.1348/000709909X402811 PMID:19208290
Qin, Z., Johnson, D. W., & Johnson, R. T. (1995). Cooperative versus competitive efforts and problem solving. Review of Educational Research, 65(2), 129–143. doi:10.3102/00346543065002129 Roschelle, J., & Teasley, S. (1995). The construction of shared knowledge in collaborative problem solving. In C. E. O’Malley (Ed.), Computer supported collaborative learning (pp. 69–97). Heidelberg: Springer. doi:10.1007/978-3-64285098-1_5 Salomon, G. (Ed.). (1993). Distributed cognitions. Cambridge University Press. Schoor, C., & Bannert, M. (2011). Motivation in a computer-supported collaborative learning scenario and its impact on learning activities and knowledge acquisition. Learning and Instruction, 21(4), 560–573. doi:10.1016/j.learninstruc.2010.11.002
Johnson, D., Johnson, R., & Holubec, E. (1998). Cooperation in the classroom. Boston: Allyn and Bacon.
Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297–323. doi:10.1007/s11251011-9174-5
Larson, J. R. Jr, & Christensen, C. (1993). Groups as problem-solving units: Toward a new meaning of social cognition. The British Journal of Social Psychology, 32(1), 5–30. doi:10.1111/j.2044-8309.1993.tb00983.x
Van Boxtel, C., van der Linden, J., & Kanselaar, G. (2000). Collaborative learning tasks and the elaboration of conceptual knowledge. Learning and Instruction, 10(4), 311–330. doi:10.1016/ S0959-4752(00)00002-5
Mäkitalo, K., Weinberger, A., Häkkinen, P., Järvelä, S., & Fischer, F. (2005). Epistemic cooperation scripts in online learning environments: Fostering learning by reducing uncertainty in discourse? Computers in Human Behavior, 21(4), 603–622. doi:10.1016/j.chb.2004.10.033
Webb, N. M., & Palincsar, A. S. (1996). Group processes in the classroom. In D. Berliner & R. Calfee (Eds.), Handbook of Educational Psychology (pp. 841–873). New York: Macmillan.
Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26(4), 506–515. doi:10.1016/j.chb.2009.08.007 Weinberger, A., Stegmann, K., Fischer, F., & Mandl, H. (2007). Scripting argumentative knowledge construction in computer-supported learning environments. In F. Fischer, I. Kollar, H. Mandl, & J. Haake (Eds.), Scripting computer-supported communication of knowledge—Cognitive, computational and educational perspectives (pp. 191–211). New York: Springer. doi:10.1007/9780-387-36949-5_12 Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686–688. doi:10.1126/science.1193147 PMID:20929725
KEY TERMS AND DEFINITIONS Agent: Either a human or a computer-simulated participant in a collaborative problem solving group. Collaboration: Coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem. Collaborative Problem Solving: The capacity of an individual to effectively engage in a group process whereby two or more agents attempt to solve a problem by sharing knowledge and understanding, organizing the group work and monitoring the progress, taking actions to solve the problem, and providing constructive feedback to group members.
Computer Agent: An avatar with a preprogrammed profile, actions, and communication. Computer agents can be capable of generating goals, performing actions, communicating messages, sensing environment, adapting to changing environments, and learning. Conflict Situation: Situation in which there are disagreements between the team members on a solution as reflected in communication or actions taken by the team members. Openness: The degree to which a problem is “well-defined” (e.g., all the information is at hand for the problem solver) vs. “ill-defined” (e.g., the problem solver must discover or generate new information in order for the problem to be solved). Perspective Taking: the ability to place oneself in another’s position, which can lead to adaptation and to modification of communication to take the other’s perspective into consideration. Problem Solving: Cognitive processing directed at achieving a goal when no solution method is obvious to the problem solver. Problem Space: The space in which the actions are carried out to solve the problem. Can be explicitly or implicitly visible to team members. Referentiality: A problem’s context may have high referentiality to the outside world and real-world contexts or, at the other end of the spectrum, a low referentiality with little reference to external knowledge. Semantic Richness: The degree to which the problem provides a rich problem context that relates to the external world. Symmetry of Roles: The degree to which team members are assigned similar or different roles in a problem scenario. Symmetry of Status: The degree to which the status of team members is the same or is of different rank (e.g., peers vs. supervisor and subordinate relationships).
Chapter 13
A Tough Nut to Crack:
Measuring Collaborative Problem Solving Lei Liu Educational Testing Service, USA
Alina A. von Davier Educational Testing Service, USA
Jiangang Hao Educational Testing Service, USA
Patrick Kyllonen Educational Testing Service, USA
Diego Zapata-Rivera Educational Testing Service, USA
ABSTRACT The purpose of our project is to explore the measurement of cognitive skills in the domain of science through collaborative problem solving tasks, to measure collaborative skills, and to gauge the feasibility of using game-like environments with avatar representation for assessing the relevant skills. We are comparing students' performance in two conditions. In one condition, students work individually with two virtual agents in a game-like task. In the second condition, dyads of students work collaboratively with two virtual agents in a similar game-like task through a chat box. Our research is motivated by the distributed nature of cognition, by extant research on computer-supported collaborative learning (CSCL), which has shown the great value of collaborative activities for learning, and by the Programme for International Student Assessment (PISA) framework. This chapter focuses on the development and implementation of a conceptual model to measure individuals' cognitive and social skills through collaborative activities.
INTRODUCTION Emerging conceptions of learning reveal that learning is not only a cognitive, but also a social and constructive process (Salomon & Perkins, 1998). Cognitive and social approaches to science learning have highlighted the importance
of discussion for helping students solve problems and achieve understanding. As a result, there is an urgent call for assessments to measure individual skills in collaborative settings. The field of computer-supported collaborative learning (CSCL) research deals with issues concerning collaboration, learning processes, and the use of
DOI: 10.4018/978-1-4666-9441-5.ch013
Copyright © 2016, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
computers. CSCL environments provide a technology-enhanced setting in which students work on group tasks and produce a collective product or response, but the assessment component does not account for the rich interactions among team members when evaluating students' problem solving skills. In other words, most CSCL research focuses on exploring the important impact of collaboration on learning; however, there is little research on how to fuse the quality of the collaborative problem solving process with the outcome of problem solving in assessment. The purpose of our project is to explore the measurement of cognitive skills in the domain of science through collaborative problem solving tasks, measure the collaborative skills, and gauge the potential feasibility of using game-like environments with avatar representation for the purposes of assessing the relevant skills. In other words, our research examines whether the nature and quality of students' collaboration affects their cognitive understanding of scientific phenomena and their ability to solve meaningful science problems. We are comparing students' performance in two conditions. In one condition, students work individually with two virtual agents in a game-like task. In the second condition, dyads of students work collaboratively with two virtual agents in a similar game-like task through a chat box. Our research is motivated by the distributed nature of cognition and extant CSCL research, which has shown the great value of collaborative activities for learning, and by the Programme for International Student Assessment (PISA) framework for collaboration (Collaborative Problem Solving Framework, Graesser & Foltz, 2013). This chapter focuses on the development and implementation of a conceptual model to measure individuals' cognitive and social skills through collaborative activities. The overall hypothesis of the study is that providing collaboration opportunities may promote greater integration of knowledge and thus result in better student performance in an assessment.
COLLABORATIVE PROBLEM SOLVING SKILLS FRAMEWORK Collaborative Interactions and Cognition Collaborative learning techniques have been used extensively by educators at all levels, as research suggests that active student participation in small-group interactions is critical to effective learning (Chinn, O'Donnell, & Jinks, 2000). CSCL research in educational settings did not become prominent until relatively recently, and its rise has been partially intertwined with the widespread use of educational technology in the classroom (see Hmelo-Silver, Chinn, Chan, & O'Donnell, 2013). Within the domain of educational assessment, there has been strong recent interest in the evaluation of CPS as a social skill (Griffin, Care, & McGaw, 2012; Organization for Economic Co-operation and Development [OECD], 2013; von Davier & Halpin, 2013). However, social skills are not sufficient to define CPS, which also includes other essential components identified by collaborative learning research as key to knowledge construction, such as establishing shared goals, accommodating alternative perspectives to converge on ideas, and making regulative attempts to achieve goals. In our CPS assessment design, we recognize the importance of measuring both the social and the cognitive aspects of knowledge and skills that affect student performance when taking an assessment in a collaborative setting. A key question that has driven CSCL research is: How do learners develop a shared understanding of the task to be accomplished? This question is related to the process of knowledge co-construction. Knowledge co-construction, sometimes called shared cognition (Hatano & Inagaki, 1991; Wertsch, 1991), convergent conceptual change (Roschelle, 1996), or the construction of a joint problem space (Hmelo-Silver, Nagarajan, & Day, 2000; Roschelle, 1996), refers to the shared representation
that results from participants' negotiation and convergence on meanings. Co-construction of knowledge can occur in scientists' collaborative activities as well as in peer learning. Roschelle (1992, 1996) used conversational turn-taking patterns and the content of those turns to reveal that convergent conceptual change is achieved incrementally, interactively, and socially through participation in a collaborative activity. In addition, Roschelle (1996) suggested that when learners are asked to work together on joint problems, they face the challenges of establishing common references, resolving discrepancies in understanding, negotiating issues of individual and collective action, and coming to joint understanding. Roschelle (1996) reported a study in which convergent conceptual change occurred when students collaboratively used a computer-based simulation, the Envisioning Machine (EM), to learn about two physical concepts: velocity and acceleration. In the EM study, Roschelle (1996) applied Smith et al.'s knowledge reconstruction model (Smith et al., 1993) to explain the process of collaborative conceptual change. Specifically, students restructured their "p-prims", such as commonsense metaphors, to make meaning of a scientific concept. In other words, the students successfully understood a scientific concept without using the standard scientific language. Roschelle's study indicated that small-group interactions may enhance individual cognition by helping learners restructure what they already know and develop new knowledge. To explore the process of collaborative interactions, Scardamalia (2002) describes a program of cognitive, pedagogical, and technological affordances that lead groups of learners to achieve collective cognitive responsibility. This canonical process of knowledge building also attributes success in such learning communities to a set of 12 determinants that include, among others, epistemic agency (participants offer their ideas and negotiate a fit with the ideas of the group), democratizing knowledge (all participants contribute to community goals and take pride in knowledge advances
of the group), and symmetric knowledge advancement (expertise is distributed among the group and knowledge is exchanged regularly between group members). Taken collectively, we draw from this literature a set of joint activity indicators that are hypothesized to lead to convergent and non-convergent adaptation. Convergent adaptation is illustrated by conversational dynamics in which 1) group members share approximately equal speaking time; 2) turn-taking patterns indicate a level of synergy, as demarcated by group members finishing each other's sentences; 3) members all contribute to the goals of the group collaboratively; 4) individual ideas are negotiated and decisions are made collectively; and 5) group members distribute expertise across the group. Conversely, in non-convergent adaptation, 1) group members do not share equal speaking time; 2) turn-taking patterns show that members rarely finish each other's sentences; 3) members contribute to the goals in a merely cooperative manner; 4) individual ideas are not negotiated and decisions are made unilaterally; and 5) expertise is localized and not distributed. Finally, Duschl and Osborne (2002) suggest that opportunities for discussion and argumentation in collaborative activities could aid students in considering and evaluating other perspectives and thus help them revise their original ideas. Peer collaboration provides opportunities for scientific argumentation, which involves proposing, supporting, criticizing, evaluating, and refining ideas about a scientific subject (some of which may conflict or compete) and engages students in using evidence and theory to support or refute ideas or claims (Simon, Erduran, & Osborne, 2002). Therefore, collaborative discussion provides a rich environment for mutual discovery, reciprocal feedback, and frequent sharing of ideas (Damon & Phelps, 1989). Taken together, the literature shows that collaborative discussion among students provides opportunities to increase individual cognition as well as group cognition. As summarized in Liu and Hmelo-Silver (2010),
collaborative interactions may contribute to enhancing cognition by engaging students in deep processing of knowledge restructuring and revision. If a particular instructional tool can increase these interactions and provide opportunities for constructive processing, such as negotiation, among dyads or groups, the tool may lead to successful collaborative conceptual change.
Collaborative Problem Solving (CPS) Assessment Emerging theories of learning and thinking informed by complex systems analysis of group interactions, which often integrate the roles of context, experience, and active engagement of learners as members of collaborating groups, play an important role in designing and researching new means for learning and teaching. Some research in the field specifically investigates CPS assessment methods (e.g., OECD, 2013; Rosen & Tager, 2013; von Davier & Halpin, 2013). As Rosen and Tager (2013) proposed, a number of dimensions can affect the type of collaboration and the processes used during team problem solving, including team size, social hierarchies, and agent type (human vs. computer agents). These dimensions have a significant impact on the CPS process and its outcome; it is therefore essential to consider these effects when designing a CPS assessment. von Davier and Halpin (2013) also reviewed collaboration research in various disciplines and proposed a framework for an educational assessment of cognitive skills with CPS tasks, in addition to outlining a novel psychometric approach for assessing individuals' cognitive skills using tasks that require a CPS process. In their review, they concluded that, like any new type of assessment, CPS assessments will require consideration of how to satisfy traditional assessment requirements, such as reliability, validity, and comparability; however, little research exists on building tasks to assess cognitive skills
through CPS assessments. They therefore call for the development of CPS assessments that require data collection for calibration and validation, and they stress the importance of collecting data that support investigation of the quality of proposed psychometric models, in particular to validate the estimates of cognitive ability obtained from a CPS assessment against those obtained through the concurrent use of multiple-choice items. Both of the reviews mentioned above note that CPS assessment requires communication and therefore a suitable design for discourse management. Such a management system should allow test takers to create and maintain a shared understanding, to take action to advance a solution, and to handle conflict or change. PISA has decided that the 2015 assessment will include CPS as one of the key competencies to measure, placing each individual student in CPS situations where the team member(s) with whom the student has to collaborate are fully controlled, programmed computer agents. The PISA 2015 Draft Collaborative Problem Solving Framework that guides the development of the assessment has also been published (OECD, 2013). This decision will contribute to shifting large-scale assessment toward authentic assessments that require applying real-world skills such as CPS. However, due to the time limit of the PISA assessment, the collaborative process will be measured through multiple-choice responses to computer agents. In our study, which is a research project not limited by assessment time, we apply open-ended discussion between humans as well as with computer agents to explore alternative CPS assessment methods. We adopt the definition of CPS from the PISA 2015 Collaborative Problem Solving Framework (OECD, 2013, p. 6): "Collaborative problem solving competency is the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort
required to come to a solution and pooling their knowledge, skills and efforts to reach that solution." In this definition, CPS skills comprise two dimensions: social skills and cognitive skills. The social skills involve interacting with others to develop and reach a shared group goal by externalizing one's cognition (e.g., sharing ideas). The cognitive skills include the ability to internalize others' externalized cognition as well as to develop one's own cognition during the problem solving process (e.g., assimilating/accommodating ideas). Based on this definition, we developed a CPS framework to guide the design of our assessment, which measures the cognitive skills individuals need to solve a target problem and their social skills through collaborative interactions. However, unlike the PISA framework, we believe it is not easy to disentangle the cognitive and social aspects during the problem solving process. Therefore, in our CPS framework, we did not try to separate these two dimensions; instead, we focused on the integrated functions of both dimensions that support knowledge building. In addition, unlike PISA, which measures CPS as a separate construct, we explore the relationships between CPS and student performance in a complex science task. Our hypothesis is that team performance can be greater than the sum of the outputs of the individuals on the team. To test this hypothesis, we designed two versions of the assessment to compare students' performance when completing a task individually versus completing a similar task with a partner. In the individual version, participants were asked to complete a game-like task by themselves. In the collaborative version, participants were asked to work with a partner through a chat box, and system prompts were provided to guide the flow of the task as well as their collaboration. In the following section, we introduce how our framework was developed and how it was used to design our task. The two versions of the task are then introduced in detail.
A CPS Framework Based on a review of computer-supported collaborative learning (CSCL) research findings (Barron, 2003; Dillenbourg & Traum, 2006; Griffin, Care, & McGaw, 2011; von Davier & Halpin, 2013), the PISA 2015 Collaborative Problem Solving Framework (OECD, 2013), and evidence-centered design principles (Mislevy, Steinberg, Almond, & Lukas, 2006), we developed a conceptual model that documents a matrix of individual and social skills involved in collaborative problem solving and provides a basis for designing assessments of CPS. The individual cognitive skills are used to complete tasks independently of other team members. In this individual dimension of the CPS skills, contextualized to inquiry, we include the following cognitive skills: conceptual understanding and inquiry skills in science (e.g., data collection, data analysis, prediction making, and evidence-based reasoning). The second dimension in the CPS skills matrix is social skills, which are often acquired in social networks and can affect both group and individual performance. There are four major categories of social skills: sharing ideas, negotiating ideas, regulating problem-solving activities, and maintaining communication. The first category – sharing ideas – captures evidence of how individual group members bring divergent ideas into a collaborative conversation. For instance, participants may share their individual responses to assessment items and/or point out relevant resources that might help resolve a problem. The second category – negotiating ideas – captures evidence of the team's collaborative knowledge building and construction through negotiation. Its subcategories include agreeing/disagreeing with each other, requesting clarification, elaborating or rephrasing others' ideas, identifying gaps, and revising one's own ideas. The third category – regulating problem-solving activities – focuses on the collaborative regulation aspect of team discourse and includes subcategories such as identifying goals, evaluating teamwork, and checking understanding. The last category – maintaining a positive communication atmosphere – captures content-irrelevant social communication.
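One way to operationalize these social-skill categories when annotating chat turns is a simple coding scheme. The sketch below is illustrative only: the subcategory labels paraphrase the framework described above, and the data structure and function names are assumptions made for this example rather than the study's actual coding software.

```python
# Illustrative coding scheme for annotating chat turns with the four social
# CPS categories described above. Subcategory labels paraphrase the framework;
# the structure and names are assumptions made for this sketch.
CPS_SOCIAL_SKILLS = {
    "sharing_ideas": [
        "share_individual_response",
        "point_out_relevant_resource",
    ],
    "negotiating_ideas": [
        "agree_or_disagree",
        "request_clarification",
        "elaborate_or_rephrase_others_idea",
        "identify_gap",
        "revise_own_idea",
    ],
    "regulating_problem_solving": [
        "identify_goal",
        "evaluate_teamwork",
        "check_understanding",
    ],
    "maintaining_communication": [
        "task_irrelevant_social_talk",
    ],
}

def label_turn(turn_text, category, subcategory):
    """Attach a CPS code to one chat turn; fail if the code is not defined."""
    assert subcategory in CPS_SOCIAL_SKILLS[category]
    return {"text": turn_text, "category": category, "subcategory": subcategory}

example = label_turn("I think we need two more seismometers on the north side.",
                     "sharing_ideas", "share_individual_response")
```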
THE GAME-LIKE TASK AND STRUCTURED CPS SYSTEM PROMPTS We designed an online, scenario-based science task in a game-like setting using the CPS framework described above. There are two versions of the task. The major difference between them is that one version implements collaborative features (e.g., a chat box and chat history) and the other does not. Below we first introduce the individual version of the game-like task and then the collaborative version.
The Individual Version Game-Like Task We extended an existing game-like volcano task developed earlier at ETS (see details in Zapata-
Rivera et al., 2014) in which students work with two virtual agents (a virtual scientist and a virtual peer), via a dialogue engine, to resolve a complex science problem that involves predicting a volcanic eruption. Although most research on dialogues has been done in the area of intelligent tutoring systems, researchers have recently applied this approach to assessment (e.g., Operation ARIES!, Millis et al., 2011). We use a game-like task to engage students in higher-order thinking during problem solving, whether they work alone or with a partner. The volcano task, which was originally designed for middle school students, introduces students to factors related to volcanic eruptions and allows them to converse with virtual agents, place seismometers to collect data, analyze data, take notes, and make data-based predictions. These activities were designed to evaluate students' science inquiry skills. In our study, due to the nature of the crowdsourcing data collection, we converted the task to an adult version. The base task includes three major parts. At the beginning of the volcano scenario, test takers become the apprentice of Dr. Garcia, a (virtual) world-renowned volcanologist, and are introduced to task-relevant background knowledge (e.g.,
Figure 1. CPS skills
seismic events, and seismometers, which are tools used to collect seismic data) and to a simulation of the sequence of seismic events that typically occurs before a volcanic eruption. Dr. Garcia provides a brief introduction to volcanoes, including terms related to volcanic events, a sequence of volcanic seismic events that is often observed before a volcano erupts, how seismometers can be used to collect seismic data, and a warning system in the form of a table that maps evidence of volcanic seismic events to various alert levels. At the end of the introduction, several questions are asked to assess test takers' understanding of the introduction. Second, the test taker "joins" a field trip to collect data on a long-dormant composite volcano that has recently shown signs of seismic activity. On the trip, the test taker engages in an interactive cycle of collecting and interpreting seismic data using seismometers. The test taker can choose up to four seismometers and place them around the volcano, setting the data collection time for all seismometers (ranging from 2 days up to 2 months). After the seismic data are collected, the test taker has the opportunity to explore the data table(s) and select useful data ranges for making a prediction of the volcano alert level. Third, the test taker communicates with the two virtual agents to select the better of two notes presented and to make a prediction of the volcano alert level, using a dialogue engine similar to the one developed by Art Graesser and colleagues (Cai et al., 2011; Graesser et al., 2010). Several conversation-based tasks with the virtual agents gather additional information about decisions test takers make during data collection and about alert level predictions based on the data collected. For example, after making notes about the data collected by the seismometers, the test taker interacts with two virtual agents (Dr. Garcia and a student named Art) to review and compare one of his/her own notes with one of Art's notes. Art's note is created based on the test taker's notes and is used to gather additional evidence of the test taker's data collection skills. The test taker is asked to select the note that is
more useful for making a prediction. In this interaction, Dr. Garcia asks Art to share one of his notes, which happens to be similar to one of the human test taker's existing notes but uses data from more (or fewer) seismometers. The test taker is asked to compare these notes and select one for predicting the likelihood of a volcanic eruption. These interactions include two or three cycles in which the test taker receives limited feedback and has a chance to elaborate on the reasons for choosing a particular note. The other trialogue interactions deal with a mismatch between a prediction and the evidence used to support it. This scenario also includes several multiple-choice and constructed-response tasks (e.g., placing seismometers). In the individual version task, all students complete the whole task individually.
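The warning system described earlier in this section maps evidence of seismic events to alert levels. The sketch below shows what such a lookup could look like in code. It is an assumption-laden illustration: the event names, thresholds, and level labels are invented for the example and do not reproduce the task's actual table.

```python
# Illustrative sketch of an evidence-to-alert-level lookup of the kind
# described above. The rules and level labels are invented for this example.
ALERT_RULES = [
    ("red",    lambda ev: ev.get("harmonic_tremor", 0) > 0),
    ("orange", lambda ev: ev.get("long_period_events", 0) >= 10),
    ("yellow", lambda ev: ev.get("volcano_tectonic_events", 0) >= 5),
    ("green",  lambda ev: True),  # default: no unusual seismicity observed
]

def predict_alert_level(evidence):
    """Return the first alert level whose rule matches the collected evidence."""
    for level, rule in ALERT_RULES:
        if rule(evidence):
            return level

print(predict_alert_level({"volcano_tectonic_events": 7}))  # -> "yellow"
```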
The Collaborative Version Task In the collaborative setting, dyads of students work collectively to make a prediction of volcano alert level. The questions in the collaborative version task are exactly the same as in the individual version. Within the collaborative task, we designed structured system prompts to facilitate the collaborative discourse between dyad participants as extensive research has shown that the success of collaborative efforts does not occur by itself and requires adequate support (Johnson & Johnson, 2003; Rummel & Spada, 2005). The design of the prompt structure is consistent with existing literature on collaborative learning, such as rules for brainstorming (Sutton & Hargadon, 1996) and patterns of discourse found to be associated with more and less successful collaborative learning outcomes (Barron, 2003). Specifically, the system prompts require students to share, explain, evaluate, compare and contrast their individual scientific knowledge during their collaborative problem-solving process. In the following section, we introduce the collaborative version task and the system prompts in detail. At the beginning of the task, two students were matched as partners
if they logged into the task within a similar time range. Once matched with a partner, each student was prompted to introduce himself/herself to his/ her dyad partner in the chat box. For each assessment item in the collaborative task, cycles of system prompts are displayed to facilitate students’ collaborative discourse to communicate and respond to the questions collectively (see Figure 2). For example, in the first part of the task, students were prompted to discuss with each other to come up with a team response to questions about the seismic events. In the second part of the task, students were asked to discuss how many seismometers should be placed around the volcano and where. In the third part of the task, students were asked to compare and select the best note taken based on the seismic data collected through seismometers, and then make a decision on what alert level should be assigned. When communicating with virtual agents, dyads of students need to make group decisions on how to respond to the two virtual agents’ questions.
For each item, after the team members finish their discussion in the chat box, the system randomly selects one student’s response to submit it as their team response.
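Two of the collaborative mechanics just described, pairing students who log in within a similar time range and randomly selecting which partner's response is submitted as the team response, can be sketched as follows. The window length, data shapes, and function names are assumptions for this illustration, not the study's implementation.

```python
import random

# Illustrative sketch of dyad matching and random team-response selection.
def match_dyads(logins, window_seconds=120):
    """Pair students whose login times fall within the same short window."""
    waiting = []
    dyads = []
    for student_id, login_time in sorted(logins, key=lambda x: x[1]):
        if waiting and login_time - waiting[-1][1] <= window_seconds:
            partner = waiting.pop()
            dyads.append((partner[0], student_id))
        else:
            waiting.append((student_id, login_time))
    return dyads

def select_team_response(responses):
    """After discussion, randomly pick one partner's response as the team response."""
    return random.choice(list(responses.items()))

dyads = match_dyads([("s1", 0), ("s2", 45), ("s3", 400)])   # -> [("s1", "s2")]
team = select_team_response({"s1": "Alert level orange", "s2": "Alert level yellow"})
```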
System Prompts to Facilitate Collaborative Discourse For most item types (e.g., multiple-choice items, making notes when analyzing data), students are first prompted to input their individual responses. Then they are asked to share their individual responses and discuss them with their partners using the chat box. This system prompt is used to facilitate students’ sharing information/ideas, which is an aspect of collaboration (the first category in the CPS framework) and students’ assimilating and accommodating knowledge/perspective taking, another aspect of collaboration (the second category in the CPS framework). After the team discussion, students get a chance to revise their initial individual responses. Finally, one student
Figure 2. Self-instruction system prompt
from each dyad is randomly selected to submit a response on behalf of the team, while the other team member can view the response and provide input through the chat box. For other, more complex item types (e.g., constructed-response questions that explain the reasons for previous discrete items, or participation in conversations with the two virtual agents), considering the technical difficulty of programming and to reduce confusion in completing the task, we asked students to start their collaboration by discussing with each other how to respond to each item; one student is then randomly selected to input a response on behalf of the team. For example, for conversation-based items, the dialogue engine can currently handle only one input (i.e., the team response) to the virtual agents' questions. For the explanation items, requiring individual explanations would confuse participants about whose responses they should be explaining; asking for a team explanation made it clear that the team should explain the previously submitted team response. The chat history log captures students' collaborative discourse throughout the task. In addition, students' initial individual responses, revisions of those responses, and the team responses are logged in XML format with time stamps, which provides rich information about the sequences of students' problem-solving processes.
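To illustrate what a time-stamped response log of this kind could look like, the sketch below builds a small XML record in Python. The element and attribute names are hypothetical; they are not the actual schema used in the study.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Illustrative sketch of a time-stamped XML log entry; names are invented.
def log_response(student_id, item_id, phase, response_text):
    """Create one <response> element recording who answered what, and when."""
    entry = ET.Element("response", {
        "student": student_id,
        "item": item_id,
        "phase": phase,  # e.g., "initial", "revised", or "team"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    entry.text = response_text
    return entry

log = ET.Element("session", {"dyad": "dyad_042"})
log.append(log_response("s1", "q3", "initial", "Alert level yellow"))
log.append(log_response("s1", "q3", "revised", "Alert level orange"))
print(ET.tostring(log, encoding="unicode"))
```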
System Prompts and Other Task Features to Facilitate the Assessment Flow In addition to system prompts that facilitate students’ collaborative discourse, there are system prompts and other task features to facilitate the flow of completing the assessment. For example, if one player is ready to move on and click on “Next,” the partner player will see a system prompt to remind him/her that “Your partner has clicked “Next.” Please click “Next” to continue when you are finished viewing this slide.” Another example is that when collaborating with each other, if one
player fails to input any comments when a discussion is required, this player will get a reminder to contribute: “[Name of player], it is your turn to contribute to the conversation.” Finally, there are other collaborative task features that support students in viewing each other’s work. For example, when students are taking turns placing seismometers, one player can observe the other player’s action and vice versa.
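The flow-control nudges just described can be sketched as a small rule check. The reminder texts below are quoted from the description above, but the trigger conditions (such as a silence threshold) and function name are assumptions for this illustration.

```python
# Illustrative sketch of the assessment-flow prompts described above.
def flow_prompts(partner_clicked_next, seconds_since_last_chat, player_name,
                 silence_threshold=60):
    prompts = []
    if partner_clicked_next:
        prompts.append('Your partner has clicked "Next." Please click "Next" '
                       'to continue when you are finished viewing this slide.')
    if seconds_since_last_chat > silence_threshold:
        prompts.append(f"{player_name}, it is your turn to contribute to the conversation.")
    return prompts

print(flow_prompts(True, 75, "Jordan"))
```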
CROWDSOURCING DATA COLLECTION

Crowdsourcing provides a fast and cost-effective way to collect large amounts of data from human subjects in a relatively short period of time. Hundreds of participants can be recruited for highly interactive, computer-based tasks at marginal cost within a timeframe of months or even days (Kittur, Chi, & Suh, 2008). In our study, we used Amazon Mechanical Turk (AMT) as the crowdsourcing provider. To date, we have completed data collection for the individual version of the task (N = 486) and are finishing data collection for the collaborative version (N = 278 dyads). In the individual task condition, the gender distribution of participants is 59.8% male and 40.6% female; the racial distribution is 68.34% Caucasian, 14.61% Asian, 8.22% African American, 7.46% Hispanic/Latino, 0.91% American Indian, and 0.46% Hawaiian or Pacific Islander. In the collaborative task condition, the gender distribution is 50.34% male and 49.66% female; the racial distribution is 72.12% Caucasian, 7.36% Asian, 9.82% African American, 5.56% Hispanic/Latino, 1.64% American Indian, and 3.5% Hawaiian or Pacific Islander. In addition to the game-like task, all participants complete a science content test, a background questionnaire, and a personality questionnaire so that we can explore the impact of students' science knowledge, demographic background, and personality on their CPS skills.
PRELIMINARY DATA ANALYSIS AND RESULTS

Quantitative Analysis

To test our hypothesis that providing collaboration opportunities may promote greater integration of knowledge and result in better student performance in an assessment, we conducted a preliminary analysis comparing students' performance on seven questions in the first part of the task across the two conditions. Those questions assessed students' understanding of the seismic events related to the volcanic eruption and of the data patterns associated with those events. Analysis of the other parts of the task is still ongoing. Specifically, we compared data from four time points: students' individual responses in the individual version of the task; students' initial individual responses in the collaborative version; students' revised individual responses after the team discussion; and the dyads' team responses in the collaborative version. In this preliminary analysis, we calculated sum scores from the seven multiple-choice items in the task (note that we are still in the process of scoring the constructed-response items as well as the conversation-based items). We then conducted several comparisons. First, we compared students' individual responses in the individual version with students' initial individual responses in the collaborative version. Second, we compared students' revised individual responses after the discussion with their initial individual responses in the collaborative version. Third, we compared the dyads' team responses with students' revised individual responses in the collaborative version. We used t-tests to test these differences. Figure 3 plots the average scores at the four time points; the legend shows what each color represents, the dots are the means of the sum scores, and the error bars represent the standard error of the mean.
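A minimal sketch of these comparisons, assuming the scored responses are available as arrays of sum scores, might look like the following. The scipy t-test calls are standard, but the placeholder data, variable names, and the choice of paired versus independent tests are our assumptions rather than the study's actual analysis code.

```python
import numpy as np
from scipy import stats

# Placeholder sum scores (0-7) standing in for the four time points; the
# actual study data are not reproduced here.
rng = np.random.default_rng(0)
individual_condition = rng.integers(0, 8, size=486)   # single-player version
collab_initial = rng.integers(0, 8, size=556)          # initial individual responses
collab_revised = np.clip(collab_initial + rng.integers(0, 2, size=556), 0, 7)
team_response = rng.integers(2, 8, size=278)           # one score per dyad

# Individual condition vs. initial individual responses in the collaborative
# condition: different participants, so an independent-samples t-test.
t1, p1 = stats.ttest_ind(individual_condition, collab_initial)

# Initial vs. revised individual responses within the collaborative condition:
# the same participants, so a paired t-test.
t2, p2 = stats.ttest_rel(collab_initial, collab_revised)

# Team responses vs. revised individual responses; treated as independent here
# for simplicity, though the nesting of members within dyads is a modeling choice.
t3, p3 = stats.ttest_ind(team_response, collab_revised)

print(f"individual vs. collaborative initial: p = {p1:.3f}")
print(f"initial vs. revised (collaborative):  p = {p2:.3f}")
print(f"revised vs. team (collaborative):     p = {p3:.3f}")
```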
Figure 3. Average sum scores of responses from the single player version and from the CPS version
The corresponding numbers from the plot are listed in Table 1, and the p-values from the t-tests are presented in Table 2. From Figure 3 and Table 2, we identified three key findings. First, the initial individual responses (in green) from participants who took the collaborative version of the task were not statistically different from the responses of participants who took the single-player version (in red; p = 0.580 in Table 2). Second, the revised individual responses (in blue in Figure 3) from participants who took the collaborative version were significantly improved compared to their initial individual responses (in green; p < 0.001 in Table 2), indicating that the collaborative discussion through the chat box had a positive effect on participants' performance. Third, the average score of the team responses from the collaborative version (in black in Figure 3) was also significantly different from the initial individual responses (in red; p < 0.001 in Table 2).
Qualitative Analysis

In addition to the quantitative analysis comparing students' responses across the two conditions and at different time points, we also conducted case studies of the dyads' conversation logs from the chat box to illustrate why collaboration among students may promote their performance during the problem-solving process. Two themes emerged from the case studies: first, the quality of the discussion of different types of items varies within a dyad; second, the discussion of the same item varies across dyads. Students tended to have longer and deeper discussions on open-ended questions, for example,
Table 1. Means and standard deviations of individual response and team response scores in two conditions

Group                                                     N     Mean    SD      Max   Min
Individual Condition                                      486   4.796   0.063   7     0
Collaborative Condition: Initial Individual Response      556   4.842   0.053   7     1
Collaborative Condition: Revised Individual Response      556   5.147   0.049   7     1
Collaborative Condition: Team Response                    278   5.237   0.068   7     2
Table 2. The p-values from pairwise t-tests for different scores

                          Individual    Collaborative Condition:       Collaborative Condition:       Collaborative Condition:
                          Condition     Initial Individual Response    Revised Individual Response    Team Response
Individual Condition      X             0.580                          p