English · 693 [669] pages · 2021
Computer-Supported Collaborative Learning Series
Ulrike Cress Carolyn Rosé Alyssa Friend Wise Jun Oshima Editors
International Handbook of Computer-Supported Collaborative Learning
Computer-Supported Collaborative Learning Series Volume 19
Series Editor
Christopher Hoadley, Steinhardt School of Culture, Education, and Human Development, New York University, Brooklyn, NY, USA

Associate Editors
Jan van Aalst, Faculty of Education, University of Hong Kong, Hong Kong, Hong Kong
Isa Jahnke, Information Science & Learning Technologies, University of Missouri, Columbia, MO, USA
The Computer-Supported Collaborative Learning Book Series is for people working in the CSCL field. The scope of the series extends to 'collaborative learning' in its broadest sense; the term is used for situations ranging from two individuals performing a task together during a short period of time, to groups of 200 students following the same course and interacting via electronic mail. This variety also concerns the computational tools used in learning: elaborated graphical whiteboards support peer interaction, while more rudimentary text-based discussion forums are used for large-group interaction. The series will integrate issues related to CSCL such as collaborative problem solving, collaborative learning without computers, negotiation patterns outside collaborative tasks, and many other relevant topics. It will also cover computational issues such as models, algorithms, or architectures which support innovative functions relevant to CSCL systems. The edited volumes and monographs to be published in this series offer authors who have carried out interesting research work the opportunity to integrate various pieces of their recent work into a larger framework. Book proposals for this series may be submitted to the Publishing Editor: Melissa James. E-mail: [email protected]. All books in the series are available at a 25% discount to ISLS: the International Society of the Learning Sciences (http://www.isls.org).
More information about this series at http://www.springer.com/series/5814
Ulrike Cress • Carolyn Rosé • Alyssa Friend Wise • Jun Oshima Editors
International Handbook of Computer-Supported Collaborative Learning
Editors

Ulrike Cress
Leibniz-Institut für Wissensmedien, Tübingen, Germany

Carolyn Rosé
Language Technologies Institute and Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, USA

Alyssa Friend Wise
Department of Administration, Leadership, and Technology, New York University, New York, NY, USA

Jun Oshima
Faculty of Informatics, Shizuoka University, Hamamatsu-shi, Shizuoka, Japan
ISSN 1573-4552          ISSN 2543-0157 (electronic)
Computer-Supported Collaborative Learning Series
ISBN 978-3-030-65290-6          ISBN 978-3-030-65291-3 (eBook)
https://doi.org/10.1007/978-3-030-65291-3

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Part I: Foundations

Foundations, Processes, Technologies, and Methods: An Overview of CSCL Through Its Handbook
Ulrike Cress, Jun Oshima, Carolyn Rosé, and Alyssa Friend Wise

Theories of CSCL
Gerry Stahl and Kai Hakkarainen

A Conceptual Stance on CSCL History
Sten Ludvigsen, Kristine Lund, and Jun Oshima

An Overview of CSCL Methods
Cindy E. Hmelo-Silver and Heisawn Jeong

Conceptualizing Context in CSCL: Cognitive and Sociocultural Perspectives
Camillia Matuk, Kayla DesPortes, and Christopher Hoadley

Interrogating the Role of CSCL in Diversity, Equity, and Inclusion
Kimberley Gomez, Louis M. Gomez, and Marcelo Worsley

Sustainability and Scalability of CSCL Innovations
Nancy Law, Jianwei Zhang, and Kylie Peppler

Part II: Collaborative Processes

Communities and Participation
Yotam Hod and Stephanie D. Teasley

Collaborative Learning at Scale
Bodong Chen, Stian Håklev, and Carolyn Penstein Rosé

Argumentation and Knowledge Construction
Joachim Kimmerle, Frank Fischer, and Ulrike Cress

Analysis of Group Practices
Richard Medina and Gerry Stahl

Dialogism
Stefan Trausan-Matu, Rupert Wegerif, and Louis Major

Trialogical Learning and Object-Oriented Collaboration
Sami Paavola and Kai Hakkarainen

Knowledge Building: Advancing the State of Community Knowledge
Marlene Scardamalia and Carl Bereiter

Metacognition in Collaborative Learning
Sanna Järvelä, Jonna Malmberg, Marta Sobocinski, and Paul A. Kirschner

Group Awareness
Jürgen Buder, Daniel Bodemer, and Hiroaki Ogata

Roles for Structuring Groups for Collaboration
Bram De Wever and Jan-Willem Strijbos

Part III: Technologies

Collaboration Scripts: Guiding, Internalizing, and Adapting
Freydis Vogel, Armin Weinberger, and Frank Fischer

The Roles of Representation in Computer-Supported Collaborative Learning
Shaaron E. Ainsworth and Irene-Angelica Chounta

Perspectives on Scales, Contexts, and Directionality of Collaborations in and Around Virtual Worlds and Video Games
Deborah Fields, Yasmin Kafai, Earl Aguilera, Stefan Slater, and Justice Walker

Immersive Environments: Learning in Augmented + Virtual Reality
Noel Enyedy and Susan Yoon

Robots and Agents to Support Collaborative Learning
Sandra Y. Okita and Sherice N. Clarke

Collaborative Learning Analytics
Alyssa Friend Wise, Simon Knight, and Simon Buckingham Shum

Tools and Resources for Setting Up Collaborative Spaces
Carolyn Rosé and Yannis Dimitriadis

Part IV: Methods

Case Studies in Theory and Practice
Timothy Koschmann and Baruch B. Schwarz

Design-Based Research Methods in CSCL: Calibrating our Epistemologies and Ontologies
Yael Kali and Christopher Hoadley

Experimental and Quasi-Experimental Research in CSCL
Jeroen Janssen and Ingo Kollar

Development of Scalable Assessment for Collaborative Problem-Solving
Yigal Rosen, Kristin Stoeffler, Vanessa Simmering, Jiangang Hao, and Alina von Davier

Statistical and Stochastic Analysis of Sequence Data
Ming Ming Chiu and Peter Reimann

Artifact Analysis
Stefan Trausan-Matu and James D. Slotta

Finding Meaning in Log-File Data
Jun Oshima and H. Ulrich Hoppe

Quantitative Approaches to Language in CSCL
Marcela Borge and Carolyn Rosé

Qualitative Approaches to Language in CSCL
Suraj Uttamchandani and Jessica Nina Lester

Gesture and Gaze: Multimodal Data in Dyadic Interactions
Bertrand Schneider, Marcelo Worsley, and Roberto Martinez-Maldonado

Video Data Collection and Video Analyses in CSCL Research
Carmen Zahn, Alessia Ruf, and Ricki Goldman

Index
Contributors
Earl Aguilera  California State University-Fresno, Curriculum & Instruction, Fresno, CA, USA
Shaaron E. Ainsworth  Learning Sciences Research Institute, School of Education, University of Nottingham, Nottingham, UK
Carl Bereiter  University of Toronto, Toronto, ON, Canada
Daniel Bodemer  Media-Based Knowledge Construction Lab, University of Duisburg-Essen, Duisburg, Germany
Marcela Borge  Department of Learning and Performance Systems, The Pennsylvania State University, University Park, State College, PA, USA
Jürgen Buder  Knowledge Exchange Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
Bodong Chen  Department of Curriculum and Instruction, University of Minnesota, Minneapolis, MN, USA
Ming Ming Chiu  Special Education and Counseling, The Education University of Hong Kong, Tai Po, Hong Kong
Irene-Angelica Chounta  Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Duisburg, Germany
Sherice N. Clarke  Education Studies, University of California San Diego, La Jolla, CA, USA
Ulrike Cress  Knowledge Construction Lab, Leibniz-Institut für Wissensmedien (Knowledge Media Research Center), Tübingen, Germany; Department of Psychology, Eberhard Karls University, Tübingen, Germany
Kayla DesPortes  Department of Administration, Leadership and Technology, New York University, New York, NY, USA
Bram De Wever  Tecolab Research Unit, Department of Educational Studies, Ghent University, Ghent, Belgium
Yannis Dimitriadis  GSIC/EMIC Research Group and Department of Signal Theory, Communications and Telematics Engineering, Universidad de Valladolid, Valladolid, Spain
Noel Enyedy  Department of Teaching and Learning, Peabody College, Vanderbilt University, Nashville, TN, USA
Deborah Fields  Instructional Technologies & Learning Sciences, Utah State University, Logan, UT, USA
Frank Fischer  Department of Psychology and Munich Center of the Learning Sciences, Ludwig-Maximilians-Universität München, Munich, Germany
Ricki Goldman  NYU Steinhardt-Educational Communication and Technology, New York, NY, USA
Kimberley Gomez  Graduate School of Education and Information Studies, University of California, Los Angeles, Los Angeles, CA, USA
Louis M. Gomez  Graduate School of Education and Information Studies, University of California, Los Angeles, Los Angeles, CA, USA
Kai Hakkarainen  Department of Educational Sciences, University of Helsinki, Helsinki, Finland
Stian Håklev  School of Computer and Communication Sciences, École polytechnique fédérale de Lausanne, Lausanne, Switzerland
Jiangang Hao  Educational Testing Service, Princeton, NJ, USA
Cindy E. Hmelo-Silver  Center for Research on Learning and Technology, Indiana University, Bloomington, IN, USA
Christopher Hoadley  Department of Administration, Leadership and Technology, New York University, New York, NY, USA; Educational Communication and Technology Program, New York University, New York, NY, USA
Yotam Hod  Department of Learning, Instruction, and Teacher Education, University of Haifa, Haifa, Israel
H. Ulrich Hoppe  Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Duisburg, Germany
Jeroen Janssen  Department of Education, Utrecht University, Utrecht, The Netherlands
Sanna Järvelä  Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland
Heisawn Jeong  Department of Psychology, Hallym University, Chuncheon, South Korea
Yasmin Kafai  Teaching & Leadership Division, University of Pennsylvania, Philadelphia, PA, USA
Yael Kali  Faculty of Education, University of Haifa, Haifa, Israel
Joachim Kimmerle  Knowledge Construction Lab, Leibniz-Institut für Wissensmedien (Knowledge Media Research Center), Tübingen, Germany; Department of Psychology, Eberhard Karls University, Tübingen, Germany
Paul A. Kirschner  Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland; Open University of the Netherlands, Heerlen, The Netherlands
Simon Knight  Transdisciplinary School, University of Technology Sydney, Sydney, Australia
Ingo Kollar  Educational Psychology, University of Augsburg, Augsburg, Germany
Timothy Koschmann  Department of Medical Education, Southern Illinois University, Springfield, IL, USA
Nancy Law  University of Hong Kong, Hong Kong, SAR, China
Jessica Nina Lester  Counseling & Educational Psychology, Indiana University, Bloomington, IN, USA
Sten Ludvigsen  Faculty of Educational Sciences, University of Oslo, Oslo, Norway
Kristine Lund  CNRS, Ecole Normale Supérieure de Lyon, University of Lyon, Lyon, France
Louis Major  Faculty of Education, University of Cambridge, Cambridge, UK
Jonna Malmberg  Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland
Roberto Martinez-Maldonado  Faculty of Information Technologies, Monash University, Melbourne, Australia
Camillia Matuk  Department of Administration, Leadership and Technology, New York University, New York, NY, USA
Richard Medina  Faculty Specialist in Human-Computer Interaction, Center for Language & Technology, University of Hawai‘i at Mānoa, Honolulu, HI, USA
Hiroaki Ogata  Academic Center for Computing and Media Studies, Kyoto University, Kyoto, Japan
Sandra Y. Okita  Mathematics, Science and Technology, Teachers College, Columbia University, New York, NY, USA
Jun Oshima  Faculty of Informatics, Shizuoka University, Hamamatsu-shi, Japan; Research and Education Center for the Learning Sciences, Shizuoka University, Shizuoka-shi, Japan
Sami Paavola  Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
Kylie Peppler  University of California, Irvine, Irvine, CA, USA
Peter Reimann  Centre for Research on Learning and Innovation, University of Sydney, Sydney, Australia
Carolyn Rosé  Language Technologies Institute and Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
Yigal Rosen  BrainPOP, New York, NY, USA
Alessia Ruf  University of Applied Sciences and Arts Northwestern Switzerland, School of Applied Psychology, Olten, Switzerland
Marlene Scardamalia  University of Toronto, Toronto, ON, Canada
Bertrand Schneider  Graduate School of Education, Harvard University, Cambridge, MA, USA
Baruch B. Schwarz  School of Education, Hebrew University of Jerusalem, Jerusalem, Israel
Simon Buckingham Shum  Connected Intelligence Centre, University of Technology Sydney, Sydney, Australia
Vanessa Simmering  ACT, Inc., Iowa City, IA, USA
Stefan Slater  Teaching & Leadership Division, University of Pennsylvania, Philadelphia, PA, USA
James D. Slotta  Ontario Institute for Studies in Education, University of Toronto, Toronto, Canada
Marta Sobocinski  Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland
Gerry Stahl  College of Computing and Informatics, Drexel University, Philadelphia, PA, USA; Professor Emeritus of Computing and Informatics, Chatham, MA, USA
Kristin Stoeffler  ACT, Inc., Iowa City, IA, USA
Jan-Willem Strijbos  Faculty of Behavioural and Social Sciences, Department of Educational Sciences, University of Groningen, Groningen, The Netherlands
Stephanie D. Teasley  School of Information, University of Michigan, Ann Arbor, MI, USA
Stefan Trausan-Matu  Department of Computer Science and Engineering, University Politehnica of Bucharest, Bucharest, Romania
Suraj Uttamchandani  Center for Research on Learning and Technology, Indiana University, Bloomington, IN, USA
Freydis Vogel  Learning Sciences Research Institute, University of Nottingham, Nottingham, UK
Alina von Davier  Duolingo, Pittsburgh, PA, USA; EdAstra Tech LLC, Boston, MA, USA
Justice Walker  Teacher Education, University of Texas, El Paso, TX, USA
Rupert Wegerif  Faculty of Education, University of Cambridge, Cambridge, UK
Armin Weinberger  Department of Educational Technology, Saarland University, Saarbrücken, Germany
Alyssa Friend Wise  Department of Administration, Leadership, and Technology, New York University, New York, NY, USA; Learning Analytics Research Network (NYU-LEARN), New York University, New York, NY, USA
Marcelo Worsley  School of Education and Social Policy and Computer Science, Northwestern University, Evanston, IL, USA
Susan Yoon  Graduate School of Education, University of Pennsylvania, Philadelphia, PA, USA
Carmen Zahn  University of Applied Sciences and Arts Northwestern Switzerland, School of Applied Psychology, Olten, Switzerland
Jianwei Zhang  University at Albany, State University of New York, Albany, NY, USA
Part I
Foundations
Foundations, Processes, Technologies, and Methods: An Overview of CSCL Through Its Handbook Ulrike Cress, Jun Oshima, Carolyn Rosé, and Alyssa Friend Wise
Abstract  Computer-supported collaborative learning (CSCL) has been a topic of research for more than 30 years. This first international handbook provides an overview of CSCL research, describing the history, development, and state of the art for key areas of work in the field. It approaches the topic from four different perspectives that form the sections of the handbook: theoretical foundations, collaborative processes, relevant technologies, and common research methods. This introductory chapter provides an overview of the entire handbook by briefly summarizing the topic and content of each of the 34 chapters as well as drawing out themes and connections among them.

Keywords  Computer-supported collaborative learning · Collaborative processes · Computer-supported collaboration · Collaborative learning · Collaboration
The first time “computer-supported collaborative learning” (CSCL) occurred as a topic of research was at a NATO-sponsored workshop in Maratea, Italy, in 1989 (O’Malley 1995). The aim of this meeting was to apply findings from research about
collaborative learning to the design of new computer-based learning systems. This came from the clear need researchers saw to take the organizational, cultural, and social contexts of the classroom into account. With that in mind, the 20 participants from Europe, the USA, and Canada, with backgrounds in education, cognitive psychology, and artificial intelligence, came together to launch CSCL as a new interdisciplinary area aimed at integrating findings about learning, social processes, and technology. From the start, there was a transformative character to the field, one that centered learners and their agency in building knowledge together with the support of technology over traditional models of education in which instruction was “delivered” to students. Six years later, in 1995, the first CSCL conference was held in Bloomington, Indiana, in the United States, with 79 participating papers (Goldman and Greeno 1995). The papers dealt with technologies like simulations and micro worlds, as well as with processes of collaborative writing and sharing of annotations. Referring to knowledge integration, small-group problem-solving, reciprocal peer-tutoring, conceptual change, and other foundational theoretical frameworks, they grounded CSCL within existing concepts and developed them further in order to specify the complex interrelationships between technological tools, social processes, and learning. Drawing on techniques used in psychology, anthropology, and discourse studies (among others), a wide variety of methods were used to investigate these interrelationships, often through the lens of design and computer science. Since then, an international conference has been held every other year, alternating with the International Conference of the Learning Sciences. 
At the CSCL ’03 conference in Bergen, Norway, a steering committee for the CSCL Community within the International Society of the Learning Sciences was formed, laying the foundation for what would become a powerful community structure. This led to the establishment of an international peer-reviewed journal (International Journal of Computer-Supported Collaborative Learning, ijCSCL) in 2006, which has served as a key scientific outlet for CSCL researchers worldwide and achieved substantial impact both within the community and beyond.

As the field of CSCL matured over the last 30 years, the community came to see the need for a comprehensive volume to present the manifold research of the field and provide an introduction for the many new scholars with an interest in this area of research. This handbook is the first of its kind. It aims to provide an overview of CSCL research approached from four different perspectives: theoretical foundations, collaborative processes, relevant technologies, and common research methods. There is, of course, intersection and overlap between the perspectives: processes draw on theoretical frameworks, are supported by technologies, and are investigated through the use of specific methods; theoretical frameworks inspire the development of technologies to elicit collaborative processes; and methods are innovated to examine how technologies impact collaborative processes. Nonetheless, these are the main elements of CSCL, and thus they are represented as the four sections of the handbook. The chapters in each section serve as introductory readings for researchers and students in relevant courses of study, and follow a common structure. Each begins with definitions of relevant concepts and
terminology, as the common use of language is critical for communication in a multidisciplinary field such as CSCL. Chapters then review the history of each topic and how it has developed, up to the current state of the art, with key examples. With an eye to the future in a changing world, each chapter ends with an outlook on imminent directions and promising avenues for further development. All chapters are written by two or more authors who are world-class experts in their field. In many cases they represent different viewpoints, so the chapters not only provide an overview but also serve as an integrative piece of work.

A research field and community are made up not just of ideas but also of people, their intellectual and personal histories, and, notably given the subject matter of collaborative learning, the collective understanding produced by their interactions over time. For this reason, the overview of sections and chapters below includes reference not only to what the texts are about but also to who created them. Each chapter can be read independently of the others, but there are also many cross-linkages between them that show the rich way theory, methods, collaborative learning processes, and technologies are intertwined in CSCL. In order to stimulate readers’ deeper understanding, each chapter provides suggestions for additional readings.
1 Section I: Foundations of Computer-Supported Collaborative Learning

The six chapters that comprise the “Foundations” section of the book discuss cross-cutting issues central to CSCL: the history of the field; core concepts and methods; ways to think about the contexts in which collaboration occurs; issues of diversity, equity, and inclusion; and paths to scalability and sustainability. Each can be considered foundational in the sense that it addresses a set of considerations that are relevant to all CSCL work. Far from being dogmatic, however, each of the chapters makes clear that multiplicity is a defining feature of CSCL: multiplicity of epistemological frameworks, multiplicity of concepts, multiplicity of methodologies, multiplicity in scales of analysis, and multiplicity in the populations we seek to support. These multiplicities are not independent, as the intertwining and need for alignment of epistemological, theoretical, methodological, technological, social, and cultural elements in CSCL is a recurring theme across the chapters.

The many multiplicities are due in large part to the interdisciplinary nature of CSCL, which brings together scholars trained in widely divergent traditions of research such as psychology, anthropology, sociology, computer science, linguistics, information science, and education. Many chapters highlight this multiplicity as a strength, pointing to the power of diverse approaches as necessary to unpack the complex phenomena of collaborative learning supported by technology. However, to be productive, such multiplicity requires ways to communicate effectively across these differences in service of the greater goal of building a collective knowledge base. This is something that has been, and continues to be, one of the great challenges for CSCL.
A powerful way to facilitate such communication is to be explicit about the choices one makes epistemologically, conceptually, methodologically, and contextually, and to describe the ways in which considerations of sustainable scalability for diverse populations have been actively taken into account in the design of a CSCL innovation. To this end, each of the chapters in this section offers categories and language that researchers can use to communicate the assumptions and decisions that underlie their work. However, multiplicity presents a challenge here yet again, as each of the first four chapters offers a different scheme for doing so. While there is clear alignment across them at a general level, the terminology varies, and in many cases reflects subtle but important distinctions. This presents a challenge both for new scholars seeking to understand the landscape of CSCL research and for established scholars seeking to communicate across it.

Differences in terminology are less of an issue for the latter two chapters in this section, in part because questions of diversity, equity, and inclusion and of sustainability and scalability have not historically received widespread attention in the field. This has started to change with growing recognition that effective CSCL involves engaging with multiple nested systems at the level of the individual, small group, classroom, institution, culture, and beyond. The final two chapters in this section make a strong case for the importance of centering these considerations, pointing out that we must actively design for equity in order for it to occur and that planning for scalability needs to be considered as an integral element of design from the outset. These are critical elements to develop in the next decade of CSCL if the field is to have the desired transformative impact on education.
In the first chapter, “Theories of CSCL,” Gerry Stahl and Kai Hakkarainen situate theory at the center of the methods, technologies, and social practices of CSCL and illustrate the interdependencies between them. Highlighting the importance of theory in dealing not only with questions of “what is” but also with questions of “what could be,” they review broad classes of theories used in CSCL, categorized as subjective, intersubjective, and interobjective. These foreshadow the question of an appropriate unit of analysis for CSCL inquiry (e.g., individual, group, system). The chapter also reviews a selection of the vast diversity of specific theories currently influential in CSCL (see also individual chapters in section II) and discusses common elements among them central to a conceptual consideration of computer-supported collaborative learning: discourse and nonverbal interaction; epistemic and interactional mediation by computational tools and artifacts; temporality and sequentiality; intersubjectivity and shared understanding; personal, distributed, and group agency; and orchestration and scaffolding of collaborative culture. The chapter concludes with a call for greater theorization of the interrelations between social practices, computational technologies, and processes of educational change as necessary to implement the transformative vision of CSCL in classrooms.

Sten Ludvigsen, Kristine Lund, and Jun Oshima tackle a similar set of issues in their chapter “A Conceptual Stance on CSCL History.” As they review the development of the field from its roots in the learning, computer, and social sciences, they note both the evolution of technologies, contexts, and techniques of investigation and a consistency in underlying epistemological and methodological stances. Centering
their presentation around three perspectives on what is worth knowing and what counts as valid evidence for this, they characterize the differences between individualist, relationist, and pragmatic/computational stances. On the methodological front, a distinction is made between three classes of analyses: those based on predefined categories; those that seek to identify emergent collaborative phenomena; and design-based research approaches. Again, the choice of an appropriate unit of analysis and the alignment of methodological approach with underlying conceptual stance are emphasized. Looking to the future, the authors point to the rise of mass collaboration, immersive environments, interactive surfaces, and technology-enhanced embodied play as developments that may simultaneously provide data on previously inaccessible aspects of collaboration and give rise to new collaborative phenomena.

In their chapter “An Overview of CSCL Methods,” Cindy Hmelo-Silver and Heisawn Jeong review the diversity of research designs, data sources, and analysis methods used in the field (see also individual chapters in section IV) based on their examination of a corpus of CSCL literature from 2005 to 2014. With an emphasis on mixed methods, multiple measures, and the recent adoption of new analytic tools, they highlight the importance of aligning methodological choices and practices with underlying theoretical and epistemological commitments. To this end, they describe the different premises of research designs categorized as quantitative, qualitative, or mixed methods, noting that each can be used for descriptive or explanatory purposes. In addition to laying out the conceptual space of the different kinds of CSCL methods, the chapter provides data about the relative frequency with which each is used, and notes that the vast majority of studies take place in classroom settings, reflecting the field’s commitment to the ecological validity of its work.
The importance of an appropriate unit of analysis that is in alignment with a study’s theoretical underpinnings is raised again, as is the need for researchers to be explicit about how they select, enact, and combine methods to help build a cumulative knowledge base from work grounded in different disciplinary traditions. Finally, the challenge of taking contextual factors into account (from either a qualitative or quantitative approach) is raised. The chapter concludes with a look towards emerging methodological innovations in the areas of temporality, visualization, and analytics.

The question of how to consider context as one aspect of ecologically valid CSCL research is taken up by Camillia Matuk, Kayla DesPortes, and Christopher Hoadley in their chapter “Conceptualizing Context in CSCL: Cognitive and Sociocultural Perspectives.” Again, an underlying theoretical perspective (cognitive or sociocultural in their taxonomy) plays a role, this time in influencing how context is considered in terms of focal, immediate, and peripheral layers surrounding the subject of study. Starting with a discussion of what exactly is meant by the term “context,” they explore the question of how boundaries can be defined amidst the many interdependent physical, temporal, personal, and cultural factors that come into play in CSCL. Yet again, the multidisciplinary origins of CSCL offer a wealth of ways in which this can be done, and emphasis is placed on being intentional and explicit about one’s decisions. Looking to the future, the chapter considers how the development of new technologies that extend learning across time and space may
shift conceptions of context and how the growing ability to collect new sources of fine-grained multimodal data can allow for the development of more nuanced models of context than possible previously. Finally, the chapter raises the important role of power relations in considering how context is characterized and by whom, and the need for explicit attention to how people who have been historically marginalized or systematically disempowered are represented.

In their chapter “Interrogating the Role of CSCL in Diversity, Equity, and Inclusion” Kim Gomez, Louis Gomez, and Marcelo Worsley tackle the challenge of building a design landscape and vocabulary with which the field of CSCL can engage more robustly in conversations about diversity, equity, and inclusion. They point out that while the field is deeply concerned with questions of equity, efforts to actively address these issues have been limited. CSCL tools have historically been designed to be used by a broad cross-section of people; however, by being agnostic to important differences in learners’ backgrounds, cultures, and linguistic and other resources, such tools effectively serve only a narrow portion of the global population. In contrast, with conscious effort towards inclusion and attention to the diversities in populations, CSCL can actively promote equity in access, use, and outcomes. Importantly, the authors highlight how effective consideration of these issues requires moving to scale with projects that bring in learners from a wide variety of backgrounds and intersectional identities. In offering some directions for how we can interrogate the ways we design, they point to considering language accessibility, differentiation to support varied learner needs, and active recognition of the multifaceted identities that learners bring to their collaborative learning experiences.
The chapter concludes with promising initiatives through which CSCL environments and designers can better represent and respond to a wide variety of learners, particularly those who have been marginalized or historically underserved.

In the final chapter in this section, “Sustainability and Scalability of CSCL Innovations,” Nancy Law, Jianwei Zhang, and Kylie Peppler address a set of issues critical for CSCL to have widespread impact on educational systems and practices. Considering both how innovations can be spread to increasingly wider spheres of adoption and how their use can be maintained over time, they deal directly with the questions of large context and system-level units of analysis mentioned in earlier chapters. Importantly, connecting to ideas from the prior chapter, the vision of lasting change put forth includes responsiveness to social, cultural, and historical characteristics that vary across contexts of use. From this we see that what is critical is not simple surface “adoption” of a technological tool but the internalization and adaptation of its underlying ideas and ethos, for example, related to student ownership and agency in learning. Pointing to methodologies such as Design-Based Implementation Research and Research-Practice Partnerships, they highlight the need for researchers and practitioners to engage collaboratively to shape how CSCL technologies can help cultivate local knowledge practices. Such efforts need to address questions of both classroom culture and larger educational ecosystems, as the authors draw attention to the hierarchically interdependent levels of networks and systems that must be attended to in an aligned manner by both
innovations and the policies that surround their use in order to achieve lasting change.

Together, these six chapters provide a high-level overview of the field of CSCL that describes its past, represents its present, and speaks to what its future could be. This provides an overarching framework in which the collaborative processes, supporting technologies, and methods of investigation in the subsequent sections can be situated.
2 Section II: Processes of Computer-Supported Collaborative Learning

Collaborative processes include both the visible and invisible progressions of action and interaction among learners as they play out over time during collaboration. These processes respond to contextual factors and mediate both the measured effects and tangible accomplishments of collaboration. Alternative conceptualizations of these processes may operate at different levels of representation (e.g., the individual, the dyad, the group) and include human participants and artifacts, or only human participants, as they interact through multiple modalities.

The “Processes” section of the handbook comprises 10 chapters organized into three subsections. The first subsection, “Evolution of Conceptualization of Processes,” sets the historical and theoretical stage by introducing the basic components of the conceptualization of collaborative processes, namely, Communities (the Who and the Where), Participation (the What and the How), and Scale (the How Much). The next subsection, called “Processes of Participation,” comprises five chapters that drill down into the Participation component. They focus on discussion as a visible manifestation of collaboration. Each chapter explores collaborative discussions from a distinct theoretical lens. Differences in lenses across chapters are sometimes motivated by differences in emphasis on variables related to the other two components (Communities and Scale). The final subsection, “Management of Processes,” comprises three chapters that zoom out to consider how participation is managed, and they begin to suggest how technologies might play into that management process.
The two foundation-setting chapters of the “Evolution of Conceptualization of Processes” subsection reach back in time, even to prehistory, to identify the roots of conceptualizations of communities, participation, and scale, but also bring these conceptualizations forward into the age of big data that we are now in. In taking this expansive approach, these chapters challenge us to strive for relevance and, beyond that, to reach towards transformation in light of progressive change, but at the same time not to forget our roots, especially the theoretical roots on which our field rests.

The first of these chapters, by Yotam Hod and Stephanie Teasley, is called “Communities and Participation.” As this chapter reaches back in time, it explores
how the conceptualizations of communities and participation have evolved over time as they have been considered by key thinkers in every age, beginning with the Neolithic revolution 12,000 years ago. From a theoretical perspective, the ideas can be seen as emerging from sociocultural theoretical foundations alongside technological advancements in the rise of the internet. Against this backdrop the field of CSCL was born, and in that context the evolving theoretical foundations and technological capabilities fostered the emergence of designs for new kinds of learning opportunities.

In the second chapter, “Collaborative Learning at Scale,” by Bodong Chen, Stian Håklev, and Carolyn Penstein Rosé, the emergence of the field is stretched and challenged both by emerging opportunities (e.g., the rise of the internet) and by increasing pressures (e.g., to offer high quality education to the masses at a reasonable cost). Scale is considered both in terms of its difficulties and its unique affordances, all of which have implications both for pedagogy and for technology in light of theoretical considerations.

The second subsection, “Processes of Participation,” explores collaboration through communication (as a key aspect of participation), and its five chapters each adopt a different theoretical orientation to use as a lens. This diversity highlights the very rich nature of communication data and the value of the multivocality fostered through the meta-conversation between these separate voices. Communication is not naturally built up of discrete components or scales. Rather, it is continuous in its richness and complexity. However, our operationalizations of communicative processes reduce this complexity, enabling distinctions that allow for answering specific kinds of scientific questions.
Within the field of CSCL, the range of questions that are of interest is as vast as the rich landscape of theoretical frameworks that house the accumulation of knowledge from our research. These chapters illustrate that rich diversity and therefore challenge us to think about the multiple roles that communicative processes play in collaborative learning. These five chapters could be organized in many different ways; however, here we will consider them in two sets. In the first set, we consider contributions to a discussion as operating at the individual level or the group level (as a key aspect of communities, in other words the “who and where”). To illustrate this contrast, we offer a chapter on argumentation, written with a cognitive focus, where discussion contributions trigger cognitive processes, leading to learning and elaboration of mental models within individuals, and a chapter on group processes, written from a sociocultural perspective in which practices (which include conversational practices) are conceptualized in terms of their association with group identity, group participation, and the accomplishments of groups.

The first of these, by Joachim Kimmerle, Frank Fischer, and Ulrike Cress, is called “Argumentation and Knowledge Construction.” The focus of this chapter is to explore discussions in which participants bring different perspectives with them. The knowledge construction process is central here, occurring as participants with differing perspectives exchange, elaborate, and synthesize their perspectives. They create new knowledge through this process, and more importantly, that knowledge is elaborated within the mental models individuals keep within their own minds. From the sociocultural perspective, Richard Medina and Gerry Stahl offer their chapter on
“Analysis of Group Practices,” where the same processes may be described in terms of the ways in which groups set norms for how they take place; in engaging with a group, learners take up practices associated with these norms in order to contribute towards the group accomplishment.

In the final three of the five chapters in this subsection, we explore a different aspect of communities, where what distinguishes the chapters is which participants (e.g., actors and artifacts) are the focus. First, attention is paid to the individual actors, then artifacts that mediate the collaboration are introduced, and finally the emphasis is placed on group accomplishments. These chapters differ also in terms of scale, with the final chapter operating in learning communities rather than small groups.

The “Dialogism” chapter by Stefan Trausan-Matu, Rupert Wegerif, and Louis Major explores the communicative processes between learners within a group as they explore the diversity of perspectives. In that way, it is similar to the Kimmerle et al. chapter, “Knowledge Construction,” but the focus is on incorporation of the diversity of perspectives in the ongoing conversation rather than the formation of a synthesis of perspectives. In fact, their chapter challenges the extent to which a synthesis should be the goal, or whether differences in perspective should be preserved. In the next chapter, by Sami Paavola and Kai Hakkarainen, called “Trialogical Learning and Object-Oriented Collaboration,” artifacts are considered along with human participants, and their affordances are explored in terms of how they influence collaborative behaviors. Their chapter could be seen as an extension of dialogical approaches to the analysis of collaboration. The final chapter, by Marlene Scardamalia and Carl Bereiter, recounts the history since the 1980s of a pioneering strand within CSCL research.
This chapter, called “Knowledge Building: Advancing the State of Community Knowledge,” explores how the artifacts produced as an accomplishment of a community reflect the engagement of the community in producing them. Their technological development of the Knowledge Forum, an infrastructure for housing knowledge building communities at work, reifies the practices associated with knowledge building communities. In this way, this chapter can be seen as an instantiation of the concepts offered in the Medina and Stahl “Group Practices” chapter. The nature of the practices is similar to those of interest in the Kimmerle et al. chapter. Nevertheless, the flavor of their presentation of these practices is distinct because the goal is for the group to advance in their joint pursuit of knowledge elaboration in an externalized and shared artifact.

The third subsection, about the “Management of Processes,” addresses collaboration as a group-level construct. The three chapters, either implicitly or explicitly, explore the issue that the benefits of collaborative learning do not happen automatically and that valuable collaborative processes must be supported. This requires understanding the problems that inhibit such processes as well as the means through which these problems can be mitigated. To this end, in the first such chapter, the idea of invisible regulatory processes that guide collaborative processes towards success is introduced at a theoretical level and examined from multiple perspectives. These invisible processes are linked to visible collaborative processes, such as discussion as explored in the previous five chapters, though they are not one and the same thing. As a segue bridging from section II (Processes) into III (Technologies),
the final two chapters introduce scaffolds that either foster regulatory processes or make them less necessary. In particular, awareness tools scaffold the metacognitive awareness that allows regulatory processes to be put into practice, and role assignment removes some of the need to answer the kinds of questions that some regulatory processes are meant to seek answers for.

The first of the three chapters, by Sanna Järvelä, Jonna Malmberg, Marta Sobocinski, and Paul A. Kirschner, is called “Metacognition in Collaborative Learning.” The authors explore the costs (i.e., process losses) associated with collaboration and explain how metacognitive monitoring and regulation of these processes might reduce these costs in order to achieve positive outcomes. They illustrate the tremendous complexity of collaboration in groups by introducing concepts related to metacognitive awareness, monitoring, and management, where all three dimensions, as well as their object, namely the collaboration processes themselves, may occur at either the individual level or the group level. Thus, in order for computer-supported collaborative learning to proceed successfully, the complex interplay between these levels must be carefully considered. This chapter does the important work of clarifying the distinction between self-regulation, co-regulation, and shared regulation, and then further discusses the concept of socially shared regulation. The chapter reviews the foundational literature exploring these constructs in turn, pointing towards opportunities for continuing to expand this important area both theoretically and technologically. Whereas regulation processes are invisible, socially shared regulation occurs as an exchange between learners and is thus visible, bridging back to the concepts introduced in the five previous chapters and pointing towards new opportunities for supporting the regulation of collaboration through conversational interventions.
Regulation requires first awareness of collaborative processes and then strategies for making adjustments along the way when necessary. In light of this, the next chapter, by Jürgen Buder, Daniel Bodemer, and Hiroaki Ogata, explores “Group Awareness.” This important area of research is explored in terms of its historical development, with the resulting support tools classified either by whether their focus is primarily cognitive or social, or by which of five functional levels of group awareness they address, namely framing, displaying, feedback, problematizing, or scripting. They explore how awareness tools scaffold regulative processes and offer a vision of the forward progress of the area.

In the final chapter, Bram De Wever and Jan-Willem Strijbos explore what might be considered the other side of the coin. Rather than enabling regulation to occur, they explore “Roles for Structuring Groups for Collaboration,” in which the prestructuring of groups by means of explicit roles might make regulation less necessary. Care and consideration must go into the orchestration of learning processes. In this chapter, the authors explore how participation can be conceptualized into roles, and how these roles can be managed through the design of support.

Together, the chapters of this section lay a comprehensive foundation for the conceptualization of collaborative processes. The section begins with a recounting of the history of thought around processes. It moves on to defining dimensions of a conceptual
space in which processes can be enumerated. It ends with consideration of the ways in which these processes are managed effectively within collaboration.
3 Section III: Technologies for Computer-Supported Collaborative Learning

In the “Technologies” section, seven chapters are organized under three subsections: “Understanding Interactions between Technology and Learning,” “Learning Spaces,” and “Enabling Technologies.” Through reading the chapters in this section, readers can come to understand how theories guide CSCL practices, what virtual learning spaces are and how researchers develop them, and what new technologies, such as robotics and learning analytics, are emerging in the field. The final chapter then provides a new point of view on the technologies of CSCL from an engineering perspective.

In the first subsection, “Understanding Interactions between Technology and Learning,” two chapters discuss how CSCL technologies could be developed in line with robust theoretical guidelines. Freydis Vogel, Armin Weinberger, and Frank Fischer explain the Script Theory of Guidance (SToG), which describes the mechanisms through which individual learners develop their cognitive schemas (internal scripts) about collaborative learning, and how external scripts as instructional interventions should be designed. Based on the principle of SToG that “external” scripts support learners in eliciting and adapting their internal scripts during their collaborative learning activities, the design of scripts should afford an appropriate interrelationship between internal and external scripts, and external scripts should not overscript learners with respect to the sophistication of their internal scripts. Empirical studies have so far revealed that learners are able to maximize their learning of content knowledge when appropriate levels of external scripts are provided. Vogel et al. further argue that learners should be able to control external scripts.

Shaaron Ainsworth and Irene-Angelica Chounta discuss the roles of representations in CSCL environments by categorizing them into four types.
In the first type, an existing representation plays a role in guiding learners’ collaboration and supporting them in constructing their shared knowledge. Learners can collaborate with multiple representations in a context where different learners focus on different representations, or where every learner focuses on multiple representations. In the second type, collaboration is supported by jointly constructing representations. Learners’ intentional engagement with these activities has been found to facilitate their learning more than just sharing existing representations. In the third type, using digital technologies, learners can portray themselves and their partners in collaboration (see also Fields et al. in this volume). When effectively using these technologies, learners engage in their collaboration without unnecessary fear and worry. Finally, Ainsworth and Chounta propose a new role for representations that has not
been sufficiently examined in previous CSCL studies: representations constructed by learners that become artifacts for researchers to evaluate their designs and conduct further studies. Such representations can also serve as formative feedback for learners to help regulate their collaboration appropriately. This learning analytics orientation to research on representations in CSCL will likely garner increasing attention in the future (see also Wise et al. in this volume).

The two chapters of the second subsection, “Learning Spaces,” focus on technologies for virtual learning spaces in CSCL. Beyond physical classrooms, collaboration has had a significant impact on learning in digital community spaces. In their chapter “Perspectives on Scales, Contexts and Directionality of Collaborations in and around Virtual Worlds and Video Games” Deborah Fields, Yasmin Kafai, Earl Aguilera, Stefan Slater, and Justice Walker broaden the conceptualization of CSCL in games and virtual worlds by expanding its scales, contexts, and directionality. Going beyond the many studies that deal with small groups in a virtual world, Fields et al. discuss how massive collaboration in a virtual world leads learners to engage in collective intelligence by crowdsourcing their expertise. They advocate for exploring the multiple scales of contexts of learning spaces, whereas previous studies, like Law, Zhang, and Peppler in section I, have focused on a specific context. Second, they address learners who not only collaborate with others at multiple scales of learning spaces but also expand their affinity groups around their game practices. Collaborative learning outside of, but related to, their original game practices is another critical opportunity for learners to develop their understanding of the games and their skills for collaboration. Finally, Fields et al.
propose a new spectrum for participant-driven game design facilitating emergent experiences, which in their own words is meant to “expand scales, contexts and directionality of collaborative learning.”

Immersive learning environments elicit learners’ subjective experiences of an alternate reality in which they cannot be physically present. Building on the previous framework of “sensory, actional, narrative, and social immersion,” Noel Enyedy and Susan Yoon propose “emancipatory immersion” to expand the research focus towards social justice and equity, and they examine synergies among the five qualities of immersion across four types of immersive environments in CSCL. The first is headset virtual reality (VR), in which learners can be immersed in distant sites to conduct their research, for example, on ocean acidification. This type of technology is mainly based on the synergy between sensory and actional immersion. The second is the virtual world on the desktop, in which the technology is designed around multi-user virtual environments where learners are able to collaborate with others and with artifacts in a 3D virtual world. Here the synergy is further expanded to social, narrative, and emancipatory immersion. The two remaining types of immersive environments are more recent emergent technologies, namely, space-based augmented reality (AR) and place-based AR. While both technologies draw on the synergies of immersion in the framework of “sensory, actional, narrative, social, and emancipatory immersion,” they differ in that space-based AR is a technology with which learners can explore the environment within a relatively small space, whereas place-based AR makes it possible for learners to go out to places and explore.
In the third subsection, “Enabling Technologies,” two chapters discuss recent emergent technologies in the CSCL field, and the final chapter proposes a new framework for sharing and discussing enabling technologies in CSCL. Sandra Okita and Sherice N. Clarke introduce the area of robots and agents. Recent technological advancement in this area affords consideration of more flexible support through the implementation of computer-based learning partners. Okita and Clarke propose a two-dimensional research space for categorizing technologies as learning partners. One dimension is “social metaphor”: when technologies are developed as partners for students, their appearance is made more similar to that of humans, and students naturally interact with human-like robots and agents in collaborative learning. The other dimension specifies how technologies support learning across different contexts: self-learning, interactive learning between learners, and learning as a result of teachers’ decision-making. The technologies are able to support self-learning by externalizing learners’ thoughts for reflection. They support interactive learning through dialogues between self and others by providing appropriate feedback. They support learning through teachers’ decision-making through sophisticated learning analytics. Finally, Okita and Clarke discuss how the technologies and theories should be productively coordinated in classroom practices.

In their chapter on “Collaborative Learning Analytics” Alyssa Friend Wise, Simon Knight, and Simon Buckingham Shum argue that CSCL and learning analytics can have a productive synergy rather than a tension, arguing that learning scientists can establish more powerful research practices by leveraging the respective strengths of the two fields to connect learning constructs based on robust theories with digital traces automatically detected by technologies. Further, Wise et al.
propose two strands of research practices to orient the productive intersection of learning analytics and CSCL: namely, analytics of collaborative learning (ACL) and collaborative learning analytics (CLA). In ACL, the use of sophisticated learning analytics based on theories developed in CSCL provides researchers with more in-depth insight into collaboration. Recent studies in this vein described in the chapter have the potential to create theoretical advances in the CSCL field going forward. CLA takes such understanding a step further to enhance collaborative processes while they are in progress. There are multiple possible visions for the final state of the art of CLA, some in which technologies automatically diagnose the state of collaborative learning and give learners appropriate feedback in real time, and others in which information about the collaborative learning is provided to students and/or instructors as a basis for reflection and action. Achieving this will not be easy; Wise et al. suggest three possible synergies for our reflection as we consider the epistemic stance of learners’ agency in CLA.

In the final chapter in this section, Carolyn Penstein Rosé and Yannis Dimitriadis offer a perspective on the role of tools and technologies in CSCL research and on what hurdles might need to be addressed in order to improve the collaboration between technologists, who are the minority in the field, and researchers in Education and Psychology, who are the majority. One hurdle is that technologies explicitly developed for research purposes are not easily accessible for newcomers to this research, and this gap must be bridged to make emerging technologies within easier reach of
those not formally trained in engineering or computer science. Second, learning scientists may not be aware of the range of emerging technological environments our target learners live in, and awareness in this area must be raised; young people use a variety of information-based technologies in their everyday lives. For these reasons, Rosé and Dimitriadis argue that the field needs a unified framework to organize technologies so that researchers from both sides can come together to further the field. Their framework comprises four different types of tools: foundation, building, management, and analysis. The four types of tools should be coordinated for situating contexts of collaborative learning, making appropriate interventions possible, designing macro scripts in the classroom, and providing researchers with feedback for evaluating their designs.

Section III introduces readers to a brief history of CSCL technologies, from their current state towards the cutting edge, along with the development of the theories behind them. As the learning sciences have evolved over the decades, researchers have developed technologies by expanding their target contexts of collaborative learning from the classroom to more informal and scalable settings. Readers are encouraged to construct their own epistemic schemes for using and developing these technologies.
4 Section IV: Methods for Studying Computer-Supported Collaborative Learning

CSCL is a confluence of different disciplines. It considers a variety of quite different theoretical backgrounds. Consequently, CSCL’s methods repertoire is large and heterogeneous. In line with the continuing theme throughout this handbook, the methodologies that are used have sometimes even been considered mutually incompatible or—more positively framed—multivocal. The different voices point to different interpretations of the nature of research: some hold that scientific understanding requires observing processes deeply and in great detail, without any prior expectations; others see the necessity of stating hypotheses before any observation, reducing complexity, and focusing only on predefined variables and their mutual relationships. Moreover, some approaches emphasize that learning and communication are contextualized, and that the relevant processes are unique to their context and have to be understood within a specific situation, while others aim to find general effects that apply in the same way across different contexts.

The “Methods” section of this handbook aims at presenting this variety of methodological approaches. Each chapter presents a set of research examples that make clear what kinds of research questions a methodological approach can be applied to and what kinds of theoretical and conceptual backgrounds it is compatible with. This supports the repeated calls for theoretical and methodological alignment seen in the first three chapters of the “Foundations” section. The section is divided into two subsections, and the chapters within them originate from different perspectives. The five chapters in the “Overarching Methodologies”
Foundations, Processes, Technologies, and Methods: An Overview of CSCL. . .
17
subsection start by explaining the overall approach and then dive into details, while the six chapters in the “Data Types” subsection each focus on a specific kind of data about collaborative learning and then expand to describe the different ways it can be analyzed. Within both subsections, the distinction between qualitative and quantitative research is relevant. The first subsection, “Overarching Methodologies,” starts with the chapter “Case Studies in Theory and Practice.” Timothy Koschmann and Baruch B. Schwarz describe case studies as a traditional means of investigating how participants build meaning together in practical situations. In this sense, case studies are a kind of starting point in the history of CSCL, and they are prototypes for qualitative research in CSCL. Case studies observe and describe in detail what happens in the interaction between people. The authors provide a set of examples of typical case studies in CSCL research, and they point the reader to a set of questions: What is being construed as a “case”? How was it selected? What forms of contrast are built into the analysis, and to what end? What is the role of time and sequence within the analysis? Does the study seek to alter the social phenomenon under investigation or merely document it faithfully? Whereas this first chapter is mainly interested in understanding meaning-making processes, the second chapter addresses a further aim of CSCL: not just understanding processes, but also making a difference by designing environments. In the chapter titled “Design-Based Research Methods in CSCL: Calibrating Our Epistemologies and Ontologies,” Yael Kali and Christopher Hoadley define the concept of DBR and discuss its history in CSCL. They show the contributions of design-based research to the epistemology and ontology of CSCL and furthermore reflect on the tension between two modes of inquiry—namely, science and design. They view both modes as inherent to design-based research.
Based on that review, they present a renewed approach for conducting more methodologically coherent design-based research, one that calibrates between these two modes of inquiry in CSCL research. The following chapters then deal with quantitative research methods. Jeroen Janssen and Ingo Kollar explain the central concepts of “Experimental and Quasi-Experimental Research in CSCL.” Experimental research designs manipulate one or more independent variables in order to identify their influence on some dependent variables. Experimental research aims to identify causal effects, and it tests for causality through experimental control, including random assignment. In CSCL, (quasi-)experimental designs allow for testing which effects certain tools or scaffolds have on learning processes and outcomes. As an important—and often neglected—issue in CSCL, the authors point to the relevance of statistical interdependence of data from two or more learners who have interacted within the same group. They then present several more advanced statistical methods that are able to deal with this type of hierarchical data. Janssen and Kollar end by considering the most recent movements within experimental research, such as pre-registration and open science. Both are highly relevant for the replicability of results and the transparency of the research process. The chapter by Yigal Rosen, Kristin Stoeffler, Vanessa Simmering, Jiangang Hao, and Alina von Davier deals with another highly important topic in CSCL: How can
we measure collaboration and assess people’s collaboration competency? Is it measurable as an individual variable at all, considering that collaboration is not located in individuals but between them? Collaboration is an important twenty-first-century skill, and so it should be included in large-scale educational research. Only then can different educational systems be compared with regard to their ability to foster collaboration among their pupils. A precondition for this, however, is that collaboration be made measurable at a large scale. The chapter “Development of Scalable Assessment for Collaborative Problem-Solving” introduces the concept of collaborative problem-solving and shows how it develops. On this conceptual basis, the authors provide a scalable assessment to advance the theory and practice of measuring collaborative problem-solving. Competencies and skills of learners can be measured and described quantitatively. But it is also important to quantify process-related variables. In CSCL such processes take place between individuals or between individuals and artifacts. The data that depict these processes are often sequential data, that is, data with time stamps or a specific order. Ming Ming Chiu and Peter Reimann provide an overview of statistical and stochastic methods for dealing with these kinds of data in their chapter “Statistical and Stochastic Analysis of Sequence Data.” They present Statistical Discourse Analysis and Hidden Markov Models (HMMs), as well as recent extensions like Dynamic Bayesian Network models (DBNs). Looking into the near future, the authors identify opportunities for a closer alignment of qualitative with quantitative methods for temporal analysis, afforded by current developments such as machine learning and advances in computational modelling. Having dealt with methodologies, the second part of the “Methods” section turns to different data types.
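To make the idea of sequential analysis concrete, consider estimating a first-order Markov transition matrix, the basic building block behind the HMM and DBN approaches the chapter surveys. The sketch below is purely illustrative: the discourse codes and the event sequence are invented for this example and do not come from the chapter.

```python
from collections import defaultdict

# Hypothetical coded sequence of discourse moves from one group's chat log
# (the codes and the data are invented for illustration).
events = ["question", "explanation", "question", "agreement",
          "explanation", "explanation", "agreement"]

# Count first-order transitions between consecutive moves.
counts = defaultdict(lambda: defaultdict(int))
for prev, curr in zip(events, events[1:]):
    counts[prev][curr] += 1

# Normalize each row of counts into transition probabilities.
transitions = {
    prev: {curr: n / sum(nexts.values()) for curr, n in nexts.items()}
    for prev, nexts in counts.items()
}

print(transitions["question"])  # {'explanation': 0.5, 'agreement': 0.5}
```

A real analysis would of course work with coded logs from many groups and test whether the observed transition patterns differ from chance, which is where statistical discourse analysis and hidden Markov modeling come in.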
It is inherent in the nature of CSCL that it deals with data about artifacts and data about interpersonal communication, whether verbal or nonverbal. These are the topics that the six chapters of this subsection address. The chapter by Stefan Trausan-Matu and James D. Slotta addresses the concept of “Artifact Analysis.” Artifacts such as written texts, videos, and code result from human activities. In social settings people exchange artifacts. Artifacts enrich communication and shape further activities. They can be analyzed in order to inform the understanding of learning processes in general, but they can also be used to evaluate specific interventions. The chapter provides a broad overview and presents dialog analysis, conversation analysis, and content analysis of verbal, textual, and other forms of data. It also discusses applications to the analysis of online discussions and classroom discourse. The next chapter is about “Finding Meaning in Log-File Data.” Jun Oshima and H. Ulrich Hoppe start with a characterization of log-file data. The authors review computational techniques that support the analysis of log files, including process-oriented approaches (such as process mining, sequence analysis, or sequential pattern mining) as well as approaches based on social network analysis. They discuss such techniques with regard to their contribution to data interpretation and meaning-making and with regard to their use for supporting decision-making and action (“actionable insights”). The chapter closes with a discussion of future
directions of log-file analysis and considers the development of new technologies for analyzing spoken conversation and nonverbal behaviors as part of action-log data. The next two chapters deal with language, which can be analyzed quantitatively or qualitatively. In their chapter “Quantitative Approaches to Language in CSCL,” Marcela Borge and Carolyn Penstein Rosé present a survey of language quantification practices in CSCL. They begin by defining quantification of language and give an overview of the different purposes it serves. The chapter aims to show that both quantitative and qualitative researchers can use the approach of quantifying language. Borge and Rosé situate language quantification within the spectrum of more to less quantitative research designs. With a review of published studies, they provide examples, show the contexts in which researchers quantify language, and show how it is done and for what purpose. In the following chapter, Suraj Uttamchandani and Jessica Nina Lester deal with qualitative approaches to the study of language and discourse. After an overview of several approaches, the authors describe the history of language-based methodologies in CSCL and contextualize two of the more common methodological approaches in the field, namely conversation analysis and interaction analysis. The chapter closes by presenting methodologies that have not yet been widely used in the field: critical discourse analysis and discursive psychology. The authors trace the history of each, provide quality markers, and give an illustrative example. Language may be the most relevant and explicit medium for communication between humans, but it is not the only one. In particular, gestures and gaze each play an important role. The chapter “Gesture and Gaze: Multimodal Data in Dyadic Interactions” by Bertrand Schneider, Marcelo Worsley, and Roberto Martinez-Maldonado discusses new opportunities and challenges that come along with new and affordable sensing technologies.
These methods enable CSCL researchers to automatically capture collaborative interactions with unprecedented levels of accuracy. The authors describe empirical studies and theoretical frameworks that leverage multimodal sensors to study dyadic interactions. They focus on gaze and gesture sensing and show how these measures can be associated with constructs such as learning, interaction, and collaboration strategies in co-located settings. The last chapter of the handbook is about “Video Data Collection and Video Analyses in CSCL Research.” Carmen Zahn, Alessia Ruf, and Ricki Goldman give an overview of how video analysis has developed within the field of CSCL. They present theoretical, methodological, and technological advances for video analysis in CSCL research. Specific empirical and experimental research examples illustrate current and future advances in data collection, transformation, coding, and analysis. The authors also discuss research benefits and challenges, including the current state of understanding from observations of single, multiple, or 360° camera recordings. In addition, eye-tracking and virtual reality environments for collecting and analyzing video data are addressed as more recent methods in CSCL research. In sum, the 11 chapters provide a comprehensive overview of the methods and types of data used in CSCL, from those present at the origin of CSCL to those that are emerging and moving to the forefront of state-of-the-art research. Of course, the chapters are not extensive enough to explain the different methods
exhaustively, and so we encourage readers to treat these chapters as gateways to learning. Readers who aim to wield these methodologies will have to learn more about each to be able to apply them to data in practice. What the chapters do provide are insights into the usefulness of each method, examples of its use, and links to further, more specific literature.
5 Conclusion and Looking Forward

Collectively, the chapters of this handbook review the field of Computer-Supported Collaborative Learning, which by now has a history of more than 30 years. According to Stahl (2015), CSCL began to develop at a time when there was “a pervasive sense of a paradigm revolution in learning research” (p. 337). He describes how the critique of behaviorism and of cognitive science extended the unit of cognition beyond the boundaries of the individual mind. Concepts like distributed cognition, activity theory, and conversation analysis rose to prominence. Besides people and their individual cognition, physical artifacts and interactional resources became relevant. But, according to Stahl, at that time these developments had mainly influenced theory, and only to a lesser degree technology design and research methods. In the time since, this has begun to change, and new digital tools and new approaches for studying them have emerged over the last 5 years. Taking over the editorship of ijCSCL from Gerry Stahl and Friedrich Hesse, Ludvigsen (2016) saw a need in the third decade of CSCL to consider the growing variety of technologies that make new forms of collaborative learning possible. Whereas in the beginning of CSCL many studies considered situations where two people sat in front of one computer, later on remote settings became relevant, tools for group awareness emerged, and scripts and norms became central concepts. The questions arose: How are processes on the individual level and the group level linked through tools? How can these connections and dependencies be analyzed? Technological and methodological developments now come quickly. For the future, Wise and Schwarz (2017) see, among other challenges, the need for CSCL to expand its focus to social media and large-scale learning environments, to use learning analytics, and to further address adaptive support for collaborative learning.
They furthermore—provocatively—question whether CSCL should refrain from attempting to achieve two aims: reconciling analytical and interpretative approaches to understanding collaboration and achieving tangible change in the education system. The conversation in the community continues through new articles and squibs as Carolyn Penstein Rosé and Sanna Järvelä take the journal into a new era. The different sections and multiple chapters of this handbook show how manifold CSCL is. It cannot be captured in one overarching theory. The processes it considers are, on the one hand, fine-grained, taking place at the individual level, but on the other hand—and this has been the primary focus of CSCL—they take place at an interactional level (Puntambekar et al. 2011). Furthermore, the field is increasingly concerned
with the larger issues of systems, context, and culture that surround and shape how CSCL is taken up by and affects diverse populations across institutions and around the world. Computer-supported collaborative learning, with consideration of each of these levels, can be analyzed with different theoretical frameworks, research foci, and methods. This multivocality is in large part what makes CSCL so productive (Suthers et al. 2013), but it also presents continual challenges for communication, as discussed above. With its interest in describing the complex, mutual influences between learners, peers, technologies, and contexts, CSCL can stimulate the necessary debate about what the nature of learning is and how it can be supported in different contexts. The answer depends on collaboration and linkage between the different approaches.
5.1
What Is Missing?
This handbook is a representative artifact, a reification of the collective knowledge base of the CSCL field at a moment in time. Its scope and structure reflect important decisions made not only proximately by the handbook editors in selecting and organizing the material to include, but also more broadly by the larger community of authors, reviewers, program committees, and journal editors in terms of the kinds of work that have been conducted and have successfully advanced through the publication process in the first place. It is thus important, at the moment of putting out the very first handbook of CSCL, to look back not only at what is included in the volume but also to ask what topics, perspectives, and voices are not fully represented. Some absences are explicitly discussed; for example, the chapter on “Interrogating the Role of CSCL in Diversity, Equity and Inclusion” clearly notes the need for greater attention to issues of learner background and identity across the board, including intersectional considerations of race, gender, and privilege, among others. We can additionally consider the different conditions of schools and situations that make it more or less likely for students to be invited to learn collaboratively. Other absences can be gleaned from ideas that appear repeatedly in passing but are never brought to the forefront. For example, the transformative and emancipatory potential of CSCL is often mentioned as core to the field’s founding, but its form and meaning in relation to the current educational, social, and political landscape remain to be elaborated. Finally, the most difficult type of absence to recognize is of those things not present at all, representing perhaps our collective blind spots. Some examples here might include how students think and feel about collaboration and the personal stories that brought each of us as researchers to do this work.
To the extent that we value these, or other, missing elements, we must consciously work together to figure out how to create space for them in the field. This requires more than individuals pursuing specific lines of work; it concerns the structures and processes we create that frame our academic endeavor and support certain kinds of scholarly work over others.
5.2
What Comes Next?
That is up to all of us, including you. Whether you are a long-time CSCL scholar, a student just beginning your career, a researcher coming from another field, or someone with another history of interest in computer-supported collaborative learning, we invite you to join in having the conversations and doing the research that will collectively determine how the field develops next. There are many ways, both big and small, to play a role in shaping this trajectory: from asking questions to setting agendas, from offering language to validating studies. Together, we as a community build the future of CSCL. Especially for those just entering the field, we hope this will be the start of an exciting and meaningful journey for you. As you continue reading and learning, and as you engage more actively in the field, we invite you to add your own voices to the discussion, through contributions to the CSCL conference and the ijCSCL journal, as well as the many other scholarly venues in which computer-supported collaborative learning is a topic of interest. At a moment when the world is facing extraordinary challenges and divisions in societies, we leave you with three questions: Why is CSCL important to consider? What does CSCL have to offer the world? What could CSCL do for different communities? The answers to these questions can help shape the problems we choose to work on, the approaches we adopt to address them, and consequently what the field will look like in 2030. In this way, we connect full circle back to the early visions that initially inspired the start of CSCL: a desire for transformative impact on education through research that goes beyond existing practices to use technology as a tool to explore ways to elevate learning, teaching, and collaboration.

Acknowledgements We thank Marcela Borge for productive conversations at ICLS 2020, which generated ideas that have been incorporated into this chapter.
References

Goldman, S., & Greeno, J. (1995). CSCL ‘95: The first international conference on computer support for collaborative learning. Hillsdale, NJ: Erlbaum Associates Inc.
Ludvigsen, S. (2016). CSCL towards the future: The second decade of ijCSCL. International Journal of Computer-Supported Collaborative Learning, 11(1), 1–7.
O’Malley, C. (Ed.). (1995). Computer supported collaborative learning. NATO ASI series, series F: Computer and systems sciences: 128. Proceedings originating from the NATO Advanced Research Workshop on Computer Supported Collaborative Learning, held in Acquafredda di Maratea, Italy, September 24–28, 1989. Springer.
Puntambekar, S., Erkens, G., & Hmelo-Silver, C. (Eds.). (2011). Analyzing interactions in CSCL: Methods, approaches and issues. New York: Springer.
Stahl, G. (2015). A decade of CSCL. International Journal of Computer-Supported Collaborative Learning, 10(4), 337–344.
Suthers, D. D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (Eds.). (2013). Productive multivocality in the analysis of group interactions. New York: Springer.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Theories of CSCL

Gerry Stahl and Kai Hakkarainen
Abstract This chapter examines collaborative learning as cognition at the small-group unit of analysis and highlights theoretical questions concerning interrelationships among individual, collective, and cultural cognition. CSCL is a theory- and research-based pedagogical vision of what collaborative learning could be like, thanks to innovative computational supports and new ways of thinking about learning. Theories of CSCL are shaped by rapidly evolving digital technologies, pedagogical practices, and research methods. Relevant theories can be categorized as subjective (individual cognition and learning), intersubjective (interactional meaning making), and inter-objective (networks of learners, tools, artifacts, and practices). Theoretical insights suggest ways of enhancing, supporting, and analyzing cognition and learning by individuals, groups, and communities. The emerging ecology of socio-digital participation—involving students’ daily use of computers, mobile devices, social media, and the Internet—requires extending and synthesizing CSCL theories to conceptualize connected learning at multiple levels.

Keywords Subjective · Inter-subjective · Inter-objective · Socio-cognitive · Sociocultural · Ethnomethodology · Dialogism · Knowledge building · Activity theory · Actor network · Group cognition · Group practice
G. Stahl (*) College of Computing and Informatics, Drexel University, Philadelphia, PA, USA e-mail: [email protected] K. Hakkarainen Department of Educational Sciences, University of Helsinki, Helsinki, Finland e-mail: Kai.Hakkarainen@helsinki.fi © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_2
1 Definitions and Scope: Theory of Theories

Educational research and practice should be informed by theory. However, CSCL has adopted and spawned a variety of competing theories. How should CSCL researchers and practitioners react to the current situation, and what should they expect in the future? Theories of CSCL are important to define what is unique about CSCL and to counter misunderstandings about the nature and aims of CSCL as an evolving research field. CSCL is a theory- and research-based pedagogical vision of what collaborative learning could be like, given the development of innovative computational supports and new ways of conceptualizing knowledge (epistemology), thought (cognition), and (collaborative) learning—largely influenced by contemporary and emerging philosophical approaches and theories. Hence, CSCL is not simply the study of the use of existing technologies in conventional educational settings, as analyzed by traditional methods and theories. Rather, new theories have implications for designing CSCL technologies, associated pedagogic practices, and analytic methods. To examine the role of theory, we need to examine the question of just what “CSCL” is. Some treat it as simply a form of educational technology, where students communicate over networked devices, possibly enhanced through some AI application. From this perspective, CSCL can involve learning either “through” or “around” CSCL technology (Lehtinen, Hakkarainen, Lipponen, Rahikainen, & Muukkonen, 1999). The former involves CSCL environments mediating—or providing a medium for—learners’ synchronous or asynchronous online interaction, whereas the latter engages learners interacting face-to-face and cocreating, around digital devices such as computers or tablets, knowledge or artifacts such as models, drawings, artworks, or craft objects.
Technological development is, however, blurring the boundaries of such activities, as all knowledge work increasingly involves socio-digital technologies. Others define CSCL in distinction to “cooperative” learning, in which a task is divided among the students in a group, whereas collaborative learning involves joint pursuit of knowledge objects (Knorr-Cetina, 2001), which learners seek to understand by coauthoring texts or other products incorporating evolving shared meaning and common understanding. CSCL is also contrasted with Computer-Supported Cooperative Work (CSCW), where adults work together on professional tasks using computer support. Still others focus on the intersubjective aspects of collaboration, which involve real-time interaction in small groups and associated efforts of meaning making. Posthumanist approaches highlight the active role of digital and other artifacts and the physical, virtual, or mixed environments in which enacted collaborative activity is embedded. Such an “inter-objective” (Latour, 1996) framework guides one to examine how multiple people learn as a group, community, or network by building
Fig. 1 Framework for examining theories of CSCL
joint meaning and constructing shared artifacts within technologically rich environments. This chapter reviews the changing role of theory in CSCL, the major theories that are currently influential in the field, and their philosophical and methodological underpinnings. This chapter’s discussion of theories of CSCL is anchored in an examination of the interrelations and mutual shaping among the technologies, practices, and research methods of CSCL (Fig. 1), characterized as follows.

• Technology: The emergence of the CSCL field was associated with the development of information and communication technologies, or groupware systems, that enabled synchronous and asynchronous interaction and collaboration among learners. These developments inspired environments and theories for collaborative learning. The future of CSCL will continue to be mediated by the rapid development of socio-digital technologies. However, the use of generic social media apps is in tension with CSCL’s traditional focus on specialized applications for collaboration. Commercially developed social media (like Facebook or Twitter) are predominantly designed for the exchange of personal opinions (resulting in flaming and fake news) rather than for supporting intersubjective processes of knowledge building in domains like argumentation, the sciences, and mathematics.

• Practice: Educational use of CSCL technologies is a systemic endeavor anchored in the social practices of students, teachers, and educational institutions. The impacts
of CSCL technologies are mediated both by prevailing educational practices and by the enacted practices of using these technologies in learning and instruction. CSCL investigators have developed pedagogic frameworks and guidelines for supporting innovative CSCL implementations, together with theories for understanding the practices of CSCL and its transformative dynamics. The sociopolitical agenda of CSCL to improve the quality of learning, democratize knowledge, and promote educational equity requires CSCL researchers to work closely with educators in iterative design experiments to implement CSCL in context.

• Method: With their research methods, investigators analyze CSCL processes and practices, contributing to the redesign of CSCL technologies and pedagogic models, as well as the refinement of theories of CSCL. Analyses of CSCL in practice have motivated theories of cognition that are socially and materially distributed, temporally and socially emergent, and embodied, enactive, embedded, and extended. The field has developed specific methods and investigative practices for studying collaborative learning at multiple levels: from the individual and small group to classroom, community, cultural, and societal units of analysis.

What kind of theory is appropriate and useful for deepening understanding, explanation, and advancement of CSCL? The theory of science has shifted considerably in recent decades (see, e.g., Latour & Woolgar, 1979), away from former positivist conceptions of theory and science. Today, the goal of a theory of CSCL is a controversial moving target, not an established canon of universally accepted principles. We will be less concerned with predictive theory typical of the natural sciences, and more with theory as a tool for understanding and transforming learning and education.
A number of theories have been prominent in CSCL during the past 25 years due to the transdisciplinary nature of the field: researchers trained in specific fields—such as education, design, psychology, computer science, anthropology, or linguistics—brought with them theories, methodologies, and philosophies of science from these quite diverse enterprises. This has resulted in a confusing variety of incommensurate, competing theories influential within CSCL research. For instance, the most common theories identified in recent content meta-analyses of CSCL (Akkerman et al., 2007; Jeong & Hmelo-Silver, 2016; Jeong, Hmelo-Silver, & Yu, 2014; Kienle & Wessner, 2006; Lonchamp, 2012; Tang, Tsai, & Lin, 2014; Wise & Schwarz, 2017) were constructivist, sociocultural, social–psychological, and information-processing frameworks. It is not clear what specific theories correspond to these vague classifications, which are often grouped based on loose author self-identification rather than by looking at the approaches actually applied in the reported research. Difficulties in comprehensively characterizing CSCL theories reflect the complexity of the evolving field, where different research questions require distinct kinds of investigation. To clarify the range of traditional and emerging theories, we have categorized them under these headings: subjective (foregrounding individual cognition and learning), intersubjective (centered on interactional meaning making), and
inter-objective (emphasis on building of heterogeneous networks of learners, tools, artifacts, and practices). These overlapping categories of theories have been crucial for understanding the field of CSCL, its developmental history, and its envisioned future. In the following sections, we will suggest elements of a more integrated theory of CSCL. We first review the history of CSCL technologies, practices, and methods, as tied to the subjective, intersubjective, and inter-objective theories that seem critical for advancement of the field.
2 History and Development

2.1
Interdependence of Theory and Method
Historical shifts in theory both influenced and responded to changes in research practices, analysis methods, and focal concerns of CSCL research. Theories influence how researchers define their object of study, how they investigate it, and how they interpret their findings. Much theory in CSCL came from the subjective theories of empirical approaches in psychology—cognitive, educational, and social psychology—which contributed assumptions and research methods to CSCL. Although the pioneering contributions of psychologists like Brown (1992) highlighted the importance of pursuing field case studies in actual classrooms, the psychological sciences generally prioritized controlled laboratory experiments and statistical measures of collected data. Because implementation of CSCL in education calls for systemic change in social practices that individualistic psychological theories are unable to account for, subjective approaches have been critiqued, complemented, expanded, and partially replaced by approaches that emphasize materially and socially distributed aspects of thinking and learning, rather than mental models or symbolic representations. This development has been critical for CSCL, given its technological and social mediation of learning. One way to understand the history of psychological theories is as a sequence from positivism and behaviorism to cognitivism, and then to sociocultural theory—or from individual cognition to situated, distributed, group, and social cognition. Controlled experiments to measure individual learning gains have been either complemented or replaced with in-depth case studies or longitudinal ethnographies, without which emerging CSCL practices could not have been fully understood, adequately explained, or deliberately fostered. The recognition of the complexity of learning in CSCL settings necessitates extending theory and bringing in conceptualizations and methods from related fields.
Hence, CSCL theories increasingly invoke and adapt methods from other social sciences, including linguistics and anthropology. The resulting contextualized approaches to analyzing cognition address thinking and learning as involving people situated in dialog with others, within a world of language, artifacts, and culture. Such CSCL studies often use interaction analysis or design-based research to understand
G. Stahl and K. Hakkarainen
and explore how groups of students interact using technological artifacts and systems. Especially in CSCL, the primary actor, cognitive agent, or collaborative learner may be seen as the small group itself (Stahl, 2006). Collaborative learning can be studied at various interdependent units of analysis—such as linguistic moves and embodied actions (e.g., gesturing, sketching, and prototyping)—and at different levels of social organization—such as an individual person, team, classroom, community, or culture. Surveys of methodological practices in CSCL often reflect on how theoretical frameworks affect investigators' analysis methods. Conversely, available technologies and methods provide access to specific kinds of empirical phenomena and data, in turn inspiring the refinement of CSCL theory. In the human sciences, methods and tools can create the very phenomena (research objects) of investigation, so that theories, methods, and technologies are interdependent (Gigerenzer, 1994). In the development of the field of CSCL, interventions with discussion forums gave rise to theories of computer-mediated communication; the use of video games resulted in microanalytic studies of small-group cognition; and studies of collaborative environments, such as Knowledge Forum (Scardamalia & Bereiter, this volume), shaped knowledge-building theories. The recent emergence of digital fabrication technology and educational maker spaces expands the scope of CSCL epistemologically, theoretically, and methodologically, to centrally involve the role of materially embodied artifacts in collaboration. CSCL studies rely on complementary bodies of thick, thin, and rich big data (Hillman & Säljö, 2016). They collect thick data through ethnographic and participant observations, interviews, and documentation of design experiments. Such data are needed for understanding, examining, and further refining learners' and teachers' socio-digital knowledge practices.
CSCL studies may also utilize thin data, i.e., self-report response data that enable tracing learning, motivation, and socio-digital activity. Self-report data may be needed for showing the perceived impact of interventions. Moreover, CSCL investigators have developed novel instruments and methods for tracing and analyzing the "big" data of contextual, digitally mediated learning activities and processes. Such big data can be interpreted along with thick process data and thin self-report data. CSCL research addresses complex and often messy efforts of implementing collaborative practices in education and, therefore, often uses mixed methods to reach a robust understanding of CSCL processes. Although design-based and interventionist approaches appear to dominate CSCL, it is also important to continue pursuing controlled experiments for testing the impact of well-understood practices of using technology, possibly within the cycles of design-based research. There is growing recognition that human cognition takes place on multiple, interdependent levels, and that research methods should include approaches at the individual, small-group, community, and network units of analysis. One could use different methods at each unit of analysis and then identify links between them. A central open question is how the levels interact. This must remain a vital concern in the further development of theories of CSCL.
Theories of CSCL
2.2 Diversity of Theories and Traditional Oppositions
An important distinction between different theoretical frameworks depends on the focal unit of collaboration. Subjective theories focus on the individual mind—admitting that student learning is influenced by the social context but measuring the effects of participation in the group on the individual members as psychological subjects. Intersubjective theories focus on the group itself as the unit of analysis. Collaborative learning, which takes place in CSCL primarily at the group unit, can have consequences at the other levels, leading to learning outcomes for the individuals or transformation of community social practices (Lave & Wenger, 1991). Inter-objective theories are more oriented to social, community, and cultural levels of analysis—emphasizing linguistic interactions or the embeddedness of learning in networks of people and artifacts. They are concerned with analyzing and cultivating the social practices in which learning is embedded and the social institutions that structure learning activities. The collaborative group then stands in the middle: between the individuals who participate in the group, the tools and artifacts used, and the community or larger network whose practices the group adopts and adapts as it learns collaboratively. The array of theories has evolved through a series of historical developments. The history of Western philosophy from the early Greeks to the present provides many of our now commonsensical assumptions about scientific method (Stahl, 2021, Investigation 15). Empiricism, for instance, culminated in positivism and its view of objective knowledge. Rationalism assumed that all cognition took place in individual minds, which used propositions in the head to represent facts in the world and to deduce knowledge. In psychology, behaviorism limited science to empirical study of a subject's externally observable behavior.
That was challenged by cognitivism, which argued that learning and knowledge required mediation by the mind, for instance using language and logical reasoning (Chomsky, 1959). Cognitive science's computational theory of mind assumed an encapsulated mind with internal representations, memory storage, and information processing analogous to those of early computers (Gardner, 1985). Constructivism and social constructivism followed (Packer & Goicoechea, 2000). They accepted Kant's (1787) philosophical insight that the human mind structures all knowledge of the world. Educationally, this implies that students should be guided to make sense of new information in terms of their own understandings (past knowledge, personal perspective, existing conceptualizations, motivations). While this had radical consequences for educational theory, it still focused on the individual as learner. The resulting "constructivist" theories tended to be uninformative (everything is in some vague sense constructed). Alternative socio-historically motivated theories then developed based on the dynamic philosophy of Hegel (1807) and Marx (1867), which shaped Vygotsky's, Bakhtin's, and other investigators' theories of the social mind and mediated cognition. From the perspective of the emerging sociocultural framework, cognitive
development and learning were results of dialectics between personal tool-mediated activities, group interactions, social practices, and “cognitive-cultural macro-structures” (Donald, 1991, 2001). This can be viewed as a watershed transformation from individualism to recognition of the group and social community as pivotal to learning, opening the way for CSCL as an educational approach. “Mediation” is a concept developed in Hegel’s dialectical philosophy and central for CSCL. Notice that the word has connotations of media and middle. It can refer to a variety of processes that take place in the middle of two related phenomena. For cognitivism, the human mind plays a mediating role in transforming perceptions of the world into mental knowledge. In CSCL, technologies provide the tools and media through which interactions between people, groups, and artifacts take place; they mediate both interaction and materially embodied activity. In CSCL contexts, interaction is not directly between minds, but is mediated by language, gesture, symbol, technology, and context (including school practices, background knowledge, previous interactions). Vygotsky’s theory of “mediated cognition” provides an historical cornerstone of CSCL theory.
2.3 Development and Learning in Vygotsky
Vygotsky (1930) developed an approach to educational psychology appropriate to the philosophical methods of Hegel and Marx. His writings point beyond individual psychology to a recognition of mediated, group, social cognition. Thereby, they offer an important starting point for CSCL theory. Collaborative learning, as the source of cognitive development, may be considered a basis of all human learning, not just an optional and rare mode of instruction. That is, group cognition is a foundation of human cognition (planning, problem solving, deduction, storytelling, etc.) at all levels. Vygotsky's experiments illustrate ways in which group cognition forms a base for individual cognition. By incorporating language, external symbols, and other cultural artifacts, this process connects the cultural and community level to the small-group and individual levels. The gap between cultural development and individual learning is what Vygotsky calls the "zone of proximal development" (ZPD). This includes what a child will next be able to learn. It is a prime arena for CSCL intervention, because students in this zone can learn collaboratively what they cannot yet learn by themselves. In Vygotsky's (1930, p. 86f) well-known discussion of the ZPD, he cites a study in which children "could do only under guidance, in collaboration and in groups at the age of three-to-five years what they could do independently when they reached the age of five-to-seven years." CSCL can be seen precisely as such an effort to stimulate students within their ZPD—on tasks they cannot yet master individually but are close to being ready to learn—under guidance, in collaboration and groups. In his "Problems of Method," Vygotsky (1930, pp. 58–75) called for a new paradigm of educational research almost a century ago. Arguing that one cannot
simply look at posttest results of an experiment, he proposed a method of "double stimulation" in which a child is confronted by a learning challenge and a potential artifact to mediate that work. Instead of proposing an experimental study for comparing learning outcomes with and without some furnished artifact, Vygotsky suggests that "the experimenter waits until they spontaneously apply some new auxiliary method or symbol that they then incorporate into their operations." Taking this inter-objective research approach to collaboration requires attention to the children's interaction, the object-related activity, and the sense-making that is involved in creative, unanticipated collaborative accomplishments. The essence of Vygotsky's method of double stimulation is the CSCL practice of engaging learners themselves in extended processes of cocreating artifacts for transforming problem situations and remediating their learning processes (Ritella & Hakkarainen, 2012); see also Paavola and Hakkarainen (this volume). Such investigation involves tracing the unique trajectories of distinct groups' object-related activities, which could not be understood if sorted into statistically aggregated or standardized categories. Furthermore, the key role of mediation of group cognition by artifacts—as stimulants to working on a primary learning object—points to the importance of computer support in CSCL. CSCL environments can be designed with a wide variety of artifacts (scripts, models, manipulatives, graphics, prompts, etc.) to stimulate collaborative learning. Vygotsky's brief career began in the context of stimulus/response behaviorism. Through critiquing, with a dynamic lens, the theories of learning that were popular in his time, Vygotsky sketched a vision of the ties between individual, group, and community (social, cultural) cognition that CSCL researchers can now elaborate.
3 State of the Art
3.1 Recent Theories Influential in CSCL
CSCL is distinguished by its pedagogic, analytic, and technological focus on collaboration. Popular sociocultural theories in CSCL build on Vygotsky’s initiative. Most traditional and socio-cognitive theories of learning, by contrast, focus on the individual mind as the learner and the repository of learned knowledge. The theories presented in this section consider how learning (cognition) and knowledge (epistemology) can be considered at larger units of analysis than the individual human, such as the small group and various social or cultural levels, including artifacts and other contextual referents.
3.1.1 Socio-Cognitive Research on CSCL
Socio-cognitive theories of CSCL, which build on conceptions of individual learning, cognition, and motivation, typically aim at examining (a) how collaborative group learning affects advancement of individual learning and (b) how manipulations of controlled independent variables affect the success of students’ collaborative learning. Investigators may focus on cognitive and motivational gains of personal and collaborative learning or measure the impact of various scripting strategies on collaborative learning processes and individual learning outcomes (e.g., Weinberger, Reiserer, Ertl, Fischer, & Mandl, 2005). Studies of regulation in CSCL have expanded from self-regulation to peer-assisted co-regulation and group regulation (e.g., Panadero & Järvelä, 2015). Although socio-cognitive studies often rely on laboratory experiments and quasi-experimental designs, many use mixed methods and collect data from field studies. Each approach has appropriate rigorous standards of evidence that it can follow (Methods section, this volume).
3.1.2 Ethnomethodology
Ethnomethodology contrasts with socio-cognitive approaches in that it does not seek to analyze psychological processes in the minds of individuals, but studies social, interactional, and linguistic practices that can be observed directly, for instance in detailed transcripts of conversation. Garfinkel (1967) argued that human behavior is based on the adoption of social practices or “member methods” shared through participation in a given culture. It is because everyone is familiar with these practices that people can make sense of each other’s behavior. Furthermore, people display in their embodied activity how their actions should be understood. Sacks studied this in transcripts of ordinary conversation, founding Conversation Analysis (Garfinkel & Sacks, 1970; Sacks, 1965). Investigations showed how people design their speech to open and close new topics, to respond to each other, and to repair misunderstandings (Schegloff, 2007). As a sociological approach, ethnomethodology shifts the view of learning to the community, social, or cultural level.
3.1.3 Dialogism
Bakhtin’s (1981) theory has affected CSCL research by guiding investigators in analyzing dialogic interaction processes. The dialogic approach guides students in sustained interaction that enables them to explore and build on their own and peers’ ideas (Wegerif, 2007). From the dialogic nature of thinking and meaning, it follows that a person’s utterance in conversation, writing, or thinking should not necessarily be interpreted as an expression of private mental representations or beliefs, but as an interactive response to ongoing communication, designed to evoke future responses. Furthermore, speech incorporates countless standard locutions that are part of
shared literary genres and language. Often, specific words that someone else used are repeated and taken up in subsequent utterances. Accordingly, utterances should be analyzed and understood as dialogical moves within a social setting, not just as personal expressions.
3.1.4 Knowledge Building
Pioneering CSCL work of Scardamalia and Bereiter (1996) created a knowledge-building framework that engages young students in the collaborative pursuit of knowledge advancement. Their groupware system for mediating knowledge-building processes evolved into Knowledge Forum (see Scardamalia & Bereiter, this volume). They consider knowledge building to be a collaborative effort of advancing communal knowledge, as distinguished from individual learning. They propose that schools can be developed into “knowledge-building” communities that engage students in expert-like creative work with knowledge, appropriating disciplinary methods of advancing knowledge. Toward that end, students are engaged in “design mode” activities of creating, improving, sharing, and advancing ideas, understood as improvable conceptual artifacts (i.e., results of knowledge building, such as texts, reports, designs, theories, symbols, tools, usable objects). Knowledge building is an emergent, nonlinear process that cannot be rigidly scripted or predetermined. The knowledge-building framework has been developed in close collaboration with teachers committed to implementing Scardamalia’s (2002) knowledge-building principles in practice (e.g., anchoring learning on real issues and authentic problems, promoting idea diversity, and engaging in efforts of reflecting upon earlier investigations or proposals).
3.1.5 Knowledge-Creating Learning
Paavola and Hakkarainen (2014) expanded the conceptually oriented knowledge-building theory by also taking into consideration materially embodied aspects of artifacts (see Paavola & Hakkarainen, this volume). Their knowledge-creating learning approach is distinguished from both the knowledge-acquisition metaphor and the participation metaphor (Sfard, 1998). While the acquisition view represents a “monological” (subjective, mental) view of human learning and the participation view represents a “dialogical” (intersubjective) view, the knowledge-creation perspective may be understood as “trialogical” in nature because it foregrounds interaction among individuals, communities, and the shared epistemic objects being developed. Knowledge creation is anchored in deliberately cultivated knowledge practices, i.e., social practices of working with knowledge artifacts and media (Hakkarainen, 2009).
3.1.6 Cultural-Historical Activity Theory
Relying on Cultural-Historical Activity Theory (CHAT) developed by Vygotsky’s colleagues, Engeström (1987) investigated CSCL from the perspective of expansive learning. CHAT guides researchers to examine CSCL as an integral part of the contradiction-laden historical development of educational activity, calling for profound transformation of social practices prevailing at schools. Social practices are anchored in dynamic activity systems, which must be transformed to allow significant changes to happen. Expansive learning starts by criticizing, questioning, and analyzing contradictions arising within the system or in its external relations. CHAT studies often promote community development by engaging students and teachers in solving vital real-world problems in collaboration with networks of local stakeholders, such as community organizations and workplaces (Engeström, Engeström, & Suntio, 2002; Roth & Lee, 2007).
3.1.7 Actor-Network Theory
Actor-Network Theory (ANT) (Latour, 2007) builds on science-and-technology studies showing how complex human activity relies on networks of people, artifacts, and practices. Such networks diverge from CHAT activity systems in having diverse kinds of actors exerting causal influences, including nonhuman agents such as tools, technology-rich environments, or knowledge objects. This framework is characterized by “inter-objectivity” (Latour, 1996) in that it treats humans and artifacts symmetrically and highlights the active roles of the various actors. ANT has been applied more often in CSCW and workplace situations than in educational or CSCL contexts but appears to have potential here as well (Fenwick & Edwards, 2011). Learning takes place in increasingly complex socio-material environments, which intertwine enacted local practices with virtual and distributed activities. Technological artifacts have a dynamic dual role as agents that oscillate between structuring and constraining as well as directing and expanding activity. ANT examines the social engineering involved in negotiating the conflicting interests of stakeholders—such as researchers, technology developers, educational administrators, teachers, and students—which successful CSCL projects must align.
3.1.8 Group Cognition and Adopting Group Practices
The theory of group cognition (Stahl, 2006, 2021, Investigation 16) is primarily concerned with building knowledge and epistemic artifacts through artifact-mediated processes of group interaction. It focuses on the small-group unit of analysis, as the level at which social and cultural phenomena and artifacts influence the interaction, which, in turn, may produce group, individual, and community learning. The theory elaborates concepts of cognition, knowledge, interaction,
sequentiality, intersubjectivity, shared understanding, artifact mediation, practice, agency, and joint attention appropriate to the small-group level of description. The interpenetration of the social, group, and individual cognitive levels can be observed, analyzed, and studied in processes involving the adoption of group practices, for instance, in the context of learning geometry (Stahl, 2013, 2016; see Medina & Stahl, this volume). One can refine CSCL curriculum and pedagogy to promote the adoption of key group practices. CSCL technology can support the presentation, exploration, and adoption of identified group practices. Analysis of group interaction in CSCL settings can reveal successes of and barriers to the adoption of such practices and point to needed improvements, as well as document successful learning at group and individual levels.
3.2 Dealing with Diversity
It is appropriate that a field like CSCL, which is still an exploratory vision, allows a diversity of theories, from subjective to intersubjective and inter-objective. This inspires innovative research agendas. However, because theory has consequences for methodology, a researcher should be explicit about what theoretical framework guides a specific research project or analysis. One’s research question should determine the unit of analysis and associated methods. While all established theories capture some truth, when combining approaches, their corresponding methodologies may be both limiting and mutually incompatible. For instance, validated self-report questionnaires are useful tools, but participants’ individual responses are not likely to adequately reveal contextual factors and intersubjective learning processes. The current situation of the theory of CSCL affords flexibility to the researcher but requires careful respect for the diverse approaches.
4 The Future
4.1 Toward an Integrated Theory of CSCL
CSCL theory during recent decades has increasingly broadened the phenomena of interest—from learning impacts on individual students to forms of interaction within small groups and communities, involving various forms of artifacts and interactions among levels. Central theoretical concepts have been reconceptualized. Investigation of the phenomena related to these concepts will continue to stimulate theory building and may allow a more integrated framework to emerge for understanding collaborative learning and for guiding technological and pedagogical support. In this section, we review themes and concepts that seem central to continuing to develop CSCL theory—from a collection of concerns from related fields to a framework specific to what is unique to CSCL (see Fig. 2). Finally, we turn from
Fig. 2 Framework for integrated CSCL theory
theory to practice and consider the implications of this chapter’s discussion for pursuing CSCL in the classroom.
4.2 Elements of an Integrated Theory of CSCL
4.2.1 Discourse and Interaction
Collaborative learning proceeds through knowledge-creating discussion within a group of learners. The group learns by building and sharing knowledge and by interacting in nonverbal ways within the CSCL environment (e.g., highlighting, sketching, modeling, prototyping, gesturing, producing knowledge artifacts). Analysis of collaborative interaction usually involves investigating transcripts of the discourse and multimodal interaction. It may consist of understanding the flow of conversational moves and embodied actions and the meaning making that took place by the group, perhaps adapting Conversation Analysis (Schegloff, 2007) or Interaction Analysis (Jordan & Henderson, 1995).
4.2.2 Interactional Mediation by CSCL Environments
CSCL provides multifaceted socio-technical environments that mediate collaborative interaction and learning in diverse ways. The rapidly evolving ecology of sociodigital technologies is distributed across formal and informal spaces of learning, so that technology mediation is increasingly mashed up to take place through and “around” socio-digital tools. Theory should account for such mediation and inform the design of media to support specific, identified aspects of collaborative learning, as well as interconnecting informal and formal technology-mediated learning.
4.2.3 Epistemic Mediation by Knowledge Artifacts
CSCL environments offer learning communities shared spaces and scaffolding for creating, building, visualizing, sharing, organizing, and advancing knowledge artifacts. Socio-digital technologies enable cognitive augmentation that CSCL builds on: By technologically extending the mind, digital devices foster new forms of collaborative working and engagement in successive refinement of complex ideas (Donald, 1991, 2001). The “epistemic mediation” involved in such extended thinking processes refers to a deliberate process of deepening inquiry by creating external epistemic artifacts (e.g., shared written notes, visual representations, material artifacts, simulations, and discourse media) that crystallize and promote evolving understanding and collective inquiry. Problems and solutions in CSCL processes can be understood as epistemic objects; such objects represent what the participants are seeking to understand and create but do not yet know or understand. These objects are defined by their openness, incompleteness, and capacity to unfold indefinitely through successive thought- and affect-laden instantiations as textual or other artifacts (Knorr-Cetina, 2001).
4.2.4 Temporality and Sequentiality
CSCL takes place over time and through language use embedded in technology-mediated activity. Interaction takes place through the sequential ordering of actions, utterances, and gestures. A given oral or written utterance typically responds to previous activity and discourse and is generally designed to provoke a response and to propel the discourse and inquiry forward. The analysis of collaborative learning as a group meaning-making process may need to interpret the temporality and sequentiality of captured discourse and related activity (Medina & Stahl, this volume). Although utterances may be analyzed statistically to answer specific research questions, the enacted collaboration itself is an inherently sequential process, which cannot be fragmented without losing its meaning. Further, temporality and sequentiality also structure the nonlinguistic activity. CSCL activity is embedded in unfolding social (group work) and material (technological) processes, which
are entangled in temporal emergent assemblages, analysis of which may reveal development of key epistemic, group, and social practices. For instance, analysis at multiple time scales can reveal processes at the micro level (e.g., utterances), meso level (establishment of group practices), and macro level (evolution of community cultural norms).
4.2.5 Intersubjectivity and Shared Understanding
A fundamental theoretical question for CSCL is that of intersubjectivity (Stahl, 2021, Investigation 18): How is it possible (both in the abstract and in practical terms) for participants in a group to understand each other? This is a problem for cognitivism: If one person’s mind expresses a thought in a spoken utterance, how can another person’s mind know what that utterance meant to the speaker? Sociocultural theory answers this by noting that people share language, activity context, and cultures laden with mutually understood meanings. Of course, in a situation of collaborative learning, there are ample opportunities for misunderstanding each other. Fortunately, our languages and embodied activity include shared practices for repairing misunderstandings. Intersubjectivity is the result of specific aspects of human interaction, beginning in prehistory (Tomasello, 2014) and continuing in successful CSCL sessions today (Schneider & Pea, 2013). The need to constantly maintain intersubjective shared understanding is a major reason that CSCL requires special supports, training, and effort in order to be successful.
4.2.6 Personal, Distributed, and Group Agency and Units of Analysis
Theories based on individual minds locate the agency that causes events like expressing opinions or learning at the individual unit of analysis, looking to personal motivations and beliefs. Theories of distributed cognition (Hutchins, 1996) or group cognition locate collaborative agency at the group unit. Activity Theory (Engeström, 1987) looks as well at tensions or contradictions among social factors in the setting, and Actor-Network Theory (Latour, 2007) goes even further to bestow agency on an open-ended universe of (past and present) human and artifact actors, bringing in a cultural-historical unit of analysis. CSCL theory should account for agency and other phenomena at multiple units of analysis.
4.2.7 Orchestrating and Scaffolding the CSCL Culture
An early finding of CSCL research was that collaborative learning cannot succeed in classrooms without preparing teachers and students with an understanding of the theory and pedagogy of CSCL. A classroom culture of collaboration must replace the culture of individual rote learning and competition. CSCL aims at cultivating “nonlinear” pedagogy, characterized by open-ended, emergent, and inventive
educational practices (Ng & Bereiter, 1995). Although nonlinear knowledge-creation processes cannot be rigidly scripted (Scardamalia & Bereiter, 2014), it is necessary to guide and scaffold student learning for productive collaborative learning, interaction, and knowledge creation. Flexible teacher orchestration and CSCL structuring are required to cultivate local practices of working with knowledge and media (Zhang et al., 2018). A delicate balance is needed for guiding, scaffolding, orchestrating, structuring, and facilitating collaborative knowledge creation. CSCL theory must recognize these implementational requirements and point the way to the desired vision. The theories just enumerated offer insights into what learning and knowledge building might be like in effective CSCL contexts. They supply concepts and frameworks for thinking about such collaborative processes. They also provide guidance for CSCL research into the design and trial of technology and pedagogy for supporting CSCL.
4.3 Theoretical Perspectives on Implementing CSCL
4.3.1 Implementing the Vision of CSCL in Classrooms
CSCL has been criticized for having failed to transform education (e.g., Wise & Schwarz, 2017). Critics assume that once students had computers and became accustomed to networking with other students, the incorporation of collaborative learning and CSCL in classrooms should have spread rapidly. We all seriously underestimated the challenges of transforming technological infrastructure, cultivating CSCL practices, and changing associated educational accountability regimes. The preceding theoretical perspectives indicate why implementation of CSCL will take longer:
• CSCL is a vision of a future involving technologies, practices, and research methods that guide investigators’ theory-building and intervention efforts. CSCL is an incomplete epistemic object (Knorr-Cetina, 2001), which constantly raises new questions and becomes more complex as technologies, practices, and methods develop unpredictably.
• CSCL is embedded in rapidly expanding ecologies of socio-digital participation that involve young people using technology intensively. Many young people use digital technologies for pursuing their interests together with their peers, experimenting with digital tools and making personal media productions. The challenge of CSCL is to promote connected learning in terms of also engaging students at school in creative and academic collaborative use of technology for knowledge building (Ito et al., 2013).
A theoretical and practical challenge is to determine what processes, methods, and practices are needed for CSCL to penetrate deeply into educational systems. A handful of systematic efforts have produced promising results (e.g., Chan, 2011; Looi, So, Toh, & Chen, 2011), but they have been rare. Although there have been
G. Stahl and K. Hakkarainen
isolated CSCL classrooms sustained by committed teachers, the establishment and dissemination of rich collaboration cultures in schools remain elusive and prone to failure (Hakkarainen, 2009; Ritella & Hakkarainen, 2012). Advancement of the CSCL field requires a more comprehensive theoretical and practical understanding of the complex and dynamic relations between digital technologies, social practices, and educational-transformation processes. Despite transformative CSCL visions, new digital tools tend to be initially used to promote traditional practices of teaching or learning; radical innovative possibilities emerge only through sustained transformation of social practices (Hakkarainen, 2009). Successful implementations of CSCL practices rely on systematic participatory transformations taking place through intensive research–practice partnerships. To effectively utilize CSCL practices, teachers and students must undergo “instrumental genesis” (Rabardel & Bourmaud, 2003), integrating the CSCL tools into learning/teaching activities. This involves shaping, adapting, and tailoring the CSCL tools and practices according to local needs and requirements by participants, as well as cultivating novel personal and group practices. The process iteratively evolves the design of the tools to better facilitate intended practices and the creation of novel practices, tool usages, and understandings by the participants. As students increasingly rely on technology in their everyday interaction, cognition, and learning practices, approaches explored in CSCL research and theory may promote connected learning practices and, thereby, overcome the limitations of simplistic social media apps. The result may be quite different from the experimental prototypes of classic CSCL research projects. 
Despite the complexity of the challenges, that is what it means to understand the CSCL vision as an epistemic object of global inquiry, rather than as a summative evaluation of a well-defined object of study. Theories of CSCL should comprehend, envision, and guide the targeted transformations and emergent technologies, practices, and methods for achieving the CSCL vision.
References

Akkerman, S., Bossche, P. V. d., Admiraal, W., Gijselaers, W., Segers, M., Simons, R.-J., et al. (2007). Reconsidering group cognition: From conceptual confusion to a boundary area between cognitive and socio-cultural perspectives? Educational Research Review, 2, 39–63.
Bakhtin, M. (1981). The dialogic imagination: Four essays. Austin, TX: University of Texas Press.
Brown, A. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141–178.
Chan, C. K. K. (2011). Bridging research and practice: Implementing and sustaining knowledge building in Hong Kong classrooms. International Journal of Computer-Supported Collaborative Learning, 6(2), 147–186.
Chomsky, N. (1959). Review of Verbal behavior, by B. F. Skinner. Language, 35(1), 26–57.
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press.
Theories of CSCL
Donald, M. (2001). A mind so rare: The evolution of human consciousness. New York, NY: W. W. Norton.
Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki: Orienta-Konsultit Oy.
Engeström, Y., Engeström, R., & Suntio, A. (2002). Can a school community learn to master its own future? In G. Wells & G. Claxton (Eds.), Learning for life in the 21st century (pp. 211–224). Cambridge, MA: Blackwell.
Fenwick, T., & Edwards, R. (2011). Introduction: Reclaiming and renewing actor network theory for educational research. Educational Philosophy and Theory, 43, 1–14.
Gardner, H. (1985). The mind’s new science: A history of the cognitive revolution. New York, NY: Basic Books.
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Garfinkel, H., & Sacks, H. (1970). On formal structures of practical actions. In J. McKinney & E. Tiryakian (Eds.), Theoretical sociology: Perspectives and developments (pp. 337–366). New York, NY: Appleton-Century-Crofts.
Gigerenzer, G. (1994). Where do new ideas come from? In M. Boden (Ed.), Dimensions of creativity (pp. 53–74). Cambridge, MA: The MIT Press.
Hakkarainen, K. (2009). A knowledge-practice perspective on technology-mediated learning. International Journal of Computer-Supported Collaborative Learning, 4(2), 213–231.
Hegel, G. W. F. (1807). Phenomenology of spirit (J. B. Baillie, Trans.). New York, NY: Harper & Row.
Hillman, T., & Säljö, R. (2016). Learning, knowing and opportunities for participation: Technologies and communicative practices. Learning, Media, and Technology, 41, 306–309.
Hutchins, E. (1996). Cognition in the wild. Cambridge, MA: MIT Press.
Ito, M., Gutiérrez, K., Livingstone, S., Penuel, W., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S. (2013). Connected learning: An agenda for research and design. Irvine, CA: Digital Media.
Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educational Psychologist, 51(2), 247–265.
Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334.
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. Journal of the Learning Sciences, 4(1), 39–103.
Kant, I. (1787). Critique of pure reason. Cambridge: Cambridge University Press.
Kienle, A., & Wessner, M. (2006). The CSCL community in its first decade: Development, continuity, connectivity. International Journal of Computer-Supported Collaborative Learning, 1(1), 9–33.
Knorr-Cetina, K. (2001). Objectual practices. In T. Schatzki, K. Knorr-Cetina, & E. von Savigny (Eds.), The practice turn in contemporary theory (pp. 175–188). London: Routledge.
Latour, B. (1996). On interobjectivity. Mind, Culture and Activity, 3(4), 228–245.
Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. Cambridge: Cambridge University Press.
Latour, B., & Woolgar, S. (1979). Laboratory life. Thousand Oaks, CA: Sage Publications.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.
Lehtinen, E., Hakkarainen, K., Lipponen, L., Rahikainen, M., & Muukkonen, H. (1999). Computer supported collaborative learning: A review of research and development. CL-Net project. The J. H. G. I. Giesbers reports on education (Vol. 10). Department of Educational Sciences, University of Nijmegen.
Lonchamp, J. (2012). Computational analysis and mapping of ijCSCL content. International Journal of Computer-Supported Collaborative Learning, 7(4), 475–497.
Looi, C.-K., So, H.-J., Toh, Y., & Chen, W. (2011). The Singapore experience: Synergy of national policy, classroom practice and design research. International Journal of Computer-Supported Collaborative Learning, 6(1), 9–37.
Marx, K. (1867). Capital (B. Fowkes, Trans., Vol. I). New York, NY: Vintage.
Medina, R., & Stahl, G. (this volume). Analysis of group practices. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Ng, E., & Bereiter, C. (1995). Three levels of goal orientation in learning. In A. Ram & D. B. Leake (Eds.), Goal-driven learning (pp. 355–380). Cambridge, MA: The MIT Press.
Paavola, S., & Hakkarainen, K. (2014). Trialogical approach for knowledge creation. In S. C. Tan, H. J. So, & J. Yeo (Eds.), Knowledge creation in education (pp. 53–72). New York, NY: Springer.
Paavola, S., & Hakkarainen, K. (this volume). Trialogical learning and object-oriented collaboration. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Packer, M., & Goicoechea, J. (2000). Sociocultural and constructivist theories of learning: Ontology, not just epistemology. Educational Psychologist, 35(4), 227–241.
Panadero, E., & Järvelä, S. (2015). Socially shared regulation of learning: A review. European Psychologist, 20, 190–203.
Rabardel, P., & Bourmaud, G. (2003). From computer to instrument system: A developmental perspective. Interacting with Computers, 15, 665–691.
Ritella, G., & Hakkarainen, K. (2012). Instrumental genesis in technology-mediated learning: From double stimulation to expansive knowledge practices. International Journal of Computer-Supported Collaborative Learning, 7(2), 239–258.
Roth, W. M., & Lee, Y. J. (2007). “Vygotsky’s neglected legacy”: Cultural-historical activity theory. Review of Educational Research, 77(2), 186–232.
Sacks, H. (1965). Lectures on conversation. Oxford: Blackwell.
Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society. Chicago, IL: Open Court.
Scardamalia, M., & Bereiter, C. (1996). Computer support for knowledge-building communities. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm (pp. 249–268). Hillsdale, NJ: Lawrence Erlbaum Associates.
Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy and technology. In K. Sawyer (Ed.), Cambridge handbook of the learning sciences (2nd ed.). Cambridge: Cambridge University Press.
Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Schegloff, E. A. (2007). Sequence organization in interaction: A primer in conversation analysis. Cambridge: Cambridge University Press.
Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press.
Stahl, G. (2013). Translating Euclid: Designing a human-centered mathematics. San Rafael, CA: Morgan & Claypool Publishers.
Stahl, G. (2016). Constructing dynamic triangles together: The development of mathematical group cognition. Cambridge: Cambridge University Press.
Stahl, G. (2021). Theoretical investigations: Philosophical foundations of group cognition. New York, NY: Springer.
Tang, K.-Y., Tsai, C.-C., & Lin, T.-C. (2014). Contemporary intellectual structure of CSCL research (2006–2013): A co-citation network analysis with an education focus. International Journal of Computer-Supported Collaborative Learning, 9(3), 335–363.
Tomasello, M. (2014). A natural history of human thinking. Cambridge, MA: Harvard University Press.
Vygotsky, L. (1930). Mind in society. Cambridge, MA: Harvard University Press.
Wegerif, R. (2007). Dialogic, education and technology: Expanding the space of learning. New York, NY: Kluwer-Springer.
Weinberger, A., Reiserer, M., Ertl, B., Fischer, F., & Mandl, H. (2005). Facilitating collaborative knowledge construction in computer-mediated learning environments with cooperation scripts. In R. Bromme, F. Hesse, & H. Spada (Eds.), Barriers and biases in computer-mediated knowledge communication—And how they may be overcome. Dordrecht: Kluwer Academic Publishers.
Wise, A., & Schwarz, B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Zhang, J., Tao, D., Chen, M.-H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with Idea Thread Mapper. The Journal of the Learning Sciences, 27, 390–430.
Further Readings

Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Cambridge, MA: Harvard University Press; Donald, M. (2001). A mind so rare: The evolution of human consciousness. New York, NY: W. W. Norton.—In these books, Donald presents culture as a rapid form of human evolution and extends the theory of learning to include external memories provided by digital technology.
Hakkarainen, K. (2009). A knowledge-practice perspective on technology-mediated learning. International Journal of Computer-Supported Collaborative Learning, 4(2), 213–231.—This article generalizes research experiences implementing CSCL in educational practices, expands knowledge building toward the trialogic approach to knowledge-creating learning and works out the notion of knowledge practices. See also (Paavola & Hakkarainen, this volume).
Koschmann, T. (Ed.). (1996). CSCL: Theory and practice of an emerging paradigm. Hillsdale, NJ: Lawrence Erlbaum Associates.—This edited volume defined the beginnings of CSCL theory. It includes Koschmann’s discussion of the CSCL paradigm, Roschelle’s model of CSCL interaction analysis and Scardamalia & Bereiter’s argument for supporting collaborative learning, among other seminal papers.
Stahl, G. (2021). Theoretical investigations: Philosophical foundations of group cognition. New York, NY: Springer.—This edited volume brings together many of the past articles in the International Journal of CSCL and recent essays by the journal’s editor that are most relevant to this chapter. Together, they point in the direction of CSCL theory indicated here for the future. See also (Medina & Stahl, this volume) and essays that are available at http://gerrystahl.net/elibrary.
Vygotsky, L. (1930). Mind in society. Cambridge, MA: Harvard University Press.—Vygotsky’s most important writings and notes collected here present a vision of the theory of learning most influential in CSCL.
A Conceptual Stance on CSCL History

Sten Ludvigsen, Kristine Lund, and Jun Oshima
Abstract CSCL is focused on the interdependence of social interaction and computational artifacts. A computational artifact mediates participants’ sensemaking in the collaboration. The sensemaking is dependent on what the participants do together with the computational artifact. The analysis of this mediation unveils core CSCL processes. CSCL draws on foundations in social, learning, and computer sciences. In this historical chapter, we focus on epistemic issues in CSCL. The focus on methodological stances and computational artifacts varies between the different strands of research, whereas the epistemological stances were established early on in the history of CSCL and have remained stable. Conceptualizing CSCL as the interdependency between collaborating participants and computational artifacts requires the definition of a unit of analysis that can explain and help us understand what and how people learn in collaboration. The way concepts in CSCL studies are operationalized signals the epistemological stance that authors make use of.

Keywords Collaborative learning · Computational artifacts · Epistemology · Methodology · Sensemaking · Learning sciences · Computer sciences
S. Ludvigsen (*)
Faculty of Educational Sciences, University of Oslo, Oslo, Norway
e-mail: [email protected]

K. Lund
CNRS, Ecole Normale Supérieure de Lyon, University of Lyon, Lyon, France
e-mail: [email protected]

J. Oshima
Research and Education Center for the Learning Sciences, Shizuoka University, Shizuoka-shi, Japan
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_3
S. Ludvigsen et al.
1 Definitions and Scope

The origin of the CSCL (Computer-Supported Collaborative Learning) field can be traced back to the 1980s. The NATO-sponsored workshop in Maratea, Italy in 1989 was the first formal event that used the term “Computer-Supported Collaborative Learning.” The first CSCL conference was held in Bloomington, Indiana, in the United States in 1995. The International Journal of Computer-Supported Collaborative Learning (IJCSCL) was established in 2006, and since then, more than 240 scientific papers have been published in IJCSCL. IJCSCL has become a leading journal with high impact (the 5-Year Impact Factor in 2018 was 2.788), and it is one of the top journals in the broader field of technology-enhanced learning and in education in general.

Since 1996, different accounts of the history of CSCL have been published (Koschmann, 1996; Ludvigsen & Mørch, 2012; Stahl, 2015; Stahl, Koschmann, & Suthers, 2014). Subfields of CSCL such as group awareness and collaboration scripts have also been conceptualized and discussed in the International Handbook of the Learning Sciences (Bodemer, Janssen, & Schnaubert, 2018; Jeong & Hartly, 2018; Kollar, Wecker, & Fischer, 2018; Schwarz, 2018; Slotta, Quintana, & Moher, 2018), and a meta-analysis of publications from 2005 to 2014 showed how pedagogy, technology, and modes of collaboration covary in CSCL for science, technology, engineering, and math (STEM) domains (Jeong, Hmelo-Silver, & Yu, 2014).

At its core, CSCL is an interdisciplinary field of knowledge in which the social, learning, and computer sciences are foundational. Since CSCL is rather specialized, researchers in CSCL typically mobilize specific frameworks within subfields of the aforementioned sciences. This leads to a conceptual base composed of multiple perspectives and stances, as illustrated in the chapters in this book (e.g., Hmelo-Silver & Jeong, this volume; Matuk, DesPortes, & Hoadley, this volume; Stahl & Hakkarainen, this volume).
In some historical accounts, it is argued that collectively these constitute CSCL as a new paradigm based on specific assumptions such as collaboration as joint construction of meaning (Koschmann, 1996). Others take a pragmatic perspective and argue that the diversity of stances and perspectives is one of the strengths of the field (Ludvigsen & Mørch, 2012). All the aforementioned publications provide important insight into the emergence of CSCL as a scientific field over the last 30 years. Given the many recent chronological histories of the field, this chapter focuses instead on the conceptual dimensions of how CSCL has developed, specifically the epistemological and methodological stances taken and the computational artifacts studied.
1.1 Conceptualizing CSCL
The field of CSCL is centrally concerned with the interdependency between social interaction and computational artifacts. This forms a triadic structure, where a collaboration between at least two people is mediated by a computational artifact.
Computational artifacts contain information-processing capacities and are often part of a larger digital infrastructure or platform; for example, an online science simulation in which students can manipulate variables. In CSCL studies, collaboration can be mediated by computational artifacts in physical settings and also in synchronous and asynchronous online settings. To say that the collaboration between participants is mediated by the computational artifact means that the overall situation is dependent on what the participants do together with the computational artifact. The description or analysis of this mediation unveils core processes of computer-supported collaborative learning and is a central focus of the field. More specifically, mediation can be specified as the mechanisms that support emerging, productive interactional processes, and researchers can study how specific design features and collaborative patterns create the desired effects of mediation.

CSCL research investigates the forms of guidance, prompts, scripts, and scaffolds that are beneficial for human learning and how to design environments or settings to optimize learning opportunities and enhance learning through different forms of regulation. Regulation is a process by which learners monitor their progress and adjust their future activities; it can be designed through computational artifacts for self-regulation, co-regulation, and shared regulation (for a detailed overview, see Järvelä et al., 2016). In CSCL, a basic question is how to regulate and support human actions through collaboration mediated by computational artifacts as part of larger sociotechnical systems (e.g., Jeong et al., 2014).

Introducing a computational artifact into collaboration has opened new research horizons. For example, a computational artifact has affordances, and specific kinds of activities can be either enabled or disabled by its features (Suthers, 2006); this merits study. The participants’ sensemaking processes are realized through collaboration and can be stored in computational artifacts. Sensemaking is then used as an inscription in an analytic way, implying that participants in interaction create the meanings of their actions. The artifact thus allows researchers to track the sensemaking trajectories of learners. It is then possible to see that even when participants struggle to understand a specific concept, they still make sense of what they do, even if they do not understand the deeper meaning of the concept. This is described as a back-and-forth movement in which students gradually improve their understanding of a concept or a conceptual system.

The artifact can also be viewed as a representation of accumulated knowledge that is used to give instruction to participants. In this way, it can be studied as a resource for both students and teachers to be mobilized according to pedagogical objectives. In particular, an appropriate representation of an argument could help learners be aware of what claims and evidence are involved in their argumentation and discuss how to deal with these components at a meta-cognitive level. In addition, teachers could monitor classroom discussion based on visualizations of student argumentation.

Research on CSCL examining the mediational role of computational artifacts has been carried out in many settings and contexts, especially instructional settings such as schools but also other types of settings, such as workplaces and museums. The history of CSCL has moved from an emphasis on stand-alone applications to artifacts that are parts of classroom settings, sociotechnical systems, and digital infrastructures. In the last 10 years, online learning for massive populations of learners and the analytics of big data have received increased attention in CSCL.
48
S. Ludvigsen et al.
Digital infrastructures and artifacts—together with the methods that have been used throughout the history of CSCL (such as interaction analysis, knowledge tests, instruments for capturing performance, self-regulation, and co-regulation)—provide researchers with opportunities to collect new forms of data. CSCL researchers have also begun to test analytics to understand how collaboration emerges and what can be expected in terms of learning outcomes (Wise, Knight, & Buckingham-Shum, this volume). Models for predicting learning outcomes seem to be a promising route for the future (see the Journal of Learning Analytics published by the Society for Learning Analytics Research (SoLAR)). With the many opportunities provided by the emergence of new forms of data and the larger digital infrastructures surrounding computational artifacts, CSCL researchers can take a step toward the challenge of better integrating the pragmatic stance with the other two epistemological stances that were established in the early years of the field.

Conceptualizing CSCL as the interdependency between collaborating participants and computational artifacts requires the definition of an appropriate unit of analysis to explain and make comprehensible what and how people learn in collaboration (e.g., Enyedy & Stevens, 2014; Ludvigsen & Arnseth, 2017). All CSCL researchers design their studies based on this premise, and they design units of analysis that are specifically aligned with their epistemological assumptions about what is worth knowing and what counts as evidence of such knowing. Three main epistemological stances in CSCL can be identified over the last 30 years: Individualism, Relationism, and Pragmatic/Computational. Each holds assumptions that drive what kinds of details about learners and technologies are included in a study and how the central concepts motivating the work are operationalized.
In this chapter, we unpack epistemological and methodological stances, as well as a selection of technological artifacts as a way to characterize the development of CSCL.
2 History and Development: A Scientometric Analysis

As a backdrop for the epistemological and methodological stances taken in this historical chapter, we first provide a scientometric analysis of how CSCL has evolved as a field during the last 25 years (cf. Lund, Jeong, Grauwin, & Jensen, 2017 for a similar analysis on research in education more generally). SCOPUS is used as an appropriate database to track CSCL topics since it indexes more conference papers than Web of Science, and the CSCL conferences (as well as other related conferences) are important sources of publications in the field. Fig. 1 shows the 6622 CSCL publications that result from a targeted SCOPUS database search.[1] There is a notably higher number of publications in odd-numbered years, when the CSCL conference is held, showing the importance of the conference proceedings as the central publication venue in the field. Starting in 2022, the International Society of the Learning Sciences will hold the CSCL conference conjointly with the International Conference of the Learning Sciences on an annual basis; we can thus expect that CSCL publications will be more constant across years in the future. In addition, we can see a clear jump in the overall number of papers published in 2005 (just before the founding of the International Journal of Computer-Supported Collaborative Learning in 2006). Since then, the annual number of publications in years in which the conference is held has remained steadily above 400; this may be an early indication of a mature field.

Fig. 1 Computer-supported collaborative learning publications in major journals and conference proceedings from 1990 to 2020

Table 1 shows the top 5 sources for the publications, representing the origin of 2949 (44%) of the 6622 publications. These sources are broadly situated at the crossroads of research in education, psychology, and computer science. Unsurprisingly, the CSCL conferences are the top venue, accounting for almost a third of the publications in the corpus.[2]

Table 1 Top sources of CSCL-related publications from 1990 to 2020; % values have been rounded off

Source                                                               # of publications   % of 6622
Computer-Supported Collaborative Learning Conferences                      1901              29
Lecture Notes in Computer Science                                           438               7
International Journal of Computer-Supported Collaborative Learning          324               5
Computers and Education                                                     171               3
Computers in Human Behavior                                                 115               2

[1] The database was queried in May 2020 for publications in which the Title, Abstract, Keywords, or Publication Source contained the terms computer-supported AND collaborative AND learning, as well as any in which the Title, Abstract, or Keywords contain the term cscl AND (collaborative OR cooperation OR education OR learning) or the Publication Source includes the term cscl.

[2] All sources referring to specific CSCL conferences have been gathered together in the first row. This has been done for clarity, as particular conferences are cited in varying ways. For example, two publications can be cited from the CSCL conference in 2017, but one is cited as CSCL conference and the other is cited as 2017 CSCL conference.

The relative importance of specific publication venues
declines sharply after this. While the dedicated IJCSCL journal is the source of 5% of the publications, the book series Lecture Notes in Computer Science actually has published more CSCL-related publications in total. Notably, they publish the proceedings for the Learning and Collaboration Technologies Conference (annually from 2014), affiliated with the Human Computer Interaction International Conference and, from 1995 to 2018, the Collaboration Researchers’ International Working Group Conference on Collaboration and Technology (with a learning theme). These are computer science-based communities that are thematically related to the CSCL conference and illustrate the diversity by which computer science connects to learning. Finally, two other journals found to frequently publish CSCL-related literature are the education-focused journal Computers and Education and the more generalist journal Computers in Human Behavior. The fact that over 50% of these publications (those not shown in the table) are dispersed across other venues can again illustrate diversity in approach, but also a broadening of research interest, since the inception of the flagship journal IJCSCL. The different fields represented by these publication venues have each contributed to the epistemological and methodological stances of CSCL as well as to the assumptions behind using technology to mediate collaborative learning. This will be discussed in depth in the remainder of the chapter.
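The inclusion logic of the SCOPUS query described in footnote 1 can be made explicit as a boolean predicate over the searched fields. The sketch below is our own illustration (the function name and record fields are hypothetical, not part of any SCOPUS API), showing how a bibliographic record would be matched against the two clauses of the query:

```python
def matches_cscl_query(title: str, abstract: str, keywords: list, source: str) -> bool:
    """Approximate the inclusion criteria of the May 2020 SCOPUS search.

    Clause 1: title/abstract/keywords/source together contain all of
              'computer-supported', 'collaborative', 'learning'.
    Clause 2: title/abstract/keywords contain 'cscl' together with one of
              'collaborative', 'cooperation', 'education', 'learning'.
    Clause 3: the publication source itself contains 'cscl'.
    """
    tak = " ".join([title, abstract, *keywords]).lower()  # Title + Abstract + Keywords
    taks = tak + " " + source.lower()                     # ... + Publication Source

    clause1 = all(t in taks for t in ("computer-supported", "collaborative", "learning"))
    clause2 = "cscl" in tak and any(
        t in tak for t in ("collaborative", "cooperation", "education", "learning"))
    clause3 = "cscl" in source.lower()
    return clause1 or clause2 or clause3
```

Plain substring matching is only a rough stand-in for SCOPUS’s tokenized field search; the sketch is meant to make the structure of the query explicit, not to reproduce the 6622-record corpus.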
3 State of the Art

3.1 Epistemological Stances in CSCL
We define an epistemological stance as a focus on what is worth knowing, given the researchers’ assumptions, combined with what counts as evidence for learning and knowing. Three main epistemological stances in CSCL can be identified over the last 30 years; two are theoretical (individualism and relationism) while the third is pragmatic, focusing on testing different collaborative technologies. In each epistemic stance, researchers operationalize components, such as collaborating participants, a unit of analysis, and computational artifacts, in different ways. The diversity in epistemological stances (and their methodological operationalization) is viewed as a weakness by some CSCL researchers. On the contrary, we argue that a single unified approach is impossible in interdisciplinary research on social, cognitive, emotional, and digital phenomena and would lead to an inadequate reduction of the phenomena in studies on CSCL. We prefer to formulate the agenda for CSCL as productive tensions, conceptualized through multivocality across stances (Suthers, Lund, Rosé, Teplovs, & Law, 2013). Given these productive tensions, it is important to develop analytic approaches for aggregating results across stances that can inform and deepen our studies and inform policy; this is especially important for micro-level phenomena in a shared corpus, where differing units of analysis hinder direct comparisons of empirical results.
3.1.1 Individualism
The first epistemological stance is connected to the cognitive revolution of the 1950s and builds on methodological individualism (Sawyer, 2002). It is typical of approaches taken in experimental psychology, in which dependent and independent variables are defined and associated with individual learners’ performances. Here, learning is defined as what can be observed, tested, and measured in long-term memory. The evidence base of individualism is the measurement of individual knowledge gains. The computational artifact in such an experimental design is fashioned to create specific effects as part of the collaborative efforts. In the CSCL literature, this is often referred to as the factoring hypothesis (Greeno & Engeström, 2014; Ludvigsen & Arnseth, 2017). This epistemic stance is often based on assumptions that come from information processing, with roots in the cognitive revolution and advancements in cognitive and educational psychology over the last 50 years (Clarkson & Simon, 1960; Greeno & Engeström, 2014). Such a cognitive stance can be seen in a number of different research traditions in CSCL. It is most often identified through research questions, review sections, relevant contributions, methods and techniques, and the operationalization of collaboration. For example, the unit of analysis and description of collaboration are commonly operationalized through the analysis and measurement of individuals’ knowledge (Fischer, Kollar, Stegmann, & Wecker, 2013; Rummel, Mullins, & Spada, 2012). Studies of scripts in CSCL can be seen as a classical type of approach in which the interaction between human cognition and computational artifacts is influenced in ways that are argued to facilitate learning and the development of collaboration capacities from an individualist epistemological stance.
A script scaffolds the interaction with a view to meeting learning objectives; many scripts have been developed based on conceptual and empirical studies of CSCL (e.g., Dillenbourg & Jermann, 2002; Fischer et al., 2013). Fischer et al. (2013) proposed a script theory of guidance in CSCL that builds on schema theory and dynamic memory theory (Schank, 1999). The main argument is that it is necessary to learn how to design an environment that optimizes the connections between internal and external collaboration scripts. Internal collaboration scripts are understood as a learner’s dynamic cognitive structure, configured from knowledge components, about how collaboration should unfold; these scripts develop over a period of time. Optimal learning takes place when the external guidance provided to collaborating learners is aligned with the learner’s zone of proximal development (Vygotsky, 1978, 1986) given the internal script configuration (Fischer et al., 2013; Vogel, Weinberger, & Fischer, 2020). While there is much CSCL work that clearly takes an epistemological stance of individualism, there are also important, influential approaches in CSCL which are less clear-cut. One such example is the Knowledge Building stance (Scardamalia & Bereiter, this volume), which some CSCL researchers use within the individualism stance (Scardamalia & Bereiter, 2014, this volume), focusing on individual
S. Ludvigsen et al.
learning outcomes. However, other researchers use knowledge building from a different perspective that renders visible the collective aspects necessary for collaborative learning. This approach is more closely related to the second epistemological stance, relationism.
3.1.2 Relationism
The second epistemological stance is often connected to the sociocultural perspective, in which the computational artifact mediates the actions that are played out in specific settings (Ludvigsen & Arnseth, 2017; Ritzer & Gindoff, 1992). Dialogue is one of the central concepts. This epistemological stance is also related to microsociological orientations and ethnographic studies. Researchers taking a relationism stance are concerned with observing how collaborations emerge and what kinds of resources are used rather than conducting analyses based on predefined categories, as those in the individualism stance usually do. One of the central aims of such qualitative studies is to find mechanisms for collaboration. The inclusion of computational artifacts in the unit of analysis makes the analysis more complex because the artifacts come with processing capacities, inscriptions of knowledge, and different forms of regulation. CSCL researchers change the features of artifacts and modify the ways students work together with the goal of enhancing collaboration. In such studies, by zooming in on social interaction, coupled with the transformation of computational artifacts, one can identify unpredictable, emergent properties. This means that describing what goes on and what is at stake for participants is an important part of this epistemological stance. In other words, researchers examine how participants in a collaborative setting involve themselves and what they think is important in their involvement. In design-based research (DBR) in schools, new features of artifacts are tested and new social features create new conditions for learning. The collected data from DBR studies should be analyzed not only from the normative design point of view but also from the perspective of emerging institutional practices in which social norms and values are at stake (Enyedy, Danish, & DeLiema, 2015; Furberg, 2016).
In relationism, the assumption is that learning and knowing move from the social plane to the individual plane, where they are internalized and appropriated, and then move back to the social plane when activated. Although there is also an interplay between the social and the individual plane in the individualism stance, for example through scripting and scaffolding, the individualism stance’s evidence base is in the measurement of individual knowledge gains. In contrast, in the relationism stance, the unit of analysis and levels of description give priority to understanding collaborative efforts and how these efforts influence the participation of individuals and the mediation of the computational artifacts rather than how individuals develop their own skills and knowledge. Thus, while learning processes and outcomes can be reported in both epistemological stances, the second stance tends to focus more on processes that are recognized as facilitation of learning rather than quantified learning outcomes.
3.1.3 The Pragmatic and Computational Stance
The third epistemological stance has emerged more recently and differs from the first two in that it is more pragmatic and gives precedence to CSCL tools and environments. Many designed computational artifacts are based on pragmatic solutions or design principles that have emerged out of practical work with technologies. In these cases, although their epistemological positions often remain implicit, some researchers claim that the computational approach can be used to examine the emergence of ideas in collaboration and collective intelligence. We present an illustration related to learning analytics in recent CSCL developments in which contributions have taken a pragmatic stance. In online collaboration, many researchers have attempted to develop ways to visualize complex interactions among vast numbers of learners in MOOCs. Hoppe and others (Hoppe, 2017; Oshima & Hoppe, 2020) identified three categories of pragmatic or computational approaches for doing so. The first category includes analytics of network structure, such as social networks of learners and other networks of learners and artifacts. The second category includes methods of sequential analysis to examine how learners engage in their social interactions. The third category includes content analytics, such as text mining and other artifact analyses. Although most researchers in the field do not make their epistemic stances explicit, they work to improve the general sense of “collaboration”: they document how much interaction occurred and among which learners, they strive to show how interaction patterns develop through learning phases or courses, and they advise on which educational materials could support learners’ engagement in productive collaboration. We take the normative view that in future work, the pragmatic and computational stance should be better integrated with the other two epistemic stances. The field has already evolved in this direction.
First, the computational approach has been applied to knowledge-building studies (e.g., Oshima, Oshima, & Matsuzawa, 2012) through the use of social network analysis of the vocabularies used by learners in their collaborations. Knowledge Building Discourse Explorer (KBDeX) visualizes how vocabulary network structures develop through learners’ engagement in their collaborative discourse in colocated contexts (Oshima, Oshima, & Fujita, 2018) and online contexts (Lee & Tan, 2017). Second, Shaffer and colleagues have used a computational approach called Epistemic Network Analysis (ENA) (Shaffer, 2017; Shaffer, Collier, & Ruis, 2016), based on epistemic frame theory, to disentangle the epistemic practices that learners engage in through their collaborative discourse. With this technique, researchers can directly compare epistemic practices between different groups and identify how epistemic practices develop over time. These two examples of computational approaches (KBDeX and ENA) are based on specific epistemic stances, which allows researchers to conduct their DBR through the plan–enactment–analysis–refinement cycle.
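The core move behind vocabulary-network tools such as KBDeX, linking terms that co-occur within discourse units and reading off network measures, can be sketched in a few lines. The following is a minimal illustrative sketch using only the Python standard library; the discourse turns and vocabulary terms are invented, and KBDeX itself uses its own data model and richer metrics.

```python
from collections import defaultdict
from itertools import combinations

# Toy discourse: each entry is one learner turn, already reduced to the
# target vocabulary terms it contains (invented data for illustration).
turns = [
    ["force", "gravity", "mass"],
    ["gravity", "acceleration"],
    ["mass", "acceleration", "force"],
    ["energy", "force"],
]

# Build an undirected co-occurrence network: two terms are linked whenever
# they appear together in the same turn. Edge keys are sorted term pairs;
# values count how many turns the pair co-occurred in.
edges = defaultdict(int)
for turn in turns:
    for a, b in combinations(sorted(set(turn)), 2):
        edges[(a, b)] += 1

# Degree centrality (unweighted): how many distinct terms each term links to.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

for term, deg in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(term, deg)
```

A tool like KBDeX additionally recomputes such measures turn by turn, which is what makes the temporal development of the collaborative discourse visible rather than only its final network structure.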
3.2 Methodological Stances in CSCL
Methodologies and analytic techniques in CSCL—as in similar domains of investigation—are constituted by a number of dimensions: theoretical assumptions, purpose of analysis, units of action, interaction, and analysis, representations of data, and analytic manipulations of data (for an overview of methodological approaches to CSCL, see Hmelo-Silver & Jeong, this volume; Jeong et al., 2014; Lund & Suthers, 2013). The evolution of methodologies and analytic techniques in the field of CSCL can, in part, be tracked through their relationship to authors’ epistemological stances and the stances’ key concepts. Techniques founded on individualism, such as intervention-based experimental protocols that target collaboration, coupled with pre- and posttests of individual knowledge, represent one of the dominant positions in CSCL. This tradition was quickly followed by dialogical approaches, such as interaction analysis. Multivocal (Suthers et al., 2013) and mixed-methods (Johnson & Onwuegbuzie, 2004) approaches are reactions to these different ways of doing research; they suggest ways in which the convergence of qualitative and quantitative methods may be possible, and researchers in the multivocality approach also claim that the epistemological tensions underlying different analytical approaches can be productive. In both cases, the goal is to move the field toward a more comprehensive understanding of conceptual constructs of interest through the use of multiple methods.
3.2.1 Analyzing Interactions Based on Predefined Dimensions and Categories
Predefined dimensions, such as codes, are used for testing hypotheses or examining models of collaborative activity or specific visions of disciplinary teaching and learning in many CSCL studies based on individualism. The dimensions represented through codes are theoretically informed. Examples can be taken from one of the most influential perspectives in CSCL—the knowledge-building stance (Scardamalia & Bereiter, 2014). Knowledge building expands on a set of assumptions from research (especially on writing and the philosophy of science) and many empirical studies over the last 30 years (Chen, Scardamalia, & Bereiter, 2015). The core idea is to create conditions for inquiry learning that contribute to the collective and individual advancement of knowledge in a community such as a classroom. Students perform inquiries in which they attempt to contribute to their community’s knowledge advancement, consequently changing their minds about the problem they are working on. Through these inquiries, the class can build shared knowledge. Work with authoritative information is an important feature of knowledge-building designs, which create transparency for idea generation and advancement. Students are encouraged to develop a collective explanation in order to understand complex phenomena in different knowledge domains. This means they delve deeply into the problems they are working on and identify and agree upon scientific explanations.
Such agreement develops over time and is based on arguments (Kimmerle, Fischer, & Cress, 2020). Agreement is negotiated and based on the knowledge-building work that the students have engaged in. Researchers who use this approach analyze student knowledge-building activities by developing a number of categories/codes to be used in empirical studies. Yang et al. (2016), for instance, created codes for categorizing students’ inquiries by establishing three sets of codes: questions, ideas, and community. Students’ questions were categorized into three types (fact-seeking, explanation-seeking, and metacognitive), which form the basis of students’ work with authoritative sources. Ideas were likewise identified as simple claims, more elaborate explanations of connections, or metacognitive statements. Finally, community was divided into actions and activities. Taking the step of creating a summary and conceptualizing the connection between questions, ideas, and community action involves a social position that depends on which positions students can take and the forms of agency that actually emerge. These are top-down categories based on the principles of knowledge building. The coding categories move from rather simple to more advanced cognitive operations and from individual actions to collective processing. The epistemic orientation is related to the types of questions and explanations. The most advanced reasoning types include the capacity to frame questions and to expand and refine explanations. Advanced cognitive operations and social dynamics are necessary to frame an ill-structured problem or set of tasks. Collaborative efforts involving social and cognitive functions are highly dependent on regulation at different levels. Individual students must develop regulation at the individual level in order to collaborate. Co-regulation is also needed for advanced forms of collective effort to occur.
3.2.2 Analyzing Interaction as Identification of Emergence in Collaboration
Interaction analysis is often preferred in CSCL studies based on relationism, sometimes in combination with pre- and posttests, so elements of different methodological stances are sometimes mixed. Interaction analysis originated in an influential article by Jordan and Henderson (1995) published in the Journal of the Learning Sciences and has gradually become more sophisticated since (Furberg, Kluge, & Ludvigsen, 2013; Ludvigsen & Arnseth, 2017). In this type of analysis, researchers analyze how interaction is mediated by artifacts in an institutional setting, such as a school, museum, or workplace. Researchers often combine detailed examination of interaction from video recordings with an ethnographic approach, in which the role of the larger culture in shaping the collaboration is explored. The depth of the ethnography varies across studies; however, the in-depth analysis of the interaction is usually given priority. Interaction analysis is not a unified analytic endeavor because the cognitive–interactive unit of study varies (Lund & Suthers, 2013). Many studies define the group as the unit of analysis (Stahl, 2015). This means that group properties emerge
and become the unit that learns. The individuals involved are part of the emerging group activities, but they are not analytically separated. In other versions of interaction analysis, researchers investigate how linguistic cues are part of students’ work with content (Furberg et al., 2013). The combination of language use and emerging knowing becomes the analytic foci. One could argue that what varies in different versions of interaction analysis are the levels of description (the kinds of details that are treated as most interesting). The interaction analysis used in CSCL does not take settings for granted. Rather, it analyzes data with a bottom-up approach through participants’ orientations to the interaction. This methodological stance is inspired by the ethnosciences (e.g., ethnomethodology, conversation analysis, studies of everyday practices, and qualitative studies in the sociocultural perspective). The assumption is that researchers must understand what is at stake for the agents in an interaction in order to understand what and how they learn; this is also known as the emic perspective (Pike, 1967) or the insider’s view, as opposed to the normative outsider’s view.
3.2.3 Design-Based Research in CSCL
It is reasonable to argue that DBR is closely connected to CSCL. Historically speaking, CSCL has developed in parallel and in conjunction with DBR. DBR was conceptualized in two influential papers published in the Journal of the Learning Sciences: one by Brown (1992) and, later, one by Collins, Joseph, and Bielaczyc (2004). Both papers argue that we must advance our understanding of how people learn in natural contexts. In terms of the learning sciences and research design, the key issues were the forms of intervention that should be tested and how the results should be interpreted and understood. Interventions in DBR in CSCL are often concerned with the design of an environment, typically including the organization of STEM content (Stahl, 2006; White, 2018; White & Pea, 2011) and specific digital tools in a web-based environment. The more domain-oriented interventions can be combined with specific regulation prompts or scripts. The implication is that many DBR projects in CSCL consist of multiple interventions or combinations of interventions to test how social and cognitive functions can be enhanced. Similar to analysis with predefined categories, one treats DBR interventions as a set of variables that are evaluated to determine whether learning processes become more advanced and whether learning outcomes improve in the treatment group. However, if one sees these as practices that emerge from a set of formative interventions, the main concerns shift to the enhancement of specific social and cognitive functions that become part of deeper epistemic orientations and productive practices. In other words, testing variables offers insights about how individuals develop, while formative interventions in DBR focus mainly on how sociocultural resources are activated and used in CSCL processes to create more productive collaboration.
3.2.4 Examining Mass Collaboration in CSCL
Until recently, CSCL was often studied in the context of small groups. The technological shift that made mass collaboration possible has changed the way we think about collaboration and shown the necessity of new ways to study it. Indeed, large-scale interactions must be viewed as a specific type of phenomenon that requires a commensurate methodological stance. One of the first systems in which such collaboration was studied was TAPPED IN, a community forum for teacher professional development (Schlager & Schank, 1997). The goal of the study was to facilitate teachers’ transitions to reform-based practices through the development and support of a virtual community. Cress and Kimmerle (2008) more formally established this line of work on the conceptualization of collaboration in large groups and have continued it for more than 10 years. They use what they call the cognitive-systemic stance, which builds on cognitive psychology, focusing on investigations of conceptual change, and on a general systems theory view. Their argument is that both individual cognitive systems and, more generally, individuals’ interactions should be connected with the larger environment, which is described as a social system. Moreover, social systems themselves develop features, capacities, and their own complexities. The latter is a key issue, since the social system builds both on the cognitive complexity that emerges within it and on its own complexity as an autopoietic system (for a full presentation of this influential stance, see Cress & Kimmerle, 2017). These researchers developed a framework—called the A3C framework—to understand and explain large-scale interactions (Jeong, Cress, Moskaliuk, & Kimmerle, 2017). This framework takes joint interactions as the foundation for successful collaboration. The concepts used to describe social interaction are attendance, coordination, cooperation, and collaboration.
These four concepts make it possible to conduct a differentiated analysis of the degrees of collaboration—from individual to collective responsibility.
3.3 Computational Artifacts in CSCL
Computational artifacts to support collaboration in a variety of contexts have evolved as new technological developments have become available. Newly developed computational artifacts are expected to expand the field of CSCL and to enrich the epistemological and methodological stances discussed in this chapter. We briefly introduce three recent CSCL artifacts that (1) allow for innovative recording of previously unavailable aspects of collaboration, (2) mobilize technology to render possible new ways of viewing phenomena to be learned, and (3) use alternative embodiment to explore issues of identity and to scaffold the modeling, simulation, and discussion of parts of a system.
3.3.1 Interactive Surfaces
Tabletop surfaces that record individual and group work allow for closer study of face-to-face patterns of collaboration. These patterns can involve pinpointing specific activities important for collaboration, such as coordination or the target of interaction (e.g., Davis, Horn, Block, et al., 2015), or they can suggest techniques, made possible by the technology, that help enable group work. Niu, McCrickard, and Nguyen (2016) described three such techniques that enhance a card-based activity: semantic zooming shows more information at larger sizes, dynamic grouping proposes card combinations that facilitate idea generation and synthesis, and info glow highlights artifacts that other participants have recently accessed or changed, thus providing a way for users to keep track of engagement and individual and group focus.
3.3.2 Immersive Environments
Virtual reality gives us the ability to experiment with being in imaginary places and with changing aspects of our bodies, increasing our opportunities for learning. For example, students may view a functioning heart and, thanks to the ability to rotate it, obtain a better understanding of anatomy and physiology than with ordinary dissections (Silén, Wirell, Kvist, Nylander, & Smedby, 2008). In another example, the experience of being in a body that belongs to a different race, age, or gender than one’s own changes implicit social biases, showing how the multisensory experience of the body influences higher-level social attitudes (Maister, Slater, Sanchez-Vives, & Tsakiris, 2015).
3.3.3 Technology-Enhanced Embodied Play
The role of the body in learning and understanding has long been neglected (Stolz, 2015), but scholars are realizing its potential for learning, especially when motion-capture technology is used as an aid in the analysis of body movements. Danish, Enyedy, Saleh, Lee, and Andrade (2015) have proposed that by taking on the roles of bees in a beehive, children can plan their role-play (i.e., model parts of a system) before acting out different phenomena (i.e., simulate aspects of a system). It is easy to see how class discussion of such activities can scaffold an understanding of the role of the individual(s) within the group.
4 The Future

Since 1989, the CSCL field has gone through major developments. With regard to computational artifacts, we see changes both in micro designs and in larger digital infrastructures. Digital transformation in general has changed the conditions for human learning over the last 30 years. However, it is interesting to note that the epistemological stances emphasized in this chapter have remained stable since 1989, even though the methodological stances and the technology have evolved. Researchers take one epistemological stance as the primary stance in their research designs. The development within the stances is toward increased sophistication in the analysis or toward more specialization in the use of analytic techniques. The epistemological stances give a very good overview of both incremental changes and what is stable in the CSCL field. Contributions based on the two theoretical epistemological stances, as well as the more computational and pragmatic stance, have started to look into the use of analytics for predicting learning outcomes. It is not surprising that CSCL has been stabilizing in recent years. However, looking closer at how CSCL is studied within specialized domains of computer science, while also examining possible bridges to how collaboration is studied in education and the human and social sciences more broadly, is a path forward. An important guiding question on this journey is what we need to renew in order to capture the core CSCL processes and understand what affects them.

Acknowledgments The authors would like to warmly thank Sebastian Grauwin for his work on the scientometrics in this chapter (cf. http://sebastian-grauwin.com/XYZ_EDUCMAP/). The authors are also grateful to the ASLAN project (ANR-10-LABX-0081) of Université de Lyon for its financial support within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) of the French government, operated by the National Research Agency (ANR).
References

Bodemer, D., Janssen, J., & Schnaubert, L. (2018). In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 351–358). New York, NY: Routledge. Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences, 2(2), 147–178. Chen, B., Scardamalia, M., & Bereiter, C. (2015). Advancing knowledge-building discourse through judgments of promising ideas. International Journal of Computer-Supported Collaborative Learning, 10(4), 345–366. Clarkson, G. P. E., & Simon, H. A. (1960). Simulation of individual and group behavior. American Economic Review, 50, 920–930. Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15–42.
Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 3(2), 105–122. Cress, U., & Kimmerle, J. (2017). The interrelations of individual learning and collective knowledge construction: A cognitive-systemic framework. Cham: Springer. Danish, J., Enyedy, N., Saleh, A., Lee, C., & Andrade, A. (2015). Science through technology enhanced play: Designing to support reflection through play and embodiment. In O. Lindwall, P. Häkkinen, T. Koschmann, P. Tchounikine, & S. Ludvigsen (Eds.), Exploring the material conditions of learning: The computer supported collaborative learning (CSCL) conference (Vol. 1, pp. 332–339). Gothenburg: The International Society of the Learning Sciences. Davis, P., Horn, M., Block, F., et al. (2015). Whoa! We are going deep in the trees!: Patterns of collaboration around an interactive information visualization exhibit. International Journal of Computer-Supported Collaborative Learning, 10, 53–76. Dillenbourg, P., & Jermann, P. (2002). Designing integrative scripts. In F. Fischer, I. Kollar, H. Mandl, & J. M. Haake (Eds.), Scripting computer-supported collaborative learning: Cognitive, computational, and educational perspectives (pp. 275–301). New York: Springer. Enyedy, N., Danish, J. A., & DeLiema, D. (2015). Constructing liminal blends in a collaborative augmented-reality learning environment. International Journal of Computer-Supported Collaborative Learning, 10(1), 7–34. Enyedy, N., & Stevens, R. (2014). Analyzing collaboration. In K. R. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 191–212). New York, NY: Cambridge University Press. Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. Furberg, A. (2016).
Teacher support in computer-supported lab work: Bridging the gap between lab experiments and students’ conceptual understanding. International Journal of Computer-Supported Collaborative Learning, 11, 89–113. Furberg, A. L., Kluge, A., & Ludvigsen, S. R. (2013). Student sensemaking with science diagrams in a computer-based setting. International Journal of Computer-Supported Collaborative Learning, 8(1), 41–64. https://doi.org/10.1007/s11412-013-9165-4. Greeno, J., & Engeström, Y. (2014). Learning in activity. In K. R. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 128–150). New York, NY: Cambridge University Press. Hmelo-Silver, C., & Jeong, H. (this volume). An overview of CSCL methods. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Hoppe, U. (2017). Computational methods for the analysis of learning and knowledge building communities. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (1st ed., pp. 23–33). Alberta: Society for Learning Analytics Research. Järvelä, S., Kirschner, P. A., Hadwin, A., Järvenoja, H., Malmberg, J., Miller, M., & Laru, J. (2016). Socially shared regulation of learning in CSCL: Understanding and prompting individual- and group-level shared regulatory activities. International Journal of Computer-Supported Collaborative Learning, 11(3), 263–280. Jeong, H., Cress, U., Moskaliuk, J., & Kimmerle, J. (2017). Joint interactions in large online knowledge communities: The A3C framework. International Journal of Computer-Supported Collaborative Learning, 12, 113–151. Jeong, H., & Hartley, K. (2018). Theoretical and methodological frameworks for CSCL. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 330–339). New York: Routledge. Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014).
An examination of CSCL methodological practices and the influence of theoretical frameworks, 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334. Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. Journal of the Learning Sciences, 4(1), 39–103.
Kimmerle, J., Fischer, F., & Cress, U. (2020). Argumentation and knowledge construction. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Kollar, I., Wecker, C., & Fischer, F. (2018). Scaffolding and scripting (computer-supported) collaborative learning. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 340–350). New York: Routledge. Koschmann, T. (Ed.). (1996). CSCL: Theory and practice of an emerging paradigm. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Lee, A. V. Y., & Tan, S. C. (2017). Promising ideas for collective advancement of communal knowledge using temporal analytics and cluster analysis. Journal of Learning Analytics, 4(3), 76–101. Ludvigsen, S., & Arnseth, H. C. (2017). Computer-supported collaborative learning. In E. Duval, M. Sharples, & R. Sutherland (Eds.), Technology enhanced learning: Research themes (pp. 47–58). Cham: Springer. Ludvigsen, S., & Mørch, A. (2012). Computer-supported collaborative learning: Basic concepts, multiple perspectives, and emerging trends. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (Vol. 5, 3rd ed., pp. 290–296). Oxford: Elsevier. Lund, K., Jeong, H., Grauwin, S., & Jensen, P. (2017). Une carte scientométrique de la recherche en éducation vue par la base de données internationales Scopus [A scientometric map of research on education according to the SCOPUS international database]. Les Sciences de l'éducation pour l'ère nouvelle: Revue internationale, CERSE, Université de Caen, 2017, 50(1), 67–84. Lund, K., & Suthers, D. D. (2013). Methodological dimensions. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 21–35). New York: Springer. Maister, L., Slater, M., Sanchez-Vives, M. V., & Tsakiris, M. (2015).
Changing bodies changes minds: Owning another body affects social cognition. Trends in Cognitive Sciences, 19(1), 6–12. Matuk, C., DesPortes, K., & Hoadley, C. (this volume). Conceptualizing context in CSCL: Cognitive and sociocultural perspectives. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Niu, S., McCrickard, D. S., & Nguyen, S. (2016). Learning with interactive tabletop displays. IEEE Frontiers in Education Conference, 1–9. https://doi.org/10.1109/FIE.2016.7757601. Oshima, J., & Hoppe, U. (2020). Finding meaning in log-file data. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Oshima, J., Oshima, R., & Fujita, W. (2018). A mixed-methods approach to analyze shared epistemic agency in jigsaw instruction at multiple scales of temporality. Journal of Learning Analytics, 5(1), 10–24. Oshima, J., Oshima, R., & Matsuzawa, Y. (2012). Knowledge building discourse explorer: A social network analysis application for knowledge building discourse. Educational Technology Research & Development, 60, 903–921. Pike, K. (1967). Language in relation to a unified theory of the structure of human behavior. The Hague: Mouton. Ritzer, G., & Gindoff, P. (1992). Methodological relationism: Lessons for and from social psychology. Social Psychology Quarterly, 55(2), 128–140. Retrieved from http://www.jstor.org/stable/2786942 Rummel, N., Mullins, D., & Spada, H. (2012). Scripted collaborative learning with the cognitive tutor algebra. International Journal of Computer-Supported Collaborative Learning, 7(2), 307–339. Sawyer, R. K. (2002). Unresolved tensions in sociocultural theory: Analogies with contemporary sociological debates. Culture & Psychology, 8, 283–305. Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In K. R.
Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 397–417). New York: Cambridge University Press.
S. Ludvigsen et al.
Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Schank, R. C. (1999). Dynamic memory revisited. Cambridge: Cambridge University Press.
Schlager, M., & Schank, P. (1997). TAPPED IN: A new on-line teacher community concept for the next generation of internet technology. In R. Hall, N. Miyake, & N. Enyedy (Eds.), Proceedings of CSCL '97, the second international conference on computer support for collaborative learning (pp. 234–243). Hillsdale, NJ: Lawrence Erlbaum Associates.
Schwarz, B. B. (2018). Computer-supported argumentation and learning. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 318–329). New York: Routledge.
Shaffer, D. W. (2017). Quantitative ethnography. Madison, WI: Cathcart Press.
Shaffer, D. W., Collier, W., & Ruis, A. R. (2016). A tutorial on epistemic network analysis: Analyzing the structure of connections in cognitive, social, and interaction data. Journal of Learning Analytics, 3(3), 9–45.
Silén, C., Wirell, S., Kvist, J., Nylander, E., & Smedby, O. (2008). Advanced 3D visualization in student-centred medical education. Medical Teaching, 30(5), e115–e124.
Slotta, J., Quintana, R., & Moher, T. (2018). Collective inquiry in community of learners. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 308–317). New York: Routledge.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press.
Stahl, G. (2015). A decade of CSCL. International Journal of Computer-Supported Collaborative Learning, 10(4), 337–344.
Stahl, G., & Hakkarainen, K. (this volume). Theories of CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Stahl, G., Koschmann, T., & Suthers, D. (2014). Computer-supported collaborative learning. In K. R. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 479–500). Cambridge: Cambridge University Press.
Stolz, S. A. (2015). Embodied learning. Educational Philosophy and Theory, 47(5), 474–487.
Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1, 315–337.
Suthers, D. D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (Eds.). (2013). Productive multivocality in the analysis of group interactions. New York: Springer.
Vogel, F., Weinberger, A., & Fischer, F. (2020). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Vygotsky, L. S. (1986). Thought and language. Cambridge, MA: Harvard University Press.
White, T. (2018). Connecting levels of activity with classroom network technology. International Journal of Computer-Supported Collaborative Learning, 13(1), 93–122.
White, T., & Pea, R. (2011). Distributed by design: On the promises and pitfalls of collaborative learning with multiple representations. Journal of the Learning Sciences, 20(3), 1–59.
Wise, A. F., Knight, S., & Buckingham-Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Yang, Y., van Aalst, J., Chan, C. K. K., & Tien, W. (2016). Reflective assessment in knowledge building by students with low academic achievement. International Journal of Computer-Supported Collaborative Learning, 11, 281–311.
A Conceptual Stance on CSCL History
Further Readings

Furberg, A., Kluge, A., & Ludvigsen, S. (2013). Student sensemaking with science diagrams in a computer-based setting. International Journal of Computer-Supported Collaborative Learning, 8(1), 41–64. In this paper, the authors report on a study of students' conceptual sensemaking with science diagrams within a computer-based learning environment aimed at supporting collaborative learning. Through microanalysis of students' interactions in a project about energy and heat transfer, they demonstrate how representations become productive social and cognitive resources in the students' conceptual sensemaking. This paper is typical of studies with a sociocultural stance in CSCL.

Hall, K., Vogel, A., Huang, G., Serrano, K., Rice, E., Tsakraklides, S., & Fiore, S. (2018). The science of team science: A review of the empirical evidence and research gaps on collaboration in science. American Psychologist, 73(4), 532–548. This review is relevant to CSCL in that it provides empirical evidence on how interdisciplinary scientific teams can improve the science that they produce together. The review summarizes the empirical findings from the Science of Team Science literature, which center on five key themes: the value of team science, team composition and its influence on team science performance, the formation of science teams, team processes central to effective team functioning, and institutional influences on team science. Cross-cutting issues are discussed in the context of new research opportunities to further advance the Science of Team Science evidence base and better inform policies and practices for effective team science.

Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks, 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334. This paper provides an overview of CSCL methodological practices. CSCL is an interdisciplinary research field where several theoretical and methodological traditions converge. In the paper, CSCL research methodology is examined in terms of (1) research designs, (2) research settings, (3) data sources, and (4) analysis methods. In addition, the authors examine how these dimensions relate to the theoretical frameworks of the research. Methodological challenges of the field are discussed along with suggestions to move the field toward meaningful synthesis.

Lund, K., Rosé, C. P., Suthers, D. D., & Baker, M. (2013). Epistemological encounters in multivocal settings. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions. C. Hoadley & N. Miyake (Series Eds.), Computer-Supported Collaborative Learning Series (Vol. 15, pp. 659–682). New York: Springer. In this chapter, the authors argue for maintaining the diversity of epistemological approaches while either achieving complementarity within explanatory frameworks at different levels or maintaining productive tension. They highlight four ways to do this: (1) leverage the project's boundary object in order to broaden epistemological views; (2) use alternative operationalizations to bring out different aspects of a complex analytical construct; (3) enrich a method's key analytic constructs with new meanings in an isolated manner; and (4) recognize that incommensurability radicalizes researcher positions but also makes researchers more aware of their constraints.

Shaffer, D. W. (2017). Quantitative ethnography. Madison, WI: Cathcart Press. In this book, Shaffer proposes a new discipline called quantitative ethnography. Recent developments in information technologies and computer science make it possible to treat big data in the CSCL field. Along with a robust epistemic stance, Shaffer explores a new direction for analyzing collaboration computationally.
An Overview of CSCL Methods

Cindy E. Hmelo-Silver and Heisawn Jeong
Abstract CSCL as a field incorporates diverse methodological practices. This chapter provides an overview of research methods and methodological practices in CSCL. Research methods are designed to answer the questions central to CSCL, whereas methodological practices refer to how those methods are actually used. The chapter considers the diversity of methodological practices used to address the questions that CSCL researchers ask. In particular, research designs, settings, data sources, and analysis methods are reviewed using a literature corpus of CSCL research in STEM fields from 2005 to 2014. The results of this review show the range of practices used in CSCL research and the common practice of mixing methods. Finally, future trends related to visualization, automated analysis, and multimodal data are considered. These trends address the complexity and diversity of CSCL environments as well as the challenge of analyzing vast amounts of data in seeking to support and understand collaborative sensemaking.

Keywords Research methods · Research design · Methodological practices
C. E. Hmelo-Silver (*)
Center for Research on Learning and Technology, Indiana University, Bloomington, IN, USA
e-mail: [email protected]

H. Jeong
Department of Psychology, Hallym University, Chuncheon, South Korea
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_4

1 Definitions and Scope: CSCL Methodological Practices

CSCL is an active and growing field that embraces diverse methodological practices. In this chapter, we provide an overview of research methods and methodological practices in CSCL. Research methods are the specific techniques designed to answer questions central to CSCL; methodological practices are how those techniques are applied. These questions revolve around how technology supports and mediates collaborative learning and sensemaking (Jeong & Hmelo-Silver, 2016; Suthers, 2006). We consider research methods and practices to include the types of questions being asked, research designs, the settings in which the research is conducted, the sources of data that are used, and the approaches taken to analyze the data. Research methods are, at their essence, ways to produce knowledge that a professional community considers legitimate (Kelly, 2006). They are the means by which we gather and analyze data and produce evidence that helps us make inferences addressing research questions and problems (Shavelson & Towne, 2002). We consider qualitative, quantitative, and mixed-method approaches. That said, the choice of method (and the resulting methodological practice) is often driven by the overriding theoretical commitments of the authors (see Stahl & Hakkarainen, this volume) as well as by the goals of the research. This chapter describes the results of an examination of an expanded corpus of CSCL research from 2005 to 2014 comprising 693 systematically collected journal articles (McKeown et al., 2017), updating the authors' earlier review of 400 articles from 2005 to 2009 (Jeong, Hmelo-Silver, & Yu, 2014), which we refer to as "the earlier results." The earlier results included a review of both STEM and non-STEM domains, but the updated corpus focuses on STEM domains and education. In the earlier results, these disciplines accounted for 78% of the papers reviewed (Jeong & Hmelo-Silver, 2012), suggesting that narrowing the focus to STEM and education disciplines will not have a substantial effect on the review presented here. To update this, we include some newer examples, selecting empirical articles from recent ijCSCL volumes to use as running examples.
These examples were selected because they concern CSCL environments that used mixed methods or multiple measures as well as new analytic tools. We consider overall historical trends and the challenges of CSCL research in formal educational settings, with a particular focus on STEM and education disciplines. In considering methodological practices in CSCL, it is important to attend to the nature of the questions being asked, the research designs used, the settings in which research is conducted, the types of data collected, and the analytical tools used. Research designs indicate the plan for addressing the research question; they differ depending on whether the study goals are descriptive or explanatory. Research settings refer to the contexts in which the research is conducted, generally either classrooms or controlled laboratory settings. Data are the sources and materials that are analyzed in the research. Analysis methods are the processes applied to those data sources and include both qualitative and quantitative approaches. These are defined further in their respective sections. Within all these aspects there is considerable diversity, a theme to which this chapter returns. CSCL research methods are distinctive in that they address written and spoken language as well as nonverbal aspects of interaction in examining collaborative processes and outcomes.
2 History and Development

Research methods have been an important topic of discussion in the CSCL community. This is not surprising given the complexity of CSCL environments, with their interplay of pedagogies, technology, and modes of collaboration (Kirschner & Erkens, 2013; Major, Warwick, Rasmussen, Ludvigsen, & Cook, 2018; McKeown et al., 2017). Methodology has been the subject of numerous workshops at CSCL conferences over the years, which resulted in a special issue of Computers and Education in 2003 on "Documenting Collaborative Interactions: Issues and Approaches" (Puntambekar & Luckin, 2002) and another special issue on "Methodological Issues in CSCL" in 2006 (Valcke & Martens, 2006). In 2007, a special issue of Learning and Instruction focused on "Methodological Challenges in CSCL" (Strijbos & Fischer, 2007). The interest has continued in the CSCL book series with two methods-focused volumes (Analyzing Interactions in CSCL: Methodology, Approaches, and Issues, Puntambekar, Erkens, & Hmelo-Silver, 2011; Productive Multivocality in the Analysis of Group Interactions, Suthers, Lund, Rosé, & Teplovs, 2013). The methods of CSCL have been derived from psychology, education, cognitive science, linguistics, anthropology, and computer science. Over time, the kinds of research questions have become more diverse. In the earlier research, most studies focused on examining the effects of technology and instruction; although these remained a major focus in the later years of the larger corpus, researchers also addressed more questions about the effects of learner characteristics and about affective outcomes. There have also been trends over time toward a smaller percentage of qualitative studies that are not connected to a well-described research method and a higher percentage of studies that use inferential statistics and content analysis.
3 State of the Art: Current Methodological Practices in CSCL

3.1 Research Questions in CSCL
Research methods are designed to help answer specific types of research questions. Such questions organize research activities. CSCL researchers use research questions to determine the appropriateness of data collection and analysis methods and evaluate the relevance and meaningfulness of results (Onwuegbuzie & Leech, 2006). Research questions can be distinguished from research problems or goals (Creswell & Creswell, 2017; Onwuegbuzie & Leech, 2006). A research problem is an issue or dilemma within the broad topic area that needs to be addressed or investigated such as shared regulation in online learning. A research purpose or objective follows from the research problem and specifies the intent of the study such as whether it intends to describe variable relationships, explain the causality of the relationships, or explore a phenomenon (e.g., whether to search for causes or seek a remedy). A
research question is a specific statement of the inquiry that the researcher seeks to investigate (e.g., whether shared regulation can be improved in certain conditions). A hypothesis, unique to quantitative research, is a formal prediction that arises from a research question (e.g., group awareness tools can improve shared regulation). In the corpus reviewed from 2005 to 2014, most research questions were, not surprisingly, organized around examining technology interventions (37% of studies) or instructional interventions (24%). In addition, 13% of studies looked at the effects of learner characteristics such as motivation. But looking at these general questions in isolation does not tell the whole story. CSCL research often asks questions about learner and/or group characteristics such as gender and group cohesion, as well as about different kinds of knowledge outcomes and collaborative processes. The CSCL studies reviewed often asked multiple questions in a given study. A review that focused on classroom discourse and digital technologies found that similar themes emerged in examining the affordances of technologies and learning environments more broadly (Major et al., 2018). However, that review also found a theme focused on how digital technologies enhance dialogic activities and support knowledge co-construction. Even without the emphasis on STEM, Major et al. (2018) found similar themes, providing converging evidence for the generality of the kinds of questions asked in CSCL. A recent study exemplifies this trend toward multiple questions: Borge, Ong, and Rosé (2018) ask whether and how a pedagogical framework and technology support can affect group regulation, as well as how individual reflective scripts affect collaborative processes. The Borge et al. study also epitomizes the challenges in categorizing the kinds of research questions that CSCL researchers ask: it tests a theoretically grounded framework, but this framework is embodied in a pedagogical approach and technology that are themselves being studied.
3.2 Research Designs and Settings
Research designs refer to strategies for inquiry that allow researchers to address their questions (Creswell & Creswell, 2017). Traditionally, they are divided into three types: qualitative, quantitative, and mixed methods, and within each of these, designs can be descriptive or explanatory. Descriptive designs focus on what is happening, whereas explanatory designs can investigate causal processes or mechanisms (Shavelson & Towne, 2002). Quantitative designs tend to test objective theories by looking at the relationships among variables that can be measured (Creswell & Creswell, 2017), but they can also be used for exploratory data analysis to generate and refine theories and hypotheses (Behrens, 1997). Experimental designs describe studies in which researchers actively manipulate variables to examine causal relationships among them (e.g., whether the use of particular kinds of scaffolds increases interaction; see Janssen & Kollar, this volume). Statistical tests would be used to determine if any difference between
conditions was greater than might occur by chance. Experiments can be further classified as randomized or quasi-experimental (in the latter, assignment to conditions is nonrandom, as when different classes are assigned to different conditions; Shadish, Cook, & Campbell, 2002; What Works Clearinghouse, 2017). Both randomized and quasi-experimental designs allow causal inferences to be made. In contrast, pre-post designs look at change in a variable measured before an intervention and measured again after the intervention. In such designs, it is harder to draw causal conclusions about the effects of an intervention; in that sense, such designs, like correlational studies, are descriptive. Although descriptive quantitative research can show what happened and experimental designs can explain why, they may not be able to explain how the change occurred or what would happen when these variables covary simultaneously. Qualitative designs may be better suited to address such "how" questions about phenomena that emerge from a complex set of interactions. Qualitative research designs involve emerging questions and are conducted in natural settings (Creswell & Creswell, 2017). These designs can be either descriptive or explanatory. They are interpretive and tend to involve emergent themes. Such designs include ethnographies and case studies (see also Uttamchandani & Lester, this volume; Koschmann & Schwarz, this volume). There is often a focus on the meaning that participants bring to the setting as researchers try to construct holistic accounts. Such designs may focus on explaining the how and why in their account (so they might be explanatory), or they might be more descriptive. Descriptive designs aim to provide an account of a phenomenon or intervention. Such studies seek to uncover regularities in the data without actively manipulating variables (Creswell & Creswell, 2017).
Case studies, observational studies, and surveys are examples of descriptive designs. Case studies are detailed analyses of a program, event, or activity that is clearly bounded in some way. In contrast, ethnographic designs may seek to be more explanatory, showing how processes unfold as researchers study a group in a natural setting over an extended time frame. Many of these forms of research involve collecting rich data such as interviews, observations, and artifacts. Mixed-methods research designs integrate qualitative and quantitative methods. An important research design for CSCL is design-based research (DBR). DBR is a research strategy in which theoretically driven CSCL designs and interventions are enacted and progressively refined over several iterations (Brown, 1992; Collins, 1992; Sandoval, 2014); as a framework, it can transcend the design of individual iterations (e.g., Zhang, Scardamalia, Reeve, & Messina, 2009). The goal of DBR is to design learning environments that succeed in natural settings while advancing theories of learning and of learning-environment design. This involves developing theory-driven designs of learning environments and iteratively refining both theory and design. In particular, DBR aims to understand what works for whom, how, and under what circumstances (Design-Based Research Collective, 2003). Such programs of research may stretch over several years (e.g., Zhang et al., 2009). All of these research designs have
philosophical underpinnings that are beyond the scope of this chapter (see Creswell & Creswell, 2017 for further details). Of the studies examined in the corpus, descriptive (including qualitative) designs were most frequent (50%), followed by randomized experiments (25%), quasi-experiments (16%), DBR (7%), and pre-post designs (3%). These relative frequencies are generally consistent with the earlier results and demonstrate that CSCL continues to use a range of research designs. DBR is somewhat infrequent in this dataset. This likely signals the difficulties and challenges associated with DBR, and it is also not clear when and how a program of research might be represented as several separate articles rather than as a single program of design-based CSCL research. These difficulties may be due both to publication pressures and to limitations on article length. An important characteristic of CSCL research studies is that they tend to be conducted "in the wild." Classroom settings are formal learning situations guided by teachers. Laboratory settings are controlled settings where data collection is carried out outside the context of classrooms or other authentic learning situations. Other settings include CSCL outside laboratories or classrooms, such as workplaces, online communities, or informal learning environments (e.g., teacher workshops, professional conferences). In the corpus, 80% of the research was conducted in classroom settings (up from 74% in the earlier research). Of the classroom studies, 52% were descriptive, 20% were randomized experiments, 17% were quasi-experimental, and 8% were DBR. As anticipated, the DBR studies were almost exclusively set in classrooms. For example, the Zhang et al. (2009) study used Knowledge Forum and examined how different ways of forming groups affected collective knowledge building over three iterations of instruction.
As part of a design-based classroom study, Looi, Chen, and Ng (2010) examined the effectiveness of Group Scribbles (GS) in two Singapore science classrooms co-designed by teachers and found that the GS classroom performed better than the traditional classroom on traditional assessments. In an experimental classroom study, Borge, Ong, and Rosé (2018) compared the effects of two different individual reflective scripts on group regulation in threaded discussions as part of a course on information sciences and technology. It is clear that CSCL research focuses on ecologically valid settings and is oriented toward being useful.
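The experimental comparisons described above ask whether an observed difference between conditions is larger than random variation alone would produce. A minimal way to make that logic concrete is a permutation test: repeatedly relabel participants at random and see how often the shuffled data yield a difference as large as the observed one. The sketch below uses only the standard library; the post-test scores are invented for illustration and do not come from any study cited in this chapter.

```python
import random

# Hypothetical post-test scores for two conditions (illustrative only).
cscl_condition = [78, 85, 81, 90, 74, 88, 83, 79]
traditional = [72, 80, 69, 77, 75, 70, 82, 68]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(cscl_condition) - mean(traditional)

def permutation_p_value(a, b, n_iter=10_000, seed=42):
    """Two-sided permutation test: how often does a random relabeling of
    participants produce a mean difference at least as large as observed?"""
    rng = random.Random(seed)
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if abs(diff) >= abs(observed_diff):
            count += 1
    return count / n_iter

p = permutation_p_value(cscl_condition, traditional)
print(f"observed difference: {observed_diff:.2f}, p = {p:.3f}")
```

Note that this treats individual students as independent units; as discussed later in the chapter, when students work in groups, the group is often the more defensible unit of analysis.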
3.3 Data Sources and Analysis in CSCL
To make sense of the messiness of the classroom context (Kolodner, 2004), CSCL researchers use different types and sources of data. For example, text, video, and log process data can reveal CSCL learning processes (Derry et al., 2010; Strijbos & Stahl, 2007). Outcomes are data about student or group performance, achievement, or other artifacts. Outcome data provide information about change in knowledge measured in a test or through artifacts that learners construct. There are also miscellaneous data providing evidence about noncognitive and/or situational aspects
of CSCL, such as questionnaires that assess students' perceptions and motivation, interviews, or researcher field notes. In our corpus, most studies use multiple data sources. Analysis methods refer to the ways that CSCL researchers make sense of the data. CSCL research uses a variety of data ranging from video to synchronous and asynchronous messages. Both process data and outcome data are collected, and analyzing such a diverse range of data requires quite different methodological approaches. The general categories of quantitative and qualitative analysis are often used to differentiate analysis methods, but each has several subcategories. Quantitative analyses are typically applied to test, survey, or questionnaire data or other data in numeric form. Code and count, often called verbal analysis or (quantitative) content analysis, is used to quantify qualitative data such as texts or dialogues. The outcome of a code-and-count analysis can then be subjected to inferential statistics or other more advanced quantitative analysis (Chi, 1997; Jeong, 2013; Neuendorf, 2002). Simple descriptive statistics are another commonly used approach and include common data-reduction measures such as frequencies or means. Inferential statistics can be used to make inferences about group differences, whereas modeling refers to more complex analytic techniques, such as multilevel analyses, that seek to explain causal relationships among different variables. One challenge in quantitative analyses is that individuals within groups are not independent of each other, which requires either using the group as the unit of analysis or applying multilevel modeling (Cress, 2008; Paulus & Wise, 2019). Note that the last three types of quantitative analyses are hierarchically related: modeling presumes the use of inferential statistics, which in turn presumes the use of descriptive statistics.
When properly combined, they can serve as a useful toolkit for analyzing the range of data collected in CSCL settings (see Chiu & Reimann, this volume). Of the studies in the earlier corpus, 40% of the papers used two or more different analytic techniques. In reporting on the research, 88% used at least one quantitative analysis and 47% used at least one qualitative analysis. Of the quantitative approaches used, inferential statistics were most common, followed by code and count and simple descriptive statistics. In some cases, elaborate statistical analyses were used. Hong et al. (2013) designed archaeology games around digital archives in Taiwanese museums and collected survey data from teams of high school participants after they played the game. Structural equation modeling was used to test a model of the effects of gameplay self-efficacy on game performance and perceived ease of gameplay. Such sophisticated modeling techniques are not common, which may be because small sample sizes are often a limiting factor in CSCL. Not all data can be usefully quantified with code and count. Coding and counting the frequency of certain codes, even when it can be done, may not reveal what is going on during collaboration. A deeper and more holistic analytic approach is needed for data sources such as interview data or field notes, which were collected in 25% and 13% of the studies in the corpus, respectively. Within qualitative analyses, (qualitative) content analysis refers to systematic text analysis (Mayring, 2000). Conversation and discourse analysis examine talk and discourse but can vary considerably in their approaches and
techniques (e.g., Gee & Green, 1998; Uttamchandani & Lester, this volume). Grounded theory refers to qualitative analytic techniques that emphasize the discovery of theory through the systematic analysis of data. Codes, concepts, and/or categories can be formed in the process of formulating a theory, but they are interpreted quite differently from the way they are used in quantitative analysis (Strauss & Corbin, 1990). Interaction analysis examines the details of social interaction as they occur in practice and generally relies on collaborative viewing of video (Derry et al., 2010; Jordan & Henderson, 1995). There are also several other established qualitative methods such as narrative analysis, thematic analysis, and phenomenography. Qualitative methods are not merely about analysis; they often refer to a whole approach to inquiry that prescribes research objectives, design, data collection methods, and analysis. The boundaries between different qualitative analyses are not always clear-cut. In many cases, the approach to qualitative analysis was what Jeong et al. (2014) referred to as loosely defined: not apparently linked to any specific analytic tradition. Loosely defined qualitative analyses remained the most common, appearing in 25% of the studies (consistent with the earlier results). In addition, qualitative content analysis appeared in 12% of the papers; other well-defined techniques were each reported in less than 5% of the papers (e.g., interaction analysis, conversation analysis, and grounded theory, in decreasing order of frequency). The quality of "loosely defined" analyses was quite variable. Some studies used these analyses to complement statistical analysis and as a tool to illustrate and explore differences that had been identified quantitatively (Schwarz & Glassner, 2007).
In studies that received a loosely defined code, these analyses were often verbatim examples or other excerpts from the data supporting the researchers' observations and/or conclusions (e.g., Minocha, Petre, & Roberts, 2008). Another form of loosely defined qualitative analysis was qualitative summaries of data, often supplemented with simple descriptive statistics (Menkhoff, Chay, Bengtsson, Woodard, & Gan, 2015; Rick & Guzdial, 2006). Some of the limited specificity in the description of qualitative methods may be due to limited journal space. We need to explore ways to report qualitative findings while making the rigor of the research clear.
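The code-and-count workflow described above can be sketched in a few lines: tally discourse codes from a coded transcript, then aggregate to the group level before any statistical comparison, since utterances within a group are not independent. All codes, group labels, and counts below are invented for illustration, not drawn from any corpus discussed in this chapter.

```python
from collections import Counter

# Hypothetical coded transcript: (group id, discourse code) per utterance.
coded_utterances = [
    ("group1", "question"), ("group1", "explanation"), ("group1", "explanation"),
    ("group1", "regulation"), ("group2", "question"), ("group2", "question"),
    ("group2", "explanation"), ("group2", "off-task"), ("group3", "explanation"),
    ("group3", "regulation"), ("group3", "regulation"), ("group3", "explanation"),
]

# Step 1 (code and count): tally code frequencies overall and per group.
overall = Counter(code for _, code in coded_utterances)

per_group = {}
for group, code in coded_utterances:
    per_group.setdefault(group, Counter())[code] += 1

# Step 2: aggregate to the group level, e.g., each group's proportion of
# "explanation" moves, so the group becomes the unit of analysis.
explanation_share = {
    group: counts["explanation"] / sum(counts.values())
    for group, counts in per_group.items()
}

print(overall)
print(explanation_share)
```

The group-level proportions produced in step 2 are the kind of values that would then feed the inferential or multilevel analyses discussed above.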
3.4 Mixing Methods in CSCL
Consistent with prior work, the results from this corpus suggest that CSCL research mixes analytical methods and multiple data sources. Although more than half of the papers used exclusively quantitative analysis and some only qualitative analysis, 35% used multiple analytic methods. These choices of analytic tools were related to the particular research designs, as shown in Fig. 1. Mixed methods were most often used in descriptive research designs, while quantitative methods alone were mostly used in experimental designs. Design-based research studies were more likely to use mixed methods than a single method. Although the scope of the study was slightly
An Overview of CSCL Methods
Fig. 1 Analysis types by research design (bar chart showing the proportions of descriptive, experimental, and design-based studies that used quantitative-only, qualitative-only, and mixed analyses)
different, Major et al. (2018) also found that 46% of the studies were quantitative studies but an almost equal number used mixed methods. One typical example of a mixed-methods study is that of Zhang, Tao, Chen, Sun, Judson, and Naqvi (2018), who used quantitative methods to compare the scientific sophistication of explanations and topics discussed between two different classroom interventions with the Idea Thread Mapper, a tool to help students and teachers organize and monitor their collective inquiry. Here, the inferential statistics allowed them to determine whether there were differences. Qualitative methods allowed the researchers to see how the collective inquiry unfolded with the different instructional designs. Video analysis documented the temporal unfolding and elaboration of collaborative inquiry structures. In a study of CSCL in threaded discussions, Borge, Ong, and Rosé (2018) counted indicators related to regulation from the group discourse, used inferential statistics to compare across conditions, and used a qualitative case study to demonstrate how a group’s collaborative activity and regulation changed over time. In both of these examples, quantitative analysis allowed researchers to see what changes occurred, and qualitative analyses were used to examine how change unfolded. Other ways of mixing methods do not always show up in this kind of analysis. For example, Suthers et al. (2013) brought together CSCL researchers from different methodological traditions to engage in what they called Productive Multivocality in the Analysis of Group Interactions. This series of workshops demonstrated how different methods might be applied to the same datasets and illuminate new
C. E. Hmelo-Silver and H. Jeong
understandings that the original researcher might not have considered (Suthers et al., 2013). For example, one dataset was about peer-led team learning in chemistry; the different methods applied to it included ethnographic analysis, two different multidimensional approaches to coding and counting, as well as a social network analysis. The application of different methods revealed tensions in how data need to be prepared for analysis as well as how to conceptualize a given learning event. Different analytic approaches, often driven by different research questions, contributed toward constructing a richer understanding of the events. The process of resolving differences also led to deeper elaboration of the underlying collaborative mechanisms. These different approaches provided new insights into the construct of leadership in groups.
3.5 On the Relation Between Theory and Method
Although other chapters focus on theory (Stahl & Hakkarainen, this volume), this section considers the diversity of theories and their specific relationship to research methods. Consistent with the earlier review in Jeong et al. (2014), analysis of the expanded corpus from 2005 to 2014 (McKeown et al., 2017) shows that CSCL uses diverse theoretical frameworks that include information processing, socio-cognitive, constructivist, sociocultural, communication, social psychology, motivation, and other theoretical frameworks. This aligns well with the argument in Wise and Schwarz (2017) that CSCL does have multiple explanatory frameworks. Some of these differences are related to the multidisciplinary nature of CSCL (Strijbos & Fischer, 2007). Jeong, Hmelo-Silver, and Yu (2014) identified several clusters organized around theory and method. These clusters represent patterns of theoretical perspectives and research designs. One pattern identified is that sociocultural frameworks tended to be used in qualitative studies, contextualized in classrooms, and with descriptive research designs (e.g., Ares, 2008; Berge & Fjuk, 2006). Another pattern used general constructivist perspectives with quasi-experimental classroom studies (e.g., Dori & Belcher, 2005; Van Drie, van Boxtel, Jaspers, & Kanselaar, 2005). Other patterns were more eclectic, with multiple theoretical orientations guiding either descriptive classroom designs or experimental laboratory designs.
3.6 Challenges in CSCL Research Methods
There are many challenges in conducting research in CSCL environments. The technology is only a piece of a complex system of CSCL that also includes pedagogy and collaborative groups in particular contexts (Arnseth & Ludvigsen, 2006). In such environments, one challenge is identifying the appropriate unit of analysis. For studies arising from cognitive and socio-cognitive perspectives, this might be the
individual nested within a group, but for studies coming from a socio-cultural perspective, the group itself and the emergent dialog might be the appropriate unit of analysis (Janssen, Cress, Erkens, & Kirschner, 2013; Ludvigsen & Arnseth, 2017; Stahl, 2006). For others, the overall activity system would be the focus of analytic interest, as in Danish’s (2014) study of young children learning about complex systems through collaborative engagement with a simulation in a carefully designed activity system. The curriculum unit was designed around four key activities, and the analyses demonstrated how these activities organized students’ collective engagement with the target concepts. Related to the challenge of identifying appropriate units of analysis is that of segmentation and coding. For example, in CSCL when chat and threaded discussions are used, reconstructing the response structure can be a challenge, as it is not always clear what participants are responding to (Strijbos & Stahl, 2007). The ability to appropriately segment units for analysis also presents challenges in terms of reliability of coding. The multidisciplinarity of CSCL creates additional challenges for the field. The field is not as cumulative as it might be because researchers tend to ignore results from methodologies unlike their own rather than looking to triangulate across studies with different kinds of methods (Strijbos & Fischer, 2007). The diversity of methods, and even hybrid methods, leads to a lack of standards for how research results are reported, which makes the rigor of studies difficult to evaluate. This highlights the importance of documenting how methods have been adapted and combined so that other researchers can use these methods in the future. Developing standards for such research requires what Kelly (2004) has called the argumentative grammar for evidentiary support needed to warrant claims.
An argumentative grammar is a clear and explicitly stated logic of inquiry that provides the basis for the particular methods selected and how claims can be warranted. This remains a challenge because of the multiple theoretical frames and methodological tools used in CSCL research. Another important challenge in CSCL and much learning sciences research is incorporating situational and contextual factors into the analysis of CSCL. Arvaja, Salovaara, Häkkinen, and Järvelä (2007) combined individual and group-level perspectives as a way to account for context, but this remains one of the tensions between qualitative and quantitative research methods. Qualitative methods tend to be excellent at providing rich descriptions of contexts but are also focused very much on the particular. Mixed methods become especially important in accounting for different aspects of cognition, learning, and the learning contexts. Nonetheless, many current approaches to accounting for context are highly labor-intensive and require many person-hours for data collection, management, and analysis, whether the data are video sources or thousands of lines of chat when trying to understand processes in CSCL settings.
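The coding-reliability challenge noted above is commonly addressed by having two researchers code the same segments independently and reporting a chance-corrected agreement statistic such as Cohen’s kappa. A minimal sketch in Python (the segment codes and ratings here are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each coder labeled at random with their own marginals
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to six discussion segments by two coders
coder1 = ["question", "claim", "claim", "evidence", "claim", "question"]
coder2 = ["question", "claim", "evidence", "evidence", "claim", "question"]
print(cohens_kappa(coder1, coder2))  # → 0.75
```

Conventions for what counts as acceptable agreement vary across analytic traditions; in practice, established implementations (e.g., scikit-learn’s `cohen_kappa_score`) also handle edge cases such as perfect expected agreement.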
4 The Future: Addressing Challenges and New Horizons

The corpus and literature analyzed here present recent methodological trends in CSCL but do not necessarily account well for certain aspects of CSCL research. In this last section, we highlight several important trends in methods for analyzing CSCL: temporality, visualizations, automation, and units of analysis. These trends address challenges related to the messy, contextualized, labor-intensive, and collaborative nature of CSCL research. One way of dealing with the situated nature of learning in CSCL is for the research methods to be able to account for the temporal dimension. CSCL researchers have the opportunity to study learning processes that unfold over extended periods of time (Reimann, 2009). A number of researchers have argued that this dimension needs to be addressed explicitly in CSCL research (Järvelä, Malmberg, & Koivuniemi, 2016; Kapur, 2010; Reimann, 2009; Suthers, Dwyer, Medina, & Vatrapu, 2010; Zhang et al., 2018). To accomplish that, Reimann (2009) has argued for events being considered as a central unit of analysis in which entities participate. Reimann notes the importance of formalizing methods of analyzing these kinds of events. Suthers et al. (2010) argue for interactions and uptake as a way of capturing temporal relationships. Uptake is defined as “the relationship present when a participant’s coordination takes aspects of prior or ongoing events as having relevance for an ongoing activity” (Suthers et al., 2013, p. 13). This latter approach considers the importance of contingencies between events, which can include both temporal and spatial coordination across actors and media. Approaches to dealing with the temporal nature of CSCL can take highly sophisticated statistical forms (e.g., Chiu, 2018; Csanadi, Eagan, Kollar, Shaffer, & Fischer, 2018; Kapur, 2010; Reimann, 2009) as well as more qualitative approaches.
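At their simplest, such sequential analyses estimate first-order transition probabilities: how likely each coded event is to be followed by each other code. A minimal sketch (the dialogue-move codes are hypothetical, and real analyses such as SDA add significance testing and multilevel modeling):

```python
from collections import Counter, defaultdict

def transition_probs(events):
    """First-order transition probabilities P(next code | current code)."""
    pair_counts = Counter(zip(events, events[1:]))   # adjacent event pairs
    source_counts = Counter(events[:-1])             # events that have a successor
    probs = defaultdict(dict)
    for (cur, nxt), count in pair_counts.items():
        probs[cur][nxt] = count / source_counts[cur]
    return dict(probs)

# Hypothetical sequence of coded dialogue moves from one discussion
moves = ["claim", "challenge", "evidence", "claim", "challenge",
         "counter", "evidence", "claim", "agree"]
probs = transition_probs(moves)
print(probs["claim"])  # moves that follow a claim, with their probabilities
```

In this toy sequence, a claim is followed by a challenge two times out of three; sequential analyses then test whether such patterns are statistically reliable rather than chance co-occurrences.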
For example, quantitative approaches such as sequential data analysis can analyze the probabilities of how likely one event is to be followed by another. In CSCL, these events could be different kinds of dialogue moves, such as argumentation (Jeong, Clark, Sampson, & Menekse, 2011). To model dynamic collaboration processes, statistical discourse analysis (SDA; Chiu, 2018) estimates the likelihood of particular discourse moves during each turn of talk or online message and the influences of explanatory factors at multiple levels (e.g., individual, group, class). SDA detects pivotal moments that substantially change an interaction and models factors affecting the likelihood of these moments (Chiu, Molenaar, Chen, Wise, & Fujita, 2013). The analyses of these temporal dimensions of CSCL can help identify events critical to idea development or failures in social regulation, which in turn can help teachers and designers determine when and perhaps how to intervene (Oshima, Oshima, & Fujita, 2018). Visualizations can provide both temporal and relational information that could aid in qualitative interpretation of complex CSCL data, going beyond coding and counting utterances (Csanadi et al., 2018; Hmelo-Silver, Liu, & Jordan, 2009; Suthers et al., 2010). This helps address the challenge of dealing with the rich contextual information found in CSCL environments. One way of integrating across different kinds of data and multidimensional coding schemes is through the use of
visualizations. Visualizations take advantage of human perceptual capabilities, as they allow one to view and search a large amount of information at a glance (Larkin & Simon, 1987). They can support perceptual inference and pattern recognition. When computer tools are used to create these visualizations, they can be used to manipulate the representations created, allowing an analyst to examine different parts of interactions, zooming in and out as needed (Hmelo-Silver, Liu, & Jordan, 2009; Huang et al., 2018; Howley, Kumar, Mayfield, Dyke, & Rosé, 2013). For example, Huang et al. (2018) used CORDTRA (Chronologically-Ordered Representations of Discourse and Tool-Related Activity) diagrams to analyze collaborative modeling practices in a citizen science community, showing tool-mediated interaction on a longer time scale (a working session) and then zooming into a 5-min excerpt of interest when the modeling tool was particularly salient as a boundary object supporting collaboration. Howley et al. (2013) used TATIANA’s sliding window to look at how distributions of discourse codes changed over time. Suthers et al. (2010) constructed contingency graphs that demonstrate how interaction is distributed among participants and tools over time to support their framework for uptake analysis. More recently, Csanadi et al. (2018) have introduced epistemic network analysis to visualize and quantify temporal co-occurrence of codes. This latter approach is unique in allowing statistical comparisons across networks. Many of the approaches described in this chapter require intensive work to code verbal discourse from observational data, online interactions, and patterns of activity from a variety of sources (Law, Yuen, Wong, & Leng, 2011). Work in the future of CSCL will include automated approaches to data analysis and multimodal learning analytics (see Schneider, Worsley, & Martinez-Maldonado, this volume; Wise, Knight, & Buckingham Shum, this volume).
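The co-occurrence counting at the heart of approaches such as epistemic network analysis can be illustrated with a much-simplified sketch: count how often pairs of codes appear together within a moving window of segments (the codes and window size here are hypothetical; ENA itself adds normalization and dimensional reduction of the resulting networks):

```python
from collections import Counter
from itertools import combinations

def code_cooccurrence(segments, window=3):
    """Count pairwise code co-occurrences within a sliding window of segments."""
    edges = Counter()
    for i in range(len(segments)):
        codes_in_window = set().union(*segments[i:i + window])
        for pair in combinations(sorted(codes_in_window), 2):
            edges[pair] += 1
    return edges

# Hypothetical coded utterances: each segment carries a set of codes
talk = [{"claim"}, {"evidence"}, {"claim", "question"}, {"evidence"}]
print(code_cooccurrence(talk))
```

Each counted pair becomes a weighted edge in a network of codes, which can then be visualized or compared statistically across groups.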
Law, Yuen, Wong, and Leng (2011) developed automated coding and visualization to focus on indicators of high-quality asynchronous discussions in Knowledge Forum. Howley and Rosé (2016) have used automated systemic functional linguistic analysis of discussions to illuminate social dimensions of interaction. Nistor et al. (2015) used automated dialogue analysis to identify clusters of central and peripheral participation in virtual communities of practice. Moreover, they found substantial correlation between automated and hand coding of the discourse data. In addition to making sense of discourse and computer-generated data, multimodal learning analytics can be used to make sense of CSCL processes that go beyond what can be captured by a computer (Blikstein & Worsley, 2016; Noroozi, Alikhani, Järvelä, Kirschner, Juuso, & Seppänen, 2019). Blikstein and Worsley noted that in addition to text analysis, similar to the work on automated discourse coding, there are technological capabilities for analysis of speech, handwriting, gesture, and movement. In addition, sensors detecting different physiological markers can be used to infer affective states (e.g., Malmberg et al., 2019). Eye-tracking data can be used to track what learners are attending to and the strategies that they are using. Indeed, Blikstein and Worsley (2016) argue that the most promising use of gaze data is in small groups to track joint visual attention. Noroozi et al. (2019) note the importance of coordinating multimodal data with video to support studying such complex phenomena as socially shared regulation. Buckingham-Shum and Ferguson (2012) argue that opportunities for designing
learning analytics that involve social aspects of learning are particularly timely and challenging, and need to consider ethical as well as technical issues.
5 Conclusion

The methodological landscape of CSCL is complex and, like other aspects of the learning sciences, often involves multiple methodological approaches and techniques (Major et al., 2018; Yoon & Hmelo-Silver, 2017). This chapter builds on the review of Jeong et al. (2014) with an update of an additional 5 years of literature. This suggests that the research methods used in the field are stable overall; however, some newer techniques have been introduced since the corpus reported here was generated. These multiple research methods in CSCL are consistent with a survey of learning scientists that demonstrated that learning sciences researchers are involved in research using a broad range of methods (Yoon & Hmelo-Silver, 2017). It was surprising not to see design-based research more prominently featured in the sample, given that it is the signature methodology taught in many programs in the learning sciences (Sommerhoff, Szameitat, Vogel, Chernikova, Loderer, & Fischer, 2018). Although many challenges remain, CSCL researchers are developing new ways of mixing methods and new techniques to help deal with the messy real-world contexts that characterize research in the learning sciences more generally (e.g., Brown, 1992; Kolodner, 2004). Although this review covers a broad range of CSCL literature, it primarily focuses on STEM domains and education, within the years covered by the systematic review. It is important to learn more about the generality of these research methods across other disciplinary contexts. This review also focuses only on research published in peer-reviewed journals, which may miss some current trends in research methods represented in CSCL conference proceedings. However, given the limited space in such proceedings, it would be challenging to extract the necessary methodological features from studies reported in such venues.
In general, CSCL tends to ask a range of research questions, but these are often organized around specific technologies and/or pedagogical interventions. Although most studies have a clear theoretical orientation, few studies engaged explicitly in theory development and testing. This poses challenges for building a cumulative science of CSCL. Encouraging, however, is the range of data sources and analytic tools being used to address these questions. Although CSCL has many challenges in terms of being resource-intensive to analyze and complex to understand, CSCL research methods are an active topic for reflection and research within this research community. These reflective discussions acknowledge the challenges and provide impetus for the development and appropriation of new methods for understanding learning in CSCL environments.

Acknowledgments This research was funded by the National Science Foundation under Grant DRL # 1439227.
References

Ares, N. (2008). Cultural practices in networked classroom learning environments. International Journal of Computer-Supported Collaborative Learning, 3, 301–326. Arnseth, H. C., & Ludvigsen, S. (2006). Approaching institutional contexts: Systemic versus dialogic research in CSCL. International Journal of Computer-Supported Collaborative Learning, 1(2), 167–185. https://doi.org/10.1007/s11412-006-8874-3. Arvaja, M., Salovaara, H., Häkkinen, P., & Järvelä, S. (2007). Combining individual and group-level perspectives for studying collaborative knowledge construction in context. Learning and Instruction, 17, 448–459. Behrens, J. T. (1997). Principles and procedures of exploratory data analysis. Psychological Methods, 2(2), 131–160. Berge, O., & Fjuk, A. (2006). Understanding the roles of online meetings in a net-based course. Journal of Computer Assisted Learning, 22, 13–23. Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238. Borge, M., Ong, Y., & Rosé, C. P. (2018). Learning to monitor and regulate collective thinking processes. International Journal of Computer-Supported Collaborative Learning, 13, 61–92. Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences, 2, 141–178. Buckingham-Shum, S., & Ferguson, R. (2012). Social learning analytics. Educational Technology & Society, 15(3), 3–26. Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. Journal of the Learning Sciences, 6, 271–315. Chiu, M. M. (2018). Statistically modelling effects of dynamic processes on outcomes: An example of discourse sequences and group solutions. Journal of Learning Analytics, 5, 75–91. Chiu, M. M., Molenaar, I., Chen, G., Wise, A. F., & Fujita, N. (2013).
Micro-analysis of collaborative processes that facilitate productive online discussions: Statistical discourse analyses of three cases. In M. Clara & E. B. Gregori (Eds.), Assessment and evaluation of time factors in online teaching and learning (pp. 232–263). Hershey, PA: IGI Global. Chiu, M. M., & Reimann, P. (this volume). Statistical and stochastic analysis of sequence data. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Collins, A. (1992). Toward a design science of education. In E. Scanlon & T. O’Shea (Eds.), New directions in educational technology (pp. 15–22). Berlin, Heidelberg: Springer. Cress, U. (2008). The need for considering multilevel analysis in CSCL research—An appeal for the use of more advanced statistical methods. International Journal of Computer Supported Collaborative Learning, 3, 69–84. Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage. Csanadi, A., Eagan, B., Kollar, I., Shaffer, D. W., & Fischer, F. (2018). When coding-and-counting is not enough: Using epistemic network analysis (ENA) to analyze verbal data in CSCL research. International Journal of Computer-Supported Collaborative Learning, 13(4), 419–438. Danish, J. A. (2014). Applying an activity theory lens to designing instruction for learning about the structure, behavior, and function of a honeybee system. Journal of the Learning Sciences, 23, 100–148. Derry, S. J., Pea, R. D., Barron, B., Engle, R. A., Erickson, F., Goldman, R., Hall, R., Koschmann, T., Lemke, J. L., Sherin, M. G., & Sherin, B. L. (2010). Conducting video research in the learning sciences: Guidance on selection, analysis, technology, and ethics. Journal of the Learning Sciences, 19, 3–53.
Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8. Dori, Y. J., & Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students’ understanding of electromagnetism concepts? Journal of the Learning Sciences, 14, 243–279. Gee, J. P., & Green, J. L. (1998). Discourse analysis, learning, and social practice: A methodological study. Review of Research in Education, 23, 119–169. Hmelo-Silver, C. E., Liu, L., & Jordan, R. (2009). Visual representation of a multidimensional coding scheme for understanding technology-mediated learning about complex natural systems. Research and Practice in Technology Enhanced Learning Environments, 4, 253–280. Hong, J. C., Hwang, M. Y., Chen, Y. J., Lin, P. H., Huang, Y. T., Cheng, H. Y., & Lee, C. C. (2013). Using the saliency-based model to design a digital archaeological game to motivate players’ intention to visit the digital archives of Taiwan's natural science museum. Computers & Education, 66, 74–82. Howley, I., Kumar, R., Mayfield, E., Dyke, G., & Rosé, C. P. (2013). Gaining insights from sociolinguistic style analysis for redesign of conversational agent based support for collaborative learning. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 477–494). New York: Springer. Howley, I. K., & Rosé, C. P. (2016). Towards careful practices for automated linguistic analysis of group learning. Journal of Learning Analytics, 3(3), 239–262. Huang, J., Hmelo-Silver, C. E., Jordan, R., Gray, S., Frensley, T., Newman, G., & Stern, M. (2018). Scientific discourse of citizen scientists: Objects for collaborative problem solving. Computers in Human Behavior, 87, 480–492. Janssen, J., Cress, U., Erkens, G., & Kirschner, P. A. (2013). Multilevel analysis for the analysis of collaborative learning. In C. E. Hmelo-Silver, C. Chinn, C. K.
K. Chan, & A. M. O'Donnell (Eds.), International handbook of collaborative learning (pp. 112–125). New York: Routledge. Janssen, J., & Kollar, I. (this volume). Experimental and quasi-experimental research in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Järvelä, S., Malmberg, J., & Koivuniemi, M. (2016). Recognizing socially shared regulation by using the temporal sequences of online chat and logs in CSCL. Learning and Instruction, 42, 1–11. Jeong, A., Clark, D. B., Sampson, V. D., & Menekse, M. (2011). Sequential analysis of scientific argumentation in asynchronous online discussion. In S. Puntambekar, G. Erkens, & C. E. Hmelo-Silver (Eds.), Analyzing interactions in CSCL (pp. 207–232). New York: Springer. Jeong, H. (2013). Verbal data analysis for understanding interactions. In C. E. Hmelo-Silver, C. K. K. Chan, C. Chinn, & A. M. O’Donnell (Eds.), International handbook of collaborative learning (pp. 168–181). New York: Routledge. Jeong, H., & Hmelo-Silver, C. (2012). Technology supports in CSCL. In J. van Aalst, K. Thompson, M. J. Jacobson, & P. Reimann (Eds.), The future of learning: Proceedings of the 10th international conference of the learning sciences (ICLS 2012)—Volume 1, full papers (pp. 339–346). Sydney: ISLS. Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of CSCL technology: How can technology support collaborative learning? Educational Psychologist, 51, 247–265. Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334. Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. Journal of the Learning Sciences, 4, 39–103. Kapur, M. (2010).
Temporality matters: Advancing a method for analyzing problem-solving processes in a computer-supported collaborative environment. International Journal of Computer-Supported Collaborative Learning, 6, 39–56.
Kelly, A. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences, 13(1), 115–128. Kelly, G. J. (2006). Epistemology and educational research. In J. L. Green, G. Camilli, & P. B. Elmore (Eds.), Handbook of complementary methods in education research (pp. 33–56). Washington, DC: American Educational Research Association. Kirschner, P. A., & Erkens, G. (2013). Toward a framework for CSCL research. Educational Psychologist, 48(1), 1–8. Kolodner, J. L. (2004). The learning sciences: Past, present, future. Educational Technology, 44(3), 34–40. Koschmann, T., & Schwarz, B. B. (this volume). Case studies in theory and practice. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Larkin, J. H., & Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11, 65–99. Law, N., Yuen, J., Wong, W. O. W., & Leng, J. (2011). Understanding learners’ knowledge building trajectory through visualizations of multiple automated analyses. In S. Puntambekar, G. Erkens, & C. Hmelo-Silver (Eds.), Analyzing interactions in CSCL: Methods, approaches and issues (pp. 47–82). Boston, MA: Springer. Looi, C.-K., Chen, W., & Ng, F.-K. (2010). Collaborative activities enabled by GroupScribbles (GS): An exploratory study of learning effectiveness. Computers & Education, 54, 14–26. Ludvigsen, S., & Arnseth, H. C. (2017). Computer-supported collaborative learning. In E. Duval, M. Sharples, & R. Sutherland (Eds.), Technology enhanced learning: Research themes (pp. 47–58). New York: Springer International Publishing. Major, L., Warwick, P., Rasmussen, I., Ludvigsen, S., & Cook, V. (2018). Classroom dialogue and digital technologies: A scoping review. Education and Information Technologies, 23(5), 1995–2028. Malmberg, J., Järvelä, S., Holappa, J., Haataja, E., Huang, X., & Siipo, A. (2019). 
Going beyond what is visible: What multichannel data can reveal about interaction in the context of collaborative learning? Computers in Human Behavior, 96, 235–245. Mayring, P. (2000). Qualitative content analysis. Forum: Qualitative Social Research, 1(2). Art. 20. Retrieved from http://nbn-resolving.de/urn:nbn:de:0114-fqs0002204. McKeown, J., Hmelo-Silver, C. E., Jeong, H., Hartley, K., Faulkner, R., & Emmanuel, N. (2017). A meta-synthesis of CSCL literature in STEM education. In B. K. Smith, M. Borge, E. Mercier, & K.-Y. Lim (Eds.), Proceedings of CSCL 2017. Philadelphia, PA: International Society of the Learning Sciences. Menkhoff, T., Chay, Y. W., Bengtsson, M. L., Woodard, C. J., & Gan, B. (2015). Incorporating microblogging (“tweeting”) in higher education: Lessons learnt in a knowledge management course. Computers in Human Behavior, 51, 1295–1302. Minocha, S., Petre, M., & Roberts, D. (2008). Using wikis to simulate distributed requirements development in a software engineering course. International Journal of Engineering Education, 24(4), 689–704. Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage. Nistor, N., Trăuşan-Matu, S., Dascălu, M., Duttweiler, H., Chiru, C., Baltes, B., & Smeaton, G. (2015). Finding student-centered open learning environments on the internet: Automated dialogue assessment in academic virtual communities of practice. Computers in Human Behavior, 47, 119–127. Noroozi, O., Alikhani, I., Järvelä, S., Kirschner, P. A., Juuso, I., & Seppänen, T. (2019). Multimodal data to design visual learning analytics for understanding regulation of learning. Computers in Human Behavior, 100, 298–304. Onwuegbuzie, A. J., & Leech, N. L. (2006). Linking research questions to mixed methods data analysis procedures. The Qualitative Report, 11(3), 474–498.
Oshima, J., Oshima, R., & Fujita, W. (2018). A mixed-methods approach to analyze shared epistemic agency in jigsaw instruction at multiple scales of temporality. Journal of Learning Analytics, 5(1), 10–24. Paulus, T. M., & Wise, A. F. (2019). Looking for insight, transformation, and learning in online talk. New York: Routledge. Puntambekar, S., Erkens, G., & Hmelo-Silver, C. E. (Eds.). (2011). Analyzing interactions in CSCL: Methods, approaches, and issues. New York: Springer. Puntambekar, S., & Luckin, R. (2002). Documenting collaborative interactions: Issues and approaches workshop. CSCL 2002. Boulder, CO. Reimann, P. (2009). Time is precious: Variable- and event-centred approaches to process analysis in CSCL research. International Journal of Computer Supported Collaborative Learning, 4, 239–257. Rick, J., & Guzdial, M. (2006). Situating CoWeb: A scholarship of application. International Journal of Computer-Supported Collaborative Learning, 1, 89–115. Sandoval, W. A. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23, 18–36. Schneider, B., Worsley, M., & Martinez-Maldonado, R. (this volume). Gesture and gaze: Multimodal data in dyadic interactions. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Schwarz, B. B., & Glassner, A. (2007). The role of floor control and of ontology in argumentative activities with discussion-based tools. International Journal of Computer-Supported Collaborative Learning, 2, 449–478. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin Company. Shavelson, R. J., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academy Press. Sommerhoff, D., Szameitat, A., Vogel, F., Chernikova, O., Loderer, K., & Fischer, F. (2018).
What do we teach when we teach the learning sciences? A document analysis of 75 graduate programs. Journal of the Learning Sciences, 27, 319–351. https://doi.org/10.1080/10508406.2018. 1440353. Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge MA: MIT Press. Stahl, G., & Hakkarainen, K. (this volume). Theories of CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Straus, A., & Corbin, J. (1990). Basics of qualitative research. Newbury Park: Sage. Strijbos, J., & Fischer, F. (2007). Methodological challenges for collaborative learning research. Learning and Instruction, 17, 389–393. https://doi.org/10.1016/j.learninstruc.2007.03.004. Strijbos, J.-W., & Stahl, G. (2007). Methodological issues in developing a multi-dimensional coding procedure for small-group chat communication. Learning and Instruction, 17, 394–404. Suthers, D. D., Lund, K., Rosé, C. P., & Teplovs, C. (2013). Achieving productive multivocality in the analysis of group interactions. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 577–612). Springer US. https://doi.org/10.1007/978-1-4614-8960-3_31 Suthers, D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (2013). Productive multivocality in the analysis of group interactions. New York: Springer. Suthers, D. D. (2006). Technology affordances for intersubjective meaning making. International Journal of Computer Supported Collaborative Learning, 1, 315–337. Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of ComputerSupported Collaborative Learning, 5(1), 5–42. Suthers, D. D., Rosé, C. P., Lund, K., & Teplovs, C. (2013). A Reader’s guide to the productive multivocality project. In D. D. Suthers, C. P. Rosé, K. 
Lund, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 37–59). New York: Springer.
An Overview of CSCL Methods
Uttamchandani, S., & Lester, J. N. (this volume). Qualitative approaches to language in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Valcke, M., & Martens, R. (2006). The problem arena of researching computer supported collaborative learning: Introduction to the special section. Computers & Education, 46, 1–5.
Van Drie, J., van Boxtel, C., Jaspers, J., & Kanselaar, G. (2005). Effect of representational guidance on domain specific reasoning in CSCL. Computers in Human Behavior, 21, 575–602.
What Works Clearinghouse. (2017). Standards handbook version 4.0. Retrieved March 5, 2019, from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_standards_handbook_v4.pd
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12, 423–467.
Yoon, S., & Hmelo-Silver, C. E. (2017). What do learning scientists do? Journal of the Learning Sciences, 26, 167–183.
Zhang, J., Scardamalia, M., Reeve, R., & Messina, R. (2009). Designs for collective cognitive responsibility in knowledge building communities. Journal of the Learning Sciences, 18, 7–44.
Zhang, J., Tao, D., Chen, M.-H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with idea thread mapper. Journal of the Learning Sciences, 27(3), 390–430.
Further Readings

Arnseth, H. C., & Ludvigsen, S. (2006). Approaching institutional contexts: Systemic versus dialogic research in CSCL. International Journal of Computer-Supported Collaborative Learning, 1(2), 167–185. This paper contrasts systemic and dialogic approaches in CSCL research. The systemic approach aims at generating and testing models of the system; the dialogic approach, on the other hand, focuses on understanding how meaning is constituted in social practice. The paper argues that differences in their analytical practices have consequences for the generation and assessment of findings.

Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334. This study reports on a content meta-analysis of CSCL research methods that this chapter builds on. It reports in detail on the methodology for the systematic review of the CSCL literature, as well as how the papers were selected and screened.

Major, L., Warwick, P., Rasmussen, I., Ludvigsen, S., & Cook, V. (2018). Classroom dialogue and digital technologies: A scoping review. Education and Information Technologies, 23(5), 1995–2028. This article is a systematic review of the relation between classroom dialogue and educational technology. It considers the nature of the evidence on how digital technologies mediate productive classroom discourse, and the nature of the questions that different studies ask.

Reimann, P. (2009). Time is precious: Variable- and event-centred approaches to process analysis in CSCL research. International Journal of Computer-Supported Collaborative Learning, 4, 239–257. This article argues for the importance of considering temporality in CSCL research and provides suggestions for addressing these considerations.

Suthers, D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (2013). Productive multivocality in the analysis of group interactions. New York: Springer. This volume provides empirical examples of how different methodologies can be applied to the same datasets to foster cross-disciplinary discussion and the development of new insights on CSCL.
Conceptualizing Context in CSCL: Cognitive and Sociocultural Perspectives

Camillia Matuk, Kayla DesPortes, and Christopher Hoadley
Abstract  Context is a critical consideration in CSCL research and design, yet difficult to delineate. Its definition can encompass aspects of the environment, the learners, the technology, and their histories and cultures. Depending on researchers' theoretical perspectives and the focus of their study, different aspects of context are foregrounded in data collection and analysis, while others are given less importance. In this chapter, we offer a framework that conceptualizes context in terms of focal, immediate, and peripheral layers surrounding the object of study. We describe how the aspects contained within each layer of context differ depending on one's theoretical orientation. To illustrate, we offer contrasting examples of CSCL research that approach context from a cognitive perspective and a sociocultural perspective. We end by outlining several areas for future research, and highlight the importance of technological advances that keep pace with theoretical conceptions of context in order to support the design of responsive CSCL environments. Ultimately, we argue that a full understanding of context leads to more robust and ecologically sound CSCL research and design.

Keywords  Cognitive theory · Context · Computer-supported collaborative learning · Sociocultural theory · Technology
C. Matuk (*) · K. DesPortes · C. Hoadley
Department of Administration, Leadership and Technology, New York University, New York, NY, USA
e-mail: [email protected]; [email protected]; [email protected]
© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_5

1 Definitions and Scope: What Is Context?

Context is a slippery word. It tends to be used to describe things that are not the object of inquiry, but rather the situational or circumstantial elements that matter for interpreting that object. In Computer-Supported Collaborative Learning (CSCL) research, which aims to study the process of collaborative learning with technology, context is everything between and beyond. It might variously include the answers to:

Where learning takes place (e.g., a university lecture hall or at home over a game of Scrabble);
When learning happens (e.g., in conversation with a coach or while practicing scales on the violin);
Who is involved in learning (e.g., professional colleagues, family, teachers, students; including their cultures, histories, and social relationships);
What is being learned (e.g., the periodic table or ethics of journalism);
How learning is happening (e.g., drill-and-practice or inquiry); and
Why learning is occurring (e.g., to achieve a career goal or to pursue a personal interest).

Context is critical in describing and explaining the mediating relationships among social interaction, technology use, and learning experiences. For instance, an understanding of students' learning during a collaborative inquiry project can be sought from learning outcomes; but this understanding is enhanced by an awareness of the unique cultural, historical, and situational aspects that shaped learners' engagement with the project (Honebein, Duffy, & Fishman, 1993). Likewise, an understanding of the value of a piece of educational groupware can be obtained by analyzing how individuals interact with it during a specific task; however, such an understanding is enriched by also considering the circumstances under which learners are engaging with that groupware, their relevant prior knowledge, and how their interactions are influenced by relationships with other learners and their instructor.

Part of the difficulty of defining context is in setting its boundaries. This requires untangling the multitude of tightly linked and interdependent physical, temporal, personal, and cultural aspects of human experience that context can encompass.
To develop robust understandings of CSCL with reasonable effort, researchers must decide which contextual elements to foreground, which to background, and which to consider beyond the scope of a study. These decisions will vary depending on the subject of study. For example, a researcher might decide that relevant contextual elements center on the what and the how of learning (e.g., students learning the periodic table through drill and practice), while irrelevant elements of context might include the why and where of learning (e.g., students preparing for an end-of-year test in a grade school classroom). Where researchers draw these boundaries, moreover, differs with their individual goals, the opportunities and constraints of a study's instantiation, and the norms and practices of the research communities in which they participate.
2 History and Development

2.1 Context as a Matter of Paradigm or Perspective
CSCL bridges many different disciplinary communities, each with their own ontological and epistemological paradigms, and thus their own notions of context. In
linguistics, for example, context encompasses the dialogue between people, and allows for an interpretation that goes beyond the literal meaning of what is said (Goodwin & Duranti, 1992). In human–computer interaction (HCI), by contrast, context is "any information that can be used to characterize the situation of an entity" (e.g., a user or software tool) (Abowd et al., 1999). This information often includes users' thoughts and goals, relevant cultures and histories, individuals' prior knowledge, social and institutional norms and expectations, the disciplinary positioning of the content to be learned, and environmental characteristics that can be sensed, such as location, light, sound, and temperature (Baldauf, Dustdar, & Rosenberg, 2007; Prekop & Burnett, 2003; Winters & Price, 2005). Over time, context has come to be seen not as a set of static elements that surround the object of study, but as a set of elements that evolve through people's interactions and dialogue, and that offer "resources for its appropriate interpretation" (Goodwin & Duranti, 1992, p. 2). Consider, for example, researchers studying how a mobile application helps students to learn about environmental science by enabling them to collect field observations during a college-level course. A computer scientist might define context as the students' location, as measured by the geospatial location of the mobile device when in use. Meanwhile, an instructional designer might view context as encompassing such aspects as learners' academic backgrounds, prior knowledge, and whether the learner is at home, in a classroom, or in the field. However, neither of these paradigms would fully specify the meaning of context in this particular example. Moreover, these researchers' treatment of context may also deviate from their disciplinary norms based on their own study goals and analysis of the situation.
The instructional designer might, in one case, research the entire semester-long environmental science course, and treat prior coursework as context. In another case, that same instructional designer might research a single unit within the semester, and treat the prior units within the course as context. Similarly, the computer scientist might in one case take context to include all of the other applications a user has on the mobile device, and how data can be shared between those applications and the learning software. In another case, they might regard this information as central to the CSCL intervention, and so specify it a priori.

To study and support learning, CSCL researchers and designers must be intentional and explicit in their regard for context. That is, within a given study, what does context encompass? How might context change over time, and why? Moreover, how are the elements of context influenced by one's theoretical stance, the paradigms to which one subscribes, and one's purpose in a given study? When is the inclusion of certain elements of context necessary to answer certain kinds of research questions, and how should one approach such an analysis? This chapter does not attempt to fully answer each of these questions, but to instead offer a general framework for conceptualizing context, so that researchers may better consider its implications in their work.
2.2 Focal, Immediate, and Peripheral Layers of Context
There is no consensus among CSCL researchers on a fixed set of aspects that constitute "context." However, most researchers loosely conceptualize context in terms of three nested layers surrounding the subject of study. The aspects contained within these layers depend on one's theoretical perspective and research goals. Below, we provide broad delineations between the subject of study and the surrounding layers of context.

The subject of study is typically some aspect or process of collaborative learning that the research aims to understand. For example, a researcher might explore ways to support students in building on one another's ideas during an inquiry activity; or seek to understand how learners' individual sociocultural histories can enrich a group's collective learning experiences. Whether the CSCL technology itself is the subject of study or part of the focal context (see below) may vary based on researchers' priorities. Researchers might, for instance, seek to explore the role of technology-enhanced scaffolds in facilitating productive dialogue, in which case both the scaffolds and the dialogue are the subjects of study.

The focal context includes elements essential to supporting and understanding collaborative learning, which most CSCL researchers, regardless of theoretical perspective, will always consider. Depending on the subject of study, these elements might include the designed CSCL tools, activities, or curricula.

The immediate context includes elements outside of the focal context, but still relevant to the subject of study. Some or all of these elements may be considered, depending on the scope of a study and on researchers' specific goals. Examples of immediate contextual elements include learners' prior knowledge, attitudes, experiences, and relationships with their collaborators.

The peripheral context includes the multitude of broader influences on the subject of study.
Of all three layers of context, the inclusion of this layer is the most dependent on researchers' theoretical perspectives. These elements might include the cultural and historical dimensions of a community or situation, and the larger cultural and institutional structures within which the subject of study is embedded.

In the following sections, we discuss two orientations toward context that have emerged from cognitivist and socioculturalist traditions, and that influence which contextual layers are foregrounded, backgrounded, and excluded (Fig. 1). We describe the theoretical origins of context from these perspectives, and highlight their implications for research and design in CSCL. We then explore issues for further research that emerge from the mismatch between theoretical and technological advances in CSCL related to context. We argue that regardless of one's theoretical orientation, thoughtful attention to context contributes to more robust and ecologically valid understandings and designs in CSCL.
Fig. 1 The three layers of context—focal, immediate, and peripheral—pictured as concentric circles surrounding the subject of study (i.e., collaborative learning with technology, which is pictured as black circles at the center of each diagram). Boundaries between layers (indicated by dotted lines) are not fixed. The degree to which different layers are intentionally integrated into an analysis of the subject (indicated by the more and less intense shading of the circles) differs between the cognitivist (left) and the sociocultural (right) perspectives
3 State of the Art: Two Theoretical Perspectives on Context

3.1 Cognitive Perspectives on Context

3.1.1 Theoretical Origins
Cognitive perspectives on CSCL view learning as located within the individual mind, and augmented by external artifacts and other people (Perkins, 1993). This perspective expands the information processing model used to describe individual learning to also describe group cognitive processing (Hinsz, Tindale, & Vollrath, 1997; Hutchins, 1995; Salomon, 1997). At an individual level, information processing involves the movement of portions of prior knowledge from long-term memory to short-term memory. Here, it is filtered and organized upon encountering new information, then encoded back into long-term memory, ultimately restructuring it (Hinsz et al., 1997; Smith & Semin, 2004). At a group level, meanwhile, collaborators co-construct knowledge through its sharing, negotiation, and modification; processes that are distributed across activities, artifacts, and other people. Context here might focus on the objects and agents surrounding certain interactions and flows of information, and the mediators of various cognitive processes. While cognitive research typically acknowledges the influence of peripheral contextual elements that lie outside of the bounded cognitive system (e.g., the school, the neighborhood, or the larger museum setting), these tend to be viewed as elements
that can be factored out, rather than as elements that are inextricable from the findings. Thus, unlike research from a sociocultural perspective (see below), these elements tend not to be systematically captured in the data collection, nor explicitly integrated into the analysis of collaborative learning.
3.1.2 Research and Design Implications
CSCL researchers with a cognitive perspective are primarily concerned with how computational artifacts influence group cognition. They may seek to build theory on how tools mediate group cognitive processes, or to refine design principles for computational environments that support group cognition. One example of cognitively oriented CSCL research is found in social models of self-regulated learning (SRL). Based on theories of social cognition, SRL describes the ways that individuals direct their own learning, such as by setting goals, pursuing strategies, monitoring progress, and adapting to challenges (Järvelä & Hadwin, 2013; Saab, 2012; Zimmerman, 2008). Social models of SRL emphasize, to varying degrees, contextual factors—such as environment, culture, the provision of external assistance, and learners’ previous experiences—which can influence individuals’ self-regulation (Hadwin, Järvelä, & Miller, 2011). For example, co-regulation describes how the interactions between an individual and others may support that individual’s self-regulation, while socially shared regulation describes the processes by which groups of learners regulate their collective cognition, and co-construct shared learning (Hadwin, Oshige, Gress, & Winne, 2010). CSCL researchers interested in self-regulation focus on understanding how individual, environmental, and circumstantial factors impact the cognitive, social, motivational, and emotional processes involved in successful collaborative learning, such as achieving common ground and negotiating diverse viewpoints. 
CSCL environments designed with a cognitive perspective on context include group awareness and notification tools that allow group members to query the status of their partners’ thinking (Bodemer & Dehler, 2011; Bodemer, Janssen, & Schnaubert, 2018); to visualize their social interactions over time (Kirschner & Kreijns, 2003); to align their individual goals (Miller & Hadwin, 2012); to monitor plans, activities, and roles (Carroll, Neale, Isenhour, Rosson, & McCrickard, 2003); and to track one another’s emotions and motivations (Järvelä, Volet, & Järvenoja, 2010). There are also shared gaze tools, which can facilitate team members’ coordination during joint tasks (Brennan, Chen, Dickinson, Neider, & Zelinsky, 2008); and tools that enable partners to share and build on one another’s ideas, such as through digital notes contributed to a shared repository (Scardamalia & Bereiter, 2007). In each of these examples, the CSCL environment is designed to promote cognitive and coordinative processes and strategies that will either serve an individual learner’s self-regulation or the co-construction of shared knowledge among a group of learners. Typical research within the cognitivist perspective aims to describe learning that takes place within or shortly after an intervention. Studies might involve observing
the impacts of specific task structures and supports (e.g., scaffolding, modeling, feedback), and of interactions between individuals and tools, on collaborative learning. Methods may consist of controlled experiments with pre-, post-, and delayed posttest designs, or artifact and microgenetic discourse analyses, often with a focus on the role of technology in facilitating discrete tasks that occur in specific, bounded moments. Cognitivist CSCL researchers primarily consider context in terms of the environment in which collaborative learning takes place (e.g., a MOOC forum vs. a mobile learning field course), and the situational aspects of learning that can be measured and documented in a given moment (e.g., partners' beliefs, emotional states, prior domain knowledge, perceptions of a task, uses of particular tools, and so forth). In other words, they place most emphasis on what happens in the moment and with the available tools, and less emphasis on what may have transpired, or what may later transpire, outside of that intervention, especially at a sociocultural and historical level (see Box 1).

Box 1: Example of a Cognitively Oriented Perspective on Context: Students' Collaborative Problem-Solving in an Online Collaborative Tutoring System

A research team developed an online collaborative tutoring system for students to learn about fractions. To understand the impacts of the system on students' collaborative problem-solving, the researchers examine students' work on a unit on fractions, completed within the tutoring system. The researchers observe students' collaborative activities and administer a pre- and post-assessment of students' knowledge of fractions and attitudes toward mathematics. They conclude that the tutoring system had impacts on students' collaborative problem-solving strategies and on their learning gains.
Their analysis explains these impacts in terms of how knowledge was distributed across collaborators and technology in the tutoring system, and the specific contributions of the platform's designed scaffolds. It also shows different impacts based on partners' prior knowledge of, and attitudes toward, mathematics. The researchers use these findings to articulate design implications for similar collaborative tutoring platforms in the domain of mathematics.

Here, the subject of the research is students' collaborative problem-solving of fractions, and the focal context is the tutoring platform. Accordingly, data collection is confined to the problem-solving activities that take place between student collaborators within the platform, and during the time that students are studying the unit. Analyses also probe influences of the immediate context, including students' prior knowledge of, and attitudes toward, mathematics. They may also comment on potential influences of classroom characteristics and other objects or agents that are introduced into students' activities (e.g., the teacher's spontaneous whole-class discussion).
The peripheral context is meanwhile less relevant to the goal of this research, and so is intentionally excluded from consideration. Here, the peripheral context consists of such aspects as the learners' individual histories, identities, and cultural experiences that might govern different students' engagement in the classroom and within the tutoring platform. Whereas these peripheral aspects of context would be critical to a socioculturally oriented researcher's explanation of learning, they do not impact the cognitively oriented researcher's interpretation of findings or their conclusions regarding fractions learning within this platform. Rather, a cognitively oriented researcher acknowledges that different findings may result within different platforms and classrooms.
3.2 Sociocultural Perspectives on Context

3.2.1 Theoretical Origins
In contrast to cognitively oriented CSCL research, socioculturally oriented CSCL research sees learning as fundamentally embedded in sociocultural and historical systems. This follows Vygotsky's argument that the social, cultural, and historical dimensions of a learning experience are inextricable from the learners' individual characteristics, and from the settings in which learning takes place (Vygotsky, 1987). According to Vygotsky, context reflects social and political histories and structures, which affect how learning can and will occur; a learner's culture is a lens for sensemaking and navigating an educational experience; and learning is a social experience to which other people also bring their unique and shared social and cultural histories (Glassman, 2001). Context is thus dynamically constructed in the interactions between players and activities, and so is unique to each enactment of that activity (Dourish, 2004; Glassman, 2001). Thus, what the cognitively oriented researcher relegates to the background, the socioculturally oriented researcher considers essential to an explanation of learning. Context is seen less as the "inputs" to cognitive systems, and more as the historical and emergent cultural terrain in which individual learners reproduce existing activities and practices or produce new ones.
3.2.2 Research and Design Implications
CSCL researchers with a sociocultural perspective are particularly focused on the sociocultural and historical dimensions of individuals, activities, and artifacts, and how these influence interactions among individuals and technology (Stahl, 2002). Typical research methods involve discourse and interaction analyses, interviews, and ethnography. These qualitative approaches aim to provide rich descriptive
interpretations of people's interactions with one another and with their environment. Ultimately, these efforts inform the creation of new technologies and practices that are adopted, appropriated, and sustained (Chan, 2011; Timmis, 2014). In socioculturally oriented CSCL research and design, context is acknowledged in the emphasis on learners' connections to culture, place, and community, often with a focus on language, representations, and cultural tools (Barton & Hamilton, 2012; Barton & Tan, 2010). For example, Steier, Kersting, and Silseth (2019) examine how learners bring representations from outside of the designed environment and incorporate them into their collaborative problem-solving processes. For instance, to demonstrate flight paths between countries, learners traced their fingers along the surface of a basketball. The authors observe that the representational capacity of these objects emerged as needs arose to communicate in particular situations. Similarly, Kermish-Allen, Peterman, and Bevc (2019) incorporated a community's funds of knowledge into a citizen science platform. Funds of knowledge refer to the knowledge and skills developed as part of individuals' everyday functioning in their homes and communities, and shaped by their sociopolitical and cultural histories. The authors found that this approach promoted both conceptual knowledge and equitable participation by various community stakeholders. In another example, Hontvedt and Arnseth (2013) explored the role of language in constructing contexts for situated learning within a role-playing simulation of a ship at sea. The authors observed how instructors and students used language to signal switching between their simulated roles as crew members on a cruise ship (who use English as their official language), and their actual roles as students and instructor (who use Norwegian, their native language).
By switching between Norwegian and English, participants created opportunities to learn through experiences situated within professional activities (e.g., the emergency anchoring of the ship), and to also step out of this simulated context and back into the educational one (e.g., to ask for help from the maritime professional, who was also role-playing during the simulation). In this example, language is one aspect of sociocultural context that reflects cultural patterns and power dynamics within professional and educational institutions. By emphasizing language as the focus of study, the researchers could better understand learners’ experiences within the CSCL environment. From a sociocultural perspective, such foci can deepen our understanding of the dynamic nature of CSCL, and so inform the design of more effective CSCL environments (see Box 2).
Box 2: Example of a Socioculturally Oriented Perspective on Context: Students' Discussion in an Online Forum on International Politics

A research team develops an online forum to support discourse between students during a course on international politics at an American university. To understand how peer and instructional support in the forum contribute to immigrant refugee and American students' conceptions of international relations, the researchers perform a discourse analysis of the forum discussions that take place over the course of the semester; conduct individual interviews throughout the term with selected refugee and non-refugee students, and with the instructor; and triangulate these data with current and historical political events in the United States and in refugees' home countries. Their findings involve rich descriptions of how the classroom community evolved over the course of the semester, in terms of their understanding of the course material and their relationships with other course members. They explain how forum interactions among peers and with their instructor contributed to this evolution, framing these in terms of individuals' lifelong experiences and goals, the rules and expectations for participation in an online forum at their particular university, and the sociopolitical and historical relationships among the relevant countries. They conclude with implications for how to support online forum discussions among students with diverse perspectives and personal experiences of issues of political and societal relevance.

Here, the subject of the research is students' collaborative learning about international relations, and the focal context is the online discussion forum. Aspects of the immediate context are also important, including learners' prior knowledge of political science and the instructor's goals for the course.
Where cognitively oriented research would tend to constrain examination to contextual aspects that lie within these bounds, socioculturally oriented research views the sociopolitical dimensions of the peripheral context as essential to an explanation of learning. Thus, such researchers attend to differences in cultural expectations about schooling, technology, instructor authority, and so forth. They also situate their analysis within an understanding of the societal and institutional structures that have previously marginalized some of these learners, of how CSCL activities entrench or disrupt these structures, and of how these structures impact learners’ practices. Although these cultural, historical, and political dimensions are not necessarily an explicit part of the course’s learning goals, these researchers nonetheless consider it critical to understand how they constrain and shape students’ collaborative learning experiences.
Conceptualizing Context in CSCL: Cognitive and Sociocultural Perspectives
4 The Future

In addition to a clear theoretical understanding of context in CSCL, it is important to have a robust technological operationalization of context. The first context-aware technologies were based on definitions of context as the physical setting of activities, and identified such measurable aspects as the user’s location and the other people in their company (Schilit, Adams, & Want, 1994). CSCL has since benefited from developments in social and context-aware computing that have allowed systems to provide learners with customized guidance, recommendations, and resources based on roles (Luna et al., 2015), personality (Abrahamian, Weinberg, Grady, & Stanton, 2004), location (Chen, 2013), activities (Bourguin & Derycke, 2001; Laffey, Hong, Galyen, Goggins, & Amelung, 2009), and emotional or interpersonal states such as confusion, frustration, or ineffective collaboration (Nalepa, Kutt, & Bobek, 2019). However, context-aware computing is still far from capturing the complexity and ambiguity of sociocultural and historical aspects of context, particularly in collaborative learning situations (Chalmers, 2004; Rogers, 2006). We will address some of these challenges in a discussion of three directions for future research, in which we consider the implications of context’s definition and operationalization for CSCL research.

First, research might consider how conceptions of context shift with advances in technology. For instance, the non-portability of older technologies, and the difficulty of carrying out longitudinal research across places such as home and school, have created a blind spot around context in CSCL research. What does context mean to CSCL researchers and designers when learners move through space, time, and social structures (Banks et al., 2007; Shapiro, 2019)? How do we design for learning that is not solely situated within particular singular environments, but integrated across the experiences of learners’ lives (e.g., Ito et al., 2013)?
When a learner moves from one location to another, from one situation to another, which aspects of context are the same, and which are different? Knowing such answers may inform design priorities (cf. Winters & Price, 2005). Similarly, as learners adopt and adapt, then discard, different technologies throughout their lifespan, how might notions of context incorporate the historicity of these different technologies, and their impacts on learners’ experiences?

Second, as technological capacity grows, we might strive to develop technological systems that create more nuanced models of context. Learning analytics technologies have begun to capture multimodal data that seem to map onto the social aspects of context. However, making sense of these data is nontrivial, and often involves ad hoc algorithms for each application. Unifying our theoretical perspectives on context is one challenge, but unifying our operationalizations of data about context in technology is another. As Wise and Schwarz (2017) write, “the substantive question is not if we should embrace computational approaches to understanding collaborative learning, but how to develop practices and norms around their use that maintain the community’s commitment to theory and situational context” (p. 441). Consider how the advertising industry struggles to operationalize notions such as “unique users” or “website stickiness.” Yet, over time, these measures do become standardized and allow online advertising markets to thrive. Likewise, learning scientists must consider what metrics are relevant to context, and how these might be specified, even if they do not directly map onto the theoretical or learning implications. For example, certain biometrics such as heart rate, sweat, eye movement, brainwaves, and facial expressions are often extended directly to notions such as engagement, attentiveness, or even learning. It is important for researchers to have precision in, and consensus on, these measures (even if we do not yet know what they mean), as well as awareness of the potential harm of misconstruing the precision and application of data.

A related third area for future research is to adequately characterize context in CSCL in ways that do not marginalize or mischaracterize people and their experiences. Much of the scientific enterprise is shaped by the power structures and cultural assumptions of researchers within society at given times (e.g., Bauchspies, Croissant, & Restivo, 2006; Kuhn, 1962; Latour, 1987). We now recognize that standard demographic categories once used to describe research contexts are less useful than fully characterizing the histories of individual learners (Gutiérrez & Rogoff, 2003). Although doing so is challenging, CSCL researchers must reexamine prior work with an eye toward identifying how assumptions of homogeneity of people and settings systematically disempower some learners or researcher perspectives (Esmonde & Booker, 2016). They must moreover set out to monitor and limit the harm that computing systems can cause to marginalized learners when these systems assume limited conceptions of context.
Whether systems incorporate gross or fine-grained sociological characteristics, we must develop minimum standard-of-care practices to ensure that the ways we frame context in our studies do not perpetuate social injustice (Noble, 2019).
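The gap between raw measures and theory-laden constructs described above can be made concrete in code. The sketch below is a hypothetical illustration, not an implementation from any cited system, and all class and field names are our own. It separates directly measured signals from inferred constructs and organizes contextual attributes by the three nested layers discussed in this chapter, so that an inference such as “engagement” is never recorded without its provenance and an explicit confidence value.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """A raw, directly measured quantity (e.g., heart rate),
    kept separate from any construct inferred from it."""
    name: str
    value: float
    unit: str

@dataclass
class InferredConstruct:
    """A theory-laden label (e.g., 'engagement') derived from signals.
    Because the mapping is an assumption of the system, we record which
    signals were used and how confident the inference is."""
    name: str
    confidence: float          # 0.0-1.0; never report the label without it
    derived_from: list         # names of the Signal objects used

@dataclass
class ContextRecord:
    """One observation of a learner's context, organized by the three
    nested layers: focal, immediate, and peripheral."""
    focal: dict = field(default_factory=dict)       # e.g., task, shared artifact
    immediate: dict = field(default_factory=dict)   # e.g., location, group members
    peripheral: dict = field(default_factory=dict)  # e.g., institution, language
    signals: list = field(default_factory=list)
    inferences: list = field(default_factory=list)

# A record that keeps the measurement ('heart_rate') distinct from the
# construct ('engagement') it is too often equated with.
record = ContextRecord(
    focal={"task": "collaborative concept map"},
    immediate={"location": "library", "group_size": 3},
    peripheral={"institution": "university", "language": "Norwegian/English"},
    signals=[Signal("heart_rate", 88.0, "bpm")],
    inferences=[InferredConstruct("engagement", confidence=0.4,
                                  derived_from=["heart_rate"])],
)

# Every inference must carry a bounded confidence value.
assert all(0.0 <= i.confidence <= 1.0 for i in record.inferences)
```

Keeping signals and constructs in distinct types is one small way a system could support the precision and consensus argued for above: analysts can standardize the signal layer while continuing to debate, and revise, the inference layer.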
5 Conclusion

Context is both a unifying theme across theoretical stances in CSCL, and a source of productive tension that can contribute to our evolving understanding of learning and design in CSCL (cf. Hoadley, 2010). We have offered a framework for thinking of context in CSCL in terms of three nested layers: focal, immediate, and peripheral. Depending on one’s theoretical perspective, one or another of these layers may receive more or less emphasis. Although the cognitive and sociocultural perspectives may seem opposing or incompatible, each is essential to CSCL, and each is uniquely appropriate for particular kinds of research questions. Even while researchers may choose to align with one or the other perspective, their work is inevitably most robust when they recognize that learning implicates all layers of context, when they are sensitive to the implications of all perspectives, and when they use each perspective to enrich the other (Akkerman et al., 2007; Borge & Mercier, 2018; Packer & Goicoechea, 2000).
We see the differences between and within paradigms in their conceptions of context as a fertile area for pushing the field forward. First, by committing to articulating these different conceptions of context within CSCL, we clarify our assumptions about what is or is not relevant to a particular study, and thus our interpretations and conclusions. Second, by understanding that the notion of context can be a proxy for how we theorize our CSCL interventions, by negotiating our language around context, and by explaining our interpretive lenses across disciplines, we can identify tensions and connections between those disciplinary perspectives. For instance, discussing “place” as an aspect of context for CSCL might mean very different things to an anthropologist and a computer scientist; but a discussion of the similarities and differences in our meanings of “place” may be intellectually generative. Third, by developing shared approaches to examining context, we might ensure that we capture the most interesting or important aspects of our interventions. In this way, we might better understand the transferability of findings, be aware of the limitations of our interpretations of data, and clarify design implications for CSCL across other settings. Although a universal taxonomy of context may be unattainable or undesirable, a nuanced, cumulative understanding of the applications of context can illuminate the emergent, contingent, and variable outcomes of CSCL environments.

Acknowledgments We gratefully acknowledge the contributions of Nikol Rummel, who helped conceptualize early structures of this chapter, and the reviewers who helped to clarify our arguments.
References

Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a better understanding of context and context-awareness. In International symposium on handheld and ubiquitous computing (pp. 304–307). Berlin, Heidelberg: Springer.
Abrahamian, E., Weinberg, J., Grady, M., & Stanton, C. M. (2004). The effect of personality-aware computer-human interfaces on learning. Journal of Universal Computer Science, 10(1), 27–37.
Akkerman, S., Van den Bossche, P., Admiraal, W., Gijselaers, W., Segers, M., Simons, R., & Kirschner, P. (2007). Reconsidering group cognition: From conceptual confusion to a boundary area between cognitive and socio-cultural perspectives? Educational Research Review, 2(1), 39–63.
Baldauf, M., Dustdar, S., & Rosenberg, F. (2007). A survey on context-aware systems. International Journal of Ad Hoc and Ubiquitous Computing, 2(4), 263–277.
Banks, J. A., Au, K. H., Ball, A. F., Bell, P., Gordon, E. W., Gutiérrez, K. D., et al. (2007). Learning in and out of school in diverse environments: Life-long, life-wide, life-deep. Seattle: LIFE Center (The Learning in Informal and Formal Environments Center) and the Center for Multicultural Education, University of Washington, Seattle. Retrieved December 6, 2019, from http://life-slc.org/docs/Banks_etal-LIFE-Diversity-Report.pdf.
Barton, A. C., & Tan, E. (2010). “It changed our lives”: Activism, science, and greening the community. Canadian Journal of Science, Mathematics, and Technology Education, 10(3), 207–222.
Barton, D., & Hamilton, M. (2012). Local literacies: Reading and writing in one community. New York: Routledge.
Bauchspies, W. K., Croissant, J., & Restivo, S. P. (2006). Science, technology, and society: A sociological approach. Malden, MA: Blackwell Publishers.
Bodemer, D., & Dehler, J. (2011). Group awareness in CSCL environments. Computers in Human Behavior, 27(3), 1043–1045.
Bodemer, D., Janssen, J., & Schnaubert, L. (2018). Group awareness tools for computer-supported collaborative learning. In International handbook of the learning sciences (pp. 351–358). New York: Routledge.
Borge, M., & Mercier, E. (2018). Towards a cognitive ecological framework in CSCL. In Proceedings of the International Conference of the Learning Sciences, ICLS (Vol. 1, pp. 336–343). International Society of the Learning Sciences.
Bourguin, G., & Derycke, A. (2001). Integrating the CSCL activities into virtual campuses: Foundations of a new infrastructure for distributed collective activities. In P. Dillenbourg, A. Eurelings, & K. Hakkarainen (Eds.), Proceedings of Euro-CSCL 2001, Fifth European Perspectives on Computer-supported Collaborative Learning (pp. 123–130). Wellington: McLuhan Institute.
Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B., & Zelinsky, G. J. (2008). Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition, 106(3), 1465–1477.
Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human Computer Studies, 58(5), 605–632. https://doi.org/10.1016/S1071-5819(03)00024-7.
Chalmers, M. (2004). A historical view of context. Journal of Computer Supported Cooperative Work (CSCW), 13(3–4), 223–247.
Chan, C. (2011). Bridging research and practice: Implementing and sustaining knowledge building in Hong Kong classrooms. International Journal of Computer-Supported Collaborative Learning, 6(2), 147–186.
Chen, C. M. (2013). An intelligent mobile location-aware book recommendation system that enhances problem-based learning in libraries. Interactive Learning Environments, 21(5), 469–495.
Dourish, P. (2004). What we talk about when we talk about context. Personal and Ubiquitous Computing, 8(1), 19–30.
Esmonde, I., & Booker, A. N. (Eds.). (2016). Power and privilege in the learning sciences: Critical and sociocultural theories of learning. Milton Park: Taylor & Francis.
Glassman, M. (2001). Dewey and Vygotsky: Society, experience, and inquiry in educational practice. Educational Researcher, 30(4), 3–14.
Goodwin, C., & Duranti, A. (1992). Rethinking context: An introduction. In A. Duranti & C. Goodwin (Eds.), Rethinking context: Language as an interactive phenomenon (pp. 1–42). New York: Cambridge University Press.
Gutiérrez, K. D., & Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher, 32(5), 19–25.
Hadwin, A. F., Järvelä, S., & Miller, M. (2011). Self-regulated, co-regulated, and socially shared regulation of learning. Handbook of Self-regulation of Learning and Performance, 30, 65–84.
Hadwin, A. F., Oshige, M., Gress, C. L. Z., & Winne, P. H. (2010). Innovative ways for using gStudy to orchestrate and research social aspects of self-regulated learning. Computers in Human Behavior, 26, 794–805.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121(1), 43–64.
Hoadley, C. (2010). Roles, design, and the nature of CSCL. Computers in Human Behavior, 26(4), 551–555.
Honebein, P. C., Duffy, T. M., & Fishman, B. J. (1993). Constructivism and the design of learning environments: Context and authentic activities for learning. In Designing environments for constructive learning (pp. 87–108). New York: Springer.
Hontvedt, M., & Arnseth, H. C. (2013). On the bridge to learn: Analysing the social organization of nautical instruction in a ship simulator. International Journal of Computer-Supported Collaborative Learning, 8(1), 89–112.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S. C. (2013). Connected learning: An agenda for research and design. Irvine, CA: Digital Media and Learning Research Hub. ISBN: 978-0-9887255-0-8.
Järvelä, S., & Hadwin, A. F. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39.
Järvelä, S., Volet, S., & Järvenoja, H. (2010). Research on motivation in collaborative learning: Moving beyond the cognitive-situative divide and combining individual and social processes. Educational Psychologist, 45(1), 15–27.
Kermish-Allen, R., Peterman, K., & Bevc, C. (2019). The utility of citizen science projects in K-5 schools: Measures of community engagement and student impacts. Cultural Studies of Science Education, 14(3), 627–641.
Kirschner, P. A., & Kreijns, K. (2003). The sociability of computer-mediated collaborative learning environments: Pitfalls of social interaction and how to avoid them. In R. Bromme, F. Hesse, & H. Spada (Eds.), Barriers and biases in computer-mediated knowledge communication—and how they may be overcome (pp. 169–192). Dordrecht, The Netherlands: Kluwer.
Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Laffey, J., Hong, R. Y., Galyen, K., Goggins, S., & Amelung, C. (2009, June). Context-aware activity notification system: Supporting CSCL. In Proceedings of the 9th international conference on computer supported collaborative learning (Vol. 2, pp. 171–173). Rhodes, Greece: International Society of the Learning Sciences.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
Luna, V., Quintero, R., Torres, M., Moreno-Ibarra, M., Guzmán, G., & Escamilla, I. (2015). An ontology-based approach for representing the interaction process between user profile and its context for collaborative learning environments. Computers in Human Behavior, 51, 1387–1394.
Miller, M., & Hadwin, A. (2012, April). Social aspects of regulation: Measuring socially-shared regulation in collaborative contexts. Paper presented at the annual meeting of the American Educational Research Association, Vancouver, British Columbia, Canada.
Nalepa, G. J., Kutt, K., & Bobek, S. (2019). Mobile platform for affective context-aware systems. Future Generation Computer Systems, 92, 490–503.
Noble, S. U. (2019, October 3). The problems and perils of harnessing big data for equity & justice. Keynote presented at Cyberlearning 2019, Alexandria, VA. Retrieved from https://www.youtube.com/watch?v=HBdDTIWK8r0.
Packer, M. J., & Goicoechea, J. (2000). Sociocultural and constructivist theories of learning: Ontology, not just epistemology. Educational Psychologist, 35(4), 227–241.
Perkins, D. N. (1993). Person-plus: A distributed view of thinking and learning. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 88–110). New York: Cambridge University Press.
Prekop, P., & Burnett, M. (2003). Activities, context and ubiquitous computing. Computer Communications, 26(11), 1168–1176.
Rogers, Y. (2006). Moving on from Weiser’s vision of calm computing: Engaging ubicomp experiences. In P. Dourish & A. Friday (Eds.), International conference on ubiquitous computing, Lecture Notes in Computer Science (Vol. 4206). New York: Springer.
Saab, N. (2012). Team regulation, regulation of social activities or co-regulation: Different labels for effective regulation of learning in CSCL. Metacognition and Learning, 7(1), 1–6.
Salomon, G. (1997). Distributed cognitions: Psychological and educational considerations. New York: Cambridge University Press.
Scardamalia, M., & Bereiter, C. (2007). Fostering communities of learners and knowledge building: An interrupted dialogue. In J. C. Campione, K. E. Metz, & A. S. Palincsar (Eds.), Children’s learning in the laboratory and in the classroom: Essays in honor of Ann Brown (pp. 197–212). Mahwah, NJ: Erlbaum.
Schilit, B., Adams, N., & Want, R. (1994, December). Context-aware computing applications. In 1994 First workshop on mobile computing systems and applications (pp. 85–90). New York: IEEE Press.
Shapiro, B. R. (2019). Integrative visualization: Exploring data collected in collaborative learning contexts. In Proceedings of the 13th international conference on computer supported collaborative learning (CSCL) 2019 (Vol. 1, pp. 184–191). Lyon, France: International Society of the Learning Sciences.
Smith, E. R., & Semin, G. R. (2004). Socially situated cognition: Cognition in its social context. Advances in Experimental Social Psychology, 36, 57–121.
Stahl, G. (2002, January). Contributions to a theoretical framework for CSCL. In Proceedings of the conference on computer support for collaborative learning: Foundations for a CSCL community (pp. 62–71). Boulder, CO: International Society of the Learning Sciences.
Steier, R., Kersting, M., & Silseth, K. (2019). Imagining with improvised representations in CSCL environments. International Journal of Computer-Supported Collaborative Learning, 14(1), 109–136.
Timmis, S. (2014). The dialectical potential of cultural historical activity theory for researching sustainable CSCL practices. International Journal of Computer-Supported Collaborative Learning, 9(1), 7–32.
Vygotsky, L. S. (1987). The collected works of L. S. Vygotsky: Vol. 1. Problems of general psychology. R. Rieber & A. Carton (Eds.) (N. Minick, Trans.). New York: Plenum Press. (Original work published 1934).
Winters, N., & Price, S. (2005, September). Mobile HCI and the learning context: An exploration. In Proceedings of context in mobile HCI workshop at MobileHCI05 (pp. 19–22).
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Zimmerman, B. J. (2008). Investigating self-regulation and motivation: Historical background, methodological developments and future prospects. American Educational Research Journal, 45, 166–183.
Further Readings

Bodemer, D., & Dehler, J. (2011). Group awareness in CSCL environments. Computers in Human Behavior, 27(3), 1043–1045. This special issue presents six studies that explore collaborative learning with group awareness technologies. They demonstrate a cognitively oriented perspective on context, with a focus on capturing, responding to, and informing learners of the social, behavioral, and cognitive aspects that surround and influence their collaboration.

Borge, M., & Mercier, E. (2019). Towards a micro-ecological approach to CSCL. International Journal of Computer-Supported Collaborative Learning, 14(2), 219–235. This article argues that an ecological perspective on learning in CSCL better captures the complexity of teachers’ and learners’ decisions and actions. The authors illustrate a microecological approach to analyzing a case of elementary- and middle-grade students in an after-school design club. In their analysis, they identify “transecological disruptions,” events that occur at one level (i.e., between individual learners, or between learners and objects) and influence others (e.g., within collaborative groups, within the community as a whole). This research shows how a multilevel view of context can allow deeper and more valid, descriptive explanations of learning.

Duranti, A., & Goodwin, C. (1992). Rethinking context: Language as an interactive phenomenon. New York: Cambridge University Press. This collection of essays examines context in relation to various social activities, including radio communication, medical diagnosis, in-person interaction, and politics. This seminal work argues that context and language form an inseparable dynamic, and it continues to influence scholars in social and anthropological linguistics.

Jones, R. (2004). The problem of context in computer mediated communication. In P. LeVine & R. Scollon (Eds.), Discourse and technology: Multimodal discourse analysis (pp. 20–33). Washington, DC: Georgetown University Press. This paper argues for the need for linguists to redefine context in the study of computer-mediated communication. As text has since become more diverse than simply written and verbal, the author encourages a shift in analytic focus away from text alone, and toward the social interactions, identities, and experiences that these new texts enable. Almost two decades later, this argument is worth revisiting as our modes for communication continue to evolve and expand.

Zimmermann, A., Lorenz, A., & Oppermann, R. (2007). An operational definition of context. In International and interdisciplinary conference on modeling and using context (pp. 558–571). Berlin, Heidelberg: Springer. This article reviews the history of the concept of context in the domain of context-aware computing. The authors define context in general, formal, and operational terms, which together describe the components of context (e.g., time, location, activity) and how they are used in software engineering. This piece offers a practically oriented perspective on context that is useful for both technology users and designers.
Interrogating the Role of CSCL in Diversity, Equity, and Inclusion

Kimberley Gomez, Louis M. Gomez, and Marcelo Worsley
Abstract The underlying aim of this chapter is to contribute to efforts to build and organize the design landscape and vocabulary for conversations about diversity, equity, and inclusion (DEI) in CSCL. Anchoring our discussion is the position that DEI can only really be understood and achieved at scale. We have limited our scope to three critical issues—language, differentiation, and identity—that we believe serve, however unintentionally, to restrict or promote DEI in CSCL; to perennial problems that often surface in complex software systems and may prevent broad-based utility in applications; and to how issues of DEI surface in these designed tools and applications. We center this discussion in a few common CSCL applications: contexts like MOOCs, virtual high schools, and network-based multiplayer games. We highlight three core DEI challenges present in the use of CSCL environments—language, differentiation, and identity—as focal components that designers should be aware of as applications move to scale.

Keywords Equity · Diversity · Inclusion · CSCL · Language · Differentiation · Identity
K. Gomez (*) · L. M. Gomez
Graduate School of Education and Information Studies, University of California, Los Angeles, Los Angeles, CA, USA
e-mail: [email protected]; [email protected]

M. Worsley
School of Education and Social Policy and Computer Science, Northwestern University, Evanston, IL, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_6
1 Definitions and Scope

In this chapter, we aim to stimulate a conversation about current concerns and opportunities for diversity, equity, and inclusion (DEI) in participation, design, and use of CSCL. To the extent that our field has taken up DEI, much of the focus has been on the role of design and designers in creating artifacts that find use in various contexts. We aim to emphasize a focus on what is being designed, and on how a spotlight on the inequities found within these designs could press designers toward more DEI-centric design dialogs. The primary focus of this chapter will be to present an analytic perspective on tools and applications that are designed for a broad cross section of people. We argue that such CSCL applications typically fall short in accomplishing DEI. Much of what we point to in CSCL as tools and applications that are useful and usable by many is, instead, devoid of attention to the diversity in populations and the need to actively promote equity in access and use, and lacks design considerations for the experiences, knowledge, and needs of diverse learners. Our aim is to put in sharper relief how focused attention on DEI presents a new round of design challenges for CSCL applications. A particular vantage point, important to this discussion, is that DEI is tied to problems of scale. Although we will develop this point later in the chapter, the idea guiding our analysis is that to genuinely understand the demands of DEI, we must look beyond small-scale examples of use and access for small groups of users toward projects that touch the lives of many users from a variety of backgrounds and abilities, and toward tools that intentionally aim to understand and accommodate the interests, learning, and social interaction needs of all learners. Diversity is not new. Our world has always been diverse, a diversity present even in the earliest art, games, and tools.
The tools we create to inhabit the world have always, albeit tacitly, sought to serve a broad cross section of people. Too often, the tools we have created are simply not productively inclusive and, instead, reproduce inequity. What is a more recent phenomenon is what might be characterized as increasing demands for rightful attention to diversity and, concomitantly, considerations of what attention to diversity means with respect to the need for equity and inclusion. Consider, for a moment, the humble plastic strip bandage, most often described by the brand name Band-Aid. For each of the authors of this chapter, the Band-Aid, for most of our lives, was a stark example of designers’ seeming lack of recognition and consideration of diversity and, certainly, a lack of effort toward equity of available options. The Band-Aid, itself, is useful. It provides seemingly innocuous and aesthetically pleasing protection to a scar or cut. However, the level of innocuousness, or how well it fits into its environment (e.g., skin color, from our perspective), represents an abysmal failure. As designed, the Band-Aid simply did not represent the range of skin tones of all the people who were meant to use it. In essence, one way to see the problems of DEI is as the challenge of effectively addressing the presence of variation. The simple Band-Aid, which was designed to meet a specific need for all, works well for some people, sometimes. Addressing problems of DEI in CSCL and other
domains, at base, is figuring out how to make designed solutions work well for more people, more of the time. We take the perspective that problems of DEI, as they connect to CSCL, do not principally exist at the level of prototypes and boutique efforts. While these sorts of efforts are important and vitalize CSCL as a field, they are essentially attempts to uncover the promise of an idea. Problems of DEI are much more prominent when designers are directly trying to discern whether a promising idea can work for a broad cross section of people. Recently, our colleagues organized the CSCL 2017 conference, whose theme problematized equity and access. That effort was meant to capture the field’s evolution with regard to these issues at a moment in time. Our reading of the results of that effort brings us to two conclusions. First, while the concerns with equity and access are deep, ongoing work to actively address these issues is not widespread. Second, much of what was reported in the CSCL 2017 published proceedings was broadly connected to DEI, rather than DEI being the focus of the work. This, of course, makes sense when a conference is meant to capture the field’s current perspectives on its ongoing work. In this chapter, we aim to build on the work of that effort, placing a sharp focus on intentionality within DEI. In particular, we hope to lay a foundation for conversations about DEI in CSCL, which will move us to a more common perspective and set of lenses through which designers, researchers, and practitioners can actively take up DEI within CSCL. It is evident that diversity, equity, and inclusion are evolving notions. This evolution is shaped by society and, more specifically, by our field, through the development of more nuanced and, arguably, more sophisticated understandings of our assumptions about others and our responsibility as designers and researchers to them.
Historically, diversity, in Western nations, meant having a representation of people who were perceived to be culturally “different,” typically nonwhite, in what we designed, tested, and/or studied. While a useful starting point, diversity framed by what it wasn’t (i.e., white) was a blunt indicator. Perhaps our ideas, from a definitional perspective, need refreshing. So, in what follows, we start with what we mean by DEI. We then offer historical examples drawn from the field that reflect its attention to DEI and highlight what we believe to be evolving intentionality in that effort. We then provide examples from the current state of the art in CSCL and related fields that illustrate intentional efforts and their impact. We offer illustrations of research that, we believe, constitute exciting indicators of the direction that the field can, and should, take. Common across these examples is a level of intentionality around deliberately designing and conducting CSCL research for more people, more of the time. We conclude with implications of the current state of CSCL design for DEI for future design and research.
K. Gomez et al.

1.1 Defining Diversity, Equity, and Inclusion
As we awaken to, and in some efforts (mostly at the edges of CSCL) problematize, the limits of our past and current engagement with diversity in our field, we believe it would be helpful to clarify each term. In this section, we offer definitions that we hope will help lay the foundation for a common language at the intersection of DEI and CSCL.
1.1.1 Diversity
Diversity-in-use, in our view, is a measure of the amount of variation that a design can accommodate and of who can and can’t use the designed tool for a desired outcome. Here, diversity, as a construct, includes phenotypical, gender, sexuality, behavioral, cultural, and a myriad of other ways that humans, institutions, and practices differ and represent their interests. For us, the term diversity captures the full frame of a person’s intersectional identity. It is important to note that, while we mean those characteristics which are ascriptive, like race and gender, we also include those aspects of identity that are attributable via accomplishment and experience, like being a mathematician or surviving a car crash. We argue that design accommodating diversity should capture relevant ascriptive and attributive intersectionality. As a first step, we must be able to identify and understand the variation that exists in its entirety. We highlight relevance because it is plausible to consider that not all dimensions of an individual’s or group’s profile are activated in every design and use context. While the fullness of intersectionality is always present, only some aspects of it may be germane to effective use for any given design and context of use. Second, we must appreciate the macro (e.g., cultural contexts), meso (e.g., intersecting ascriptive and attributed identity domains), and micro (e.g., family and neighborhood) systems in which users exist and operate. This multilevel complex conspires to create and maintain a range of variation in which a design can be effective. Next, with this multilayered system in view, designers need to develop a deep appreciation of how explicit aspects of the system engender variation and accommodation that ultimately result in utility and usability. It is in these multilevel appreciations that designs move out of the hothouse and lighthouse phases into artifacts that stably serve diverse communities.
In this way, we position diversity as a figure-of-merit for an application, a noticeable deviation from typical characterizations of diversity. Treating the ability to accommodate significant use communities as an index of design quality requires a level of intentionality vis-à-vis diversity that, in our estimation, is not currently present in CSCL design communities. Later in this chapter, we offer a few examples of work that seeks to do this and consider how problematizing the way we think about, and treat, diversity can support achievement of equity in outcomes.
Interrogating the Role of CSCL in Diversity, Equity, and Inclusion

1.1.2 Equity
Here, equity broadly refers to treatment that leads to fair outcomes, rather than equal treatment. Equity and equality are not the same. Equity is the extent to which a design can accomplish uniformly successful outcomes for its users and avoids successful outcomes being coupled to ascriptive or attributed variation like skin color, gender, ability, and location. Equity, in this view, can be seen as fairness in experience, access, and opportunity. A recent study by Starmans, Sheskin, and Bloom (2017) points to evidence that, in general, people value fairness over equivalence. There are situations in which people view equal treatment as fair, but other situations in which they view unequal treatment as fair. Equity is a process that, with intentionality, can lead to inclusion. From a design perspective, to accept equity as a figure-of-merit means that designers have to come to understand how to wield differentiation and intersectionality as a design resource. CSCL designs will need to discern variation and adjust what people see, which communities are promoted for membership, who mentors whom, and so on. The idea is that the hallmark of inclusion, from a social design resource perspective, is better deployment of design resources based on what we know about who our users are and the aspirations they bring to an application. In turn, we suspect, this perspective leads to successful designs that are recognized, as such, from the vantage point of more users.
1.1.3 Inclusion
First, inclusion is a perspectival outcome measure: It is what people see in what is presented to them. A more inclusive design is seen as welcoming by more people, and a less inclusive design presents people with elements that they see as systematic barriers to entry. For example, questions of inclusion are enjoined when nondominant users see CSCL designs as presenting barriers to their sense of belonging (Bolger, 2017). In considering inclusion, we borrow from work in Ability-Based Design (Wobbrock, Kane, Gajos, Harada, & Froehlich, 2011) and work in the Learning Sciences on asset-based framing (Pandya & Dibner, 2018). Within the ability-based design framework, there is a certain level of intentionality in how a tool is designed. In essence, any tool is designed to adapt to the abilities or strengths of the users. Part of this is achieved by being explicit, and inclusive, about one’s assumptions concerning who will use the tool and how they will use it. Moreover, similar to work on asset-based framing, the central focus is on ability, not disability. Thus, the idea of inclusion goes beyond merely thinking expansively about a system’s users to also valuing the unique contributions, ideas, insights, histories, and so on that users may bring. As we in CSCL design, build, and study the use of learning and community platforms and augmented reality spaces, our aim is to provoke a discussion about how these tools positively or negatively affect DEI. Bolger (2017) perhaps sums it up
best in noting, “It’s about realizing that diversity efforts, without equitable practices and intentional inclusion, will always fall short.” Our CSCL challenge is to understand how to construct an infrastructure that makes it straightforward to keep DEI in view, in design, and, more importantly, to support intentionality around DEI throughout the research, design, and implementation process.
2 History and Development

In 2004, the theme of ICLS was Embracing Diversity in the Learning Sciences. At that time, diversity was framed in terms of complex social systems and considerations of variation across populations, institutions, and social contexts. Specifically, diversity was framed as “draw[ing] from a diverse set of disciplines to study learning in an increasingly diverse array of settings” (Steinkuehler, Kafai, Sandoval, & Enyedy, 2004, Preface). These concerns were framed as “challenges to studying and changing learning environments across levels in complex social systems.” As such, the conference chairs noted, “This demands attention to new kinds of diversity in who, what, and how we study; and to the issues such diversity raises to developing coherent accounts of how learning occurs and can be supported in a multitude of social contexts, ranging from schools to families, and across levels of formal schooling from preschool through higher education.” A workshop at the 2017 Computer Supported Cooperative Work (CSCW) conference invited designers and researchers who viewed their work as supporting equity, inclusion, and accessibility to reflect on the role of subjects located “at the margins” of digital existence and to consider how the work might demarginalize those who are researched, the research itself, and those who are researchers within the CSCW community (Dye et al., 2018). Similarly, the 2017 Interaction Design and Children conference featured a workshop on Equity and Inclusivity (Sobel, Kientz, Clegg, Gonzalez, & Yip, 2017). This latter workshop drew on ways that equity and inclusivity are closely related, noting that “[t]hese issues—equity and inclusivity—complement each other as we can use equitable practices and approaches to promote inclusion in our designs and methods” (p. 762), and highlighted both practical and theoretical considerations for conducting research among diverse populations and for advancing issues of DEI.
Recently, Schlesinger, Edwards, and Grinter (2017) conducted a meta-analysis of how identity is portrayed and represented in the CHI proceedings from 1982 to 2016. In reporting their findings, Schlesinger et al. (2017) urge designers and researchers to pay design and analytic attention to the current blunt state of identity representation in CHI research. Guided by intersectionality theory (Crenshaw, 1990), they note that “previous identity-focused research tends to analyze one facet of identity at a time” (p. 5412) rather than designing for, and analytically examining, the impact of design on, for example, a black, nonnormative-sexual male from a low-income rural background. In their call to action, the researchers remind us of what is lost when we fail to recognize the intersectionality
of people, the complexity of institutional users, and the role of power in learning with, and from, tools and practices. Booker, Vossoughi, and Hooper (2014) asked the field, “How can the learning sciences engage more directly with the political dimensions of defining and studying learning? What might this engagement offer for democratizing learning?” (p. 919). With respect to issues of DEI, the authors called for research and practice that seek to recognize, identify, and support “multiple ways of knowing” (p. 927), and for consideration not only of what should be learned and what learning effectively is, or isn’t, but also of how learning occurs through practices across multiple contexts, people, and the meanings attached to practices, tools, and content. In related work, Vakil, McKinney de Royston, Nasir, and Kirshner (2018) specifically urged our field, in collaborative design and in design-based research efforts, to recognize and consider how issues of “race and power mediate relationships between researchers and communities in ways that significantly shape the process of research” (p. 194). In essence, our field and related disciplines are increasingly asking, “How (rather than why) might we interrogate the ways we design?” and “How and when (at what design point or points) can/should we consider diversity of participants, settings, and needs?” As a field, we are increasingly awakening to the import of interrogating how and why we design the way we do.
3 State of the Art

There are not enough CSCL applications in broad use to have a full discussion about their DEI impact. Thus, we center our discussion on a few common CSCL applications that were developed outside of the CSCL community: online learning contexts like MOOCs, virtual high schools (VHS), and network-based multiplayer games. These applications share three key components. First, they have had a broad societal impact and market penetration, satisfying our desire to consider DEI impact. Second, learning, implicit or explicit, arguably plays an important role in the successful unfolding of these applications. Third, social interaction, whether synchronous or asynchronous, is, evidence suggests, an important element of accessibility and is at the center of the successful execution of these platforms. We feel the latter two characteristics (i.e., learning and social interaction) are among the first principles for CSCL applications. Space constraints prevent us from offering a full analytic treatment from the perspective of the design challenges that DEI presents to these applications. Rather, in the space we have available, we will take up examples of DEI challenges that appear in these, and related, applications. Our examples are chosen to highlight three of what we think of as the core DEI challenges present in the use of CSCL environments—language, differentiation, and identity. From our perspective, without language in the world of CSCL, there is no opportunity to learn. In a world of significant variation in languages spoken and the way language is used to communicate, if CSCL designs are not attentive to language variation, the designs will significantly disenfranchise many potential
users. The challenge of differentiation is the recognition that one size never fits all. Moreover, just as learners rely on multiple ways of knowing in different configurations, the challenge of differentiation is to be sensitive to those configurations and to have the ability to reconfigure a learning environment accordingly. For us, sensitivity to identity has to be a core function of CSCL environments. CSCL environments, at base, are social. And, for social settings to gain traction, each individual involved has to be able to recognize that they are being seen and understood for who they are, in all their complexity. In what follows, we attempt to show how language, differentiation, and identity present design challenges to CSCL environments.
3.1 Language
Language poses a barrier for many users of CSCL technologies and for CSCL designers. In this section, we offer considerations for building and supporting DEI language environments in CSCL. Here, “language” refers to the linguistic knowledge and resources that users draw on to make sense of content presented in CSCL technologies, and to the language they encounter in CSCL technologies. We highlight issues of language complexity, availability, and accessibility. Language complexity refers to the challenges users face in comprehending expectations for tasks, recognizing and comprehending meaning (explicit or implied), and applying their literacy and language skills to genres. Presumptive literacies (Williams & Gomez, 2002), that is, design-based assumptions about the literacy skills, background, and knowledge of the user built into the content and structure of technologies, present barriers to comprehension. They are often found in the expected uses of communication genres (Cazden et al., 1996; Moje, 2000), such as text, video, charts, graphs, and animation, each of which has a literacy (a set of stereotyped processes that, when possessed, allows the learner to unlock meaning). Users may have limited or no experience in using or reading Academic English, reading various genres, interpreting data, and organizing and representing their understandings using various tools. People from communities with lower levels of formal academic preparation, or from underserved communities, are thus disadvantaged. For this reason, explicitness is needed in directions, expectations, context, and scaffolding to guide users toward successful access and use, which may involve explicitly supporting users’ comprehension through design and helping users monitor and revise their understandings. There are currently no baseline design standards that aim to provide a welcoming and supportive user environment. Opportunities, across tools and contexts, are far from uniform for users.
This demonstrates a fundamental lack of commitment to the different needs of learners. Language availability refers to both the availability of content in languages other than English and the availability of opportunities to learn and communicate with others using one’s native language and/or a new or less familiar language. Less online content is available in non-Western languages, such as Arabic or Swahili,
languages with increasing demand (Willems & Bossu, 2012). While Open Educational Resources (OER), the vast majority of which exist in MOOCs, have been created in Western industrial countries, they may not necessarily fit the needs of learners in developing countries (Richter & McPherson, 2012, p. 202). Similarly, virtual high school content, by and large, is delivered in English. In both media, users from non-English backgrounds are at a disadvantage. Rankin et al. (2006) highlight an apparent shortcoming of English-centric platforms in computer-supported collaborative games for learning. The work notes that non-native English speakers experienced far lower learning benefits than those with more English proficiency. Moreover, students who were less proficient in English tended to interact with non-player characters, whereas students with more English proficiency engaged more with other human players. More recently, researchers have developed metrics to assess social behavior and interactions, particularly prosocial interactions such as turn-taking and collaboration (Emmerich & Masuch, 2016; Maitland et al., 2018). These and other efforts suggest that providing users with opportunities to use social interaction tools can not only make languages other than English available to users, but can also provide context-based and more frequent opportunities for users to use their L1 and L2 to communicate with others to build skills, accomplish tasks, and meet goals. Language accessibility refers to the aim of creating design standards and principles for end-user HCI interaction, as individuals or in social interaction contexts, online, so that regardless of the platform or the technical and application scenarios (Miesenberger, Ossmann, Archambault, Searle, & Holzinger, 2008), users will be at a minimal disadvantage.
Article 9 of the Convention on the Rights of Persons with Disabilities (Márton, Polk, & Fiala, 2013) established standards for supporting people with disabilities in physical and other contexts, reminding designers of important considerations when creating tools, platforms, and content for accessibility, emphasizing attention to text, images, forms, and sounds, and the use of assistive technologies. The guidelines remind designers that users must be able to perceive information and user interface components, and they recommend the availability of “text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language.” As designers, and users, of CSCL tools and content, we must be attentive to creating design content, tools, and contexts that are accessible to all users as they seek to understand, navigate, interact with, and contribute to websites and tools.
3.2 Differentiated Learning
Recently, Rohs and Ganz (2015) applied knowledge gap theory to the utility and usability of MOOCs and other OER contexts. They described several ways in which these contexts serve to “reinforce or expand existing inequalities in education [rather] than help to reduce the differences” (p. 15). First, online learning contexts, like MOOCs and VHS, place relatively high demands on users to have well-formed media competence and self-regulation skills (Leven, Bilger, Strauß, & Hartmann,
2013), place equally high demands on users’ self-directed learning capacity (Tan, Divaharan, Tan, & Cheah, 2011), and have been criticized for offering low levels of support and “scarce personalized guidance” (Gutiérrez-Rojas, Alario-Hoyos, Pérez-Sanagustín, Leony, & Delgado-Kloos, 2014, p. 43). Less skillful learners, who are still learning to learn, will be at a disadvantage because course design often does not address these concerns (Rohs & Ganz, 2015). Second, the platforms are primarily designed for, and used by, higher-SES participants. Studies suggest that these participants are better educated and, as such, often have deeper prior knowledge about a topic and easier acquisition of future topical knowledge (Kizilcec, Saltarelli, Reich, & Cohen, 2017). Third, students from low-SES backgrounds are disadvantaged in access and usage in many MOOCs because many courses, while accessible without fees beyond internet usage, cannot be used to achieve certification or count toward a degree without payment. Virtual high schools, where students complete their coursework through online courses, engender many of the same challenges as MOOCs. However, given their considerably longer history, they have also experienced some successes. For example, Hart, Berger, Jacob, Loeb, and Hill (2019) report that students in VHSs outperform their peers in traditional classrooms as determined by student course completion. On the surface, this seems ideal, as it provides an opportunity for increased academic achievement. In reality, though, many researchers attribute these differences in performance to issues with retention and self-selection. Like MOOCs, VHSs have traditionally attracted students who are more academically motivated than those in traditional classrooms. Additionally, many lower-achieving students discontinue their participation in VHS, which biases comparisons in favor of the VHS students (Hart et al., 2019). In their study, Hart et al.
compare first-time virtual course takers (students enrolling in a specific course for the first time) with course retakers (students who previously failed a course, online or face-to-face). They find that first-time takers were less likely to graduate high school than their peers in traditional classrooms. Conversely, virtual retakers are more likely to graduate high school than their peers. While there appears to be an immediate benefit to academic achievement for VHS participants, the downstream effects are more complicated. In essence, while OERs like MOOCs can, in theory, facilitate equalization of access and opportunity, without the necessary “basic skills that facilitate the acquisition and usage of available media, these resources cannot be used at all or only to a very limited degree” (Zillien & Hargittai, 2009).
3.3 Identity
We look for connections to who we are and what we know in many sectors, including books and music, but also in tools. Such connections serve to engender a sense of belonging and efficacy. Yet, for many users, finding examples of ourselves in CSCL tools is difficult. People of color, across ethnicities, those who are
not heteronormative in their sexuality, and people with disabilities, to name just a few, seldom see examples of themselves in games. This is, perhaps, not surprising. The gaming industry is dominated by white, male designers, and in-game diversity looks similar. This is not without consequences for users. Passmore, Birk, and Mandryk’s (2018) recent 92-question study with 300 Americans on the effects of a lack of in-game diversity found that not seeing the self reflected in a regularly occurring activity, like gaming, can be detrimental, noting that the same long-term effects of depression, detachment, disengagement, and low self-worth are present as outcomes as one would see with everyday racism. The same study found that video game players want to play as characters they can identify with, and, too often, the characters are caricatures. Recent studies by the gaming industry point to increasing dissatisfaction among nonwhite gamers with character representation, which, in itself, carries implied and inferred assumptions (Fussell, 2013). This inattention to broader representation also has consequences for the predominately white and male game player population. Analogous to the limited range of roles portrayed by nonwhites for decades in film and television, gaming offers few options for nonwhite characters—mostly limited to gang members, drug dealers, and criminals (Fussell, 2013). Yang, Gibson, Lueke, Huesmann, and Bushman’s (2014) study of white game players who played violent video games with a black avatar with stereotypical dialect and features found that when white people “become” a black avatar in a game, they behave more aggressively, but also, when interviewed later about their attitudes toward black people, white game players “came away from the game with reinforced stereotypes, including the belief that blacks are more violent people” (p. 698).
Much of the problem in CSCL lies in active or implicit efforts to design “colorblind” tools. In the design of VHS and MOOCs, the oversight has largely been deliberate. Early OER designers aimed to create “borderless, gender-blind, race-blind, class-blind, and bank account-blind” (Agarwal, 2013, para. 3) contexts for learning anywhere, anytime. Yet, for most MOOCs and VHS, the range of users actually designed for is narrow. As such, the tools often fail to recognize users’ learning and engagement needs, serving as a barrier to entry and a barrier to completion. As Arnold (2017) has recently noted, “If you don’t see color, you don’t see me. . . Race is not something we can afford to be blind to—race can be everything for someone’s identity and often affects every aspect of their life [sic] either as marginalization or as privilege” (para. 15). Currently, little is known about the sociocultural conditions of OER participants (Hodgkinson-Williams, Arinto, Cartmill, & King, 2017). The default one-size-fits-all model of technology woefully neglects the relationships between learners’ previous and current social contexts of instruction, and learners’ needs. If we are to take DEI seriously, we must be willing to deviate from this model.
4 The Future

In this chapter, we have described some historical concerns about diversity and equity in CSCL and related fields and have characterized some of the challenges we face. In this section, we would like to highlight some promising initiatives and directions that, we believe, if allowed to influence the direction of CSCL, will help move us toward adopting DEI in design as a way of life. First, designed environments and designers should more fully represent the array of lifestyles, cultures, and thought that exist locally and globally (Narcisse, 2017). For example, PlayStation Europe now has an LGBT group within the company that, ideally, will increase diversity of sexual orientation and gender expression in non-stereotypical ways in video games. Other companies have increased the representation of women, people with disabilities, and ethnic minorities. Urging companies to move beyond stereotypical representation of diverse identities in video games, Narcisse (2012) reminds us, “The thing to remember is that beneath all the comforting platitudes about a character’s color not mattering, lies a sticky web of stereotypes and cheap myths that can still insult and anger people [who are] playing a game” (para. 5). Cognitive diversity is also needed. Cognitive diversity is represented in our toolbox of perspectives, heuristics, interpretations, and previous accomplishments, employed daily, and deployed differentially, by each of us. Scott Page (2007) reports that studies of idea, or cognitive, diversity found that diverse groups of problem solvers outperformed other, more homogeneous groups. When design teams represent this cognitive diversity, the result is better design and better solutions to complex problems (Hong & Page, 2004; Page, 2007; Strijbos, Kirschner, & Martens, 2004).
Second, recent research exploring what drives success in online courses has found that, especially for those who are learning to learn, “[t]he most important thing that helps students succeed in an online course is interpersonal interaction and support” (Jaggars & Xu, 2016), along with cultural inclusion in course content. Recently, the online course system College for America has replaced its courses with projects that support users’ individual progress. Further, researchers at Carnegie Mellon University (CMU) are using machine learning to personalize MOOCs: to automatically analyze and provide feedback on student work, develop social ties between learners, and design MOOCs that are effective for students with a variety of cultural backgrounds. In related efforts, researchers in CMU’s Language Technologies Institute and Entertainment Technology Center are working to improve user retention in MOOCs by increasing learners’ mentoring and team task experiences. Related research has found that the face-to-face resources and activities that are most effective with traditionally underserved students are also essential elements of student learning in the online environment (Archambault et al., 2010; Thorne, Black, & Sykes, 2009; Borup, Graham, & Drysdale, 2014; Curtis & Werth, 2015; Repetto, Cavanaugh, Wayer, & Liu, 2010; Wicks, 2010). Including varied assignments, groupings, and modes of learning in courses, and connecting content to real-world skills, help to enhance the online learning experiences and academic self-concept of minoritized and traditionally underserved students.
5 Conclusions

The underlying aim of this paper is to contribute to efforts to build and organize the design landscape and vocabulary for conversations about diversity, equity, and inclusion in CSCL, anchoring our discussion in the position that DEI can only really be understood and achieved at scale. Given this perspective, we focused on DEI opportunities and consequences in application types that have achieved some scale. When analyzed through this lens, we see that the CSCL community has significant room for improvement. The current state of the art presents a number of opportunities to explore and impact issues of DEI through myriad platforms and tools, including MOOCs, VHSs, and network-based multiplayer games. These platforms, and the tools designed to interface with them, can afford learning as well as social interaction. These environments, however, have typically been designed to intentionally overlook issues of DEI, possibly to the detriment of all learners. In our estimation, this is the exact opposite of the tack the field ought to be taking. By being agnostic, current platforms simply adopt a one-size-fits-all perspective that, all too often, defaults to being designed for a very narrow portion of our diverse global population. In order to correct this, we suggest careful consideration of language, differentiated learning, and identity as focal components that designers should be alert to as applications move to scale. These considerations are certainly not exhaustive. On the contrary, they represent themes that have been heavily discussed across a variety of academic disciplines but have, to date, not been well incorporated into CSCL efforts at scale. As a field, we need to develop a practical scholarly infrastructure that is intentional about these challenges and that recognizes that failure to account for variation necessarily means ignoring issues of diversity, equity, and inclusion.
A deep well of intentionality of this sort sits at the core of advancing DEI in CSCL. We have primarily enacted this discussion in the context of large-scale platforms, which do not represent the totality of CSCL. We posit that this underlying sensitivity can equally be applied in practice to the smaller-scale studies frequently found within the CSCL literature. We would, however, challenge the CSCL community to consider that achieving the goals of DEI does necessitate taking up the challenge at scale, and we encourage readers to consider DEI alongside issues of scalability in CSCL (Law, Zhang, & Peppler, this volume). Some may consider scale and quality as being fundamentally in competition. In our view, this is a false dichotomy. We have all seen examples of technologies or curricula that, when distributed to a larger collection of users, failed to confer the same benefits as were achieved on a smaller scale. This view of scaling is among the perspectives that must evolve for the CSCL community to achieve DEI. The most simplistic forms of scale send the same technology to a larger number of users. While this may, at times, be effective, true scaling requires designing a platform that can be adapted to various populations and realizing that the objective of true scale is not simply to spread pieces of technology. Effective scale that supports DEI resides in having intellectual tools that can accommodate and embrace
variation. Researchers can also endeavor to problematize or further articulate the ways that a default, one-size-fits-all model fails to offer consistent and effective learning outcomes for a broad spectrum of learners. Such research should venture to further elucidate the contexts and populations for which these experiences hold merit, keeping in mind the importance of intersectionality (Crenshaw, 1990). Not surprisingly, these types of findings can be garnered by conducting research with diverse populations of learners in a manner that respects individual history, aspiration, and accomplishment. In short, intentional design means seeing and harnessing learners' variation. Without question, this orientation brings a host of challenges and questions to the CSCL community. We see these challenges and questions as an opportunity to extend the reach of CSCL to more people, more of the time.
References

Agarwal, A. (2013). Online universities: It's time for teachers to join the revolution. The Guardian, 13.
Arnold, L. (2017). Opinion: If you don't see color, you don't see me. CU Independent. Retrieved from https://cuindependent.com/2017/04/25/opinion-dont-see-color-dont-see-me-color-blind-race/
Archambault, L., Diamond, D., Brown, R., Cavanaugh, C., Coffey, M., Fourees-Aalbu, D., Richardson, J., & Zygouris-Coe, V. (2010). Research committee members brief: An exploration of at-risk learners and online education. International Association for K-12 Online Learning.
Bolger. (2017, October 24). What's the difference between diversity, inclusion, and equity? Retrieved from https://generalassemb.ly/blog/diversity-inclusion-equity-differences-in-meaning/
Booker, A. N., Vossoughi, S., & Hooper, P. K. (2014). Tensions and possibilities for political work in the learning sciences. In Proceedings of the International Conference of the Learning Sciences (ICLS), 2, 919–926. International Society of the Learning Sciences.
Borup, J., Graham, C. R., & Drysdale, J. S. (2014). The nature of teacher engagement at an online high school. British Journal of Educational Technology, 45(5), 793–806.
Cazden, C., Cope, B., Fairclough, N., Gee, J., Kalantzis, M., Kress, G., et al. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60–92.
Crenshaw, K. (1990). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43, 1241.
Curtis, H., & Werth, L. (2015). Fostering student success and engagement in a K-12 online school. Journal of Online Learning Research, 1(2), 163–190.
Dye, M., Kumar, N., Schlesinger, A., Wong-Villacres, M., Ames, M. G., Veeraraghavan, R., et al. (2018). Solidarity across borders: Navigating intersections towards equity and inclusion. In Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 487–494).
Emmerich, K., & Masuch, M. (2016). Game metrics for evaluating social in-game behavior and interaction in multiplayer games. In Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology (pp. 1–8).
Fussell, S. (2013, April 11). Why the video game industry needs to talk about white men. VentureBeat. Retrieved from https://venturebeat.com/2013/04/11/why-the-video-game-industry-needs-to-talk-about-white-men/
Interrogating the Role of CSCL in Diversity, Equity, and Inclusion
Gutiérrez-Rojas, I., Alario-Hoyos, C., Pérez-Sanagustín, M., Leony, D., & Delgado-Kloos, C. (2014). Scaffolding self-learning in MOOCs. Paper presented at the European MOOCs Stakeholder Summit.
Hart, C. M., Berger, D., Jacob, B., Loeb, S., & Hill, M. (2019). Online learning, offline outcomes: Online course taking and high school student performance. AERA Open, 5(1), 2332858419832852.
Hodgkinson-Williams, C., Arinto, P. B., Cartmill, T., & King, T. (2017). Factors influencing open educational practices and OER in the Global South: Meta-synthesis of the ROER4D project. In C. Hodgkinson-Williams & P. B. Arinto (Eds.), Adoption and impact of OER in the Global South (pp. 27–67). Retrieved from https://doi.org/10.5281/zenodo.1037088
Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.
Jaggars, S. S., & Xu, D. (2016). How do online course design features influence student performance? Computers & Education, 95, 270–284.
Kizilcec, R. F., Saltarelli, A. J., Reich, J., & Cohen, G. L. (2017). Closing global achievement gaps in MOOCs. Science, 355(6322), 251–252.
Law, N., Zhang, J., & Peppler, K. (this volume). Sustainability and scalability of CSCL innovations. In International Handbook of Computer-Supported Collaborative Learning. Cham: Springer.
Leven, I., Bilger, F., Strauß, A., & Hartmann, J. (2013). Weiterbildungstrends in verschiedenen Bevölkerungsgruppen. In F. Bilger, D. Gnahs, J. Hartmann, & H. Kuper (Eds.), Weiterbildungsverhalten in Deutschland: Resultate des Adult Education Survey 2012 (pp. 60–94). Bielefeld: W. Bertelsmann.
Márton, S. M., Polk, G., & Fiala, D. R. C. (2013). Convention on the rights of persons with disabilities.
Maitland, C., Granich, J., Braham, R., Thornton, A., Teal, R., Stratton, G., & Rosenberg, M. (2018). Measuring the capacity of active video games for social interaction: The Social Interaction Potential Assessment tool. Computers in Human Behavior, 87, 308–316.
Miesenberger, K., Ossmann, R., Archambault, D., Searle, G., & Holzinger, A. (2008). More than just a game: Accessibility in computer games. In HCI and Usability for Education and Work: 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, USAB 2008, Graz, Austria, November 20–21, 2008, Proceedings.
Moje, E. B. (2000). "To be part of the story": The literacy practices of gangsta adolescents. Teachers College Record, 102(3), 651–690.
Narcisse, E. (2012, March 29). Come on, video games, let's see some black people I'm not embarrassed by. Retrieved from https://kotaku.com/5897227/come-on-video-games-lets-see-some-black-people-im-not-embarrassed-by
Narcisse, E. (2017, February 3). The natural: The trouble portraying blackness in video games. Retrieved from https://kotaku.com/the-natural-the-trouble-portraying-blackness-in-video-1736504384
Page, S. E. (2007). Making the difference: Applying a logic of diversity. Academy of Management Perspectives, 21(4), 6–20.
Pandya, R., & Dibner, K. (2018). Learning through citizen science: Enhancing opportunities by design. National Academies of Sciences, Engineering, and Medicine, Division of Behavioral and Social Sciences and Education, Board on Science Education, Committee on Designing Citizen Science to Support Science Learning. Washington, DC: National Academies Press.
Passmore, C. J., Birk, M. V., & Mandryk, R. L. (2018, April). The privilege of immersion: Racial and ethnic experiences, perceptions, and beliefs in digital gaming. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–19).
Rankin, Y., McNeil, M., Shute, M., & Gooch, B. (2006). User centered game design: Evaluating massive multiplayer online role playing games for second language acquisition. Sandbox Symposium, Los Angeles, CA.
K. Gomez et al.
Repetto, J., Cavanaugh, C., Wayer, N., & Liu, F. (2010). Virtual high schools: Improving outcomes for students with disabilities. The Quarterly Review of Distance Education, 11(2), 91–104.
Richter, T., & McPherson, M. (2012). Open educational resources: Education for the world? Distance Education, 33(2), 201–219.
Rohs, M., & Ganz, M. (2015). MOOCs and the claim of education for all: A disillusion by empirical data. International Review of Research in Open and Distributed Learning, 16(6).
Schlesinger, A., Edwards, W. K., & Grinter, R. E. (2017, May). Intersectional HCI: Engaging identity through gender, race, and class. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 5412–5427).
Sobel, K., Kientz, J. A., Clegg, T. L., Gonzalez, C., & Yip, J. C. (2017). Equity and inclusivity at IDC. In Proceedings of the 2017 Conference on Interaction Design and Children (pp. 761–767). New York, NY: ACM. https://doi.org/10.1145/3078072.3081313
Starmans, C., Sheskin, M., & Bloom, P. (2017). Why people prefer unequal societies. Nature Human Behaviour, 1, 0082.
Steinkuehler, C. A., Kafai, Y., Sandoval, W. A., & Enyedy, N. (2004). Proceedings of the 6th International Conference on Learning Sciences.
Strijbos, J. W., Kirschner, P. A., & Martens, R. L. (2004). What we know about CSCL (pp. 245–259). Dordrecht: Springer.
Tan, S. C., Divaharan, S., Tan, L., & Cheah, H. M. (2011). Self-directed learning with ICT: Theory, practice and assessment. Singapore: Ministry of Education.
Thorne, S. L., Black, R. W., & Sykes, J. M. (2009). Second language use, socialization, and learning in Internet interest communities and online gaming. The Modern Language Journal, 93, 802–821.
Vakil, S. (2018). Ethics, identity, and political vision: Toward a justice-centered approach to equity in computer science education. Harvard Educational Review, 88(1), 26–52.
Wicks, M. (2010). A national primer on K-12 online learning. Version 2. International Association for K-12 Online Learning.
Willems, J., & Bossu, C. (2012). Equity considerations for open educational resources in the glocalization of education. Distance Education, 33(2), 185–199.
Williams, K. P., & Gomez, L. (2002). Presumptive literacies in technology-integrated science curriculum. In G. Stahl (Ed.), Computer support for collaborative learning: Foundations for a CSCL community (pp. 599–600). Mahwah, NJ: Lawrence Erlbaum.
Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). Ability-based design: Concept, principles and examples. ACM Transactions on Accessible Computing (TACCESS), 3(3), 1–27.
Yang, G., Gibson, B., Lueke, A., Huesmann, R., & Bushman, B. (2014). Effects of avatar race in violent video games on racial attitudes and aggression. Social Psychological and Personality Science, 5(6).
Zillien, N., & Hargittai, E. (2009). Digital distinction: Status-specific types of internet usage. Social Science Quarterly, 90(2), 274–291.
Further Readings

Maitland, C., Granich, J., Braham, R., Thornton, A., Teal, R., Stratton, G., & Rosenberg, M. (2018). Measuring the capacity of active video games for social interaction: The Social Interaction Potential Assessment tool. Computers in Human Behavior, 87, 308–316.
This study investigates the capacity of active video game (AVG) use to facilitate psychosocial outcomes. It aimed to establish a reliable system for rating the potential of AVGs to facilitate social interaction among players. Findings identified core elements of AVGs that enable social interaction.

Rankin, Y., Morrison, D., McNeal, M., Gooch, B., & Shute, M. (2009). Time will tell: In-game social interactions that facilitate second language acquisition. In Proceedings of the 4th International Conference on Foundations of Digital Games (pp. 161–168).
This study considers language socialization in dialogue between native and non-native users of massively multiplayer online role-playing games (MMORPGs). The researchers developed ClockWerk©, an evaluation tool that visually depicts the communication patterns of game dialogue attributed to MMORPGs. Findings indicate that ClockWerk© enables users to temporally correlate types of social interaction with gameplay activities, gauging their impact on second language acquisition.

Richard, G. T. (2013, April). Designing games that foster equity and inclusion: Encouraging equitable social experiences across gender and ethnicity in online games. In Proceedings of the CHI 2013 Workshop: Designing and Evaluating Sociability in Online Video Games, Paris, France (pp. 83–88).
This paper discusses emerging research on designs that explicitly aim to address issues of gender and ethnic inclusion and inequity in gaming. It describes relationships between representations of gender and ethnicity, harassment, and social exclusion, and offers design principles for fostering equity and social inclusion in games.

Siriaraya, P., Zaphiris, P., & Ang, C. S. (2013). Supporting social interaction for older users in game-like 3D virtual worlds. In Designing and Evaluating Sociability in Online Video Games (pp. 89–93).
This study describes how 38 older users engaged with 3D and non-3D virtual grocery stores in order to identify key factors affecting their experience of and satisfaction with social engagement in the virtual environment. Findings suggest that older users found the avatars unrealistic and had difficulty suspending disbelief while engaged with virtual worlds.

Vakil, S. (2018). Ethics, identity, and political vision: Toward a justice-centered approach to equity in computer science education. Harvard Educational Review, 88(1), 26–52.
The researcher argues that computer science (CS) education must engage with critical traditions to achieve a justice-centered approach to equity. The researcher identifies three features of CS education that are good places to start in working toward such an approach: the content of the curriculum, the design of learning environments, and the politics and purposes of CS education reform.
Sustainability and Scalability of CSCL Innovations

Nancy Law, Jianwei Zhang, and Kylie Peppler
Abstract CSCL innovations involve dynamic changes taking place at multiple levels within the complex educational ecosystem. Scaling of CSCL innovations needs to pay simultaneous attention to changes along several dimensions, including depth of change, sustainability, spread, and shifts of ownership, as well as evolution of the innovation over time. General models for scaling innovations do not take account of the role that technology may play. This chapter examines the sustainability and scalability of CSCL innovations, including the role of technology in fostering sustainable and scalable innovation. We review a range of CSCL innovations that span in- and out-of-school settings to synthesize technology-enabled strategies that address scalability challenges at the classroom and education ecosystem levels. A set of design principles is identified to guide future research and practice to transform education through CSCL innovations.

Keywords Sustainability · Scalability · Scaling CSCL innovations · Design principles for scalability · Architecture for learning · Innovation network · Multilevel aligned learning
N. Law (*)
University of Hong Kong, Hong Kong SAR, China
e-mail: [email protected]

J. Zhang
University at Albany, State University of New York, Albany, NY, USA
e-mail: [email protected]

K. Peppler
University of California, Irvine, Irvine, CA, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_7
1 Definitions and Scope

Since the inception of computer-supported collaborative learning (CSCL), researchers have been making efforts to transform education toward a new learning paradigm featuring students' collaborative knowledge building (Scardamalia, Bereiter, McLean, Swallow, & Woodruff, 1989). Over the past three decades, the field has gained rich conceptual and empirical insights into the sociocultural and cognitive processes of collaborative learning as well as the conditions and designs to scaffold such processes, including that of technology support. A number of CSCL platforms underpinned by innovative, socially grounded pedagogies have been developed, such as Knowledge Forum (Scardamalia & Bereiter, 2014), the Web-based Inquiry Science Environment (WISE) (Linn, Clark, & Slotta, 2003), the Virtual Math Teams (VMT) (Stahl, 2006), Quest Atlantis (Barab et al., 2007), and Scratch (Resnick et al., 2009). Collaborative learning is further gaining attention in system-level education reforms that seek to enhance student engagement and develop twenty-first-century competencies, such as the initiatives in Singapore (Looi, 2013), Hong Kong (Hong Kong Education Bureau, 2015), and Europe (Kampylis, Law, & Punie, 2013). However, despite the progress, how to sustain and scale CSCL innovations in broad educational settings for transformative impacts remains a grand challenge (Chan, 2011; Fishman, Marx, Blumenfeld, Krajcik, & Soloway, 2004; Kolodner et al., 2003; Penuel, 2019; Wise & Schwarz, 2017). This chapter is devoted to understanding the sustainability and scalability of CSCL innovations as interconnected challenges. Sustainability refers to the likelihood for an innovation to be continued over extended periods of time, and scalability refers to the probability that an innovation can be deepened and/or spread beyond the original sphere of adoption.
While these two concepts appear to be different, they are in fact closely connected and interdependent (Clarke & Dede, 2009; Coburn, 2003). Efforts for learning innovation and improvement need to be embedded in the multiple contexts in which students learn, teachers teach, and leaders manage school systems (McLaughlin & Talbert, 1993). In addition to valuing student-centered, collaborative, open-ended inquiry as its educational paradigm, CSCL embraces a vision of learning innovations as dynamic and adaptive to cultural, historical, and social changes and contexts, with technology playing an important role. General scalability research does not give special consideration to the role that technology may play in innovation implementation. However, in CSCL research, technology plays an important, interdependent, and coevolving role in supporting the changing educational goals and practices in formal and informal learning contexts. In this chapter, we synthesize the conceptualizations of sustainability and scalability developed in CSCL and other learning innovations, review how different CSCL programs approach sustainability and scalability, and distill these into a set of design principles and strategies in order to guide future efforts. We draw on current literature to identify issues and strategies that are relevant to researchers as well as policymakers, school leaders, and practitioners who share the concern for making CSCL a pedagogy of choice for education reform. We also examine the scalability of CSCL in terms of the technology used, the interdependence between technology and learning practices, and the facilitating role that the technology may play in fostering
scalability. Throughout this chapter, we focus on CSCL in K-16 in- and out-of-school learning settings, though there are also broader applications for CSCL in professional and adult learning.
2 History and Development: Sustainability, Scalability, and DBIR

Educational efforts to institute system-wide changes in curriculum and pedagogy in response to wider social, economic, and political changes first became prominent in the 1960s (Cremin, 1961; Cuban, 1984, 1990; Elmore & Associates, 1990; Elmore & McLaughlin, 1988). Earlier models of change were underpinned by the assumption that innovations go through a stage of prototyping and refinement before scaling, which was considered essentially a process of diffusion through replication (Rogers, 1962). After decades of research showing educational innovations to be challenging and seldom sustainable, Coburn (2003) challenged the static model of scaling and put forward a four-dimensional dynamic model of scale. At the core of this model is the idea that expanding the adoption of an innovation in order to achieve lasting change involves not only spread, but also deepening changes along three additional dimensions: depth, sustainability, and shift of ownership. Spread refers to the adoption and enactment of a learning innovation in a greater number of classrooms (or other learning settings), including its activity structures and materials as well as the underlying beliefs, norms, and principles. The depth of an innovation is gauged by the consequential change it enables, which goes beyond surface structures or procedures to changes in teachers' beliefs, norms of social interaction, and the underlying learning principles adopted in their professional practices. Sustainability highlights the need to implement the innovation over time for lasting change. In addition, scaling requires a shift of ownership of the innovation and reform from "external" agents to internal stakeholders (districts, schools, teachers, learners), who take on the responsibility of building the capacity to sustain, deepen, and spread principled changes themselves.
Building on the above conceptualization, Clarke and Dede (2009) added a further, important dimension—evolution—highlighting emergence as a hallmark of innovations that demonstrate scalability. According to the dynamic model of innovation as advanced by Coburn (2003) and Clarke and Dede (2009), sustainability and scalability are two interrelated design challenges for learning innovations. As noted above, sustainability refers to the continual implementation and refinement of a learning program over time in its original or subsequent specific setting(s) despite changing conditions and demands (Clarke & Dede, 2009; Coburn, 2003). Scalability places more emphasis on the spread and growth of the learning innovation in broader contexts and conditions beyond its original setting(s). However, from a design perspective, it is important that these be considered together rather than separately. Spread does not simply refer to the wider adoption of activity structures or materials; importantly, it also happens through within-unit spreading that brings about the internalization of norms and
principles (Coburn, 2003). Classrooms are nested within schools, districts, and broader educational ecologies, forming complex systems (Lemke & Sabelli, 2008). The beliefs and practices of teachers within as well as outside of a teacher's own school or district will also have an impact on the teacher's motivation and persistence in sustaining the innovation practice. Thus, the long-term sustainability of an innovation also depends on its scalability. Sustainability is fundamental to scaling: "The distribution and adoption of an innovation are only significant if its use can be sustained in original and even subsequent schools" (Coburn, 2003, p. 6). However, sustainability does not imply a simple static continuation of the innovation in its original form. It is inevitable that any educational innovation involves changing the composition and characteristics of the educational ecology: the learning goals and processes, and the roles of the teacher, the learner, and the institution (Law, Yuen, & Fox, 2011). What is to be sustained are not the surface features of an innovation, but its learning outcome goals and design principles. International comparative studies of technology-enabled learning innovations show that the five dimensions of scalability interact and are interdependent (Kampylis et al., 2013). Studies have also shown that innovations that started at scale, as a network of schools, demonstrated greater resilience and sustainability over time (Law, Kankaanranta, & Chow, 2005). Since sustainability is subsumed under scalability as one of its five dimensions, we will use scalability as the overarching term, reserving the term sustainability for references to this specific dimension. Sometimes, pedagogical innovations that emerge without central agency may successfully scale, given the appropriate technology platform and connectivity support.
An example of this is eTwinning, a European-wide initiative to connect teachers from different countries in an effort to develop students’ multicultural awareness through online collaboration (Kampylis & Punie, 2013). eTwinning provides secure online spaces and tools for virtual meetings and collaboration to facilitate cross-border student interactions and projects. It has experienced phenomenal growth with a total of almost 800,000 registered teachers in 2020. For pedagogical innovations to be scalable, they need to be guided by design principles pertaining to two levels of theory: a theory of learning and a theory of change/implementation that form a coherent alignment. The former underpins the design of the innovation at the classroom level to realize the learning experiences that would bring about the desired learning outcomes. In CSCL, knowledge building (Scardamalia & Bereiter, 2014) and knowledge integration (Linn et al., 2003) belong to this type of theory. The latter addresses the ontological tensions created between the new practices and the existing educational ecology. Productive CSCL practices require transforming the classroom into a collaborative community of inquiry with students taking on high-level agency. Such classroom practices represent a new culture of learning that runs against traditional frameworks of curriculum, assessment, and management. Therefore, to facilitate educational change with CSCL principles, programs, and tools, researchers need to consider how people learn in CSCL contexts as well as to understand factors and conditions that enable or inhibit the educational transformation necessary toward the innovation vision (McKenney, 2018). Implementation research in the learning sciences, including CSCL, values the authenticity of the research context. Thus, strictly controlled experimental designs
are not preferred. Instead, research–practice partnerships (RPPs; Coburn & Penuel, 2016), which are long-term collaborations between researchers and practitioners (e.g., teachers, leaders, school districts) to codesign and improve educational innovations, are popular models of implementation. Design-based research (DBR, also referred to as design experiments) has been developed as a method to test theory-informed interventions in authentic settings through iterative cycles of design, testing, and improvement in partnership with teachers (Collins, Joseph, & Bielaczyc, 2004). The goals encompass improving both theory (the design principles) and practice. More recently, a specific type of design-based research method—design-based implementation research (DBIR)—has emerged in recognition of the need for researchers and practitioners to work together on design issues not only at the classroom level but also on the institutional and other contextual factors (Fishman, Penuel, Allen, Cheng, & Sabelli, 2013) necessary for educational interventions to be effective, sustainable, and scalable. The participatory nature of DBR/DBIR favors "open-ended social innovations" that scale not only externally designed products (tools, programs) but also the processes and methods by which new learning practices are codesigned in specific contexts (Booker & Goldman, 2016).
3 State of the Art: Design Strategies for Scaling CSCL Innovations

Taking the stance that sustainability is one dimension of scaling educational innovations, this section provides an overview of the research to scale CSCL in schools as well as in out-of-school settings. CSCL is typically organized around open-ended, inquiry-oriented tasks, where the learning process is collaborative and dynamic, and the outcome is generative and often socially shared. For CSCL practices to be scaled, simple dissemination of externally designed tools and activities to more settings would not be adequate, as it risks turning classroom reforms into surface changes and losing the ethos of the deep principles (Brown & Campione, 1996). For new learning approaches to contribute to true educational transformations, they need to be embedded in locally cultivated knowledge practices (Hakkarainen, 2009), which are new social practices developed in specific contexts to channel and sustain students' productive inquiry and collaborative interactions for idea advancement. Given the intent of CSCL to bring about transformative classroom changes, gaps and misalignments are expected to emerge between the new learning culture and the existing systems of practices. Thus, researchers need to work with practitioners and other stakeholders to critically reflect on how the existing educational practices are enacted and sustained by epistemic beliefs, social and power relations, resources, and time–space coupling. Such reflection informs co-envisioning of possible futures to create new relations and conditions that nurture transformed forms of learning practices and to examine how learning evolves in the new context (Bang & Vossoughi, 2016; Cole, 2007; Zhang, 2010).
CSCL innovations differ in terms of their specific theoretical underpinnings, design principles, the CSCL technologies they use, as well as the specific learning contexts they serve. These determine the types of inquiry and collaboration at the core of the innovation, the roles of the learner, the teacher (or informal learning educator), as well as the roles played by the selected technology in scaffolding collaborative learning. As a result, different CSCL innovations face different challenges in scaling and involve different mechanisms to support scalability. The mechanisms to sustain and scale CSCL innovations need to address the two levels of challenges—classroom and school-cum-broader-education-ecosystem levels—in a coherent manner. The first (classroom) level challenge requires the creation of a systematic and robust learning model with support systems that can effectively engage the learners and teachers (i.e., educators) in sustained, collaborative learning practices in the classroom or other learning settings. The second-level challenge requires the creation of supportive contexts and infrastructures beyond the learner–educator interaction level to sustain and grow the innovation, overcome barriers, and build new alignment with the changing conditions and demands of the educational institutions and systems. We synthesize below the design strategies adopted by various CSCL programs at these two nested levels as well as the challenging issues encountered.
3.1 Design of Sustainable CSCL Models and Technologies to Scaffold Productive Learning Interactions
Design for sustainable CSCL needs to provide a transparent model of how the CSCL practices are organized and implemented in light of the educational goals and principles. In this section, we identify four common features of CSCL environments that have been adopted to support a sustainable learning model, from its initial uptake to the continual adaptation, improvement, and reinvention of the learning practices in changing contexts.
3.1.1 Principle-Based Collaboration Environments to Guide CSCL Practice
CSCL innovations value a principle-based approach that turns the core learning principles and associated indicators into shared classroom norms and ideals to guide classroom interactions (Brown & Campione, 1996; Engle & Conant, 2002; Scardamalia, 2002). Many collaborative and inquiry-based programs also specify activity cycles and guidelines that inform how principles are to be implemented in practice (Kolodner et al., 2003). Core principles of CSCL are further made transparent through the design of collaborative environments. For example, the WISE program translates the four guiding principles of knowledge integration into a set of
design patterns that are incorporated in a library of inquiry projects, which are open to teachers' adaptation (Linn, 2006; Slotta, 2004). Similarly, the Scratch program is guided by four principles to develop creative thinkers: projects for making, passion, peers, and playful experimentation (Resnick, 2017). Knowledge building adopts a principle-based approach to enabling knowledge-creating practices in classrooms, guided by a set of 12 principles (Scardamalia, 2002; Scardamalia & Bereiter, 2014). The principles are made transparent through Knowledge Forum, which provides a collective knowledge space for each knowledge-building community, uses a set of online discourse scaffolds to guide collaborative knowledge building, and uses analytic tools to track students' personal and collaborative progress and provide ongoing feedback (Chen & Zhang, 2016; Scardamalia & Bereiter, 2014). Boundary-crossing designs further extend student interaction to higher social levels across different communities, forming a larger social context for knowledge building. Students can share major insights and challenges with broader knowledge builders for mutual learning and continual build-on, including building on the ideas from previous school years (Yuan & Zhang, 2019; Zhang & Chen, 2019). Long-term studies of classroom innovations guided by knowledge-building principles (Scardamalia, 2002) using Knowledge Forum document how teachers work with the core principles to design and improve classroom practices (Zhang, Hong, Scardamalia, Teo, & Morley, 2011). Core principles such as epistemic agency, collective responsibility, and knowledge-building discourse serve to guide student participation and focus teachers' pedagogical thinking, planning, experimentation, and reflection on/in practice, leading to continual improvement of knowledge-building processes and outcomes.
3.1.2 Discourse Scaffolds and Collaboration Scripts to Inform Students’ Engagement
To inform and enhance students’ learning interactions, researchers have designed different scaffolding supports offered by the teacher or distributed in the technology environments, which form a synergy (Tabak, 2004). As a specific type of scaffolding, structured collaboration scripts are designed to help create routines and structures that lessen the effort needed to sustain high-quality learning practices over time. These scripts are often embedded in a collaboration platform to specify and sequence various learning tasks and activity procedures and distribute different roles among students to guide their discourse and social interactions (Kirschner & Erkens, 2013; Kollar, Fischer, & Slotta, 2007). The teacher engages in classroom orchestration to integrate and adapt multiple scripts of learning activities in order to cope with many constraints, including the expectations of the curriculum and assessment, time, and space. Research on classroom orchestration highlights the need to make the educational workflow usable, visible, and tangible. This empowers the teacher to not only select a set of predetermined scripts but also to coordinate the different activities in a coherent way in their evolving classroom context (Dillenbourg, 2013).
N. Law et al.

3.1.3 Reflective Supports for Student-Directed Regulation and Structuration of Collaborative Learning Practices
Sustainable CSCL environments foster students’ capacity to direct and reflect on their personal and collaborative knowledge processes for continual improvement. Researchers have tested various strategies to support student metacognition and socially shared regulation of collaborative learning (Winne, Hadwin, & Perry, 2013). These include using group awareness tools, adaptive agents, and visualization and feedback tools to support the ongoing monitoring and optimization of individual and collective processes (Järvelä et al., 2016). In addition to externally designed reflective supports, recent research further highlights the use of student-generated, emergent structures to shape evolving inquiry and discourse beyond the original framing and boundaries. This process is framed as “reflective structuration” (Zhang et al., 2018), which refers to the reflective processes by which members of a community co-construct shared inquiry structures (i.e., collective goals, directions, processes) over time to channel their personal and collaborative actions. Reflective online supports, such as Idea Thread Mapper, allow students and their teacher to co-organize/reorganize their inquiry directions and groups over time based on emergent needs, enhancing student-driven collaborative processes and knowledge outcomes (Zhang et al., 2018). The above-reviewed discourse scaffolds and reflective supports work together to create a synergy between external, distributed scaffolding supports and student-directed generative efforts. An emerging line of research focuses on creating learning analytics to support teacher scaffolding and student reflection in CSCL settings (Wise, Knight, & Buckingham Shum, this volume).
These include designing teacher dashboard tools to monitor student participation and collaboration and inform teacher orchestration; analytics to detect emerging directions, progress, and gaps; and intelligent feedback tools to enhance students’ reflective awareness and intentional engagement (Chen & Zhang, 2016; Wise, 2019; Wise, Knight, & Buckingham Shum, this volume).
3.1.4 Discipline-Specific CSCL Programs and Resources to Support Curriculum and Assessment Innovation
While many CSCL models and learning supports are applicable across curricula, CSCL practices need to be refined within the context of specific disciplines and enacted in ways that induce and share high-quality practices. One type of CSCL innovation focuses on supporting the learning of disciplinary knowledge and skills in specific curriculum areas, such as WISE and Virtual Math Teams. WISE (Linn et al., 2003) provides a library of inquiry projects that teachers can adopt directly or customize to fit their specific classroom settings. Virtual Math Teams (VMT) supports collaborative learning of mathematics and mathematical discourse through the provision of software, curriculum, pedagogy, and research methods (Stahl, 2009). The software environment provides an integrated collaborative learning
environment comprising a number of chat tools and thread features, including a shared whiteboard for constructing drawings related to a mathematical problem. Collectively, the above four features of CSCL environments are the culmination of research efforts that provide a robust foundation for the field to develop sustainable designs of CSCL: core CSCL principles are translated into classroom norms, and adaptive supports embedded in the multilevel collaborative environment and enriched by discipline-specific resources serve to enhance student agency for continually advancing joint inquiry practices in specific disciplinary and interdisciplinary contexts. Given that these design features are often investigated and adopted in different research contexts by different teams independently, there have not been explicit efforts to seek conceptual coherence or practical alignment in implementation across these features. The ongoing debate between scripted and non-scripted collaborative learning (Bereiter et al., 2017) also highlights the need for further research to address the tensions between prescriptive guidance structures and student agency in classroom practices, and between fidelity and adaptability in CSCL implementation. Such research will guide further advances in the development of coherent ongoing support for CSCL classroom practices as dynamic social systems.
3.2 Design of Supportive Architectures for Learning to Foster Scalability
CSCL innovations introduce changes in educational goals, roles, and practices, and require different priorities in terms of technology infrastructure, support services, and organizational routines for them to become scalable. Addressing the challenge of scalability is a key design focus in DBIR efforts, which are generally organized as research–practice partnerships (RPPs) involving stakeholders at multiple levels of the education system (Coburn & Penuel, 2016). RPP projects connect multiple schools engaged in similar innovation initiatives, such as the Knowledge Building International Project (KBIP) (Laferrière et al., 2015), to function as innovation networks. It has been found that even when all five dimensions of scalability can be demonstrated in an innovation network, individual schools may still stop engaging in the innovation and leave the network. Thus, successful scaling at the innovation-network level may not imply scalability at the school level. Scaling pedagogical innovations is a multilevel challenge (Davis, 2017), requiring changes and aligned learning at the teacher, school, community, and system levels. Such aligned learning needs to be facilitated through an appropriate architecture for learning (Stein & Coburn, 2008), which is broadly described as the organizational structure, interaction mechanisms (e.g., established routines described in Spillane, Parise, & Sherer, 2011), artifacts, and technology that are available to facilitate the sharing and communication of ideas. The architecture for learning plays an important role in facilitating decision-making that progressively consolidates contextual changes favorable to the innovation at
different levels of the education system. Below we highlight the key architectures for learning that support multilevel connected learning across teacher, school, and network levels in DBIR contexts.
3.2.1 Teacher Learning and Innovation Through Codesign
Teacher codesign of learning environments and learning experiences has become a widely adopted model for teacher learning (Mor, Ferguson, & Wasson, 2015). It is also commonly adopted in CSCL, as CSCL innovations are characterized by a focus on collaborative inquiry. As technology plays a central role in CSCL, codesign for CSCL implementation generally involves teachers using the related technology not only in their learning design work but also in their own professional reflection, sharing, and discourse. This form of teacher learning has the advantage of fostering shared vision and meanings of classroom change as well as ownership over the innovation, which engender intentional efforts for continual improvement (Teo, 2017; Zhang et al., 2011). CSCL practices also require new designs of learning assessment. Through design-based research, van Aalst and Chan (2007) worked closely with teachers to codesign student-directed assessment for knowledge building using e-portfolios and other technology support. At the same time, new learning analytics have been designed to assess students’ collaborative knowledge building and generate automated feedback (Chen & Zhang, 2016; Resendes, Scardamalia, Bereiter, Chen, & Halewood, 2015). The Knowledge Building Community Project in Singapore involves teachers in codesigning new formal and informal assessments using a set of analytics tools, including designing new report cards to keep track of student progress in content knowledge and in a set of twenty-first-century competencies (Teo, 2017).
3.2.2 Network Models of Professional Learning and Collaboration
Whereas codesign provides experiential professional learning opportunities that are directly connected to teachers’ day-to-day professional practice, research in CSCL highlights the importance of creating collaborative professional learning communities and knowledge-building networks in which teachers engage in open professional inquiry and collaborate to support continual innovation and improvement (Chan, 2011; Goldman, 2005; Teo, 2017; Zhang et al., 2011). In such communities, teachers talk about their classroom stories; reflect on progress, problems, and challenges; share learning designs, classroom actions, observations, and reflections; work together to address difficulties and coinvent better curriculum/pedagogical designs and support materials; and share how students think in the disciplinary areas as reflected through students’ ongoing work. Such professional inquiry and collaboration may extend beyond local schools through distributed networks (Chan, 2011; Hong, Scardamalia, & Zhang, 2010; Laferrière et al., 2015). For the
professional communities and networks to be sustained, it is critical to develop and support cohorts of “change leaders,” who engage with their colleagues in reflective inquiry into their teaching and learning and nurture an innovative culture that can be sustained and spread (Goldman, 2005).
3.2.3 School–UNiversity–Government (SUNG) Partnerships to Scaffold Multilevel Aligned Learning
Educational systems are complex systems comprising hierarchically nested levels—students, teachers, classrooms, schools, districts, systems—that are interdependent (Davis, 2017). In such a complex system, the learning outcomes at a higher level become the conditions for learning at a lower level (Law, Niederhauser, Christensen, & Shear, 2016). School–UNiversity–Government (SUNG) partnerships (Laferrière et al., 2015) are a form of RPP that can play an important role in providing shared vision and agency for advancing and scaling multilevel, research-based innovations. Beyond providing peer learning opportunities to teachers, SUNG partnerships can scaffold and mediate top-down and bottom-up initiatives to foster aligned changes across levels. The evolution of roles that takes place (Laferrière et al., 2015) as tensions in the innovation network shift with its increasing scale echoes the need for infrastructuring, as argued by Penuel (2019). Even when different measures of scale are progressing in an innovation network, there is still fragility (Laferrière et al., 2015), as agency needs to be exercised at all levels of the complex educational system within which the innovation is embedded. In-depth studies of school-level change conducted within three SUNG networks show great diversity in development over time. Despite similar system-level conditions and network-level support, agency and appropriate architectures for learning at the school level have been shown to be critical for the sustainability and scalability of CSCL innovations (Law et al., 2018).
3.2.4 Design and Implementation of Sustainable Out-of-School Practices and Communities
Out-of-school settings can be highly productive spaces for cultivating our understanding of CSCL at scale. This is, in part, because out-of-school spaces are non-compulsory and thus reveal very early in iterative technology design cycles what is of interest to learners and the conditions under which individuals will continue to engage with minimal institutional support. For this reason, many researchers have sought to design and deploy CSCL technologies in out-of-school spaces before bringing them into K–16 settings (e.g., Fields & Kafai, 2009; Greenhow, Gibbins, & Menzer, 2015). Such was the case in the design and scaling of the online platform Scratch (scratch.mit.edu). Scratch was initially designed for use in the international Computer Clubhouse Network (Kafai, Peppler, & Chapman, 2009). The early
testing of this technology took place at a select number of Computer Clubhouse sites until it became the most popular technology in these spaces (Peppler & Kafai, 2009). Following this phase of design and testing, the platform was rolled out to the whole Clubhouse network and was made generally available to the public through the scratch.mit.edu website (Resnick et al., 2009). As teachers and parents saw the platform being widely used in the out-of-school hours, they looked for opportunities to bring it into the school day, creating a community of practice to share and support computer programming that spanned school and recreational hours. The platform’s scalability can be attributed to the fact that teachers and parents could see the educational value of the tool, while the core mechanics of the platform made it inviting for youth to deepen their practices beyond the school day (Maloney, Peppler, Kafai, Resnick, & Rusk, 2008). The broader field of serious games has pursued similar strategies, seeking out games with core mechanics that work well in out-of-school settings in order to teach core disciplinary content and other types of soft skills to youth (Salen & Zimmerman, 2004). This is similar to the work done by researchers studying Wikipedia (Forte & Bruckman, 2006) and fan fiction (Lewis, Black, & Tomlinson, 2009) to investigate general principles for engaging young people in self-selected engagement in high-quality learning practices. Researchers in these areas seek out how these principles can be leveraged to reimagine how CSCL experiences can bridge school and informal learning environments. While well-designed CSCL environments have been successful in engaging young people in out-of-school learning, it is important not to lose sight of the ecological perspectives surrounding in-person and online communities, and how to design and support these broader initiatives at the level of community and citywide infrastructure.
As typical out-of-school environments lack a natural infrastructure to scale these innovations, new infrastructure—in the form of technologies hosted online, formalized curriculum, models, professional development and training, etc.—needs to be designed to support the spread of CSCL innovations in both the short and long term. Another key difference between in-school and out-of-school CSCL innovations is that, when the innovation applies to out-of-school learning, participants are likely to be more diverse in age, prior experiences, and background, which provides opportunities for studying how CSCL innovations impact lifewide and lifelong learning. How databases are merged, how learning environments are connected, and how researchers learn from and report on these data are all considerations that factor into designing an out-of-school CSCL technology so that the various actors in a learning ecosystem can access what they need from the innovation. In recent years, there have been concerted efforts to look at sustainable and scalable models for these types of infrastructures. One example is the Hive Learning Networks (Hive NYC, 2019). Hive is an umbrella model for youth-serving afterschool programs comprising various youth-serving nonprofit organizations (i.e., museums, libraries, advocacy groups, clubs, community centers, etc.) that are colocated and organized around the shared purpose of developing an urban network that connects youth with organizations and learning environments aligned with their passions.
While it is encouraging that these learning infrastructures are forming, the field needs new technologies to facilitate collaboration between providers and offer visibility into the student learning process. One example is the City of Learning (chicagocityoflearning.org) project (Barron, Gomez, Martin, & Pinkard, 2014; Digital Youth Network, 2019), which sought to create a shared infrastructure for highlighting all the learning happening across Chicago and connecting in- and out-of-school learning environments.
4 The Future: Conclusions and Next Directions

Addressing the challenges of sustainability and scalability is critical for the field of CSCL to achieve its dream of educational transformation. The concept of scaling is multifaceted: it involves sustaining an innovation over time and deepening it for transformative change, while spreading and adapting the innovation to a broad spectrum of learner populations and conditions. The process of scaling CSCL requires a complex-systems approach to educational and cultural change; instead of simply thinking about how to bring pre-developed tools and practices to more school and classroom settings, researchers need to partner with educational practitioners, policy makers, and institutional partners to evolve aligned systemic changes at different levels of the education ecosystem. The core learning principles of CSCL can be used to inform shared vision building and guide continual refinement of learning design in specific contexts to address multiple demands and constraints.
4.1 Design Principles for Scalable CSCL Innovations
Drawing upon lessons learned from the work in CSCL and other related fields, we summarize eight emerging design principles to inform future work in this area. The first four principles pertain to the nexus between CSCL learning theories, pedagogical models, and technology. These principles address design issues at the classroom level and are specific to CSCL. The last four principles pertain to building a supportive architecture for aligned learning across multiple levels of the education ecosystem. These principles also draw on general educational innovation research beyond CSCL, and their applications are not limited to CSCL. 1. Maximize the principle-based scaffolding potential of the CSCL technology in designing learning interactions and activities. Robust CSCL models are inextricably connected to the collaboration environment developed by the researchers concerned. The former underpins the design of the latter, and the latter plays a crucial role in supporting the envisioned CSCL interactions. Teachers, researchers, school leaders, and policy makers need to recognize the
importance of this principle and facilitate its realization within the context of their role and capacity in the innovation.
2. Integrate learning design scaffolds into the CSCL environment to support teacher learning and codesign. Whereas CSCL technology features offer scaffolding for the collaborative interactions of learners, little attention has been given to the design and provision of technology scaffolds that would help teachers design principle-based learning activities appropriate for the targeted students and learning outcomes. More attention to the design of technology features and functions that support teachers in their design and codesign of CSCL tasks and interactions would provide valuable support to teacher learning.
3. Make learning at multiple levels visible through learning analytics and visualization tools. If multilevel aligned learning is important for scalable CSCL innovations, there need to be multilevel collaboration environments with rich artifacts and interaction mechanisms to scaffold multilevel interaction and aligned learning (Zhang & Chen, 2019). An important research challenge is to design data-based tools that provide feedback to different stakeholders, so that they can visualize and understand progress toward core learning outcomes and the extent to which learning interactions follow the principle-based CSCL model at the different levels. These tools and the analyses they generate can become boundary objects for sharing ideas and negotiating refinements of routines and other aspects of the practice.
4. Build mechanisms to support the coevolution of CSCL technology alongside changing CSCL practices as the innovation progresses. Innovation implementation is not a simple replication process but involves dynamic changes as the nexus between research and practice progresses. Using the sociological theory of technology adoption as sociotechnical systems (Geels, 2005), Law and Liang (2019) identified sociotechnical coevolution (i.e., the intentional iterative redesign of the digital learning environment to prioritize and promote more desirable pedagogical practices) as a feature of scalable e-learning innovations.
5. Promote ownership and agency across levels. Shift of ownership is one of the dimensions of scalability of educational innovations. Long-term sustainability and scalability can only be achieved through deep changes at the level of the educational ecosystem. Ownership and agency for the CSCL innovation need to be shared among teachers, school, and district leaders. Stakeholders at different levels need to share the vision of the innovation and take responsibility for and pride in its success.
6. Develop organizational structures and routines to support collaborative codesign of curriculum and assessment that align with the CSCL goals. The adoption of new learning technologies and innovative pedagogies inevitably generates tension with existing practices, which can only be resolved through the development of new organizational structures and routines. Changing organizational infrastructures requires decision-making at the institutional level on a continuing basis as the innovation progresses. For decision-making to be responsive, timely, and efficacious, there needs to be multilevel participation and joint ownership for change across levels.
7. Leverage the power of networks to scaffold multilevel aligned learning throughout the educational ecosystem. Collaborative inquiry and professional dialogues lead to the creation and accumulation of principled practical knowledge about what works, how, and why (Bereiter, 2014; Means & Penuel, 2005). Network-based collaborative inquiry provides a context and supports for forming shared agendas of change and indicators/benchmarks of successful implementation, and for building human capacity and nurturing change leaders through multiple professional learning communities centered around authentic problems in everyday practices.
8. Create a nexus of practice (Hui, Schatzki, & Shove, 2016) that connects children and youth with technology and existing social culture to scale out-of-school CSCL innovations. Maker activities using well-designed technology platforms are popular forms of out-of-school CSCL activities that attract large numbers of children and youth around the globe. Unlike school-based contexts, where there are ready infrastructures that can be leveraged to mediate and scaffold principle-based learning activities, external funding support to set up youth-focused community infrastructures, such as the Hive Learning Network, has proven very valuable in scaling these out-of-school creative learning activities. Given that challenges and effective strategies at the architecture-for-learning level are largely independent of the specific CSCL models and technologies involved, there are theoretically greater possibilities for, as well as potentially greater benefits from, cross-network collaboration of CSCL-related RPPs across a wider theoretical and technological spectrum.
In addition to enriching the literature on scalability of educational innovation, such collaborations will also provide opportunities for exploring theoretical questions regarding CSCL, such as how collaborative learning interactions operate across the different social levels and timescales and what types of learning supports are desirable at the different levels.
4.2 Implications for Policy
Design principles are advice to guide different aspects of the design of CSCL environments and implementations. They are not intended to be prescriptive, but rather to inspire design work toward more efficacious solutions. Hence they are directly relevant to people working in RPPs that focus their work on CSCL. On the other hand, the scalability of educational innovations is also greatly dependent on appropriate policy-level support. Several policy implications derived from the above design principles are outlined below. 1. Provide funding and policy-level support for different forms of SUNG partnership to steer the two connected levels of principle-based design for DBR and DBIR. Prescribed criteria for funding support, monitoring, and evaluation can be designed to align with the above design principles to enhance scalability.
2. Promote cross-network communication and sharing of artifacts, technological tools, and resources. Different RPPs may be guided by different learning theories and design principles and use different CSCL technologies. This does not mean that these practices and tools are necessarily incompatible. There may also be benefits from learning from other RPP networks about strategies and practices that promote (or hamper) scalability. Policies that promote the creation of usable and shareable knowledge and artifacts across RPP networks would be valuable. 3. Provide funding and policy support for urban out-of-school CSCL hubs for children and youth. To serve as effective infrastructure for the scalability of these informal learning activities, such hubs should not be standalone organizations but should connect with relevant community groups, institutions, and infrastructure so as to create a nexus of practice for the children and youth participating in these activities. As elaborated throughout this chapter, the scalability of CSCL innovations can only be achieved through systemic aligned strategies at multiple levels. Appropriate policy support at the system level is crucial for CSCL innovations to contribute to sustainable and scalable educational transformations.
References

Bang, M., & Vossoughi, S. (2016). Participatory design research and educational justice: Studying learning and relations within social change making. Cognition and Instruction, 34, 173–193.
Barab, S., Dodge, T., Tuzun, H., Job-Sluder, K., Jackson, C., Arici, A., et al. (2007). The Quest Atlantis project: A socially-responsive play space for learning. The educational design and use of simulation computer games, pp. 159–186.
Barron, B., Gomez, K., Martin, C. K., & Pinkard, N. (2014). The digital youth network: Cultivating digital media citizenship in urban communities. Cambridge: MIT Press.
Bereiter, C. (2014). Principled practical knowledge: Not a bridge but a ladder. Journal of the Learning Sciences, 23(1), 4–17.
Bereiter, C., Cress, U., Fischer, F., Hakkarainen, K., Scardamalia, M., & Vogel, F. (2017). Scripted and unscripted aspects of creative work with knowledge. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL2017) (Vol. 2, pp. 751–757). Philadelphia, PA: International Society of the Learning Sciences.
Booker, A., & Goldman, S. (2016). Participatory design research as a practice for systemic repair: Doing hand-in-hand math research with families. Cognition and Instruction, 34(3), 222–235.
Brown, A. L., & Campione, J. C. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289–325). Mahwah, NJ: Lawrence Erlbaum Associates.
Chan, C. K. K. (2011). Bridging research and practice: Implementing and sustaining knowledge building in Hong Kong classrooms. International Journal of Computer-Supported Collaborative Learning, 6(2), 147–186.
Chen, B., & Zhang, J. (2016). Analytics for knowledge creation: Towards agency and design-mode thinking.
Journal of Learning Analytics, 3(2), 139–163.
Clarke, J., & Dede, C. (2009). Design for scalability: A case study of the River City curriculum. Journal of Science Education and Technology, 18, 353–365.
Coburn, C. E. (2003). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(6), 3–12.
Coburn, C. E., & Penuel, W. R. (2016). Research–practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher, 45(1), 48–54.
Cole, M. (2007). Sustaining model systems of educational activity: Designing for the long haul. In J. Campione, K. Metz, & A. S. Palinscar (Eds.), Children’s learning in and out of school: Essays in honor of Ann Brown (pp. 71–89). New York, NY: Taylor & Francis.
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15–42.
Cremin, L. A. (1961). The transformation of the school: Progressivism in American education, 1876–1957. New York, NY: Alfred A. Knopf.
Cuban, L. (1984). How teachers taught: Constancy and change in American classrooms, 1890–1980. New York, NY: Longman.
Cuban, L. (1990). Reforming again, again, and again. Educational Researcher, 19(1), 3–13.
Davis, N. (2017). Digital technologies and change in education: The arena framework. New York, NY: Routledge.
Digital Youth Network. (2019). About DYN. Retrieved February 23, 2019, from http://digitalyouthnetwork.org/#about-us.
Dillenbourg, P. (2013). Design for classroom orchestration. Computers & Education, 69, 485–492.
Elmore, R. F., & Associates. (1990). Restructuring schools: The next generation of educational reform. San Francisco, CA: Jossey-Bass.
Elmore, R. F., & McLaughlin, M. W. (1988). Steady work: Policy, practice, and the reform of American education. Santa Monica, CA: The RAND Corporation.
Engle, R. A., & Conant, F. R. (2002). Guiding principles for fostering productive disciplinary engagement: Explaining an emergent argument in a community of learners classroom. Cognition and Instruction, 20(4), 399–483.
Fields, D. A., & Kafai, Y. B. (2009).
A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world. International Journal of Computer-Supported Collaborative Learning, 4(1), 47–68.
Fishman, B., Marx, R. W., Blumenfeld, P., Krajcik, J., & Soloway, E. (2004). Creating a framework for research on systemic technology innovations. Journal of the Learning Sciences, 13(1), 43–76.
Fishman, B. J., Penuel, W. R., Allen, A. R., Cheng, B. H., & Sabelli, N. (2013). Design-based implementation research: An emerging model for transforming the relationship of research and practice. National Society for the Study of Education, 112(2), 136–156.
Forte, A., & Bruckman, A. (2006). From Wikipedia to the classroom: Exploring online publication and learning. In Proceedings of the 7th international conference on learning sciences (pp. 182–188). New York: International Society of the Learning Sciences.
Geels, F. W. (2005). The dynamics of transitions in socio-technical systems: A multi-level analysis of the transition pathway from horse-drawn carriages to automobiles (1860–1930). Technology Analysis & Strategic Management, 17(4), 445–476.
Goldman, S. R. (2005). Designing for scalable educational improvement: Processes of inquiry in practice.
Greenhow, C., Gibbins, T., & Menzer, M. M. (2015). Re-thinking scientific literacy out-of-school: Arguing science issues in a niche Facebook application. Computers in Human Behavior, 53, 593–604.
Hakkarainen, K. (2009). A knowledge-practice perspective on technology-mediated learning. International Journal of Computer-Supported Collaborative Learning, 4(2), 213–231.
Hive NYC. (2019). About. Retrieved February 23, 2019, from http://hivenyc.org/about-hive-nyc/
Hong, H.-Y., Scardamalia, M., & Zhang, J. (2010). Knowledge society network: Toward a dynamic, sustained network for building knowledge. Canadian Journal of Learning and Technology, 36(1). Retrieved from http://www.cjlt.ca.
N. Law et al.
Hong Kong Education Bureau. (2015). Report on the fourth strategy on information technology in education. Hong Kong: Education Bureau. Retrieved from https://www.edb.gov.hk/attachment/en/edu-system/primary-secondary/applicable-to-primary-secondary/it-in-edu/ITE4_report_ENG.pdf. Hui, A., Schatzki, T., & Shove, E. (Eds.). (2016). The nexus of practices: Connections, constellations, practitioners. Milton Park: Taylor & Francis. Järvelä, S., Kirschner, P. A., Hadwin, A., Järvenoja, H., Malmberg, J., Miller, M., & Laru, J. (2016). Socially shared regulation of learning in CSCL: Understanding and prompting individual- and group-level shared regulatory activities. International Journal of Computer-Supported Collaborative Learning, 11(3), 263–280. Kafai, Y. B., Peppler, K., & Chapman, R. (Eds.). (2009). The computer clubhouse: Creativity and constructionism in youth communities. New York, NY: Teachers College Press. Kampylis, P., Law, N., & Punie, Y. (Eds.). (2013). ICT-enabled innovation for learning in Europe and Asia: Exploring conditions for sustainability, scalability and impact at system level. Luxembourg: Publications Office of the European Union. Kampylis, P., & Punie, Y. (2013). Case report 1: eTwinning—The community for schools in Europe. In P. Kampylis, N. Law, & Y. Punie (Eds.), ICT-enabled innovation for learning in Europe and Asia: Exploring conditions for sustainability, scalability and impact at system level (pp. 21–35). Luxembourg: Publications Office of the European Union. Kirschner, P. A., & Erkens, G. (2013). Toward a framework for CSCL research. Educational Psychologist, 48, 1–8. Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17, 708–721. Kolodner, J. L., Camp, P. J., Crismond, D., Fasse, B., Gray, J., Holbrook, J., et al. (2003). 
Problem-based learning meets case-based reasoning in the middle-school science classroom: Putting learning by design (tm) into practice. The Journal of the Learning Sciences, 12(4), 495–547. Laferrière, T., Allaire, S., Breuleux, A., Hamel, C., Law, N., Montané, M., et al. (2015). The knowledge building international project (KBIP): Scaling up professional development using collaborative technology. In C. K. Looi & L. W. Teh (Eds.), Scaling educational innovations (pp. 255–276). Singapore: Springer. Law, N., Kankaanranta, M., & Chow, A. (2005). Technology-supported educational innovations in Finland and Hong Kong: A tale of two systems. Human Technology, 1(2), 176–201. Law, N., & Liang, L. (2019). Sociotechnical coevolution of an elearning innovation network. British Journal of Educational Technology, 50(3), 1340–1353. Law, N., Niederhauser, D. S., Christensen, R., & Shear, L. (2016). A multilevel system of quality technology-enhanced learning and teaching indicators. Journal of Educational Technology & Society, 19(3), 72–83. Law, N., Toh, Y., Laferriere, T., Hung, D., Lee, Y., Hamel, C., et al. (2018). Refining design principles for scalable innovation networks through international comparative analysis of innovation learning architectures. Paper presented at the Annual Conference of the American Educational Research Association, New York. Law, N., Yuen, A., & Fox, B. (2011). Educational innovations beyond technology: Nurturing leadership and establishing learning organizations. New York: Springer. Lemke, J., & Sabelli, N. (2008). Complex systems and educational change: Towards a new research agenda. Educational Philosophy and Theory, 40(1), 118–129. Lewis, L., Black, R., & Tomlinson, B. (2009). Let everyone play: An educational perspective on why fan fiction is, or should be, legal. International Journal of Learning and Media, 1, 1. Linn, M. C. (2006). The knowledge integration perspective on learning and instruction. In K. 
Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 243–264). New York: Cambridge University Press. Linn, M. C., Clark, D., & Slotta, J. D. (2003). WISE design for knowledge integration. Science Education, 87(4), 517–538.
Sustainability and Scalability of CSCL Innovations
Looi, C. K. (2013). Case report 6: Singapore’s third Masterplan for ICT in education (mp3). In P. Kampylis, N. Law, & Y. Punie (Eds.), ICT-enabled innovation for learning in Europe and Asia: Exploring conditions for sustainability, scalability and impact at system level (pp. 91–102). Luxembourg: Publications Office of the European Union. Maloney, J., Peppler, K., Kafai, Y. B., Resnick, M., & Rusk, N. (2008). Programming by choice: Urban youth learning programming with Scratch. In Proceedings of the ACM Special Interest Group on Computer Science Education (SIGCSE) conference, Portland, OR. McKenney, S. (2018). How can the learning sciences (better) impact policy and practice? Journal of the Learning Sciences, 27(1), 1–7. https://doi.org/10.1080/10508406.2017.1404404. McLaughlin, M. W., & Talbert, J. E. (1993, March). Contexts that matter for teaching and learning: Strategic opportunities meeting the nation’s educational goals. Stanford, CA: Center for Research on the Context of Secondary School Teaching, Stanford University. Means, B., & Penuel, W. R. (2005). Scaling up technology-based educational innovations. In Scaling up success: Lessons learned from technology-based educational improvement. Mor, Y., Ferguson, R., & Wasson, B. (2015). Editorial: Learning design, teacher inquiry into student learning and learning analytics: A call for action. British Journal of Educational Technology, 46(2), 221–229. Penuel, W. R. (2019). Infrastructuring as a practice of design-based research for supporting and studying equitable implementation and sustainability of innovations. Journal of the Learning Sciences, 28(4–5), 659–677. Peppler, K., & Kafai, Y. B. (2009). Making games, art, and animations with Scratch. In Y. B. Kafai, K. Peppler, & R. Chapman (Eds.), The computer clubhouse: Creativity and constructionism in youth communities (pp. 47–57). New York, NY: Teachers College Press. Resendes, M., Scardamalia, M., Bereiter, C., Chen, B., & Halewood, C. (2015). 
Group-level formative feedback and metadiscourse. International Journal of Computer-Supported Collaborative Learning, 10, 309–336. Resnick, M. (2017). Lifelong Kindergarten: Cultivating creativity through projects, passion, peers, and play. Cambridge, MA: MIT Press. Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., et al. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67. Rogers, E. M. (1962). Diffusion of innovations (1st ed.). New York, NY: Free Press. Salen, K., & Zimmerman, E. (2004). Rules of play: Game design fundamentals. Cambridge: MIT Press. Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Chicago: Open Court. Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (2nd ed., pp. 397–417). New York, NY: Cambridge University Press. Scardamalia, M., Bereiter, C., McLean, R. S., Swallow, J., & Woodruff, E. (1989). Computer-supported intentional learning environments. Journal of Educational Computing Research, 5(1), 51–68. Slotta, J. D. (2004). The web-based inquiry science environment (WISE): Scaffolding knowledge integration in the science classroom. In M. C. Linn, E. A. Davis, & P. Bell (Eds.), Internet environments for science education (pp. 203–232). Mahwah, NJ: Lawrence Erlbaum Associates Publishers. Spillane, J. P., Parise, L. M., & Sherer, J. Z. (2011). Organizational routines as coupling mechanisms: Policy, school administration, and the technical core. American Educational Research Journal, 48(3), 586–619. Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press. Stahl, G. (2009). Studying virtual math teams. New York, NY: Springer. Stein, M. K., & Coburn, C. E. (2008). 
Architectures for learning: A comparative analysis of two urban school districts. American Journal of Education, 114(4), 583–626.
Tabak, I. (2004). Synergy: A complement to emerging patterns of distributed scaffolding. Journal of the Learning Sciences, 13(3), 305–335. Teo, C. L. (2017, June). Symmetrical advancement: Teachers and students sustaining idea-centered collaborative practices. Invited keynote at the International Conference of Computer-Supported Collaborative Learning (CSCL 2017), Philadelphia, United States. van Aalst, J., & Chan, C. K. K. (2007). Student-directed assessment of knowledge building using electronic portfolios in Knowledge Forum. Journal of the Learning Sciences, 16, 175–220. Winne, P., Hadwin, A., & Perry, N. (2013). Metacognition and computer-supported collaborative learning. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. O’Donnell (Eds.), International handbook of collaborative learning (pp. 462–479). New York, NY: Routledge. Wise, A. F. (2019). Learning analytics: Using data-informed decision-making to improve teaching and learning. In O. Adesope & A. G. Rudd (Eds.), Contemporary technologies in education: Maximizing student engagement, motivation, and learning (pp. 119–143). New York: Palgrave Macmillan. Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467. Yuan, G., & Zhang, J. (2019). Connecting knowledge spaces: Enabling cross-community knowledge building through boundary objects. British Journal of Educational Technology, 50(5), 2144–2161. Zhang, J. (2010). Technology-supported learning innovation in cultural contexts. Educational Technology Research & Development, 58(2), 229–243. Zhang, J., & Chen, M.-H. (2019). 
Idea Thread Mapper: Designs for sustaining student-driven knowledge building across classrooms. In K. Lund, G. Niccolai, E. Lavoué, C. Hmelo-Silver, G. Gweon, & M. Baker (Eds.), A wide lens: Combining embodied, enactive, extended, and embedded learning in collaborative settings. Proceedings of the 13th International Conference on Computer-Supported Collaborative Learning (CSCL 2019) (Vol. 1, pp. 144–151). Lyon, France: International Society of the Learning Sciences. Zhang, J., Hong, H.-Y., Scardamalia, M., Teo, C., & Morley, E. (2011). Sustaining knowledge building as a principle-based innovation at an elementary school. Journal of the Learning Sciences, 20(2), 262–307. Zhang, J., Tao, D., Chen, M., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with Idea Thread Mapper. Journal of the Learning Sciences, 27(3), 390–430.
Further Readings

Chan, C. K. K. (2011). Bridging research and practice: Implementing and sustaining knowledge building in Hong Kong classrooms. International Journal of Computer-Supported Collaborative Learning, 6(2), 147–186. This is a case study of scaling and sustaining Knowledge Building in Hong Kong classrooms. The analysis documents efforts and changes at different levels, including educational policy reform, the knowledge-building teacher network, and knowledge-building design and practice in classrooms.

Clarke, J., & Dede, C. (2009). Design for scalability: A case study of the River City curriculum. Journal of Science Education and Technology, 18, 353–365. This article builds on Coburn’s (2003) framework to examine multiple dimensions of scale (depth, sustainability, spread, and shift in ownership) and introduces “evolution” as an additional dimension. This framework is applied to guide the scalability design of the River City project.
Coburn, C. E., Russell, J. L., Kaufman, J. H., & Stein, M. K. (2012). Supporting sustainability: Teachers’ advice networks and ambitious instructional reform. American Journal of Education, 119(1), 137–182. This study uses qualitative social network analysis and qualitative comparative analysis to study the relationship between sustainability and teachers’ social networks when resources and supports were removed in year 3 of an innovative mathematics curriculum across a district. Kampylis, P., Law, N., & Punie, Y. (Eds.). (2013). ICT-enabled innovation for learning in Europe and Asia: Exploring conditions for sustainability, scalability, and impact at system level. Luxembourg: Publications Office of the European Union. This in-depth 153-page report presents seven cases of ICT-enabled innovations for learning from Europe and Asia, describing scale, learning objectives, the role of technology, and implementation strategies. The report also presents relevant lessons learned and conditions for scalability, impact, and sustainability. Looi, C. K., & Teh, L. W. (Eds.). (2015). Scaling educational innovations. Singapore: Springer. This is an edited volume comprising a collection of theoretical and empirical studies on scaling educational innovations that have a strong pedagogical focus. Some of the empirical studies are directly related to CSCL innovations.
Part II
Collaborative Processes
Communities and Participation

Yotam Hod and Stephanie D. Teasley
Abstract This chapter explores the closely related concepts of “communities” and “participation” as they relate to the field of computer-supported collaborative learning (CSCL). We describe how these terms have become foundational to work in CSCL, both in terms of theory development and for understanding how to design effective CSCL communities. In our review of the state-of-the-art, we highlight how rapid technological developments have created new opportunities for more meaningful and advanced forms of participation in communities. We conclude with our challenge to the future direction of the field, describing how notions of participation in communities are currently being renegotiated within the conception of the “spatial turn.” In the era of “big data” that includes exhaustive records of educational data in formal and informal spaces, we caution our CSCL colleagues to remember our theoretical roots and humanistic values as we move forward and continue to shape the future of CSCL.

Keywords Communities · CSCL · Participation · Spatial turn · Sociocultural
1 Definitions and Scope

In this chapter, we view “communities” and “participation” as two deeply intertwined constructs, both of which are foundational to computer-supported collaborative learning (CSCL). In general, the field views collaboration from the perspective of active participation in a community—ranging from a small group or classroom to large collections of people with some shared involvement (Hillery, 1982), such as Wikipedia users or those belonging to particular cultural groups. While the range of perspectives taken on communities and participation makes them highly contested terms, their centrality deserves special recognition and attention as they tell one of the central stories of the field. Therefore, the purpose of this chapter is to shine a light on these two inseparable ideas; to add clarity to the way CSCL researchers think about them and—paradoxically—to draw out their tensions and complexities. To do this, we will chart a course that covers the prehistory, emergence, development, and future directions of CSCL through the prism of participation in communities.

Communities and participation are hardly new ideas; it is valuable to explore their long history to put their range of meanings for the CSCL community in context. From a historical perspective, the cognitive revolution of Homo sapiens approximately 60,000 years ago marked the beginnings of rapid cultural developments that uniquely define our species (Harari, 2014). The shift from hunting and gathering to agricultural society as part of the Neolithic revolution approximately 12,000 years ago was the first known time that humans collectively organized and began participating in communities (Barker, 2009). By about 600 BCE, the first scientific learning communities emerged within Greek (Ionian) society, based on principles such as intellectual freedom, diversity of perspectives, and collective knowledge advancements mediated by shared artifacts (i.e., writing as a tool for thinking; Bielaczyc & Collins, 2006). These sophisticated forms of participation in communities have paralleled the spread of humanity around the planet, with a general trend over millennia toward participation in larger and more complex communities (Harari, 2014).

Y. Hod (*) Department of Learning, Instruction, and Teacher Education, University of Haifa, Haifa, Israel. e-mail: [email protected]
S. D. Teasley School of Information, University of Michigan, Ann Arbor, MI, USA. e-mail: [email protected]
© Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_8
Scholarship on participation in communities has been evident since the rise of the social sciences in the nineteenth century and throughout the twentieth century. Sociologists like Durkheim (1893/1933) described mechanical and organic solidarity, with the latter referring to the ways that advanced societies have individuals who complement each other based on their differentiated participation or specializations. Tönnies (1887/1963) distinguished between human participation in community and society, with the former based on habits and traditions and the latter on economy and nation. These sociological accounts have grown into a plethora of perspectives across a number of fields. By the mid-twentieth century, Hillery (1955) had catalogued 94 definitions of communities, with only a few generalizations possible among them: that communities involve people and that, with some exceptions, communities belong to a broader class of social interaction. Community psychologists (Sarason, 1974), social learning theorists (Bandura, 1989), developmental psychologists (Bronfenbrenner, 1986), and political scientists (Dagger, 2002) have all approached participation in communities in different ways since that time, demonstrating the rich yet elusive nature of these conjoined terms.
2 History and Development

Within the context of education and learning, Dewey (1897) forcefully argued that schools and society should not be disconnected. He believed that the purpose of schooling was to allow students to participate in real community life, a point expressed in his famous quote that education “is a process of living and not a preparation for future living” (p. 78). Many examples of schools run as communities have followed since, such as A. S. Neill’s Summerhill Schools—designed as democratic communities where students had choice-of-action (Hemmings, 1972). Alexander Meiklejohn’s experimental college from 1927 to 1932 at the University of Wisconsin–Madison was perhaps the first documented case of a learning community in formal, higher education (Meiklejohn, 1932/1981). Meiklejohn fostered participation in an intellectual community based on common interests and the intrinsic love of learning (Nelson, 2001). At mid-century, Carl Rogers (1969) conceived of the classroom as a “community of learners.” His humanistic approach to education suggested certain necessary and sufficient conditions like unconditional positive regard and active listening so that people could feel fully received (Rogers, 1961/1995). Rogers argued that participation in such communities could free the individual from social and cultural restrictions, allowing people to express their inherent desire and freedom to inquire and learn. In recent decades, the notion of learning communities has further propagated discourse about schools and classrooms as well as residential, professional, online, and informal learning contexts and settings. Grossman, Wineburg, and Woolworth (2001) describe how “‘community’ has become an obligatory appendage to every educational innovation. Even a cursory review of the literature reveals the tendency to bring community into being by linguistic fiat” (pp. 4–5). 
The liberal use of the term community to describe any gathering of learners underscores the problematic nature of participation in communities and the risk of the term becoming incoherent. The sociocultural turn a half-century ago provided a unique perspective on these ideas that became foundational for CSCL.
2.1 Socioculturally Minded Theories Come on the Scene
While this panoply of theories and ideas on communities and participation has been widely evident in the past century, the 1970s and 1980s saw a convergence of perspectives on the cultural basis of cognitive development that have been very influential in the formation of CSCL (Ludvigsen, Lund, & Oshima, this volume). This has come to be known most widely today as the sociocultural approach (Rogoff & Chavajay, 1995). The foundation for understanding the reciprocal relations between cognition and social relations was provided by work developed much earlier, by influential theorists including Mead (1934), Piaget (1950), and Vygotsky (1962, 1978). Piaget’s and Vygotsky’s theoretical approaches (the latter of which
was only translated to English and began gaining wide traction in academia with the publication of Mind in Society, 1978) were first elaborated in the developmental psychology literature in work by Perret-Clermont (1980), Damon (1979), Cole and Scribner (1974, 1977), Wertsch (1985), and others who were interested in issues such as the social construction of meaning and the implications of a sociocultural approach to learning. When the sociocultural perspective came on the scene, discourse on learning shifted from one based on the metaphor of “acquisition” to one based on “participation” (Sfard, 1998). From the acquisitionist perspective, knowledge is akin to some external, self-sustained object that can be transmitted, transferred, and retained. While this metaphor is useful and has been used widely to conceive of learning, it has two weaknesses. First, by posing knowledge as an external object, a problematic (and dubious) dichotomy between a person and ideas is created. This inevitably creates an age-old “learning paradox” about how people bridge between knowledge that belongs to the individual and that which is public (Bereiter, 1985). The second weakness is that by creating this separation, knowledge can become commoditized. While not a necessary implication, it can lead to a hegemonic view of knowing where people are categorized based on what knowledge they have or don’t have (Sfard, 1998). The participationist perspective, which treats knowledge as a cultural–historical phenomenon that is negotiated-in-action (Bakhtin, 1981; Rogoff, 1995; Wertsch, 1998), removes this separation between knowing, doing, and being and instead sees them as inseparable and intertwined (Herrenkohl & Mertl, 2010). The late 1980s and early 1990s produced a number of influential papers and books by prominent learning researchers, such as J. S. Brown and Collins (e.g., Brown, Collins, & Duguid, 1989; Collins, Brown, & Newman, 1989), A. 
Brown and Campione (1990), and Rogoff (1990), as well as several edited volumes (Resnick, Levine, & Teasley, 1991; Rogoff & Lave, 1984) that expanded the participationist (sociocultural) accounts of learning. This helped to build a foundation for exploring how membership in a learning community—big or small, formal or informal—can support learning. Lave and Wenger’s (1991) socioculturally minded perspective, called situated learning, made the connection between participation and communities explicit. In their view, communities are social networks where novices learn by being apprenticed into the norms and practices of experts, and gradually achieve expertise through both individual and collective action. Thus, the learning process is viewed as the transformation of participation in a community of practice from being a newcomer, or peripheral participant, into becoming a more central, meaningful, contributing member (Lave & Wenger, 1991; Rogoff, 1994). This view also affected the way learning scientists and incipient CSCL researchers began thinking about how to design learning environments and the tools within them. By and large, the socioculturally minded thinking behind these learning communities was informed by the notion of providing students with learning experiences that approximate the authentic cultures of an intended domain, such as mathematics or scientific inquiry (Brown et al., 1989). To foster students’ enculturation into these authentic communities, learning communities simulate (through tools, activities, discourse, etc.) and/or give direct access to the members
of the practicing communities (Hod & Sagy, 2019; Radinsky, Bouillion, Lento, & Gomez, 2001). A formative example around this time was Fifth Dimension, an afterschool program where partnerships between local community and higher education institutions provided the students with many collaborative opportunities. These included opportunities to interact with peers, adults, and undergraduate students who were interested in the children’s development, and, in some cases, to engage in long-distance interactions with participants at international sites. Overall, it was designed as an “educational activity system that offers school-aged children a specially designed environment in which to explore a variety of off-the-shelf computer games and game-like educational activities” (Cole & Distributive Literacy Consortium, 2006, p. 6).
2.2 Establishment of the Field Around These Foundational Ideas
While the learning sciences was being established as a field in its own right, ARPANET had become the Internet, and the availability of an online information infrastructure began to have a significant effect on learning research (Leiner et al., 2009). Students had been using computers since the earliest availability of desktop computing (Papert, 1980; Taylor, 1980), but the Internet opened up opportunities for this use to be both social and distributed. In particular, learning communities were no longer solely organized in physical spaces, and opportunities to participate in collaborative knowledge building no longer required colocation. As researchers in artificial intelligence and cognitive science were investigating how computer systems could be used to fill the role of the teacher/expert, others were beginning to see them as “cognitive tools” for expanding the learning opportunities provided by more traditional educational contexts (Lajoie & Derry, 1993). A NATO-funded workshop in 1989 was the first meeting focusing on “Computer Supported Collaborative Learning” (CSCL). This event launched a research community, followed by a conference and a journal devoted to the topic. Thus, in general, we can say that “learning communities” from the CSCL perspective are those that aim to usher students (or community members) into a particular culture, or stated differently, seek to foster full participation in communities engaged in certain practices. CSCL is concerned with how this can happen in a natural way, meaning through the ordinary cultural processes in today’s technology-laden world, or when designed for within educational settings like classrooms.
2.3 The Field Matures
Building on the theoretical ideas around participation and communities, combined with the excitement about the possibilities for technologies to mediate learning, the early CSCL community began elucidating how these processes occur within naturalistic (nonexperimental) environments, largely relying on “‘thick description’ of the ‘microgenetic moment’” (Scott, Cole, & Engel, 1992, p. 241) and techniques such as ethnomethodology (Stahl, Koschmann, & Suthers, 2006), and exploring the implications for practice. Specifically, a number of microgenetic studies of participants in the context of collaboration were carried out to identify important processes of successful knowledge building (e.g., Teasley & Roschelle, 1993). Moving the focus of analysis from individual outcomes to the social plane fostered a large number of studies investigating the conditions for successful collaboration, and produced a body of literature demonstrating which particular variables do or do not improve learning outcomes (e.g., size of the group, composition of the group, nature of the task, communication media). However, recognizing that these variables all interact with each other provoked a shift in focus away from documenting under which conditions collaborative learning is better than learning alone, toward an analysis of the effects of these variables on the nature of the social interactions comprising collaboration (see Dillenbourg, Baker, Blaye, & O’Malley, 1995). This shift opened the field to various methodologies for capturing process, where the level of analysis varied from the dyad to the classroom, and resulted in numerous global terms for describing “good collaboration,” such as meaning-making, knowledge building, cognitive convergence, and shared understanding (see Puntambekar, Erkens, & Hmelo-Silver, 2011). The progress made on understanding collaborative processes came along with a central discussion in the field—one that is still ongoing—about how to design CSCL communities (Kali, 2006). 
This issue comes not only from the understandings about how productive collaboration happens, but also from the limits of collaboration in settings or with technologies that intend for their participants to learn collaboratively; for example, where students do not reference each other’s work (Hewitt, 2005) or coordinate their efforts (Grasel, Fischer, Bruhn, & Mandl, 2001). This issue has often been viewed through the prism of CSCL scripting (Vogel, Weinberger, & Fischer, this volume; Weinberger, Ertl, Fischer, & Mandl, 2005) and orchestration (Dillenbourg, Prieto, & Olsen, 2018; Slotta & Acosta, 2017), which involves micro- and macro-level technological supports to provide students with guidance about how to interact (Kollar, Fischer, & Hesse, 2006; Fischer, Kollar, Mandl, & Haake, 2007, p. 3). Specifically, the field drew heavily on what was discovered about how people collaborate with the support of technologies to design for particular outcomes (Wise & Schwarz, 2017). In contrast to these scripting approaches, some researchers were investigating the limits of scripting (Dillenbourg, 2002). Knowledge Building Communities (KBCs: Scardamalia & Bereiter, this volume)—first as CSILE (Scardamalia & Bereiter, 1994)—were one of the pioneering contributions that took nondirective,
principle-based approaches. KBCs were a breakthrough in their emphasis on “knowledge work” over “learning,” with the latter seen as a by-product of participation in the community-wide knowledge-building endeavor (Scardamalia & Bereiter, 2014). The technological tool they developed—the Knowledge Forum (KF)—had core features closely aligned with these participatory ideas, requiring collective cognitive responsibility (Hod, Ya’ari, & Eberle, 2019; Scardamalia, 2002). For example, as participants read notes on the KF, the notes changed color from blue to red to indicate that the reader was aware of someone else’s contribution. KBCs were designed as principle-based approaches in the sense that the focus is on ideas, and the procedures are co-constructed as the different phases of inquiry proceed (Zhang et al., 2018). The richness and longevity of KBCs have continued as part of what Bereiter (2006) calls one of the “longest running design experiments in education” (p. 18), although the debate about how to scaffold the participation of community members remains open (Bereiter et al., 2017). Concurrent with the early work in CSCL, there was also a growing research literature coming from information scientists and computer scientists interested in human–computer interaction (HCI) broadly. For example, this included work presented at the Computer-Supported Cooperative Work (CSCW) and Computer Human Interaction (CHI) conferences. In general, this research centered on investigating the properties of online communities in their many forms, such as forums, blogs, wikis, and newsgroups, and using the empirical research to inform the design of such platforms. For example, Kraut and Resnick (2012) presented insights from social science theories to directly inform community design and management. 
In their book, they discussed how to get online communities started, integrate newcomers, encourage commitment, regulate behavior, and motivate and maintain participation. The exchange of lessons learned between CSCW and CSCL research was explored in a series of workshops conducted between 2010 and 2013 (see Goggins, Jahnke, & Wulf, 2013). Although crossover work continues, the communities remain fairly separate.
3 State of the Art: Scholarship on Participation and Communities

Building on these early environments, rapid technological developments have opened the door further for CSCL. Cutting-edge innovations have created opportunities for more meaningful and advanced forms of participation in communities. These include designing spaces that reshape participation within classroom communities, connecting people and communities far and wide, and connecting distant classroom communities (Matuk, DesPortes, & Hoadley, this volume).
152
Y. Hod and S. D. Teasley
Future Learning Spaces (Eberle, Hod, & Fischer, 2019; Hod, 2017) has become one of the notions covering a variety of new learning environment designs that seek to reshape the way students collaborate in their learning communities. Active Learning Classrooms, for example, have been redesigned to support students’ collaboration. Originally in physics (and now in numerous other subject areas), Charles, Lasry, and Whittaker (2014) redesigned their classrooms using oval-shaped tables around large smart boards to encourage students to stand up as they negotiate their knowledge together. Based on the KCI model, Tissenbaum and Slotta (2014) developed the SAIL digital infrastructure to create smart classrooms that adapt, in real time, to the advancements students make in collective inquiry. In one example, students in an immersive rainforest simulation watch evolution occur over millions of years, using their tablets to populate a cladogram at the center of the room (Lui & Slotta, 2014). Other examples of these immersive simulations include Roomquake and Wallcology (Moher et al., 2015). In Roomquake, the classroom is turned into a simulated active seismic field: through the use of a subwoofer and with the aid of computerized seismographs, students determine the location of a simulated fault line traversing the classroom. In Wallcology, students use computer monitors to observe simulated digital ecosystems of insects and vegetation that could be living behind the classroom walls. In an architecture department at a graduate school in Israel, students develop competence in designing buildings and spaces by first creating classical 2D digital renderings and then entering a 3D simulation of their design with their classmates to virtually experience their spaces together (Sopher, Fisher-Gewirtzman, & Kalay, 2019).
In short, a wide range of new tools has been embedded into educational spaces to scaffold students’ participation in much more complex and nuanced ways than ever before (Enyedy & Yoon, this volume; Yoon, Elinich, Wang, Steinmeier, & Tucker, 2012). In addition to providing new opportunities within co-located spaces, CSCL can be distributed across distant geographies and time, at both the individual and community levels. This allows participants to collaborate in communities without necessarily meeting or knowing each other. Scratch, a block-based programming language, is a popular example: students post their projects (with all the associated images, sounds, and code blocks) to an online platform, where they can be remixed by the broader Scratch community (Kafai & Fields, 2013; M. Resnick et al., 2009). Connecting classroom idea threads is another example, where knowledge building takes place across classrooms and distant communities. Idea Thread Mapper, a technological platform that runs in conjunction with Knowledge Forum, provides a cross-community space for classroom communities studying the same topic to view each other’s areas of inquiry and idea threads and to build on each other’s ideas about challenging issues (Yuan & Zhang, 2019). Finally, graduate students in a wiki-based community studying learning build on and refine a database of knowledge that is shaped intergenerationally as new cohorts of students enter the program (Hod & Ben-Zvi, 2018). Collaboratories (Wulf, 1993) and other Internet-based projects demonstrated the feasibility and scientific usefulness of using the Internet to link teams of people, data,
tools, and facilities to overcome barriers of time and distance (Finholt & Olson, 1997). Collaboratories also created new arenas for learning by expanding opportunities, especially for students, to join experienced scientists in the conduct of research. Early examples such as Bugscope, the Cosmic Ray Observatory Project (CROP), and the Space Physics and Aeronomy Research Collaboratory (SPARC) gave students access to research projects involving sophisticated scientific instruments unavailable outside of higher education or national laboratories (Teasley, Finholt, Potter, Snow, & Myers, 2000). The technologies supporting distributed work changed the practice of science and brought related changes to the training of budding scientists at all educational levels. These changes, coupled with the rapid evolution of Internet-based tools for data management and analysis, laid the groundwork for what is now called “Citizen Science.” Generally defined as the direct participation of citizens in different stages of scientific research projects, citizen science has grown rapidly in recent years across many fields, including biology, physics, astronomy, ecology, geology, and computer science (Silvertown, 2009). For example, with the help of 100,000 citizen scientists, the Galaxy Zoo project was able to classify over one million galaxies within 9 months, a feat that would have been impossible for scientists and computation alone (Clery, 2011). In recent years, there have been efforts to include school communities within these endeavors, such as Taking Citizen Science to Schools (Hod, Sagy, Kali, & TCSS, 2018).
In this type of arrangement, the aim is not just to get students involved but to engage multiple levels of participation, including policy makers, which represents a state-of-the-art trend as the CSCL community asks itself how to achieve its goal of impacting practice at scale (Wise & Schwarz, 2017).
4 The Future

Born out of the history and development described in the sections above, coupled with current state-of-the-art scholarship in CSCL, the way researchers think about participation and communities is taking shape as we move into the future. New networked technologies, the ubiquitous access that people have to them, and rapidly changing social norms around technology use (e.g., as socio-technical systems: Geels, 2004) are challenging the way researchers conceive of learning altogether. Just as the “cultural turn” inspired the field by making “a shift from the view of learning as the acquisition of abstract representations, a covert event taking place in the head of the learner, to a view of learning as a socially-enacted process” (Koschmann, 1999, p. 127), the fundamental notions of participation in communities are currently being renegotiated within the conception of the “spatial turn,” in which context, identity, and time are blurred by the fluid mobilities between them (Leander, Phillips, & Taylor, 2010).
The conception of the spatial turn generally captures the ways that learning environments have become trans-contextual (Damsa & Jornet, 2016), allowing for greater fluidity as part of “liquid modernity” (Bauman, 2013). From a perspective where spaces are easily bridged and opportunities for both synchronous and asynchronous collaboration are commonplace, participation in communities has had to be reframed to consider lifelong, life-wide, and life-deep learning (Banks et al., 2007). This is commensurate with the sociocultural perspective that views learning as transforming participation, which now questions not just how to scaffold a particular community’s learning, but how best to scaffold every participant so that their engagement is connected across time and space (Ito et al., 2013). The theoretical opportunity opened by the notion of a spatial turn holds promise for reframing some of the ways the learning sciences (generally) and CSCL (specifically) have looked at participation and communities. Dreier (2008), for example, reconceptualized psychotherapy as a social practice that is interconnected with the different places clients experience between sessions. Keifert and Stevens (2018), in attending to the ways that young children inquire within their home environments, shed light on participation in scientific activity endogenously—from the learner’s perspective. This is in contrast to judging inquiry exogenously—from the scientific community’s perspective—which inevitably leads to a deficit view of knowing. The cultural learning pathways framework similarly tries to account for the “connected chains of personally consequential activity and sensemaking—that are temporally extended, spatially variable, and culturally diverse with respect to value systems and social practices” (Bell, Tzou, Bricker, & Baines, 2012, p. 270).
By examining how identities are coordinated across the different sociomaterial spaces that people inhabit, we see participation and communities taking a spatial turn, reflective of the ways spaces are fundamentally changing and being reconceptualized. This new type of thinking is transforming research in CSCL. The narrative-based work of Erstad and Sefton-Green (2013) examined the way people’s digital lives are continuously coordinated between the formal and informal communities they participate in. The Learning in a Networked Society (LINKS) center has examined the ways that ambient and designed technology-enhanced learning environments are increasingly porous and inform one another (Kali, Baram-Tsabari, & Schejter, 2019). Clegg et al. (2017) explored how wearable digital devices disrupt traditional setups and rules around how spaces are used, in what has recently been conceptualized as hybrid spaces and third places intertwining scientizing with learners’ everyday lives, cultures, and values. Silvis, Taylor, and Stevens (2018)—referring to locative literacy—reported on the way students engage in a collaborative digital mapping activity to make a broader argument about what counts as valuable learning (Ludvigsen, Cress, Law, Stahl, & Rose, 2018). While these lines of research have made great progress in advancing the field, they highlight the contentious nature of current notions around participation and communities, which need to be further unpacked and put under continued empirical scrutiny. This type of reconceptualization has implications across the field of CSCL. For example, research in learning analytics (Wise, Knight, & Buckingham Shum, this
volume) started with data captured in formal learning spaces, such as the Learning Management Systems (LMS) used as courseware in higher education, but now has increasing opportunities to look at informal learning spaces (Tissenbaum, Kumar, & Berland, 2016). Pinkard (2019) builds on Ito et al.’s (2013) notion of connected learning, pointing to “advances in new technologies, reduction in size and cost of Internet-connected devices, and increased access to free WiFi in community spaces. . . that do not assume the school is a siloed institution where learning occurs. . .” (p. 41). While researchers investigating digitally mediated learning increasingly recognize the value of complementary sources of learning traces (Ochoa, 2017), we still have to contend with the fact that we may not be able to capture learning across settings such as Facebook, Piazza, and Instagram, where social media providers commoditize their users’ activity, limiting access to data that may reveal the ways in which students coordinate their learning across the communities they participate in. Conceptions of learning characterized by the spatial turn offer great potential for learning analytics to support learning in ways that are meaningful for people’s lives, such as by understanding what they are doing in school, with their friends, as hobbies, and within newer online learning environments like Massive Open Online Courses (MOOCs). Being able to see patterns across such large swaths of data also has the potential to inform theory in ways that complement traditional data analysis methods (e.g., see Paquette & Baker, 2017, on revealing how students “game” learning environments), and underscores how the conceptual changes around the notions of participation and communities are shaping the future of the field. With the unprecedented opportunities provided by a “future that is already here” (Isaacson, 2011), we should move forward with some caution and new considerations.
The whole Internet can be conceptualized both as a single community and as a potentially limitless collection of communities, with an increasing (and frighteningly permanent) scope of digital footprints being captured. While the predictive modeling made possible by the availability of such data has its lure, there is a danger that data will be captured and used in ways that do not support the principle that opportunities for learning are a public good. It is therefore vital for the field to remember its theoretical roots and humanistic values as it moves forward and continues to shape the future of CSCL. As underlying and deep concepts of the field, participation and communities help illuminate how CSCL can grapple with its fault lines and the issues that will continue to confront researchers in the years to come.
References

Bakhtin, M. M. (1981). The dialogic imagination: Four essays by M. M. Bakhtin (C. Emerson & M. Holquist, Trans.). Austin, TX: University of Texas Press. Bandura, A. (1989). A social cognitive theory of action. In J. P. Forgas & M. J. Innes (Eds.), Recent advances in social psychology: An international perspective (pp. 127–138). North Holland: Elsevier.
Banks, J. A., Au, K. H., Ball, A. F., Bell, P., Gordon, E. W., Gutierrez, K. D., & Zhou, M. (2007). Learning in and out of school in diverse environments: Life-long, life-wide, life-deep. In The LIFE Center (The Learning in Informal and Formal Environments Center) and the Center for Multicultural Education. Seattle: University of Washington. Barker, G. (2009). The agricultural revolution in prehistory: Why did foragers become farmers? Oxford: Oxford University Press on Demand. Bauman, Z. (2013). Liquid modernity. New York: John Wiley & Sons. Bell, P., Tzou, C., Bricker, L., & Baines, A. D. (2012). Learning in diversities of structures of social practice: Accounting for how, why and where people learn science. Human Development, 55 (5–6), 269–284. Bereiter, C. (1985). Towards the solution of the learning paradox. Review of Educational Research, 55, 201–226. Bereiter, C. (2006). Design research: The way forward. Education Canada, 46(1), 16–19. Bereiter, C., Cress, U., Fischer, F., Hakkarainen, K., Scardamalia, M., & Vogel, F. (2017). Scripted and unscripted aspects of creative work with knowledge. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL2017) (Vol. 2, pp. 751–757). Philadelphia, PA: International Society of the Learning Sciences. Bielaczyc, K., & Collins, A. (2006). Technology as a catalyst for fostering knowledge-creating communities. In A. M. O’Donnell, C. E. Hmelo-Silver, & G. Erkens (Eds.), Collaborative learning, reasoning, and technology (pp. 37–60). Mahwah, NJ: Lawrence Erlbaum Associates. Bronfenbrenner, U. (1986). Ecology of the family as a context for human development: Research perspectives. Developmental Psychology, 22(6), 723–742. Brown, A. L., & Campione, J. C. (1990). Communities of learning and thinking, or a context by any other name. In D. 
Kuhn (Ed.), Developmental perspectives on teaching and learning thinking skills. Contributions to Human Development (Vol. 21, pp. 108–126). Basel: Karger. Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42. Charles, E., Lasry, N., & Whittaker, C. (2014). SALTISE: Bringing pedagogical innovations into the physics classroom. Physics in Canada, 70(2), 96–98. Clegg, T., Norooz, L., Kang, S., Byrne, V., Katzen, M., Valez, R., Plane, A., Oguamanam, V., Outing, T., Yip, J., Bonsignore, E., Froehlich, J., & Bonsignore, E. (2017). Live physiological sensing and visualization ecosystems: An activity theory analysis. In Proceedings of the 2017 CHI conference on human factors in computing systems (pp. 2029–2041). New York: ACM. Clery, D. (2011). Galaxy zoo volunteers share pain and glory of research. Science, 333, 173–175. Cole, M., & Distributive Literacy Consortium. (2006). The fifth dimension: An after-school program built on diversity. New York, NY: Russell Sage Foundation. Cole, M., & Scribner, S. (1974). Culture & thought: A psychological introduction. New York: John Wiley & Sons. Cole, M., & Scribner, S. (1977). Developmental theories applied to cross-cultural cognitive research. Annals of the New York Academy of Sciences, 285, 366–373. Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Erlbaum. Dagger, R. (2002). Republican citizenship. In Handbook of citizenship studies (pp. 145–157). Oxford: Sage. Damon, W. (1979). The social world of the child. San Francisco: Jossey-Bass. Damsa, C., & Jornet, A. (2016). Revisiting learning in higher education—framing notions redefined through an ecological perspective. Frontline Learning Research, 4(4), 39–47. Dewey, J. (1897). My pedagogical creed. 
School Journal, 54(3), 77–80. Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL? (pp. 61–91). Heerlen: Open Universiteit Nederland.
Dillenbourg, P., Baker, M. J., Blaye, A., & O’Malley, C. (1995). The evolution of research on collaborative learning. In E. Spada & P. Reimann (Eds.), Learning in humans and machines: Towards an interdisciplinary learning science (pp. 189–211). Oxford: Elsevier. Dillenbourg, P., Prieto, L. P., & Olsen, J. K. (2018). Classroom orchestration. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International Handbook of the Learning Sciences (pp. 180–190). London: Routledge. Dreier, O. (2008). Psychotherapy in everyday life. Cambridge: Cambridge University Press. Durkheim, E. (1893/1933). The division of labor in society. Translated by G. Simpson. Glencoe, IL: The Free Press. Eberle, J., Hod, Y., & Fischer, F. (2019). Future learning spaces for learning communities: Perspectives from the learning sciences. British Journal of Educational Technology, 50, 2071–2074. https://doi.org/10.1111/bjet.12865. Enyedy, N., & Yoon, S. (this volume). Immersive environments: Learning in augmented + virtual reality. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Erstad, O., & Sefton-Green, J. (Eds.). (2013). Identity, community, and learning lives in the digital age. Cambridge: Cambridge University Press. Finholt, T. A., & Olson, G. M. (1997). From laboratories to collaboratories: A new organizational form for scientific collaboration. Psychological Science, 8(1), 28–36. Fischer, F., Kollar, I., Mandl, H., & Haake, J. M. (2007). Scripting computer-supported collaborative learning: Cognitive, computational, and educational perspectives. New York, NY: Springer. Geels, F. W. (2004). From sectoral systems of innovation to socio-technical systems: Insights about dynamics and change from sociology and institutional theory. Research Policy, 33(6–7), 897–920. Goggins, S. P., Jahnke, I., & Wulf, V. (2013). Computer-supported collaborative learning at the workplace.
New York, NY: Springer. Grasel, C., Fischer, F., Bruhn, J., & Mandl, H. (2001). Let me tell you something you do know. A pilot study on discourse in cooperative learning with computer networks. In H. Jonassen, S. Dijkstra, & D. Sembill (Eds.), Learning with multimedia—results and perspectives (pp. 111–137). Frankfurt a. M.: Lang. Grossman, P., Wineburg, S., & Woolworth, S. (2001). Toward a theory of teacher community. The Teachers College Record, 103(6), 942–1012. Harari, Y. N. (2014). Sapiens: A brief history of humankind. New York: Random House. Hemmings, R. (1972). Fifty years of freedom: A study of the development of the ideas of A. S. Neill. London, UK: George Allen and Unwin Ltd. Herrenkohl, L. R., & Mertl, V. (2010). How students come to be, know, and do: A case for a broad view of learning. New York, NY: Cambridge University Press. Hewitt, J. (2005). Toward an understanding of how threads die in asynchronous computer conferences. The Journal of the Learning Sciences, 14(4), 567–589. Hillery, G. A. (1955). Definitions of community: Areas of agreement. Rural Sociology, 20(2), 111–123. Hillery, G. A. (1982). A research odyssey: Developing and testing a community theory. New Brunswick, NJ: Transaction Publishers. Hod, Y. (2017). Future learning spaces in schools: Concepts and designs from the learning sciences. Journal of Formative Design in Learning, 1(2), 99–109. Hod, Y., & Ben-Zvi, D. (2018). Co-development patterns of knowledge, experience, and self in humanistic knowledge building communities. Instructional Science, 46(4), 593–619. Hod, Y., & Sagy, O. (2019). Conceptualizing the designs of authentic computer-supported collaborative learning environments in schools. International Journal of Computer-Supported Collaborative Learning, 14(2), 143–164. Hod, Y., Sagy, O., Kali, Y., & Taking Citizen Science to School. (2018). The opportunities of networks of research-practice partnerships and why CSCL should not give up on large-scale
educational change. International Journal of Computer-Supported Collaborative Learning, 13(4), 457–466. Hod, Y., Ya’ari, C., & Eberle, J. (2019). Taking responsibility to support knowledge building: A constructive entanglement of spaces and ideas. British Journal of Educational Technology, 50(5), 2129–2143. Isaacson, W. (2011). The man in the machine. New York, NY: Simon & Schuster. Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., et al. (2013). Connected learning: An agenda for research and design. Portland: BookBaby. Kafai, Y. B., & Fields, D. A. (2013). Connected play: Tweens in a virtual world. Cambridge, MA: MIT Press. Kali, Y. (2006). Collaborative knowledge building using the design principles database. International Journal of Computer-Supported Collaborative Learning, 1(2), 187–201. Kali, Y., Baram-Tsabari, A., & Schejter, A. (2019). Learning in a networked society. New York: Springer International Publishing. Keifert, D., & Stevens, R. (2018). Inquiry as a members’ phenomenon: Young children as competent inquirers. Journal of the Learning Sciences, 28, 240–278. https://doi.org/10.1080/10508406.2018.1528448. Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts—a conceptual analysis. Educational Psychology Review, 18(2), 159–185. Koschmann, T. (1999). The cultural turn. The Journal of the Learning Sciences, 8(1), 127–128. Kraut, R. E., & Resnick, P. (2012). Building successful online communities: Evidence-based social design. Cambridge: MIT Press. Lajoie, S. P., & Derry, S. (Eds.). (1993). Computers as cognitive tools. New York, NY: Routledge. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press. Leander, K. M., Phillips, N. C., & Taylor, K. H. (2010). The changing social spaces of learning: Mapping new mobilities. Review of Research in Education, 34(1), 329–394. Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R.
E., Kleinrock, L., Lynch, D. C., & Wolff, S. (2009). A brief history of the Internet. ACM SIGCOMM Computer Communication Review, 39(5), 22–31. Ludvigsen, S., Cress, U., Law, N., Stahl, G., & Rose, C. (2018). Multiple forms of regulation and coordination across levels in educational settings. International Journal of Computer-Supported Collaborative Learning, 13(1), 1–6. Ludvigsen, S., Lund, K., & Oshima, J. (this volume). A conceptual stance on CSCL history. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Lui, M., & Slotta, J. D. (2014). Immersive simulations for smart classrooms: Exploring evolutionary concepts in secondary science. Technology, Pedagogy and Education, 23(1), 57–80. Matuk, C., DesPortes, K., & Hoadley, C. (this volume). Conceptualizing context in CSCL: Cognitive and sociocultural perspectives. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Mead, G. H. (1934). Mind, self and society. Chicago, IL: University of Chicago Press. Meiklejohn, A. (1932). The experimental college. Madison, WI: University of Wisconsin Press. Moher, T., Slotta, J. D., Acosta, A., Cober, R. M., Dasgupta, C., Fong, C., Gnoli, A., Silva, A., Silva, B. L., Perritano, A., & Peppler, K. (2015). Knowledge construction in the instrumented classroom: Supporting student investigations of their physical learning environment. In O. Lindwall, P. Hakkinen, T. Koschmann, T. Tchounikine, & S. Ludvigsen (Eds.), Exploring the material conditions of learning: The CSCL conference (Vol. II, pp. 548–551). Gothenburg: International Society of the Learning Sciences. Nelson, A. R. (2001). Education and democracy: The meaning of Alexander Meiklejohn, 1872–1964. Madison: University of Wisconsin Press.
Ochoa, X. (2017). Multimodal learning analytics. In A. C. Lang, G. Siemens, A. Wise, & D. Gasevic (Eds.), The handbook of learning analytics (Vol. 1, pp. 129–141). New York: Society for Learning Analytics Research. Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books. Paquette, L., & Baker, R. S. (2017). Variations of gaming behaviors across populations of students and across learning environments. In E. André, R. Baker, X. Hu, M. Rodrigo, & B. du Boulay (Eds.), Artificial intelligence in education. AIED 2017. Lecture Notes in Computer Science (Vol. 10331). Cham: Springer. Perret-Clermont, A. N. (1980). Social interaction and cognitive development in children. New York: Academic Press. Piaget, J. (1950). The psychology of intelligence. New York, NY: Harcourt Brace. Pinkard, N. (2019). Freedom of movement: Defining, researching, and designing the components of a healthy learning ecosystem. Human Development, 62, 40–65. Puntambekar, S., Erkens, G., & Hmelo-Silver, C. (Eds.). (2011). Analyzing interactions in CSCL. Computer-supported collaborative learning series (Vol. 12). Boston, MA: Springer. Radinsky, J., Bouillion, L., Lento, E. M., & Gomez, L. M. (2001). Mutual benefit partnership: A curricular design for authenticity. Journal of Curriculum Studies, 33(4), 405–430. Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.). (1991). Perspectives on socially shared cognition. Washington DC: APA. Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67. Rogers, C. R. (1961/1995). On becoming a person: A therapist’s view of psychotherapy. New York, NY: Houghton Mifflin Harcourt. Rogers, C. R. (1969). Freedom to learn. Columbus, OH: Charles Merrill Publishing Company. Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context.
New York: Oxford University Press. Rogoff, B. (1994). Developing understanding of the idea of communities of learners. Mind, Culture, and Activity, 1(4), 209–229. Rogoff, B. (1995). Observing sociocultural activities on three planes: Participatory appropriation, guided participation, and apprenticeship. In J. V. Wertsch, P. Del Rio, & A. Alvarez (Eds.), Sociocultural studies of the mind (pp. 139–164). Cambridge: CUP. Rogoff, B., & Chavajay, P. (1995). What’s become of research on the cultural basis of cognitive development? American Psychologist, 50(10), 859. Rogoff, B., & Lave, J. (Eds.). (1984). Everyday cognition: Its development in social context. Cambridge, MA: Harvard University Press. Sarason, S. B. (1974). The psychological sense of community: Prospects for a community psychology. Oxford, England: Jossey-Bass. Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. Liberal Education in a Knowledge Society, 97, 67–98. Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. Journal of the Learning Sciences, 3(3), 265–283. Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 397–417). New York, NY: Cambridge University Press. Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Scott, T., Cole, M., & Engel, M. (1992). Chapter 5: Computers and education: A cultural constructivist perspective. Review of Research in Education, 18(1), 191–251. Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Silvertown, J. (2009). A new dawn for citizen science. Trends in Ecology & Evolution, 24(9), 467–471. Silvis, D., Taylor, K. H., & Stevens, R. (2018). Community technology mapping: Inscribing places when “everything is on the move”. International Journal of Computer-Supported Collaborative Learning, 13(2), 137–166. Slotta, J. D., & Acosta, A. (2017). Scripting and orchestrating learning communities: A role for learning analytics. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th International conference on computer supported collaborative learning (CSCL) (Vol. 1, pp. 343–350). Philadelphia, PA: International Society of the Learning Sciences. Sopher, H., Fisher-Gewirtzman, D., & Kalay, Y. E. (2019). Going immersive in a community of learners? Assessment of design processes in a multi-setting architecture studio. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12857. Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning: An historical perspective. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 409–426). New York, NY: Cambridge University Press. Taylor, R. (Ed.). (1980). The computer in the school—Tutor, tool, tutee. New York: Teachers College Press. Teasley, S. D., Finholt, T. A., Potter, C. S., Snow, G. R., & Myers, J. D. (2000). Participatory science via the Internet. In Proceedings of the International Conference of the Learning Sciences (pp. 376–383). Ann Arbor, MI: LEA. Teasley, S. D., & Roschelle, J. (1993). Constructing a joint problem space: The computer as a tool for sharing knowledge. In S. P. Lajoie & S. D. Derry (Eds.), Computers as cognitive tools (pp. 229–258). Hillsdale, NJ: Erlbaum. Tissenbaum, M., Kumar, V., & Berland, M. (2016, June). Modeling visitor behavior in a game-based engineering museum exhibit with hidden Markov models.
In Proceedings of the 9th international conference on educational data mining (pp. 517–522). New York: ACM.
Tissenbaum, M., & Slotta, J. D. (2014). Developing an orchestrational framework for collective inquiry in smart classrooms: SAIL Smart Space (S3). Boulder, CO: International Society of the Learning Sciences.
Tönnies, F. (1887/1963). Community and society. New York, NY: Harper & Row.
Vogel, F., Weinberger, A., & Fischer, F. (this volume). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.
Weinberger, A., Ertl, B., Fischer, F., & Mandl, H. (2005). Epistemic and social scripts in computer-supported collaborative learning. Instructional Science, 33(1), 1–30.
Wertsch, J. V. (1985). Vygotsky and the social formation of mind. Cambridge, MA: Harvard University Press.
Wertsch, J. V. (1998). Mind as action. New York, NY: Oxford University Press.
Wise, A., & Schwarz, B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12, 423–467.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wulf, W. (1993). The collaboratory opportunity. Science, 261, 854–855.
Yoon, S., Elinich, K., Wang, J., Steinmeier, C., & Tucker, S. (2012). Using augmented reality and knowledge-building scaffolds to improve learning in a science museum. International Journal of Computer-Supported Collaborative Learning, 7(4), 519–541.
Yuan, G., & Zhang, J. (2019).
Connecting knowledge spaces: Enabling cross-community knowledge building through boundary objects. British Journal of Educational Technology, 50, 3. https://doi.org/10.1111/bjet.12804.
Communities and Participation
Zhang, J., Tao, D., Chen, M.-H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with idea thread mapper. Journal of the Learning Sciences, 27, 390–430. https://doi.org/10.1080/10508406.2018.1444992.
Further Readings

Hod, Y., & Sagy, O. (2019). Conceptualizing the designs of authentic computer-supported collaborative learning environments in schools. International Journal of Computer-Supported Collaborative Learning, 14, 143–164.
CSCL researchers have approached the design of learning environments largely by considering how participants are provided with opportunities to engage in the culture and practices of specific communities. This article takes a close look at over 25 CSCL designs and, in addition to mapping these, provides a framework to examine multiple and relevant aspects of these environments. A practical benefit of this review is that CSCL researchers or designers who want to create authentic CSCL environments, or are entrenched in a particular design, can now see the big picture and generate innovative ideas.

Puntambekar, S., Erkens, G., & Hmelo-Silver, C. (Eds.). (2011). Analyzing interactions in CSCL: Methods, approaches and issues. Computer-supported collaborative learning series (Vol. 12). Boston, MA: Springer Science & Business Media.
In this edited volume, Puntambekar, Erkens, and Hmelo-Silver provide chapters from various researchers utilizing different theories and methodologies for capturing collaborative processes. The volume is organized into three areas (group processes, learning within groups, and frameworks for analyzing CSCL) and provides numerous examples of the models, analytic methods, and tools that comprise the field. While new advances continue to be made, this 2011 book provides a solid groundwork for understanding research in CSCL.

Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.). (1991). Perspectives on socially shared cognition. Washington, DC: APA.
The edited volume by Resnick, Levine, and Teasley grew out of a 1989 conference entitled Socially Shared Cognition, which drew together many of the most active scholars conducting research framing cognition as a social phenomenon.
At this time, North American theories of situated cognition were challenging the dominant view of cognitive science, arguing instead that every cognitive act must be viewed as a socially embedded response to a specific set of circumstances. The chapters represent a sampler of research across disciplines, including psychology (social, developmental, and cognitive), anthropology, sociology, and linguistics, describing how each author tackles the seemingly difficult divide between human thinking and social functioning.

Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
This classic text illuminates how the everyday language of educational researchers and practitioners shifted when the sociocultural perspective came on the scene. Sfard's suggestion that there are two root metaphors, acquisition and participation, helps unpack some basic assumptions about learning. While not specific to CSCL, the article provides a great explanation for why participation in communities has become such a powerful set of ideas and lens through which to view technology-mediated learning.

Zhang, J., Scardamalia, M., Reeve, R., & Messina, R. (2009). Designs for collective cognitive responsibility in knowledge-building communities. Journal of the Learning Sciences, 18(1), 7–44.
This empirical paper examined students' participation in a knowledge-building community over three consecutive years. The authors looked at how the evolving design iterations, emphasizing fixed, interacting, then opportunistic groupings, were related to different outcomes on the online Knowledge Forum. Specifically, participation in the community was measured in terms of how much and in what ways members were reading and building on each other's posts, as well as how their expertise was distributed (using social network analysis). The article is an elegant model for how to examine participation in CSCL communities.
Collaborative Learning at Scale

Bodong Chen, Stian Håklev, and Carolyn Penstein Rosé
Abstract  The CSCL community has traditionally focused on collaborative learning in small groups or communities. Given the rise of mass collaboration and learning at scale, the community is facing an unprecedented opportunity to expand its views to advance collaborative learning at scale. In this chapter, we first explicate the history and development of collaborative learning at scale and contend that both learning and collaboration need to be reconceptualized for this nascent context. We propose a framework that considers scale as either a problem to be mitigated or an asset to be harnessed, and then review pedagogical and technological innovations representing these two approaches. We conclude by discussing key tensions and challenges facing collaborative learning at scale.

Keywords  Collaborative learning · Learning at scale · Mass collaboration · Social media · Massive open online courses
1 Definitions and Scope

Anecdotal evidence suggests that collaboration at massive scale works differently from what is more commonly studied within the CSCL community, namely collaboration in small groups or pairs. Consider the following example: On April 1, 2017,
B. Chen (*)
Department of Curriculum and Instruction, University of Minnesota, Minneapolis, MN, USA
e-mail: [email protected]
S. Håklev
School of Computer and Communication Sciences, École polytechnique fédérale de Lausanne, Lausanne, Switzerland
C. P. Rosé
Language Technologies Institute and Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_9
Fig. 1 A glimpse of the Reddit Place canvas
Reddit, a popular social media site, opened up a blank online canvas for its registered users to paint on. This canvas, named Place (or /r/place) (Rappaz, Catasta, West, & Aberer, 2018), was not terribly large for millions of Reddit users: it measured 1000 × 1000 pixel squares. The rules of participation were also simple. A user could paint only one pixel at a time, changing its color using a 16-color palette, and then had to wait several minutes before painting the next one. What happened on this canvas during the next 72 h was impressive. Over one million users painted on the canvas. Spectacular collaboration, confrontation, conflict, and peacemaking took place both on the canvas and in thousands of "subreddits," Reddit's interest-based communities. The evolving and resulting works of art on the canvas were splendid (see Fig. 1 for a glimpse). Given that each individual faced substantial constraints, painting one pixel at a time and waiting minutes to place the next, the massive-scale collaborative art on Reddit Place, the largest of its kind, could not be explained by traditional perspectives on collaboration at a small scale. Complexity, emergence, and self-organization, rather than carefully engineered scaffolds, gave rise to this phenomenon. Now is the time for the community to stop and consider the distinction between collaboration as we have more typically studied it and this new frontier of collaboration at massive scale (see Ludvigsen, Lund, & Oshima, this volume). Historically, empirical research in the CSCL community has been concerned with effective collaboration at a small scale, from two to dozens of learners collaborating for a class period or so (Dillenbourg, 1999). CSCL researchers are now facing an unprecedented opportunity to expand thinking on collaboration, from a focus on small groups and class-size communities to massive online spaces.
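The participation rules of Place (a fixed canvas, a 16-color palette, one pixel per action, an enforced wait between actions) can be sketched as a minimal protocol. This is an illustrative model only; the class name, the exact cooldown length, and the in-memory bookkeeping are assumptions for exposition, not Reddit's actual implementation.

```python
import time

SIZE = 1000        # 1000 x 1000 canvas
PALETTE_SIZE = 16  # colors are palette indices 0-15
COOLDOWN = 300     # assumed 5-minute wait; the real interval varied

class PlaceCanvas:
    def __init__(self):
        self.pixels = [[0] * SIZE for _ in range(SIZE)]
        self.last_paint = {}  # user -> timestamp of that user's last paint

    def paint(self, user, x, y, color, now=None):
        """Paint one pixel; reject the action if the user is still cooling down."""
        now = time.time() if now is None else now
        if not (0 <= color < PALETTE_SIZE):
            raise ValueError("color must be a palette index between 0 and 15")
        last = self.last_paint.get(user)
        if last is not None and now - last < COOLDOWN:
            return False  # still waiting out the cooldown
        self.pixels[y][x] = color
        self.last_paint[user] = now
        return True
```

Even this toy version makes the core tension visible: any coordinated artwork requires many users to sequence their individually throttled actions, which is exactly where the self-organization described above comes in.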
In particular, given the rise of social media and large-scale online environments, some CSCL scholars argue for a vision that expands the scope of our investigations to include large-scale learning environments in order to be part of the ongoing revolution in human communication toward prevalent connectivity and collaboration (Wise &
Schwarz, 2017). At a local level, collaboration has become part of the fabric of work in knowledge organizations; at a broader scale, collaboration is key to solving today's grand challenges such as climate change and space exploration. To facilitate effective and pervasive human collaboration, the CSCL community, which specializes in designing socio-technical systems for collaborative learning, has much to offer, together with neighboring communities such as Computer-Supported Cooperative Work (CSCW), Human-Computer Interaction, Social Media and Society, and Learning at Scale. To expand CSCL's traditional focus to include large-scale learning environments, it is necessary to reexamine our conceptions of collaboration and learning and to interrogate the challenges and opportunities brought about by these environments. In this chapter, we call attention to collaborative learning at scale, present a framework for considering scale in collaboration, and review extant approaches to exploring this novel territory. To this end, we consider the following questions:

1. What implications does scale have for the conceptualization of collaboration and learning? What does it mean to learn collaboratively in a large-scale environment?
2. Which areas should we consider when designing support for collaborative learning at scale? How can we leverage the prospects of scale while mitigating the challenges it incurs?
2 History and Development

Recent developments in web technologies and participatory cultures have driven hundreds of thousands of people to gather online and participate in various types of online activities. Mass collaboration is a unique type of web gathering that involves a large number of people working toward a shared vision, facilitated by digital tools and mediated by digital artifacts (Cress, Moskaliuk, & Jeong, 2016). For example, citizen science initiatives such as Zooniverse, eBird, and Planet Hunters gather a critical mass to achieve ambitious science goals. While project visions and goals are often set by professional scientists, broad participation by citizens is key to the accomplishment of project goals (Brossard, Lewenstein, & Bonney, 2005). By the same token, historians investigate "Ancient Lives" by engaging a mass of volunteers to decipher texts recovered from ancient trash piles¹; computer scientists mobilize piecemeal human inputs into a web security measure known as reCAPTCHA to digitize millions of books (Von Ahn, Maurer, McMillen, Abraham, & Blum, 2008); and hundreds of investigative journalists self-organized to collaboratively conduct the Pulitzer Prize-winning Panama Papers investigation. These social phenomena in large-scale environments have demonstrated new possibilities for
¹ See https://www.ancientlives.org/.
convening masses of people to achieve ambitious goals that are beyond the reach of a single individual or small team. In education, scale has also been taken up as a major concern in public and scholarly discourse in recent years. The rise of massive open online courses (MOOCs) in the early 2010s inspired researchers to study "learning at scale." In early discourse, the massive scale of MOOCs was often discussed in technological and economic terms, leading to optimism about leveraging scale to bring quality education to all. After years of development, MOOCs are now seen as one component of an increasingly rich landscape of learning in which we have more and more diversified responses to questions surrounding "who learns," "why learn," "what to learn," "how to learn," and "with whom" (G. Fischer, 2018). New models of instruction powered by technological innovations, e.g., crowd-sourced Q&A, have emerged to harness the scale of MOOCs. However, the potential of scale for devising new models of collaborative learning remains understudied. Collaborative learning at scale draws from advances in mass collaboration, which is not primarily concerned with learning², and in learning at scale, which has less of a focus on collaboration. Given the changing landscape of learning, such as the blurred boundaries between formal and informal learning spaces and between learning and work (Messmann, Segers, & Dochy, 2018), critically engaging with collaborative learning at scale provides the CSCL community a prime opportunity to design for a larger audience and to make broader societal impacts. However, considering collaborative learning at scale is not straightforward. First, we need to examine our conceptions of learning in nascent, large-scale contexts. This is challenging because the CSCL and Learning Sciences communities already hold kaleidoscopic conceptions of learning (Paavola & Hakkarainen, 2005).
In mass collaboration contexts, learning may not necessarily be the primary concern of the people who participate; learning could very well be a "means to an end," a "by-product," or an end goal in itself, depending on how learning is contextualized. The second challenge concerns the conceptualization of collaboration. Since its inception, the CSCL community has held a high standard, both theoretically and epistemologically, for what can be considered collaboration (Suthers, 2006; Wise & Schwarz, 2017). One example construct espoused by CSCL researchers is intersubjectivity, which emphasizes the practices of integrating learners' multiple perspectives to construct shared meaning within groups (Stahl, Koschmann, & Suthers, 2014). For Suthers (2006), intersubjective meaning making requires collaborators to jointly compose interpretations of a dynamically changing context, thereby shaping participants' new interpretations. Beyond simply sharing information, intersubjectivity requires the perspectives of all collaborators to be presented, considered, interrogated, and negotiated. However, it is questionable whether such intersubjectivity is either
² People do often learn through mass collaboration, for example by contributing to a Zooniverse project as citizen scientists. Humanity as a whole also learns collectively through mass collaboration. However, mass collaboration and its socio-technical conditions are not tailored specifically to human learning, even though learning, at both the individual and collective levels, can be a meaningful by-product.
achievable or desirable in large-scale “collaboration” scenarios. Therefore, we need to advance our thinking by interrogating the kinds of learning and collaboration that occur in large-scale environments, as well as considering more deeply the implications of scale in our consideration of collaborative learning.
2.1 Conceptualizing Learning
Multifaceted understandings of learning continue to coexist in the CSCL community, where research is mainly informed by and contributes to the cognitive, social, and cultural traditions of learning. The cognitive tradition is generally focused on how individual minds work. In the CSCL context, this tradition has inspired the script theory of guidance, which emphasizes the interplay between external and internal collaboration scripts: While external scripts are designed to scaffold collaborative activities, their operation interacts with a learner's internal scripts, the cognitively organized knowledge structures about collaboration (Kollar, Fischer, & Slotta, 2007). Effective collaborative activities need to consider, and lead to, the assimilation of learners' internal scripts. The sociocultural tradition, in contrast, highlights the contexts, practices, and histories in which learning happens. CSCL research inspired by this tradition "locates learning in meaning negotiation carried out in the social world rather than in individuals' heads" (Stahl et al., 2014, p. 9). That is, instead of treating learning purely as a psychological phenomenon, CSCL is interested in interactional meaning-making (e.g., Stahl et al., 2014), negotiation (e.g., Crawford, Krajcik, & Marx, 1999), and collective knowledge creation (e.g., Bereiter & Scardamalia, 2014; Kimmerle, Moskaliuk, Oeberst, & Cress, 2015). Whether theorized from the sociocultural or cognitive perspective, CSCL is "centrally concerned with meaning and the practices of meaning making in the context of joint activity, and the ways in which these practices are mediated through designed artifacts" (Koschmann, 2002, p. 17). Since its inception, the CSCL research community has been predominantly concerned with learning contexts that are formal, neatly structured in light of curriculum, and scaffolded by well-defined spaces (Wise & Schwarz, 2017).
However, what is apparent in emerging learning paradigms (networked learning, connected learning, learning at scale) is a deeper fusion between formal and informal learning (Dillenbourg, Järvelä, & Fischer, 2009), and between learning and work (G. Fischer, 2018; Messmann et al., 2018). A data scientist working at a company could be asking questions on StackOverflow, editing a Wikipedia page about feature engineering, taking a MOOC about graph modeling, creating a machine learning model with coworkers, attending a local data science meet-up, and recording a YouTube video to share her work. Learning in the "long tail" demonstrated in this example has been recognized in the CSCL community (Collins, Fischer, Barron, Liu, & Spada, 2009) as the landscape of learning and knowledge creation continues to shift.
168
B. Chen et al.
Learning is increasingly perpetual and embedded in or intertwined with knowledge creation processes. Learning in large-scale digital environments tends to be informal, ill-structured, and openly networked. Because of the continual fusion between work and learning, and between formal and informal learning spaces, learning in scaled environments could be about individual knowledge acquisition, social participation in knowledge flows, co-creation of knowledge artifacts, change in a community's collective understanding, or some combination of these. The shift at stake here is not new types of informal learning, but the progressive bifurcation of learning and schooling, resulting in seamless cyberlearning across settings and a broadening conception of learning as traditional boundaries are further blurred (Borgman et al., 2018; Collins & Halverson, 2009; G. Fischer, 2018). To investigate learning in large-scale environments, we need to draw from established traditions of learning theories, cognitive and sociocultural, and also to consider emerging perspectives on learning that are grounded in observations of our increasingly connected world. By recognizing these multifaceted perspectives, researchers will be in a better position to recognize the utility of existing perspectives for understanding and designing for learning, and to develop new learning theories within the ever-changing landscape of learning.
2.2 Conceptualizing Collaboration
The CSCL community has specific ideas about what can be considered collaboration. In the classic context of collaborative problem-solving in dyads, Roschelle and Teasley (1995) define collaboration as "a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem" (p. 70). Attaining and maintaining a shared conception of the joint problem, or common goals (Dillenbourg, 1999), is central to this conceptualization of collaboration. This emphasis on shared goals, rather than the division of labor, is often used to distinguish collaborative learning from cooperative learning (Johnson & Johnson, 2009): "In cooperation, partners split the work, solve sub-tasks individually and then assemble the partial results into the final output. In collaboration, partners do the work together" (Dillenbourg, 1999, p. 8; Stahl et al., 2014). Collaborative teams engage in important group processes such as negotiation, shared meaning-making, and goal coordination, which are in turn defining features of collaboration. However, these interpersonal processes essential for small-scale collaboration are not necessarily achievable, or desirable, in mass collaboration. For instance, while collaboration in student groups can be synchronous, conceptually well-defined (e.g., about a particular topic), and temporally constrained (e.g., one class period), collaboration within an open-source community is often asynchronous, ill-defined, opportunistic, and prolonged. Because of these important contextual differences, some CSCL researchers argue that large-scale environments are not always suitable for collaborative learning because they cannot live up to the standards of collaboration
Collaborative Learning at Scale
169
upheld by the research community (Wise & Schwarz, 2017). Other researchers take a different stance and argue that we simply need fresh frameworks, beyond the cooperation–collaboration distinction, to conceptualize joint interaction in larger groups (Jeong, Cress, Moskaliuk, & Kimmerle, 2017). To advance collaborative learning at scale, there is a need to further distill the essence of collaboration. In mass collaboration, while intimate joint activity within a neatly defined space may be unachievable, other essential features of collaboration can remain true, such as the existence of common goals or visions, the presence of intentional coordination, and the notion that group cognition cannot be reduced to individual learning. As articulated by Cress, Barron, Fischer, Halatchliyski, and Resnick (2013), in the context of mass collaboration, a process is collaborative if it "fulfills the conditions that individuals act consciously following a common direction; that they take the perspective of the other participants into account; and that they contribute by building on the accomplishments of others" (p. 558). In this context, learners may have fewer opportunities to interact with each other directly; instead, their collaboration is more likely to be mediated by multimedia artifacts and external representations, which embody learners' multiple perspectives and sustain the shared understanding achieved among them (Jeong et al., 2017). With varied levels of scale in mind, we conceptualize a broadened sense of collaboration as:

A coordinated activity guided towards a shared vision, with support from rules and tools, mediation by representations and artifacts, and dependence on intersubjectivity.
With a grasp on the essence of collaboration, we can venture further to imagine new mechanisms or interactive processes to support these features of collaboration in large-scale environments. Such new mechanisms or processes may conflict with models of collaboration initially developed in small-scale environments. We are confronted by lingering questions when facing large-scale learning environments: for example, who is the audience of a contribution (either intentional or actual) when the group is not fully specified (e.g., Marwick, Fontaine, & Boyd, 2017)? What if learner interactions are so mediated by artifacts that the learners cannot or do not interact with each other explicitly? What if the interaction is only momentary despite its potentially long-standing impact on participating individuals (e.g., Rathnayake & Suthers, 2018)? Consideration of collaborative learning at scale necessitates fresh models that capture these individual–mass dynamics that are less explored in the CSCL literature.
3 State of the Art: Considerations and Approaches

3.1 Considering Scale: Problem or Asset
What does scale mean for collaborative learning? In the CSCL literature, scale is explicitly raised as an important consideration in two dimensions: group size and time span (Dillenbourg, 1999). For group size, the scenarios studied in CSCL research range from two
170
B. Chen et al.
Fig. 2 Two considerations of scale in CSCL. Note: This figure is illustrative, and the boundaries between quadrants are not clear-cut
to dozens of students in a class, and more recently to tens of thousands of learners in a MOOC. In terms of timescale, research and design interests cover synchronous interactions lasting a few minutes as well as asynchronous interactions extending over several years. Collaboration scenarios with comparatively fewer learners are well-traveled territory in CSCL, regardless of the timescale (see Fig. 2). For example, Roschelle and Teasley (1995) studied collaborative problem-solving in dyads within a 45-min timeframe. Zhang, Scardamalia, Reeve, and Messina (2009) designed for a class of 22 students each year to assume collective responsibility for their community knowledge building. Both examples fall within the well-traversed realm of CSCL research, with the former focused on small groups and the latter concerned with collaborative learning in "large" communities. In recent times, the CSCL community has grown more interested in facilitating collaborative learning at scale, focused particularly on larger numbers of participants. Compared to collaborative learning among smaller numbers of learners, this is a nascent research interest, given that it greatly expands CSCL's traditional consideration of scale. Two treatments of scale have emerged from prior work in this area: first, scale as a problem to be mitigated; and second, scale as an asset to be harnessed.
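The two dimensions of scale just named (group size and time span) can be read as a rough 2 × 2 space. The sketch below classifies a scenario along both dimensions; the threshold values are purely illustrative assumptions, since, as noted, the boundaries between quadrants are not clear-cut.

```python
def scale_quadrant(group_size, duration_hours):
    """Place a CSCL scenario in the rough 2 x 2 space of group size vs.
    time span (Dillenbourg, 1999). Threshold values are illustrative
    assumptions, not established cut-offs."""
    size = "small" if group_size <= 30 else "massive"
    span = "short-term" if duration_hours <= 10 else "long-term"
    return (size, span)

# Roschelle & Teasley (1995): dyads working for about 45 minutes
dyad = scale_quadrant(2, 0.75)
# A MOOC with tens of thousands of learners over several weeks
mooc = scale_quadrant(20000, 6 * 7 * 24)
```

Under these assumed thresholds, the dyad lands in the small/short-term quadrant and the MOOC in the massive/long-term quadrant, the nascent territory this section concerns.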
3.1.1 Scale as a Problem to Be Mitigated
Massive scale introduces greater complexity, making it difficult to anticipate learner needs, control learning processes, assess learning progress, and adjust learning supports. Many CSCL projects, which have ambitious goals of bringing about educational change (Wise & Schwarz, 2017), take place in well-resourced contexts. Educational models growing out of these projects depend on
Collaborative Learning at Scale
171
abundant resources and immediate support in order to function as intended. Scaling such models to a large number of learners introduces a myriad of challenges, such as creating effective groups, monitoring collaborative activities, coordinating collaborative effort, and managing attention. To mitigate these challenges, various practical strategies have been developed. For example, to aid in managing attention, FutureLearn, a MOOC platform informed by conversation-based pedagogy, has designed its forums to simplify discussion threads by strategically hiding some posts so that learners are not lost in "hyperspace" (Ferguson & Sharples, 2014); by doing so, it affords small-group "discussion bus" experiences focused on perspective-giving and perspective-taking among peers taking the same MOOC. To coordinate sophisticated collaborative activities in MOOCs, collaboration scripts are deployed to organize learners into groups based on expertise and to orchestrate their interaction with group-appropriate types of knowledge artifacts (Håklev, Faucon, Hadzilacos, & Dillenbourg, 2017; Håklev & Slotta, 2017). In these cases, innovative technological features and pedagogical strategies are used to tackle problems introduced by scale.
3.1.2 Scale as an Asset to Be Harnessed
Despite the challenges it introduces, scale can also be an asset for peer collaboration. For example, the scale of a MOOC can be leveraged to enable rapid feedback among peers (Kulkarni, Bernstein, & Klemmer, 2015) and to shorten waiting time for peer review (Ferguson & Sharples, 2014). Quick Helper, a social learning intervention designed for edX (Rosé & Ferschke, 2016), counts on scale to provide a critical mass of learners from different time zones who show up within a reasonable time window after a help request has been posted, in order to provide timely and targeted help. In the Scratch online community, the sheer number of active users allows user-initiated events to garner a critical mass of participants who actively produce, share, and remix multimedia artifacts (Roque, Rusk, & Resnick, 2016). Crowdsourced Q&A platforms such as StackOverflow rely on a significant mass of programmers with a wide range of expertise who choose to remain active and continue to contribute for various reasons (Movshovitz-Attias, Movshovitz-Attias, Steenkiste, & Faloutsos, 2013). Social coding on GitHub and Bitbucket depends not only on a large number of coders but also on the short network distance between coders (Thung, Bissyande, Lo, & Lingxiao, 2013). The same story can be told about interest-based communities (e.g., iSpot), citizen science platforms, Wikipedia's WikiProjects, gamer communities, and online fact-checkers (e.g., ClimateFeedback.org). Scale in these cases is treated as an asset that is needed to form effective groups with proper composition, afford the exchange of a wide range of insights, or simply provide a critical mass of artifacts from human participation. Designs for collaborative learning at scale must accommodate both perspectives because scale may act simultaneously as a problem and as an asset in the same collaborative learning scenario. In a FutureLearn MOOC, the "discussion bus" can fill up more quickly (a problem) but with more diverse perspectives
172
B. Chen et al.
(an asset) (Ferguson & Sharples, 2014). For example, in a teacher education MOOC, learners were split into 12 Special Interest Groups and further divided into groups of 3–6 members who worked together throughout the semester. Collaboration scripts were designed for small groups to mitigate challenges of collaboration in a MOOC, while the very size of the MOOC provided sufficient diversity to allow the scripts to assemble groups according to supplementary or complementary interests (Håklev & Slotta, 2017). This design was meant to leverage the idea that learners benefit differently from small and large group collaboration in MOOCs: They enjoy intensive interactions in small groups and benefit from more diversity in large groups (Eimler, Neubaum, Mannsfeld, & Krämer, 2016). It is up to the learning design team to strategically integrate and balance these two treatments of scale.
3.2 Innovative Pedagogical and Technological Approaches
Over the years, various innovative approaches have been developed to support collaborative learning at scale. Each treatment of scale corresponds to distinct approaches to designing collaborative processes, social structures, and norms (see Fig. 3).
3.2.1 Reduce Scale: Converting a Mass Into Small Groups
As the prior examples illustrate, designers may divide a mass of learners into smaller groups, either to address scale as a problem or to harness it as an asset. In doing so, a mass collaboration scenario is transformed into the kind of small-group or small-community situation that CSCL research knows well. This approach is reflected in several
Fig. 3 The four quadrants of collaborative learning at scale
Collaborative Learning at Scale
cases of collaboration in MOOCs (Håklev et al., 2017; Hickey & Uttamchandani, 2017; Kulkarni et al., 2015; Rosé & Ferschke, 2016). In another case, NovoEd, a social learning environment, specializes in distinctive team formation processes for massive online classes: algorithmic teams, formed based on instructor-specified factors (e.g., team size and members' geographical location), and organic teams, self-organized by the learners themselves (Ronaghi Khameneh, 2017). Studies of organic teams on NovoEd found that homophily (in age, location, and education level) and heterophily (in skill set) led to more successful teams. Combining the two team formation strategies, one based on algorithms and the other on learner choice, provides a promising direction for this "scale-reduction" approach to supporting collaboration among a mass of learners. Subsequent work inspired by challenges in the NovoEd approach produced an effective method for team formation based on observed exchange of transactive contributions in whole-community discussions (Wen, Maki, Dow, Herbsleb, & Rosé, 2017).
3.2.2 Harnessing the Scale: Scaffolding Collaboration Within a Mass
In contrast to the scale-reduction approach, harnessing the scale of collaboration stretches CSCL thinking into new territory. This direction requires substantial theoretical work to ground our understanding of collaboration, learning, and knowledge. The coevolution model by Cress and colleagues (Cress, Feinkohl, Jirschitzka, & Kimmerle, 2016) represents such an attempt from the CSCL community. The model posits that mass collaboration depends on bidirectional stimulation between individuals (as cognitive systems) and the mass (as a social system), through which individuals develop new understanding and the mass constructs shared knowledge. In this view, knowledge is not held solely in individual minds; the mass drifts as a whole system; and collaboration relies heavily on mediational processes. The Scratch online community illustrates this dynamic, bidirectional coevolution of individuals and the mass. Interaction in the community is mediated primarily by the multimedia and block-based programming artifacts that members share. In contrast with collaborative problem solving within a dyad, members of the Scratch community collaborate by "looking inside" artifacts shared by peers, remixing those artifacts, and producing original artifacts of their own (Roque et al., 2016). Creative acts by individuals determine what the community represents, and individuals' future actions are shaped by their interests and by what they encounter in the community space. This bidirectional dance between individual users and the online community characterizes large-scale collaboration on Scratch. On Wikipedia, collaborative editing is mediated by each article's "Content page" and coordinated through its "Talk page," in which Wikipedia editors participate (Laniado, Tasso, Volkovich, & Kaltenbrunner, 2011).
As in the Scratch community, individual learning takes place as new editors arrive, gradually internalize community norms and values, and become full members of the Wikipedia
editor community. Because of the openness of the space, Wikipedia editors essentially interact with an amorphous and unfathomable mass of people. However, there are also highly localized interactions, sometimes around specific subcommunities or even specific highly contested pages, with distinct conversational roles that characterize editors' conversational orientations within discussions (Maki, Yoder, Jo, & Rosé, 2017). One unique feature of Wikipedia is that there can be only one page about each topic, per language. Thus, rather than everyone being entitled to their own opinion and posting in their own space, a single meaning has to be continually renegotiated in light of Wikipedia's community standards of Neutral Point of View and Dispute Resolution. Individual Wikipedians bring unique interests and expertise to the massive open community, while the community's evolving roles, cultures, and algorithmic systems shape how each contribution is made and taken up by the community (Geiger, 2017; Qin, Cunningham, & Salter-Townshend, 2015). Cultivating collaboration in an open space like Wikipedia therefore requires designing socio-technical systems that enforce certain cultural values and enculturate newcomers. The same story can be told about StackOverflow, where the rules of asking proper questions (e.g., avoiding duplicates) in a proper manner (e.g., with minimal code examples) are strictly observed by its members. Technological features, including scaffolding for posting and the upvote/downvote system, are put in place to reinforce these rules of participation. In large open-source projects such as TensorFlow, project leaders and members likewise uphold core values and contributing guidelines.
Unlike designs for small-group or small-community collaboration, where enculturation can occur through direct human–human contact, design for collaborative learning in these large-scale environments should focus even more on cultures and rules, as well as on strategies for codifying them in supporting socio-technical systems. Collaboration on Wikipedia has also been explained through the lens of stigmergic collaboration, a term inspired by the ways in which social insects indirectly coordinate actions by leaving signs in the environment (Tkacz, 2010). In a nutshell, "stigmergy is a form of mediated communication where signs placed in the environment by agents serve as stimuli to other agents to further transform the environment" (Elliott, 2016, p. 66). In contrast with the coevolution model, stigmergic collaboration provides another framework for understanding and designing collaborative learning at scale, one that stresses the potential of indirect social signals to shape the behavior of agents in a shared environment (Elliott, 2016; Susi & Ziemke, 2001). Rather than directly shaping person-to-person interactions, as many traditional CSCL efforts do, stigmergic collaboration shifts the focus to engineering a "field of work" in which indirect communication among collaborators gives rise to coordinated action. Signs left by agents in the work field, such as the message "This Wikipedia article needs improvement," invite individuals to act and thereby change the environment for future involvement. This mechanism is akin to engaging learners in tagging promising ideas in a small knowledge-building community in order to direct ongoing or future efforts (Chen, Scardamalia, & Bereiter, 2015). Design in light of stigmergic collaboration would therefore focus on indirect
communication among agents, contextual factors, emergent processes, and self-organization. In this scale-harnessing approach, the massive scale can itself become a problem to solve, especially for maintaining learners' awareness of peers, artifacts, interactions, and trends. On Wikipedia, for instance, awareness tools in the form of recommendation engines (Zhu, Yu, Halfaker, & Terveen, 2018) are designed to recruit new editors into WikiProjects, groups formed to achieve collective editing goals. In the editing process itself, bots such as AAlertBot and Citation bot play important roles by delivering article alerts to WikiProjects about ongoing discussions or by automatically adding missing citation information to Wikipedia articles. These automated systems are important mechanisms for addressing awareness challenges in collaboration at scale. Beyond the awareness challenge, another problem facing this scale-harnessing approach is creating collaborative task conditions that attract and sustain meaningful contributions from diverse contributors. One strategy for tackling this challenge is to dissect a larger goal into granular tasks that demand neither substantial effort from any individual nor tedious coordination among contributors. For instance, for the self-organized collaborative production of open-source software to be effective, individual contributions need to be modular so that participants can work asynchronously, without much need for coordination (Benkler, 2006). Different levels of granularity allow people with different levels of motivation to work together by contributing small- or large-grained modules, which can be combined into a finished software product (Benkler, 2006). Likewise, the collaborative editing of Wikipedia articles fulfills this requirement, with dozens or hundreds of editors making small yet meaningful changes to improve an existing article.
However, if the tasks are too granular or atomic, mutual engagement among contributors becomes minimal, and a shared culture or mission becomes obscure or absent altogether. For example, even though reCAPTCHA has helped humanity collectively produce millions of labels for machine learning (Von Ahn et al., 2008), absent the requirements for collaboration discussed above (e.g., a shared vision and mutual attention), the learning effect for the humans interacting with reCAPTCHA may drop to zero.
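To make the stigmergic mechanism described above concrete, the following toy agent-based simulation sketches how agents can coordinate solely through signs left in a shared environment, in the spirit of the "This Wikipedia article needs improvement" example. This is an illustrative sketch only: the artifact pool, flagging rule, and all parameters are invented for this example and are not drawn from any system discussed in this chapter.

```python
# Toy agent-based model of stigmergic coordination (illustrative sketch only;
# all names and parameters are invented for this example).
# Agents never communicate directly: they coordinate by reading and leaving
# "needs improvement" flags on shared artifacts.
import random

random.seed(42)

NUM_ARTICLES = 30
STEPS = 200

# Each artifact has a quality score; a flag is a sign left in the environment.
quality = [random.uniform(0.0, 1.0) for _ in range(NUM_ARTICLES)]
flagged = [False] * NUM_ARTICLES

def step(agent_skill: float) -> None:
    """One agent acts: respond to existing flags, otherwise inspect randomly."""
    flagged_ids = [i for i, f in enumerate(flagged) if f]
    if flagged_ids:
        # The flag (stigmergic sign) directs effort toward low-quality artifacts.
        i = random.choice(flagged_ids)
        quality[i] = min(1.0, quality[i] + agent_skill)
        if quality[i] > 0.8:
            flagged[i] = False  # clearing the flag also transforms the environment
    else:
        # No signs present: inspect a random artifact and flag it if it is poor.
        i = random.randrange(NUM_ARTICLES)
        if quality[i] < 0.5:
            flagged[i] = True

mean_before = sum(quality) / NUM_ARTICLES
for _ in range(STEPS):
    step(agent_skill=random.uniform(0.05, 0.2))
mean_after = sum(quality) / NUM_ARTICLES

print(f"mean quality before: {mean_before:.2f}, after: {mean_after:.2f}")
```

Even in this minimal sketch, effort self-organizes around flagged artifacts and overall quality rises without any agent-to-agent messaging, which is the core claim of the stigmergic account; agent-based simulations of this kind are one way to experiment with designs for collaboration at various scales.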
4 The Future

This chapter has discussed collaborative learning at scale as a nascent challenge for the CSCL community. On the one hand, by systematically treating scale as a problem, an asset, or both, we can seek to develop socio-technical infrastructure for collaborative learning at scale. On the other hand, we can also seek to facilitate collaborative learning at scale in authentic online spaces and communities not specifically designed for learning, by forging partnerships with stakeholders of these communities and injecting or facilitating learning in situ. While the first direction leaves CSCL researchers more centrally in control of developing
infrastructure, the second has the potential to reach a broader audience and weave our ideals of collaborative learning directly into the fabric of increasingly connected societies. Future work on collaborative learning at scale requires fresh ways to conceptualize and design for collaboration and learning. The coevolution model (Cress, Feinkohl, et al., 2016) and the stigmergic collaboration model (Elliott, 2016) provide illuminating examples. As an interdisciplinary community, we can continue to look for such inspiration in research areas such as network science, complex systems, and informatics. For instance, our sister community of CSCW has done extensive work on multi-agent systems by promoting the status of artifacts so that "active entities (the agents) are in charge of pro-actively pursuing global system goals, while reactive entities (the artifacts) . . . provide the required functions and services" (Omicini, Ricci, & Viroli, 2008, p. 448). This idea is reflected in the massive-scale environments discussed above, such as Wikipedia and GitHub. Emerging work on collaborative learning at scale would benefit from careful consideration of the roles such agents can play in harnessing the benefits of scale while mitigating its challenges. By the same token, agent-based simulation, which is widely applied to model complex systems, might offer promising testbeds for experimenting with design ideas for collaboration at various scales. Guided by proper theoretical models, future designs for collaborative learning at scale may be able to focus on advancing technological supports, practices, and cultures that are unique to larger scales. In discussing both community-level engagement, from which culture emerges, and learning from small-group interactions around individual wiki pages, we have sketched out a large design space for supporting productive engagement.
In this design space, the goal is to enable ideas to aggregate and be distributed across large communities, in part through the development of powerful tools that support engagement with the firehose of knowledge. These can take the form of recommendation engines or sensemaking tools, either of which may offer flexible interfaces for easily filtering, searching, bookmarking, tagging, and reorganizing artifacts. A MOOC discussion forum could become an incredibly powerful knowledge base if, instead of each new post or response adding to a long thread, participants worked together to develop community knowledge resources, wielding tools flexible enough to fashion the growing structure of ideas into an accessible community resource. Since environments that support collaborative learning at scale tend to deploy complex algorithms and machine learning models, the community needs to interrogate the tension between automated or adaptive support and learner agency. Team formation in massive online courses, for example, can combine algorithm-based recommendation with learner self-organization (Ronaghi Khameneh, 2017; Wen et al., 2017). As we continue to develop increasingly advanced computer support for collaborative learning at scale, we need design methodologies that illuminate the values embedded in algorithmic systems and help us identify design tradeoffs to address value tensions (Chen & Zhu, 2019). Transparency, user control, and conversational interaction are central to these approaches. While the idea of algorithms to
recommend, filter, and connect ideas and artifacts for us is too useful to abandon, it is also vital to inspect the algorithms and models designed for large-scale collaboration in order to enhance their fairness and accountability. Collaborative learning at scale, as a nascent phenomenon, also introduces new methodological challenges. Computational approaches lend themselves to processing the large-volume, multimodal data generated by collaborative activities (see Chen & Teasley, 2021, in press; Schneider, Worsley, & Martinez-Maldonado, this volume; Wise, Knight, & Buckingham Shum, this volume). Research incorporating computational methods to examine learning processes needs to take into account the intended learning design, technological constraints, and community cultures. When the online environments under investigation are authentic and not fully controllable, or when cultures of collaboration are emphasized, ethnographic research can be especially valuable (e.g., Geiger, 2017). Because of its nascency, collaborative learning at scale could also benefit from a reinvention of multivocal analysis, which the CSCL community originally developed primarily for small-group interaction (Suthers, Lund, Rosé, Teplovs, & Law, 2013), as well as from deeper fusion of multimodal data to reveal higher-order patterns (Echeverria, Martinez-Maldonado, & Buckingham Shum, 2019).
References

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. London: Yale University Press.
Bereiter, C., & Scardamalia, M. (2014). Knowledge building and knowledge creation: One concept, two hills to climb. In S. C. Tan, H.-J. So, & J. Yeo (Eds.), Knowledge creation in education (pp. 35–52). Singapore: Springer-Verlag.
Borgman, C. L., Abelson, H., Dirks, L., Johnson, R., Koedinger, K. R., Linn, M. C., Lynch, C. A., Oblinger, D. G., Pea, R. D., Salen, K., Smith, M. S., & Szalay, A. (2018). Fostering learning in the networked world: The cyberlearning opportunity and challenge. Alexandria: National Science Foundation.
Brossard, D., Lewenstein, B., & Bonney, R. (2005). Scientific knowledge and attitude change: The impact of a citizen science project. International Journal of Science Education, 27(9), 1099–1121. https://doi.org/10.1080/09500690500069483
Chen, B., Scardamalia, M., & Bereiter, C. (2015). Advancing knowledge building discourse through judgments of promising ideas. International Journal of Computer-Supported Collaborative Learning, 10(4), 345–366. https://doi.org/10.1007/s11412-015-9225-z
Chen, B., & Teasley, S. (2021, in press). Collaboration analytics. In Handbook of learning analytics (2nd ed.). Society for Learning Analytics Research (SoLAR).
Chen, B., & Zhu, H. (2019). Towards value-sensitive learning analytics design. Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 343–352. https://doi.org/10.1145/3303772.3303798
Collins, A., Fischer, G., Barron, B., Liu, C.-C., & Spada, H. (2009). Long-tail learning: A unique opportunity for CSCL? In Proceedings of the 9th International Conference on Computer Supported Collaborative Learning (Vol. 2, pp. 22–24). Rhodes, Greece: International Society of the Learning Sciences. Retrieved from http://dl.acm.org/citation.cfm?id=1599503.1599512
Collins, A., & Halverson, R. (2009). Rethinking education in the age of technology: The digital revolution and schooling in America. New York, NY: Teachers College Press.
Crawford, B. A., Krajcik, J. S., & Marx, R. W. (1999). Elements of a community of learners in a middle school science classroom. Science Education, 83(6), 701–723.
Cress, U., Barron, B., Fischer, G., Halatchliyski, I., & Resnick, M. (2013). Mass collaboration—An emerging field for CSCL research. In N. Rummel, M. Kapur, N. Nathan, & S. Puntambekar (Eds.), CSCL Proceedings (Vol. 1, pp. 557–563). Madison, WI: ISLS.
Cress, U., Feinkohl, I., Jirschitzka, J., & Kimmerle, J. (2016). Mass collaboration as coevolution of cognitive and social systems. In U. Cress, J. Moskaliuk, & H. Jeong (Eds.), Mass collaboration and education (pp. 85–104). Cham: Springer. https://doi.org/10.1007/978-3-319-13536-6_5
Cress, U., Moskaliuk, J., & Jeong, H. (2016). Mass collaboration and education. New York: Springer. https://doi.org/10.1007/978-3-319-13536-6
Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and computational approaches (pp. 1–19). Oxford: Elsevier.
Dillenbourg, P., Järvelä, S., & Fischer, F. (2009). The evolution of research on computer-supported collaborative learning. In N. Balacheff, S. Ludvigsen, T. de Jong, A. Lazonder, & S. Barnes (Eds.), Technology-enhanced learning (pp. 3–19). Cham: Springer. Retrieved from https://link.springer.com/10.1007%2F978-1-4020-9827-7_1
Echeverria, V., Martinez-Maldonado, R., & Buckingham Shum, S. (2019). Towards collaboration translucence: Giving meaning to multimodal group data. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 39, 1–16. https://doi.org/10.1145/3290605.3300269
Eimler, S. C., Neubaum, G., Mannsfeld, M., & Krämer, N. C. (2016). Altogether now! Mass and small group collaboration in (open) online courses: A case study. In U. Cress, J. Moskaliuk, & H. Jeong (Eds.), Mass collaboration and education (pp. 285–304). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-13536-6_14
Elliott, M. (2016). Stigmergic collaboration: A framework for understanding and designing mass collaboration. In U. Cress, J. Moskaliuk, & H. Jeong (Eds.), Mass collaboration and education (pp. 65–84). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-13536-6_4
Ferguson, R., & Sharples, M. (2014). Innovative pedagogy at massive scale: Teaching and learning in MOOCs. In Open learning and teaching in educational communities (pp. 98–111). Cham: Springer. https://doi.org/10.1007/978-3-319-11200-8_8
Fischer, G. (2018). Massive open online courses (MOOCs) and rich landscapes of learning: A learning sciences perspective. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 368–379). London: Routledge/Taylor & Francis.
Geiger, R. S. (2017). Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society, 4(2), 2053951717730735. https://doi.org/10.1177/2053951717730735
Håklev, S., Faucon, L., Hadzilacos, T., & Dillenbourg, P. (2017). Orchestration graphs: Enabling rich social pedagogical scenarios in MOOCs. In Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale (pp. 261–264). New York: ACM. https://doi.org/10.1145/3051457.3054000
Håklev, S., & Slotta, J. D. (2017). A principled approach to the design of collaborative MOOC curricula. In Digital education: Out to the world and back to the campus (pp. 58–67). Cham: Springer. https://doi.org/10.1007/978-3-319-59044-8_7
Hickey, D. T., & Uttamchandani, S. L. (2017). Beyond hype, hyperbole, myths, and paradoxes: Scaling up participatory learning and assessment in a Big Open Online Course. In MOOCs and their afterlives: Experiments in scale and access in higher education (pp. 13–36). Chicago, IL: The University of Chicago Press.
Jeong, H., Cress, U., Moskaliuk, J., & Kimmerle, J. (2017). Joint interactions in large online knowledge communities: The A3C framework. International Journal of Computer-Supported Collaborative Learning, 12(2), 133–151. https://doi.org/10.1007/s11412-017-9256-8
Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38(5). https://doi.org/10.3102/0013189X09339057
Kimmerle, J., Moskaliuk, J., Oeberst, A., & Cress, U. (2015). Learning and collective knowledge construction with social media: A process-oriented perspective. Educational Psychologist, 50(2), 120–137. https://doi.org/10.1080/00461520.2015.1036273
Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708–721. https://doi.org/10.1016/j.learninstruc.2007.09.021
Koschmann, T. (2002). Dewey's contribution to the foundations of CSCL research. In Proceedings of the Conference on Computer Support for Collaborative Learning: Foundations for a CSCL Community (pp. 17–22). Boulder, CO: International Society of the Learning Sciences. Retrieved from http://dl.acm.org/citation.cfm?id=1658616.1658618
Kulkarni, C. E., Bernstein, M. S., & Klemmer, S. R. (2015). PeerStudio: Rapid peer feedback emphasizes revision and improves performance. In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (pp. 75–84). New York, NY: ACM. https://doi.org/10.1145/2724660.2724670
Laniado, D., Tasso, R., Volkovich, Y., & Kaltenbrunner, A. (2011). When the Wikipedians talk: Network and tree structure of Wikipedia discussion pages. In ICWSM (pp. 177–184). Menlo Park, CA: AAAI. Retrieved from http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/download/2764/3301
Ludvigsen, S., Lund, K., & Oshima, J. (this volume). A conceptual stance on CSCL history. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Maki, K., Yoder, M., Jo, Y., & Rosé, C. P. (2017). Roles and success in Wikipedia talk pages: Identifying latent patterns of behavior. In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP '17). Taipei, Taiwan: IJCNLP.
Marwick, A., Fontaine, C., & Boyd, D. (2017). "Nobody sees it, nobody gets mad": Social media, privacy, and personal responsibility among low-SES youth. Social Media + Society, 3(2), 1–14. https://doi.org/10.1177/2056305117710455
Messmann, G., Segers, M., & Dochy, F. (2018). Informal learning at work (1st ed.). London: Routledge. https://doi.org/10.4324/9781315441962
Movshovitz-Attias, D., Movshovitz-Attias, Y., Steenkiste, P., & Faloutsos, C. (2013). Analysis of the reputation system and user contributions on a question answering website: StackOverflow. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (pp. 886–893). New York, NY: ACM. https://doi.org/10.1145/2492517.2500242
Omicini, A., Ricci, A., & Viroli, M. (2008). Artifacts in the A&A meta-model for multi-agent systems. Autonomous Agents and Multi-Agent Systems, 17(3), 432–456. https://doi.org/10.1007/s10458-008-9053-x
Paavola, S., & Hakkarainen, K. (2005). The knowledge creation metaphor: An emergent epistemological approach to learning. Science & Education, 14(6), 535–557. https://doi.org/10.1007/s11191-004-5157-0
Qin, X., Cunningham, P., & Salter-Townshend, M. (2015). The influence of network structures of Wikipedia discussion pages on the efficiency of WikiProjects. Social Networks, 43, 1–15. https://doi.org/10.1016/j.socnet.2015.04.002
Rappaz, J., Catasta, M., West, R., & Aberer, K. (2018). Latent structure in collaboration: The case of Reddit r/place. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018) (pp. 261–269). New York: ICWSM.
Rathnayake, C., & Suthers, D. D. (2018). Twitter issue response hashtags as affordances for momentary connectedness. Social Media + Society, 4(3), 1–14. https://doi.org/10.1177/2056305118784780
Ronaghi Khameneh, F. (2017). Collaborative learning at scale [PhD thesis]. Stanford, CA: Stanford University.
Roque, R., Rusk, N., & Resnick, M. (2016). Supporting diverse and creative collaboration in the Scratch online community. In U. Cress, J. Moskaliuk, & H. Jeong (Eds.), Mass collaboration and education (pp. 241–256). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-13536-6_12
Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In Computer supported collaborative learning (pp. 69–97). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-85098-1_5
Rosé, C. P., & Ferschke, O. (2016). Technology support for discussion based learning: From computer supported collaborative learning to the future of massive open online courses. International Journal of Artificial Intelligence in Education, 26(2), 660–678. https://doi.org/10.1007/s40593-016-0107-y
Schneider, B., Worsley, M., & Martinez-Maldonado, R. (this volume). Gesture and gaze: Multimodal data in dyadic interactions. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Stahl, G., Koschmann, T. D., & Suthers, D. D. (2014). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (pp. 479–500). Cambridge: Cambridge University Press.
Susi, T., & Ziemke, T. (2001). Social cognition, artefacts, and stigmergy: A comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity. Cognitive Systems Research, 2(4), 273–290. https://doi.org/10.1016/S1389-0417(01)00053-5
Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1(3), 315–337. https://doi.org/10.1007/s11412-006-9660-y
Suthers, D. D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (Eds.). (2013). Productive multivocality in the analysis of group interactions. Boston, MA: Springer US. https://doi.org/10.1007/978-1-4614-8960-3
Thung, F., Bissyande, T. F., Lo, D., & Lingxiao, J. (2013). Network structure of social coding in GitHub. In 2013 17th European Conference on Software Maintenance and Reengineering (pp. 323–326). Genova: IEEE. https://doi.org/10.1109/CSMR.2013.41
Tkacz, N. (2010). Wikipedia and the politics of mass collaboration. PLATFORM: Journal of Media and Communication, 2(2), 40–53. Retrieved from http://www.academia.edu/download/2212517/PlatformVol2Issue2_Tkacz.pdf
Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895), 1465–1468. Retrieved from http://science.sciencemag.org/content/321/5895/1465.short
Wen, M., Maki, K., Dow, S. P., Herbsleb, J., & Rosé, C. P. (2017). Supporting virtual team formation through community-wide deliberation. In Proceedings of the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467. https://doi.org/10.1007/s11412-017-9267-5
Zhang, J., Scardamalia, M., Reeve, R., & Messina, R. (2009). Designs for collective cognitive responsibility in knowledge-building communities. Journal of the Learning Sciences, 18(1), 7–44. https://doi.org/10.1080/10508400802581676
Zhu, H., Yu, B., Halfaker, A., & Terveen, L. (2018). Value-sensitive algorithm design: Method, case study, and lessons. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 194. https://doi.org/10.1145/3274463
Further Readings

Cress, U., Feinkohl, I., Jirschitzka, J., & Kimmerle, J. (2016). Mass collaboration as coevolution of cognitive and social systems. In U. Cress, J. Moskaliuk, & H. Jeong (Eds.), Mass collaboration and education (pp. 85–104). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-13536-6_5. A model of mass collaboration that follows the self-organization paradigm to describe the coevolution of two closed systems: the cognitive systems of individuals and the social system of a mass.
Geiger, R. S. (2017). Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society, 4(2), 2053951717730735. https://doi.org/10.1177/2053951717730735. An ethnographic account of Wikipedia's highly automated social–technical ecosystem of human editors and various algorithmic agents. It illuminates how cultural and organizational practices are set up to facilitate massive-scale collaboration and why fairness, accountability, and transparency are critical considerations for highly automated infrastructures.
Håklev, S., & Slotta, J. D. (2017). A principled approach to the design of collaborative MOOC curricula. In Digital education: Out to the world and back to the campus (pp. 58–67). Cham: Springer. https://doi.org/10.1007/978-3-319-59044-8_7. A case study of scaling up a well-known CSCL design framework, Knowledge Community and Inquiry, to thousands of students. This chapter presents the design of a MOOC for teacher professional development that both takes advantage of the large number of learners through crowdsourcing and enables intensive knowledge work in small groups despite the large scale.
Rosé, C. P., & Ferschke, O. (2016). Technology support for discussion based learning: From computer supported collaborative learning to the future of massive open online courses. International Journal of Artificial Intelligence in Education, 26(2), 660–678. https://doi.org/10.1007/s40593-016-0107-y. This article presents a vision for technology-supported collaborative and discussion-based learning at scale supported by modern language technologies. The integration of text mining and conversational agents to enable micro-script support of productive discussion processes can enable approaches such as team-based learning, project-based learning, and collaborative reflection in MOOCs and other large-scale contexts.
Susi, T., & Ziemke, T. (2001). Social cognition, artefacts, and stigmergy: A comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity. Cognitive Systems Research, 2(4), 273–290. https://doi.org/10.1016/S1389-0417(01)00053-5. A theoretical analysis of the relation between the theory of stigmergy, which is used to explain social insect behavior, and three theories of humans' situated/social cognition: activity theory, situated action, and distributed cognition. The article makes a case for explaining collective behavior and emergent coordination in terms of stigmergy, particularly indirect communication through the use of artifacts.
Argumentation and Knowledge Construction Joachim Kimmerle, Frank Fischer, and Ulrike Cress
Abstract We examine the role of argumentation in knowledge construction during computer-supported collaborative learning (CSCL). We describe the history and development of argumentation research from early precursors to the examination of argumentation in everyday life. We also present the development of tools and methods that have been applied for the empirical investigation of argumentation in CSCL. In presenting the state of the art of research on argumentation and knowledge construction, we include studies of reflective interactions and the analysis of “uptake events” in conversation. We also analyze argumentative knowledge construction in online contexts and science education. We discuss the debate on the extent to which argumentation supports the development of domain-specific or domain-general knowledge. In concluding, we point to some potential future developments in research on argumentation and knowledge construction, such as the consideration of additional influencing factors like social context or emotions.

Keywords Argumentation · Knowledge construction · Computer-supported collaborative learning · Reflective interactions · Co-evolution model
J. Kimmerle (*) · U. Cress
Knowledge Construction Lab, Leibniz-Institut für Wissensmedien (Knowledge Media Research Center), Tübingen, Germany
Department of Psychology, Eberhard Karls University, Tübingen, Germany
e-mail: [email protected]; [email protected]

F. Fischer
Department of Psychology, Ludwig-Maximilians-University, Munich, Germany
Munich Center of the Learning Sciences, Munich, Germany
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_10
1 Definitions and Scope

In many situations where people come together to share and advance knowledge, they present arguments to each other and try to convince each other of their point of view. Argumentation and knowledge construction are therefore often closely connected. The main goal of this chapter is to highlight the role that argumentation plays in processes of collaborative learning and knowledge construction, not exclusively, but in particular in domains where people have conflicting or divergent opinions, views, or conceptions. In such situations, argumentation leads to reasoning about how conclusions could or should be drawn (see, e.g., Van Eemeren, Grootendorst, Johnson, Plantin, & Willard, 2013). In this respect, argumentation is the basis for exchanging, understanding, and elaborating upon different positions, for synthesizing them, and for collaboratively creating new knowledge as a product of the group as a whole. The process in which individual group members participate and introduce their personal positions in order to achieve a common understanding is referred to in this chapter as knowledge construction. Knowledge construction may take place as a collaborative process (in relatively small groups) or as a collective process (in large groups; Cress & Kimmerle, 2018; Weinberger & Fischer, 2006; Chen, Håklev, & Rosé, this volume). Besides research in computer-supported collaborative learning (CSCL), we also take into account the long-standing traditions in philosophy, law, political science, and other disciplines that deal with argumentation and offer reflections about the suitability of arguments in a universal way. Scholars and philosophers dealing with logic (e.g., Toulmin, 1958; Wittgenstein, 1961) and epistemology (Faulkner, 2006; Hardwig, 1985; Lehrer, 1987) have placed emphasis on processes of individual or general justification.
In psychology and (practical) rhetoric, there are influential traditions that examine the effects of argumentation in terms of persuasive communication (e.g., Stiff & Mongeau, 2016). Other researchers have suggested more normative approaches that consider the quality of argumentation in terms of ideal dialogue types (Keefer, Zeitz, & Resnick, 2000; Walton & Krabbe, 1995). Keefer et al. (2000), for instance, identified critical discussions, explanatory inquiries, eristic discussions, and consensus dialogues as four ideal dialogue types. These types differ in terms of the initial dialogue situation, their main goals, their means, and the objectives of the participants. All of these different traditions have implicitly and explicitly influenced research on argumentation in CSCL. Accordingly, they are taken up in this chapter at various points, but they do not take center stage, as the focus of this chapter is on the intersection of argumentation research and knowledge construction in CSCL. Argumentation has long been identified as a key process in collaborative learning. It is considered “a social and cultural resource” for successful collaborative learning (Rigotti & Morasso, 2009, p. 9). The relationship between argumentation and learning is twofold, however (Schwarz, 2009): It concerns the issue of “learning to argue” (i.e., acquiring argumentation skills; e.g., Chinn & Clark, 2013) as well as the concept of “arguing to learn” (i.e., acquiring specific knowledge through argumentation; Andriessen, Baker, & Suthers, 2003; Stegmann, Wecker, Weinberger, & Fischer, 2012). Neither of these approaches is independent of the other, of course. In both cases, learning is considered from a constructivist viewpoint: Content and argumentative structures are important components of communication and
emerge when people discuss controversial topics. Arguments may affect the addressees, who are meant to be convinced by arguments, but they can also in turn affect the initiators themselves (Resnick, Salmon, Zeitz, Wathen, & Holowchak, 1993). As a consequence, initiators may learn through argumentation, for example, by being forced to explicitly state their own assumptions, in terms of a self-explanation effect (Chi, De Leeuw, Chiu, & LaVancher, 1994). In the discourse of a larger group, ongoing argumentation processes may induce knowledge-related processes on a collective level as well. In focusing on the role of argumentation in the context of collective and collaborative knowledge construction, we will approach this topic from a sociocultural as well as from a socio-cognitive perspective. In order to demonstrate how knowledge may emerge and beliefs may develop, we not only take into account the cognitive processes of individuals who participate in argumentation activities but also the interplay of cognitive and social processes (Kimmerle, Moskaliuk, Oeberst, & Cress, 2015). We claim that in order to understand the development of new knowledge, CSCL needs to examine how several people collaborate and argue with each other.
2 History and Development

2.1 Early Precursors of Argumentation Theories for CSCL
Dialogical exchange among different people has always been considered a prominent source of knowledge transfer. Plato used the form of dialogue in his epistemologically significant works to portray his philosophical approach by making Socrates appear as a dialogue partner (cf. Schwarz & Asterhan, 2010). Over the centuries, many other influential philosophers have also stressed the importance of dialogical approaches for making progress in knowledge acquisition. This aspect is most prominently expressed in Hegel’s dialectic, which in turn was taken up by other scholars, such as Marx, for example. As early as in the 1930s, building on earlier work but relating it to psychological processes, Vygotsky (1980) emphasized the role of social interaction—and the internalization of social interactions—as a foundation for cognitive development of individuals. In particular, he highlighted the internalization of language in this context. Vygotsky assumed that speaking and thinking are closely related processes. Speaking is a process in which thinking takes on a social form. However, speaking is not a direct reflection of the structure of thinking: As thinking transforms into speaking, a thought is formed and modified (Vygotsky, 1987). Vygotsky’s understanding of speech developed in conjunction with the theories of contemporary authors such as Bakhtin (1981) who, from his perspective as a literary scholar, also considered verbal discourse to be a social phenomenon. However, while Bakhtin’s approach is rather dialogic (in the sense that meaning arises from the exchange between dialogue partners), Vygotsky’s educational theory is dialectic (in the sense that contradictions can and should be overcome by the exchange of different points of view and progress is made in this way; see Wegerif, 2008; Trausan-Matu, Wegerif, & Major, this volume).
One of the most influential scientific reflections on the importance of argumentation is Toulmin's (1958) monograph The uses of argument. Toulmin examined the question of how people can succeed in presenting their opinions and claims in such a way that these are reasonably justified in argumentative terms. He argued for a procedural concept of argument validity, in which particular constant elements are recognizable in how argumentation unfolds and variable elements in how argumentation is judged. Toulmin explicitly aimed at considering both categories of elements. Whereas philosophy up to this point had regarded formal logic as the ideal mode of thinking and had placed the inferential purpose of a theoretical argument into the foreground, Toulmin was mainly interested in practical arguments and their function in justification. A practical argument is one that initially comes up with a convenient claim and offers justification for this claim later on. In this approach, argumentation is thus the activity of selecting arguments and testing their applicability. Toulmin developed a scheme consisting of six components, where the first three are obligatory elements of any practical argument, while the last three are not necessary in every case: (1) A claim is a conclusion whose validity needs to be demonstrated and verified. (2) A ground can be a fact or any evidence that is supposed to be suitable as a basis for the claim. (3) A warrant is a statement that establishes a connection between the ground and the claim. (4) A backing is presented if the warrant is not sufficiently convincing to the audience; it provides evidence for believing the warrant. (5) A rebuttal is a statement that identifies potential restrictions that could apply to the original claim. (6) A qualifier is an expression that indicates the level of confidence regarding the claim; typically, qualifiers are terms such as “likely” or “in some cases.”
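Toulmin's scheme lends itself to a simple structured representation. The following sketch is our own illustration (the class, field, and method names are not from Toulmin); it encodes the six components, treating the first three as obligatory, and uses Toulmin's well-known Harry/Bermuda example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToulminArgument:
    """A practical argument in Toulmin's (1958) scheme.

    claim, ground, and warrant are obligatory; backing, rebuttal,
    and qualifier are optional elements.
    """
    claim: str                       # conclusion whose validity is at stake
    ground: str                      # fact or evidence supporting the claim
    warrant: str                     # statement connecting ground and claim
    backing: Optional[str] = None    # evidence for believing the warrant
    rebuttal: Optional[str] = None   # potential restrictions on the claim
    qualifier: Optional[str] = None  # confidence marker, e.g. "likely"

    def is_complete(self) -> bool:
        # A practical argument needs at least the three obligatory parts.
        return all([self.claim, self.ground, self.warrant])

arg = ToulminArgument(
    claim="Harry is a British subject",
    ground="Harry was born in Bermuda",
    warrant="People born in Bermuda are generally British subjects",
    qualifier="presumably",
)
print(arg.is_complete())  # True
```

A representation like this is what the argumentation tools discussed later in the chapter effectively ask learners to fill in, one slot at a time.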
2.2 The Uses of Arguments in Everyday Life and Their Development
In The skills of argument, Kuhn (1991) presented a psychological approach to argumentation in which she viewed thinking as a process of argumentation. Building on philosophical considerations, Kuhn provided an analysis of basic argumentation that relied on empirical data about people's capabilities and skills for reasoning about everyday issues. For this purpose, she set up a research setting in which she interviewed a sample of average people of various ages, including teenagers (ninth-graders) and people in their 20s, 40s, and 60s. The interviews examined the participants' reasoning about social issues (e.g., crime, unemployment). Participants had to outline their personal theories about the causes of these issues, and they were requested to substantiate the theory through supporting evidence. Moreover, they had to develop an opposing view and a rebuttal to this opposition. Finally, the participants were asked to suggest a remedy for the respective social issue.
The findings of Kuhn's study indicated that people were inclined to present theories with a single cause or with several corresponding causes (see also Nickerson, 1986) and that it was difficult for them to develop opposing theories. Most of the participants were unable to make accurate assessments of the strength of the evidence they had provided. They also had problems in distinguishing between theory and evidence, frequently presenting “pseudoevidence” (i.e., statements that only established the intuitive plausibility of a theory, but not its correctness). Moreover, Kuhn found that this was particularly the case for teens and older people, whose appraisal was also more biased toward their own point of view. In general, Kuhn's work (1991, 2001; Kuhn & Udell, 2003) indicated that the skills of argument tend to mature during adolescence and that college-educated people have better skills. Kuhn's work dealt with domain generality and specificity of cognitive skills, as well as with the development of these skills. This work offered a multitude of points of contact for numerous disciplines. Her approach also had a major impact on educators who aimed to teach thinking and argumentation skills because Kuhn's findings enable educators to specifically address the weaknesses of learners in their argumentation (see also Kuhn, 1993). In a similar vein, Anderson and colleagues (e.g., Anderson et al., 2001; Kim, Anderson, Nguyen-Jahiel, & Archodidou, 2007) also examined the development of reasoning skills in everyday life but added a collective and dynamic component.
They studied children (fourth-graders) in small-group discussions and found strong evidence for a snowball effect in collective reasoning: Arguments that had a definite purpose, a so-called stratagem (such as positioning one’s own argument in comparison to those of others, admitting doubt, explicating arguments, or providing supporting evidence for an argument), spread to other children and appeared more often in the further course of the discussion. This snowball effect was found in different classes and groups of varying abilities. In sum, the authors concluded that social propagation plays a major role in the development of thought and language.
2.3 The Development of Methods and Tools
The history and development of argumentation research came along with the development of different methodologies and tools in CSCL research and practice. Implicitly, argumentation and argumentation skills have long been key components in many CSCL approaches. For example, knowledge building (Scardamalia & Bereiter, 1994, this volume) implies that people participate in sampling ideas and mutually integrating their ideas, thereby taking each other's suggestions into account. Group members have to justify their positions, defend them, scrutinize other people's suggestions, and come to mutual conclusions. This procedure obviously reflects a process of collaborative argumentation. Accordingly, the platforms developed in the knowledge-building context (CSILE and Knowledge Forum; Scardamalia, 2004) ultimately also serve as tools for the purpose of collaborative argumentation.
Since the 1990s, there have also been explicit endeavors to support argumentation with educational CSCL tools. A prominent example of such a tool was the software system Belvedere (Suthers et al., 2001; Suthers & Weiner, 1995). Belvedere aimed to demonstrate rhetorical relationships within a discussion. This tool was programmed to support learners who engaged in critical debates about opposing scientific claims. Belvedere enabled learners to compose claims, theories, or hypotheses inside graphical representations during the course of the discussion. In that way, abstract relations could be made visual with concrete graphic representations, enabling learners to navigate better within a debate and discuss certain claims together. Later developments of argumentation tools included electronic scaffolding approaches for facilitating argumentation, such as the ARGUNAUT system that aims to support teachers in dealing with several groups of students who engage in argumentation (Schwarz & Asterhan, 2011). This system allows teachers to act as a moderator, to monitor the progress in group discussions, and cautiously intervene in this process to make sure that the participants engage in constructive discussion. Scaffolding tools have been integrated into the knowledge-building platforms that implicitly direct learners to engage more in argumentation processes. Examples are scaffolds like “my theory” or “this theory cannot explain” that are offered for displaying learners’ contributions (Scardamalia, 2004). More recent tools make use of visualization techniques for data from CSCL settings. The Knowledge Space Visualizer, for example, projects participants’ notes into a graphical user interface in order to show relationships between the notes as graphs (Teplovs, 2008). Similar tools can also be used for dialogue processes in less formalized environments. An example is Cohere, a tool that supports argumentation on the Internet (Buckingham Shum, 2008). 
Nowadays, more and more methods are being developed that are able to detect various elements of arguments automatically (e.g., Stab & Gurevych, 2014; Swanson, Ecker, & Walker, 2015). In particular, conversational agents can be used to support collaborative learning-related discussions through the Academically Productive Talk approach (e.g., Adamson, Dyke, Jang, & Rosé, 2014). This approach uses generic prompts to help learners formulate and improve their own lines of argumentation. Learners can also make use of these prompts for critically examining the arguments of the other people involved.
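As a toy illustration of what automatic detection of argument elements involves: real systems such as Stab and Gurevych's rely on trained classifiers over annotated corpora, but a first rule-based approximation can label sentences by surface cue phrases. The cue lists and sample post below are invented for illustration:

```python
# Toy cue-phrase tagger for argument components in discussion posts.
# Real argument-mining systems train statistical classifiers; the cue
# lists here are illustrative assumptions, not a published lexicon.
CLAIM_CUES = ("i think", "i believe", "in my opinion", "we should")
EVIDENCE_CUES = ("because", "for example", "studies show", "according to")
REBUTTAL_CUES = ("however", "on the other hand", "although")

def tag_sentence(sentence: str) -> str:
    """Assign a coarse argument-component label to one sentence."""
    s = sentence.lower()
    if any(cue in s for cue in REBUTTAL_CUES):
        return "rebuttal"
    if any(cue in s for cue in EVIDENCE_CUES):
        return "evidence"
    if any(cue in s for cue in CLAIM_CUES):
        return "claim"
    return "other"

post = [
    "I think homework should be limited.",
    "Studies show that excessive homework reduces sleep.",
    "However, some practice is clearly necessary.",
]
print([tag_sentence(s) for s in post])  # ['claim', 'evidence', 'rebuttal']
```

A tagger like this could, for instance, trigger a conversational agent's generic prompt when a claim appears without any accompanying evidence.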
3 State of the Art

The current research and practical application of argumentation and knowledge construction are characterized by a wide variety and heterogeneity of available approaches, tools, and methods. In the following paragraphs, we provide an overview of a selection of theories, frameworks, and models that we consider to be key approaches to research on argumentation, particularly in relation to collaborative and collective knowledge construction.
3.1 Reflective Interactions and the Rainbow Framework
Baker and Lund (1997) have built on the arguing-to-learn approach (see above) by claiming that engaging in the reflective interactions of collectively arguing about problems and potential solutions may support the development of knowledge. Baker, Andriessen, Lund, van Amelsvoort, and Quignard (2007) developed the Rainbow framework, which distinguishes among seven learner activities that differ in how strongly learners are engaged in interactions for educational purposes: (1) “Outside activity” comprises all interactions that are not aimed at achieving an educational task. (2) “Social relation” refers to interactions that aim to organize learners’ relationships with each other regarding the task. (3) “Interaction management” deals with organizing the interactive behavior itself; it includes both processes of communication and coordination. (4) “Task management” refers to planning and monitoring a task and its completion. (5) “Opinions” activities comprise stating beliefs with respect to the task, including the initiating and ending sequences of a discussion. (6) “Argumentation” refers to arguments and counterarguments that are directly related to a thesis itself. (7) Finally, the “broaden and deepen” category refers to interactions concerned with connecting arguments as well as with the meaning and the relations of arguments; this includes the elaboration of arguments and the provision of definitions and justifications. The authors argue that this Rainbow framework provides an analytical method that expands upon previous analysis approaches and helps researchers in analyzing computer-supported debates. In particular, it may facilitate the identification and description of learners’ knowledge-elaboration processes by focusing on aspects of broadening and deepening people’s understanding during the progress of a debate. 
Accordingly, the framework has a certain potential to identify clear connections between argumentation processes and actual knowledge progress in a group.
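For content analyses, the seven Rainbow categories can be treated as a coding scheme applied turn by turn to a transcript. The sketch below is our own illustration: the category constants follow Baker et al.'s (2007) labels, but the coded sample transcript and the summary statistics are invented:

```python
from collections import Counter
from enum import Enum

class Rainbow(Enum):
    """The seven learner-activity categories of the Rainbow framework
    (Baker et al., 2007), ordered roughly by task focus."""
    OUTSIDE_ACTIVITY = 1
    SOCIAL_RELATION = 2
    INTERACTION_MANAGEMENT = 3
    TASK_MANAGEMENT = 4
    OPINIONS = 5
    ARGUMENTATION = 6
    BROADEN_AND_DEEPEN = 7

# Hypothetical coded transcript: each turn assigned exactly one category.
coded_turns = [
    Rainbow.SOCIAL_RELATION, Rainbow.OPINIONS, Rainbow.ARGUMENTATION,
    Rainbow.ARGUMENTATION, Rainbow.BROADEN_AND_DEEPEN, Rainbow.TASK_MANAGEMENT,
]

profile = Counter(turn.name for turn in coded_turns)
# Share of turns in the knowledge-elaboration end of the scheme
# (argumentation plus broaden-and-deepen, i.e. categories 6 and 7):
elaboration = sum(t.value >= 6 for t in coded_turns) / len(coded_turns)
print(profile.most_common(1))  # [('ARGUMENTATION', 2)]
print(f"{elaboration:.2f}")    # 0.50
```

Profiles of this kind are one simple way to compare debates by how much of the interaction was devoted to broadening and deepening understanding.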
3.2 Analysis of Uptakes and the Polyphonic Model of Collaborative Learning
In their framework for the analysis of distributed interaction, Suthers, Dwyer, Medina, and Vatrapu (2010) also focus on the relation between interaction and learning and provide an approach for the analysis of the relevant processes. The authors argue that interaction that is distributed across space, time, and the people involved results in a great variety of data and analytic artifacts. An approach is thus required that is able to uncover and identify key aspects of interaction in a sequence of events. For Suthers et al. (2010), the most important element of interaction is a relationship that they call “uptake.” Uptakes occur when a group member refers to an earlier event as relevant for a current event or activity. For the analysis of uptake events, researchers need to consider the interactional relationships between information exchange and argumentation. The authors recommend the application of
graphs to create a visual representation of how activities depend on each other. Such “contingency graphs” may be used as tools for illustrating how interactions are distributed across various media. Suthers et al. (2010) distinguish among several types of contingencies. One of these contingencies is “media dependency” that occurs when an activity in a media object is dependent on a prior activity that generated the object in the first place. Another is “temporal proximity,” which is relevant for understanding dialogues in which a contribution is related to a previous contribution. Such contingencies, however, are not necessarily between adjacent contributions; relevant contributions can also spread in time as long as they are available to participants either cognitively or due to media support. Contingencies that build on “spatial organization” can support analysis in media where learners can influence the spatial location of objects. When people place objects close to each other in a two-dimensional computer display, for instance, one may assume that this implies contingencies in terms of some sort of relatedness. “Inscriptional similarities” can also be applied by participants to indicate relatedness. Contingencies may refer to coordination activities based on inscriptional similarities between objects, such as visual resemblance. In contrast, “semantic relatedness” may be assumed when the semantic content overlaps, demanding acknowledgment of meaning beyond a purely inscriptional similarity. This framework supports researchers in conducting multi-method investigations of distributed interaction and is widely applied in the field of CSCL. 
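A contingency graph can be represented minimally as a directed graph whose edges are labeled with one of the contingency types named above. The sketch below is our own illustration; the class, the event identifiers, and the sample data are invented:

```python
from collections import defaultdict

# The five contingency types distinguished by Suthers et al. (2010).
CONTINGENCY_TYPES = {
    "media_dependency", "temporal_proximity", "spatial_organization",
    "inscriptional_similarity", "semantic_relatedness",
}

class ContingencyGraph:
    """Events are nodes; a typed directed edge records that a later
    event was contingent on an earlier one."""

    def __init__(self):
        # edges[later_event] -> list of (earlier_event, contingency_type)
        self.edges = defaultdict(list)

    def add(self, later: str, earlier: str, ctype: str) -> None:
        if ctype not in CONTINGENCY_TYPES:
            raise ValueError(f"unknown contingency type: {ctype}")
        self.edges[later].append((earlier, ctype))

    def antecedents(self, event: str):
        """All earlier events a given event was contingent on."""
        return [e for e, _ in self.edges[event]]

g = ContingencyGraph()
g.add("chat_msg_2", "chat_msg_1", "temporal_proximity")
g.add("whiteboard_label", "whiteboard_sketch", "media_dependency")
g.add("chat_msg_2", "whiteboard_sketch", "semantic_relatedness")
print(g.antecedents("chat_msg_2"))  # ['chat_msg_1', 'whiteboard_sketch']
```

Note how a single event (here the second chat message) can be contingent on events in different media, which is exactly the cross-media distribution the framework is designed to expose.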
The polyphonic model of hybrid and collaborative learning (Trausan-Matu, 2009; Trausan-Matu, Stahl, & Sarmiento, 2007) uses polyphony in music as an analogy for how people stimulate each other in collaborative conversations for problem-solving and other learning accomplishments: The “voices” of the different people involved can together create a “melody,” in this case, a solution to a problem, which then facilitates the collaborative inclusion of differing opinions. This analogy also allows for recognizing “dissonances,” such as unconvincing solutions. This model is based on Bakhtin's (1981) approach and assumes various types of “interanimation.” Trausan-Matu also presents software tools that aid in the analysis of computer-supported discussions by visualizing the conversation threads and the impacts of a statement on subsequent contributions. These tools serve an educational purpose in that they assist educators and students in assessing and improving the processes of learning and knowledge construction.
3.3 Argumentative Knowledge Construction and Learning to Argue Online
Argumentation not only provides an opportunity to learn about argumentation itself but is also instrumental for acquiring knowledge in certain domains, such as knowledge about genetics in biology (Weinberger, Stegmann, & Fischer, 2010). The basic idea is that learners engage in discussions around controversial or open
questions. They develop and criticize arguments. To be able to do so, learners need to retrieve, apply, evaluate, and possibly refine their knowledge—all of these processes are known to be causally related to learning (see e.g., Brown, Collins, & Duguid, 1989). Tools that implemented different argumentation schemas have been successfully used as structures to guide argumentative knowledge construction in online discussions. For example, a text field was prestructured with Toulmin’s argument schema including data, claim, warrant, and qualifier (see above); or Leitão’s (2000) dialectic model was used to include argument, counterargument, and integration as a preset sequence of message subjects (i.e., pressing reply to an argument would generate an empty message with the term “counterargument” in the subject line; Weinberger et al., 2010). Interestingly, the results of empirical studies showed that the effects of such tools on domain knowledge were relatively modest in comparison to their large effects on knowledge and skills related to argumentation (Wecker & Fischer, 2014). Further studies using this model showed that a learner’s own arguments were much more predictive of the learning gain than the learning partner’s arguments (Vogel, Wecker, Kollar, & Fischer, 2017).
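The reply micro-script based on Leitão's model can be sketched as a preset sequence that determines the subject line of each new, empty reply message. This is our own minimal illustration of the mechanism described above (a reply to an "argument" opens a blank message labeled "counterargument"), not the actual study software:

```python
# Sketch of a dialectic reply script after Leitão (2000): replying to a
# message pre-fills the new, empty message's subject with the next step
# in the fixed sequence argument -> counterargument -> integration.
# Function and field names are our own invention.
SEQUENCE = ["argument", "counterargument", "integration"]

def next_subject(current_subject: str) -> str:
    """Subject line pre-set for a reply to a message of the given type."""
    i = SEQUENCE.index(current_subject)
    # The sequence ends at "integration"; further replies stay there.
    return SEQUENCE[min(i + 1, len(SEQUENCE) - 1)]

def make_reply(parent: dict) -> dict:
    # A reply starts as an empty message with only its subject pre-filled.
    return {"subject": next_subject(parent["subject"]), "body": ""}

root = {"subject": "argument", "body": "Genes alone determine eye color."}
reply = make_reply(root)
print(reply["subject"])              # counterargument
print(make_reply(reply)["subject"])  # integration
```

The design choice here mirrors the empirical point in the text: the script structures the sequence of contributions without touching their content, which may explain why such tools affected argumentation skills more than domain knowledge.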
3.4 Argumentation in the Science Classroom and the Domain Specificity of Knowledge
Argumentation and the facilitation of argumentation abilities are particularly relevant in the context of science education (e.g., Osborne, 2010). The starting point for these considerations is the understanding of how science works and how scientific progress tends to occur (Losee, 2004). Professional scientists argue about science, that is, they present their ideas, theories, methods, and so on, which in turn can be taken up, criticized, and developed further by other scientists. In this way, ideas become better over time, and the related knowledge will in turn become more reliable. This collaborative discourse is a standard procedure in science, and thus, as Osborne (2010) claims, this type of argumentation should be standard in science education (see also Kind & Osborne, 2017). Osborne also refers to Toulmin’s approach, describing the beneficial effect of argumentation on scientific progress through the use of rebuttals and counterarguments. As a consequence of this process, students learn to understand that science does not simply consist of irrefutable facts. Learners can revise their old ideas or previous models of understanding when they encounter new ideas and more elaborate considerations. This supports the development of new understandings. In other words, collaborative dialogue may facilitate cognitive processes. What is heavily debated among scholars who deal with scientific reasoning and argumentation is the extent to which argumentation leads to the development of domain-specific or domain-general argumentation skills in learners (Fischer, Chinn, Engelmann, & Osborne, 2018). Educational psychologists, in particular, have claimed that engaging in argumentation may result in benefits across domains
(Hetmanek, Engelmann, Opitz, & Fischer, 2018). This position assumes that argumentation is a domain-general skill. Representatives of other disciplines are rather skeptical of this assumption and point out that domain generality is an illusion (Fischer et al., 2018). Their position is that good arguments result from deep knowledge in a domain and that the skill to develop well-formed arguments is implied by that deep domain knowledge. From this position, teaching general argumentation skills is unnecessary because such skills will not be useful. Recent discussions and investigations cast severe doubts on assumptions of generality (e.g., Samarapungavan, 2018). There have been suggestions, however, that some aspects of argumentation are valid across domains (e.g., Chinn & Duncan, 2018) and that cross-domain validity would be a worthwhile topic of investigation. Interestingly, some time ago there were ambitious approaches in CSCL that addressed this question by aiming to identify the characteristics of argumentation-support tools across diverse domains (Scheuer, Loll, Pinkwart, & McLaren, 2010).
3.5 The Coevolution Model and the Integration of Opposing Arguments
The coevolution model of individual learning and collective knowledge construction stresses the role of mutual irritation (in terms of conflict, discrepancies, or disequilibrium) between an individual's cognitive system and a social system in which argumentation takes place (Cress & Kimmerle, 2008; Kimmerle et al., 2015; see also Piaget, 1977). This model identifies relevant processes of learning and knowledge construction that are facilitated by CSCL and social media platforms. These key processes include self-organization, externalization, internalization, and the interaction between externalization and internalization. Complex systems are self-organized: Cognitive systems do not passively process incoming information from their environment; rather, the processing of information implies active construction based on individual prior conceptions (Piaget, 1977). As a result, cognitive structures evolve dynamically. Social systems are likewise self-organized. They also are not simply exposed to information. Communication in social systems refers back to existing prior communication. This earlier communication established interpersonal relationships, prospects, and norms that provide a basis for and lead to further activities in a social system. For people to learn from communication, other people must have externalized their knowledge. Again, this externalization is not a simple transmission of information from cognitive systems to a social system. Instead, communication needs to refer to the social system's mode of operation to adhere to the rules and goals of a community. This means that information must be presented, prepared, and elaborated in such a way that it will be accepted in a community and can be picked up and adequately further processed by other group members. Finally, the model assumes that learning and knowledge construction require an overlap of an individual's
knowledge and the information being discussed in a social system. At the same time, however, a certain level of incongruity between individual knowledge and the information in a shared digital artifact is needed to stimulate people to engage in exchanging and constructing knowledge. When such disturbances occur in the form of socio-cognitive conflicts (Mugny & Doise, 1978), they may serve as triggers for knowledge dynamics. Fruitful resolutions of such socio-cognitive conflicts are referred to as productive friction in CSCL research (Holtz, Kimmerle, & Cress, 2018).
4 The Future

As can be seen from the theories and frameworks presented above, argumentation, knowledge construction, and their relationship are multifaceted topics. Research on these topics in CSCL started with a focus on testing instructional settings and tools that had been designed to support learning. With the newer approaches of “arguing to learn” and “learning to argue,” educational purpose has been in the foreground, and the frameworks and tools have become part of instructional theories and settings. With the social web as it exists today, much more unstructured, spontaneous forms of communication and (mass-)collaboration have become relevant, and it is obvious that people engage in the construction of knowledge not only in formal learning settings but also in such informal communication situations, as well as in hybrid settings in which formal and informal learning are mutually stimulating (Ito et al., 2013). Against this background, the insight that CSCL researchers and practitioners need to consider argumentation as a pivotal issue for the quality of collective knowledge construction takes on more significance. It is increasingly evident that it is not only the quality of arguments that influences knowledge construction but also other factors, such as the social context and norms, as well as the emotional connotations of a domain. Research on knowledge construction and argumentation has to be linked to research on motivated cognition, which describes why and how people select arguments and process them in a way that might be biased. With the rapid progress in computational semantics and the use of artificial intelligence, more and more tools will be developed that allow for mining the use of arguments and for visually representing the knowledge and argumentative trajectories of individuals and groups (Wise, Knight, & Buckingham Shum, this volume).
These tools must be combined in a more elaborate way with existing and new theories from CSCL and the Learning Sciences in order to be used meaningfully. In addition, empirical research will be needed to examine how such tools can implicitly influence argumentation and knowledge construction and how the use of certain technologies affects the dynamics in a group. Another avenue for future research has emerged from an interactionist approach to argumentation. Mercier and Sperber (2017) have proposed a new perspective on the very purpose and function of argumentation. People’s reasoning capability, they argue, is much less flawed than psychologists’ studies suggest. Their
J. Kimmerle et al.
main claim is that human reasoning is an implicit rather than consciously controlled cognitive function that yields plausible reasons for an action, enabling others to understand it rather than leading them to draw conclusions. It is also the case that people are good at producing their own arguments but bad at evaluating them; they do much better at evaluating the arguments of others. From this perspective, engaging in collaborative or collective argumentation trains the participants’ reasoning abilities. In addition, collaborative and collective argumentation is a more reliable epistemic process than separate individual reasoning. CSCL research is in a good position to evaluate some of the challenging hypotheses that can be derived from this model and to add an educational dimension to it. In this context, too, we propose a clear theoretical orientation in the corresponding empirical research work (Stahl & Hakkarainen, this volume). The CSCL community should by no means refer only to the classical pedagogical approaches that are firmly anchored in this field (see the History and Development section in this chapter) but can also benefit to a large extent from socio-psychological and sociological theories and methods of analysis (Cress & Kimmerle, 2018).
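The idea of mining arguments from discussion data can be made concrete with a toy example. The sketch below is purely illustrative and describes no particular system from the literature: it tags discussion turns as claims or premises using a few hypothetical surface cues, whereas actual argument-mining research (e.g., Stab & Gurevych, 2014; Swanson, Ecker, & Walker, 2015) trains statistical classifiers on annotated corpora.

```python
# Illustrative sketch only: a minimal rule-based tagger that labels discussion
# turns as "claim", "premise", or "other" using surface cues. The cue lists
# below are hypothetical examples, not drawn from any published system.

CLAIM_CUES = ("i think", "i believe", "we should", "in my opinion")
PREMISE_CUES = ("because", "since", "for example", "studies show")

def tag_turn(utterance: str) -> str:
    """Assign a coarse argument-component label to one discussion turn."""
    text = utterance.lower()
    if any(cue in text for cue in PREMISE_CUES):
        return "premise"
    if any(cue in text for cue in CLAIM_CUES):
        return "claim"
    return "other"

def tag_discussion(turns):
    """Tag every (speaker, utterance) pair in a transcript."""
    return [(speaker, utt, tag_turn(utt)) for speaker, utt in turns]

if __name__ == "__main__":
    transcript = [
        ("A", "I think wikis support knowledge construction."),
        ("B", "Why?"),
        ("A", "Because readers must integrate conflicting edits."),
    ]
    for speaker, utterance, label in tag_discussion(transcript):
        print(f"{speaker}: {label}")
```

Even this crude heuristic illustrates the pipeline shape such tools share: segment the discourse into units, label each unit, and then aggregate the labels into a representation of the group’s argumentative trajectory.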
References

Adamson, D., Dyke, G., Jang, H., & Rosé, C. P. (2014). Towards an agile approach to adapting dynamic collaboration support to student needs. International Journal of Artificial Intelligence in Education, 24(1), 92–124.
Anderson, R. C., Nguyen-Jahiel, K., McNurlen, B., Archodidou, A., Kim, S.-Y., Reznitskaya, A., & Gilbert, L. (2001). The snowball phenomenon: Spread of ways of talking and ways of thinking across groups of children. Cognition and Instruction, 19, 1–46.
Andriessen, J., Baker, M., & Suthers, D. (Eds.). (2003). Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments. Dordrecht, The Netherlands: Springer.
Baker, M., Andriessen, J., Lund, K., van Amelsvoort, M., & Quignard, M. (2007). Rainbow: A framework for analysing computer-mediated pedagogical debates. International Journal of Computer-Supported Collaborative Learning, 2, 315–357.
Baker, M., & Lund, K. (1997). Promoting reflective interactions in a CSCL environment. Journal of Computer Assisted Learning, 13, 175–193.
Bakhtin, M. (1981). The dialogic imagination. Austin, TX: University of Texas Press.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32–42.
Buckingham Shum, S. (2008). Cohere: Towards Web 2.0 argumentation. In Proceedings of the 2nd International Conference on Computational Models of Argument (pp. 97–108). Toulouse, France: IOS Press.
Chen, B., Håklev, S., & Rosé, C. P. (this volume). Collaborative learning at scale. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Chi, M. T., De Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
Chinn, C., & Clark, D. B. (2013). Learning through collaborative argumentation. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. M. O’Donnell (Eds.), International handbook of collaborative learning (pp. 314–332). New York: Routledge.
Chinn, C. A., & Duncan, R. G. (2018). What is the value of general knowledge of scientific reasoning? In F. Fischer, C. A. Chinn, K. Engelmann, & J. Osborne (Eds.), Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge (pp. 87–111). New York, NY: Routledge.
Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 3, 105–122.
Cress, U., & Kimmerle, J. (2018). Collective knowledge construction. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 137–146). New York, NY: Routledge.
Faulkner, P. (2006). Understanding knowledge transmission. Ratio, 19, 156–175.
Fischer, F., Chinn, C. A., Engelmann, K., & Osborne, J. (Eds.). (2018). Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge. New York, NY: Routledge.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82, 335–349.
Hetmanek, A., Engelmann, K., Opitz, A., & Fischer, F. (2018). Beyond intelligence and domain knowledge: Scientific reasoning and argumentation as a set of cross-domain skills. In F. Fischer, C. A. Chinn, K. Engelmann, & J. Osborne (Eds.), Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge (pp. 203–226). New York, NY: Routledge.
Holtz, P., Kimmerle, J., & Cress, U. (2018). Using big data techniques for measuring productive friction in mass collaboration online environments. International Journal of Computer-Supported Collaborative Learning, 13, 439–456.
Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., et al. (2013). Connected learning: An agenda for research and design. Irvine, CA: Digital Media and Learning Research Hub.
Keefer, M. W., Zeitz, C. M., & Resnick, L. B. (2000). Judging the quality of peer-led student dialogues. Cognition and Instruction, 18, 53–81.
Kim, I.-E., Anderson, R. C., Nguyen-Jahiel, K., & Archodidou, A. (2007). Discourse patterns during children’s collaborative online discussions. Journal of the Learning Sciences, 16, 333–370.
Kimmerle, J., Moskaliuk, J., Oeberst, A., & Cress, U. (2015). Learning and collective knowledge construction with social media: A process-oriented perspective. Educational Psychologist, 50, 120–137.
Kind, P., & Osborne, J. (2017). Styles of scientific reasoning: A cultural rationale for science education? Science Education, 101, 8–31.
Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press.
Kuhn, D. (1993). Science argument: Implications for teaching and learning scientific thinking. Science Education, 77, 319–337.
Kuhn, D. (2001). How do people know? Psychological Science, 12, 1–8.
Kuhn, D., & Udell, W. (2003). The development of argument skills. Child Development, 74(5), 1245–1260.
Lehrer, K. (1987). Personal and social knowledge. Synthese, 73, 87–107.
Leitão, S. (2000). The potential of argument in knowledge building. Human Development, 43, 332–360.
Losee, J. (2004). Theories of scientific progress: An introduction. New York, NY: Routledge.
Mercier, H., & Sperber, D. (2017). The enigma of reason. Cambridge: Harvard University Press.
Mugny, G., & Doise, W. (1978). Socio-cognitive conflict and structure of individual and collective performances. European Journal of Social Psychology, 8, 181–192.
Nickerson, R. (1986). Reflections on reasoning. Hillsdale, NJ: Lawrence Erlbaum Associates.
Osborne, J. (2010). Arguing to learn in science: The role of collaborative, critical discourse. Science, 328(5977), 463–466.
Piaget, J. (1977). The development of thought: Equilibration of cognitive structures. New York: The Viking Press.
Resnick, L. B., Salmon, M., Zeitz, C. M., Wathen, S. H., & Holowchak, M. (1993). Reasoning in conversation. Cognition and Instruction, 11, 347–364.
Rigotti, E., & Morasso, S. G. (2009). Argumentation as an object of interest and as a social and cultural resource. In N. Muller Mirza & A.-N. Perret-Clermont (Eds.), Argumentation and education (pp. 9–66). Boston, MA: Springer.
Samarapungavan, A. (2018). Construing scientific evidence: The role of disciplinary knowledge in reasoning with and about evidence in scientific practice. In F. Fischer, C. A. Chinn, K. Engelmann, & J. Osborne (Eds.), Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge (pp. 66–86). New York, NY: Routledge.
Scardamalia, M. (2004). CSILE/Knowledge Forum®. In Education and technology: An encyclopedia (pp. 183–192). Santa Barbara: ABC-CLIO.
Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. The Journal of the Learning Sciences, 3(3), 265–283.
Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer-supported argumentation: A review of the state of the art. International Journal of Computer-Supported Collaborative Learning, 5(1), 43–102.
Schwarz, B. B. (2009). Argumentation and learning. In N. Muller Mirza & A.-N. Perret-Clermont (Eds.), Argumentation and education (pp. 91–126). Boston, MA: Springer.
Schwarz, B. B., & Asterhan, C. S. (2010). Argumentation and reasoning. In K. Littleton, C. Wood, & J. K. Starmann (Eds.), International handbook of psychology in education (pp. 137–176). Bingley, UK: Emerald Group.
Schwarz, B. B., & Asterhan, C. S. C. (2011). E-moderation of synchronous discussions in educational settings: A nascent practice. The Journal of the Learning Sciences, 20(3), 395–442.
Stab, C., & Gurevych, I. (2014). Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics (pp. 1501–1510).
Stahl, G., & Hakkarainen, K. (this volume). Theories of CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297–323.
Stiff, J. B., & Mongeau, P. A. (2016). Persuasive communication. New York: Guilford Publications.
Suthers, D., Connelly, J., Lesgold, A. M., Paolucci, M., Toth, E. E., Toth, J., & Weiner, A. (2001). Representational and advisory guidance for students learning scientific inquiry. In K. D. Forbus & P. J. Feltovich (Eds.), Smart machines in education: The coming revolution in educational technology (pp. 7–35). Cambridge, MA: MIT Press.
Suthers, D., & Weiner, A. (1995, October). Groupware for developing critical discussion skills. In The first international conference on computer support for collaborative learning (pp. 341–348). Mahwah: L. Erlbaum Associates Inc.
Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5, 5–42.
Swanson, R., Ecker, B., & Walker, M. (2015). Argument mining: Extracting arguments from online dialogue. In Proceedings of the SIGDIAL 2015 Conference (pp. 217–226). Stroudsburg: Association for Computational Linguistics.
Teplovs, C. (2008). The Knowledge Space Visualizer: A tool for visualizing online discourse. In Proceedings of the International Conference of the Learning Sciences (pp. 1–12).
Toulmin, S. (1958). The uses of argument. Cambridge: Cambridge University Press.
Trausan-Matu, S. (2009). The polyphonic model of hybrid and collaborative learning. In Handbook of research on hybrid learning models: Advanced tools, technologies, and applications (pp. 466–486). Hershey, NY: Information Science Reference.
Trausan-Matu, S., Stahl, G., & Sarmiento, J. (2007). Supporting polyphonic collaborative learning. E-service Journal, 6(1), 59–75.
Trausan-Matu, S., Wegerif, R., & Major, L. (this volume). Dialogism. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Van Eemeren, F. H., Grootendorst, R., Johnson, R. H., Plantin, C., & Willard, C. A. (2013). Fundamentals of argumentation theory: A handbook of historical backgrounds and contemporary developments. London: Routledge.
Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2017). Socio-cognitive scaffolding with computer-supported collaboration scripts: A meta-analysis. Educational Psychology Review, 29(3), 477–511.
Vygotsky, L. S. (1980). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.
Vygotsky, L. S. (1987). Thinking and speech. The Collected Works of L. S. Vygotsky, 1, 39–285.
Walton, D. N., & Krabbe, E. C. W. (1995). Commitment in dialogue: Basic concepts of interpersonal reasoning. Albany, NY: State University of New York Press.
Wecker, C., & Fischer, F. (2014). Where is the evidence? A meta-analysis on the role of argumentation for the acquisition of domain-specific knowledge in computer-supported collaborative learning. Computers & Education, 75, 218–228.
Wegerif, R. (2008). Dialogic or dialectic? The significance of ontological assumptions in research on educational dialogue. British Educational Research Journal, 34(3), 347–361.
Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46(1), 71–95.
Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26, 506–515.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wittgenstein, L. (1961). Tractatus logico-philosophicus (D. Pears & B. McGuinness, Trans.). London: Routledge & Kegan Paul.
Further Readings

Andriessen, J., Baker, M., & Suthers, D. (Eds.). (2003). Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments. Dordrecht, The Netherlands: Springer. This edited volume provides an overview of pedagogical applications and CSCL tools that can help learners use conflicting ideas, explanations, and arguments to achieve collaborative learning.

Kuhn, D. (1991). The skills of argument. Cambridge: Cambridge University Press. In this monograph, Kuhn describes the argumentative abilities of people of different ages with regard to everyday topics and shows how difficult it is for people to deal critically with their own attitudes.

Osborne, J. (2010). Arguing to learn in science: The role of collaborative, critical discourse. Science, 328(5977), 463–466. In this article, Osborne advocates giving learners the opportunity
to engage in scientific reasoning as practiced by professional scientists, in order to improve their scientific skills and conceptual understanding.

Schwarz, B. B. (2009). Argumentation and learning. In N. Muller Mirza & A.-N. Perret-Clermont (Eds.), Argumentation and education (pp. 91–126). Boston, MA: Springer.

Toulmin, S. (1958). The uses of argument. Cambridge: Cambridge University Press. Toulmin examined how people should present their opinions and claims in order to justify them in a convincing manner. In this monograph, Toulmin developed a scheme consisting of six components that characterize practical arguments.
Analysis of Group Practices

Richard Medina and Gerry Stahl
Abstract This chapter introduces an approach to CSCL research driven by the analysis of data displaying how groups adopt, adapt, and master new collaborative knowledge-building practices. The analysis of group practices can provide unique insight into the accomplishments of teams of students in CSCL settings. It conceptualizes a theory of learning with the group as the unit of analysis in terms of the acquisition of group practices. CSCL pedagogy can then be oriented toward orchestrating the adoption of targeted group practices, supported by CSCL technology.

Keywords Ethnomethodology · Group practice · Group cognition · Interaction · Orchestration · Representational practice · Segmentation · Sequential analysis · Social practice · Unit of analysis · Uptake
1 Definitions and Scope: Learning as Acquisition of Group Practices

1.1 Theory: Group Practices as Group-level Constructs
R. Medina (*)
Faculty Specialist in Human-Computer Interaction, Center for Language & Technology, University of Hawai‘i at Mānoa, Honolulu, HI, USA
e-mail: [email protected]
G. Stahl
Professor Emeritus of Computing and Informatics, Chatham, MA, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_11

This chapter provides a view of small-group practices as central to computer-supported collaborative learning and, indeed, foundational for all human learning. Rather than conceptualizing learning as the accumulation of explicit knowledge, such as the memorization and storage of facts stated in explicit propositions, one can view cognitive development in terms of tacit practices: knowing how to do things, to
behave, to respond, to contribute, to solve specific kinds of problems, to formulate explanations. In CSCL, this involves focusing on group practices as the constituents of collaborative learning, which can be acquired by groups of learners.

A “group practice” as conceived here is a group-level construct. That is, it is to be distinguished from, for instance, psychological constructs on the level of the individual mind, such as mental representations or thoughts. On the other hand, it is distinct from social practices as studied by social sciences oriented to institutions, communities, cultures, or societies. A theory of CSCL oriented to group practices needs to reconceptualize all the categories of thinking, knowing, and learning at the group level.

A focus on group practice in no way denies the existence and importance of individual thinking, knowledge, skills, habits, inclinations, emotions, etc. Nor does it dispute the power of social practices and cultural resources. Rather, practices and other cognitive or epistemological constructs at the individual, small-group, and community levels are seen as interacting with each other intimately. Although it is particularly difficult to find adequate detailed interaction data to analyze the mechanisms of inter-level influences, it is clear that individuals acquire their major cognitive tools like language, narration, or argumentation from their larger cultural context and that such acquisition takes place through small groups such as their immediate family, close friends, gangs, tribes, or teams. The following slogans are suggestive of this: “It takes a village to raise a child” and “All I know I learned in kindergarten.” These are settings in which young children acquire language, social behavior, and norms of interaction.
If you look closely, you see that this happens overwhelmingly in games, disputes, and modeling within dyads, triads, and other small groups within the extended family, village, or kindergarten, including between adults and children as well as among peers—largely through imitation and repetition.

Empirical analysis of group practices (see Additional Readings below) shows that a typical learning process happens as follows, with interactions among different levels of description:

• A small group adopts a practice that may have been introduced into the group by one of its members or been drawn from the larger culture.
• The small group may try out the practice and even discuss it explicitly to some extent.
• If the group adopts the practice, it becomes a resource for future behavior of that group and may then be used tacitly, without further discussion.
• Subsequently, members of the group may adopt the group practice as their own individual skill, having learned it collaboratively.

Small-group practices can also have effects in the opposite direction, influencing their communities. Over historical timespans, cultures have evolved new practices for constructing knowledge by adopting practices of small groups. These can then be spread to their citizens through acquisition by small groups and subsequent adoption by individuals. For instance, small groups of ancient Greeks developed the practices of geometry, which included formulating deductive proofs (Netz, 1999). The
practices of proving were then acquired by groups of Greek philosophers and eventually adopted throughout Western culture as practices of argumentation (Latour, 2008). In each generation, these practices were introduced to groups of students and ultimately adopted by individuals as rational thinking.
1.2 Pedagogy: Curriculum for Acquiring Group Practices
The recognition of the centrality of group practices to human learning can motivate an approach to pedagogy. Teaching can be driven by the goal of encouraging small groups of students to acquire group practices that are considered foundational to a given academic domain. For instance, school geometry involves practices of constructing and labeling figures, proving theorems, and identifying dependencies of geometric elements upon each other. Analysis of interaction among small groups working on geometry problems in a CSCL environment has identified the adoption of numerous relevant group practices (Çakir, Zemel, & Stahl, 2009; Medina, Suthers, & Vatrapu, 2009; Öner & Stahl, 2015; Stahl, 2016). The accumulation of these practices by the groups constituted their collaborative learning of the subject. Further analysis at other levels could reveal consequent changes in individual knowledge and in classroom instructional practices.
1.3 Design: Planning to Sequence Group Practices
Pedagogy associated with CSCL approaches to teaching a given subject can be designed to promote specific identified practices. It is always important to ensure that groups have acquired basic collaboration practices, such as taking turns, involving all group members, directing joint attention, and maintaining common ground. There are also practices involved in using the available technological affordances. In addition, groups must acquire important practices of the subject matter. Then, they need to employ discourse practices to maintain group agency and to reflect upon their collaborative learning. Because learning takes place through intertwined levels of individual, small-group, and community processes, it is important to design mutually supportive mechanisms for different levels and to orchestrate their application. For instance, teacher-centered presentations and individual reading of background information can motivate and orient small-group CSCL activities that follow. The group activities in turn can be reinforced through whole-class discussion that presents, compares, and reflects upon the groups’ knowledge artifacts. Effective orchestration of activities can coordinate and mutually reinforce related individual, group, and social practices.
1.4 Technology: CSCL Supports for New Group Practices
All these practices can be designed into a CSCL environment through sequencing tasks, providing resources, and carefully wording instructions, as well as design of domain-specific technology for construction and modeling. For instance, mechanisms that provide relevant textual information can introduce practices that are established in the broader culture, such as standard procedures. Shared spaces in a collaborative online environment can support joint attention and stimulate shared exploration leading to group practices. Persistent summaries of collaborative learning can enable the establishment of individual knowledge. Affordances like text highlighting, eye-tracking display, line-coloring options, and pointing tools can support joint attention and shared focus within digital group workspaces (Çakir et al., 2009; Schneider & Pea, 2013).
1.5 Methodology: Analysis of Adopted Group Practices
For educational researchers, an important question is how an observer can know what practices groups have acquired. If all the group interaction has taken place within a well-instrumented CSCL environment, then the necessary data may be readily available for analysis. This assumes that all interaction, including both discourse and visual presentation (drawing, pointing, construction sequence, highlighting, etc.), has been captured and preserved in the data corpus. Whereas mechanisms of individual and community learning may involve unobservable processes like mental modeling, individual motivation, or social dispersion, the acquisition and performance of group practices are necessarily public processes. The discourse moves that make up the acquiring of new group practices must be available to the members of the group to allow them to work together. Consequently, researchers may be able to see the same things as the group members display to each other. Of course, the researchers observe their captured data from a distanced analytic perspective, whereas the members respond to the fleeting original displays from within their active, engaged perspectives. The students may not be aware of their involvement in the adoption of group practices; this is usually a tacit process, which is not articulated in the minds or speech of the participants. However, researchers can analyze and document the process. This chapter suggests procedures for doing this kind of analysis of the adoption of group practices—particularly through methods of interaction analysis.
2 History and Development: From Individual- to Group-Level Constructs

2.1 Prehistoric Spirits as Explanations of Expertise
How learning takes place, how knowledge is developed, and how some individuals gain above-average expertise are questions that have always been raised. In olden times and ancient cultures, the answers often involved external, nonhuman sources such as spirits, ephemeral voices, or special gods. For instance, artists were inspired—that is, filled from outside with spiritual substances—perhaps by their muse or by divine guidance. Later, expertise was attributed to a mysterious quality of genius. In this view, it was considered an attribute of an individual person. However, the source of this attribute was not subject to explanation or investigation. Alternatively, knowledge was taken as a mythic attribute of a culture. The intelligence or sophistication of members of one culture was considered more advanced than that of members of other cultures, who were branded as barbaric or primitive.
2.2 Rational Minds as Thinkers
Modern views treat an individual’s behavior and knowledge as rooted in a rational mind. This approach parallels the development of science and is mirrored in the history of Western philosophy. Science dispensed with the world of spirits, eventually substituting hypotheses about mental representations, neural networks, and social institutions. Plato (340 BCE) argued against explanations involving Greek gods and situated truth in the efforts of the self-reflective individual. Aristotle (330 BCE) developed the first system of logical inference and pursued empirical investigation to discover knowledge. The conception of man as a rational mind reached its extreme expression in Descartes’ (1633) philosophy, which was expanded in Kant’s (1787) analysis of pure reason as the product of each individual human mind. Rationalist theories still dominate much of science and popular thought. Economics and psychology, for instance, often model people as rational decision-makers or as deductive reasoners. However, philosophy since Hegel (1807) paints a more dynamic picture in which human knowledge and reasoning develop over time through interaction with others in groups and cultures. Scientific theories relevant to CSCL have followed various philosophic trends of the past two centuries.
2.3 Individuals Constructing Understanding
Constructivist theories (e.g., Cobb, 1994; Packer & Goicoechea, 2000) argue that students necessarily construct new knowledge for themselves, using their existing conceptualizations and past knowledge. This is a Kantian view of explicit individual knowledge. Polanyi (1966) proposed an alternative view of knowledge as being primarily tacit. For instance, children learn to ride a bike through bodily feelings that are not spoken in words. The perspective of tacit knowledge can be generalized to apply to most learning. We learn without being explicitly aware of the processes of learning or articulating them in speech or thought (silent self-talk). Rather, we learn through mimesis (imitation) and routine (repetition). Tacit learning typically takes place in interaction with others in dyads, family units, or small groups. It is largely preserved in habitual behavior.
2.4 Social Practice
Theories of social practice (Bourdieu, 1972/1995; Giddens, 1984; Goodwin, 2013; Lave, 1988, 1991, 1996; Lave & Wenger, 1991; Reckwitz, 2002) can be considered a natural consequence of this move away from rationalist theories toward tacit conceptualizations. Social practices are not the result of explicit negotiation, agreement, or social contract. They arise tacitly through interaction and habituation. Theories of social interaction have been developed by social scientists (anthropologists, sociologists, linguists), so they generally locate the practices at the level of society, culture, or community. However, most of their empirical examples of social practices are situated in the interaction of small groups, such as apprentices with their master (Lave & Wenger, 1991). For CSCL, the theory can be reconceptualized and studied at the small-group unit of analysis. Perhaps the most detailed analyses of social practices have been carried out in the fields of ethnomethodology and conversation analysis. The following sections review major findings of this research. For additional treatment of qualitative analysis, including conversation analysis, see Uttamchandani and Lester (this volume).
2.5
Ethnomethodology and Sequential Organization
The sequential ordering of situated interaction is a central characteristic of joint human activity. An instance of human communication can be seen as a temporally unfolding series of communicative actions. How these actions relate from one moment to the next and from one participant to another within a setting has been
Analysis of Group Practices
205
the empirical focus of ethnomethodology (EM) and its applied field, conversation analysis (CA) (Garfinkel, 1967; Goodwin & Heritage, 1990). One of the systemic aspects of sequential organization of interaction explored in CA is the notion of turn taking (Sacks, Schegloff, & Jefferson, 1974). A turn is defined by an adjacency pair where one utterance by one participant is followed by a second utterance by another participant. For example, a greeting, such as “How are you?” invites a response, such as “Fine!” at the appropriate next speaking opportunity. This is an oversimplification, as offering no response may be taken as a (non)response, thus opening up a range of relevant subsequent sequential mechanisms, or turns, to be worked through. This greeting example illustrates an important consideration for our analysis of small-group practices: The sequential structure of joint human activity is fundamentally negotiated. Issues emerge in our joint activity (e.g., the relevance or irrelevance of the nonresponse) that shape other courses of action and their sequential structures. Studies in CA have identified and described these kinds of sequentially organized structures in a multitude of different settings. The notion of a turn-taking system offers an analytic framework for investigating how interactions might vary structurally within and across specific settings (e.g., casual telephone conversations vs. doctor–patient consultations). Turn taking in a variety of discursive settings reveals a number of contingencies, such as the number of parties involved in the interaction, the organization of topic openings and closings, and the allocation of turns (Schegloff, 1990; Schegloff & Sacks, 1973). Thus, the analysis of turn taking forms an empirical foundation for tracing discernible practices within small-group interaction.
2.6
Interaction in the Setting
The turn-taking apparatus advanced by CA practitioners has served as a productive analytic tool for clarifying the relationship between setting and interaction. Schegloff (1991) refers to how the external elements (anterior to language) of the situation are made relevant and consequential for the interaction, i.e., how participants’ immediate actions are contingent on resources in the setting for coordinating and ordering their interaction. These resources include the stream of talk preceding the next utterance as well as the semiotic and material elements that make up the setting and are referenced in the interaction. This notion of relevance requires that analyses seek the points in interaction in which participants organize and account for referents in the conduct of sequential action (turn-taking structure). Procedural consequentiality highlights those instances in which the setting itself (e.g., courtroom vs. living room) informs and shapes sequential structures. This view is particularly noteworthy for CSCL, as our concern is the impact that rich semiotic settings and technologies have on collaborative-learning processes.
2.7
R. Medina and G. Stahl
Multimodal Sequential Analysis and Representational Practice
A wide variety of studies have leveraged the analytic insight of EM and CA to draw attention to the configuration of the speaker’s body, the semiotic elements of the setting, and their coordination in the sequential organization of action (Goodwin, 1994, 2000a, 2018; Streeck, 1996). Goodwin’s studies consistently demonstrate how the semiotic, material, and embodied elements of the setting are relevant and consequential to the structure of interaction. Action is not limited to utterances but is distributed across a range of multimodal resources available to participants. Discussions of indexicals—how language references elements of the setting—in this regard are often central to explaining and describing the role of media artifacts (Zemel & Koschmann, 2013). Goodwin (2013) convincingly argues, however, that the semiotic environment is not limited to reference, but is itself manipulated in communicative action. One of Goodwin’s formidable contributions is to show how semiotic action can be included in structural explanations of human interaction (Goodwin, 2018). EM and CA traditions specify the focus of inquiry on the sequentiality of interaction. In so doing, they afford a starting point for empirical analysis of technology-mediated interaction that tightly couples user actions with the particulars of the setting. In CA generally, the setting is established through talk. Other similarly motivated lines of work, such as Goodwin’s, extend the analysis by including semiotic, material, and embodied elements of the setting. There has also been some analysis of how sequentiality and turn-taking unfold in CSCL settings such as text chat (Zemel & Çakir, 2009). The following section discusses the concept of uptake as a reformulation of sequentiality with particular relevance to CSCL.
2.8
Uptake as the Unit of Interaction
Making sense of the sequential structure of interaction and its deployment within CSCL environments presents a degree of complexity for analysis. Interaction settings may be asynchronous or synchronous, and participants may be copresent or geographically distributed. Further, CSCL actions may extend beyond the verbal modality: dragging an object across the screen or posting a graphic. Participants can draw upon semiotic, material, and embodied elements of the setting in organizing their interactions. A useful strategy to begin with might be to recognize how participant actions are evidenced to be relevant and consequential for activity. How and where are actions positioned in the sequential unfolding of the larger activity, and how do those actions relate to prior actions? The notion of uptake has been proposed as a useful concept for investigating precisely these questions. Suthers, Dwyer, Medina, and Vatrapu (2010) describe uptake as a relational construct that identifies a participant action as appropriating aspects of a prior or
ongoing setting as relevant for ongoing interaction. This definition is deliberately abstract, enabling it to be applied in a wide range of interactional analyses. It is also intended to support a diverse range of theoretical and methodological approaches. Uptake specifies a relation between a user action and some aspect of the environment. A potential gain of interpreting interaction as uptake is that uptake does not privilege one particular communicative modality (e.g., verbal adjacency pairs) or granularity over another. A warranted interpretation of uptake only specifies that one human action is appropriating aspects of a prior or ongoing element of the setting while also transforming that setting. The value of uptake for the analysis of technology-mediated interaction is its provision for a more flexible consideration of sociological and technological contingencies. This value also extends into analytic interpretations and reportable findings, as discussed below.
2.9
Group Cognition
Focusing on uptake or the adjacency pair as the unit of interaction locates research at the small-group level of the discourse or shared cognition that takes place between or across individuals. It includes contributions from two or more individuals, but cannot be reduced to a mental achievement of either individual or even a simple sum of their mental representations. The parts of the uptake or adjacency pair elicit and respond to each other, thus happening outside the heads of any one participant, but constituting a relationship among them. The relationship necessarily takes place in the public arena of the group, where it is shared by and visible to the participants (and potentially to researchers). The cognition that takes place here is an achievement of the group as such; it can be conceptualized as group cognition (Stahl, 2006). The analysis of group cognition in terms of interaction through adjacency pairs or intersubjective meaning making through uptake (Suthers, 2006) provides a methodological basis for studying the adoption of group practices as the origin of collaborative learning. It thereby offers a rigorous approach to the study of CSCL, including a method for providing feedback to the iterative design of CSCL interventions. We now consider a procedure to conduct such analysis.
3 State of the Art: Analysis of Group Practices at Multiple Sequential Orders

This section outlines a methodological approach to analysis of group practices. The approach builds on foundations of ethnomethodological inquiry by maintaining a primary concern with the sequential organization of interaction (Jordan & Henderson, 1995; Schegloff, 2007). The overall strategy of the approach attempts to provide a hierarchically organized account of observed practices by identifying different structures of sequential interaction as data points (or segments). When fully assembled, these structures provide an informative view of the hierarchical and sequential processes of small-group interaction in CSCL settings (Stahl, 2020, Investigations 16, 24, 25). Thus, our goal is to build a structural description of observed interaction that can be used as a resource—within the larger understanding of small-group interaction sketched above—for addressing various research questions and contributing to different theoretical and applied research agendas. The steps of the analysis presented here are extrapolated from the “Eight C’s” outlined by Fisher and Sanderson (1996). Their approach to exploratory sequential data analysis (ESDA) enumerates a succession of analytic activities for handling observational data. The intent behind the set of procedures is to progressively arrive at a structured understanding and representation (referred to as “smoothing”) of sequential data records. The smoothing process adapted for this description can be seen as working with multiple, mutually compositional units of analysis: (a) microanalysis (documentation of turn-by-turn relevancies), (b) structure (determination of interactional structure), and (c) macrostructure (formation of interactional structures such as group practices). Our procedure applies three of the eight ESDA smoothing operations as relevant for analysis of small-group practices. These operations are (1) segmentation into chunks, (2) descriptive comments, and (3) relational connections (see Fig. 1 and following sections). It is important to note that the procedure is iterative, moving back and forth from one smoothing operation to another as the analysis unfolds.

Fig. 1 Illustration of four segments each composed of subsequences at different granularities
3.1
Content Logging
An initial pass over the data is conducted to establish and mark off major sections of the data stream and possibly to synchronize time indices across multiple data sources (e.g., video and software-generated log files). Content logging is a preparatory step, crucial for gaining a sense of the scope of the activity captured. After the initial logging, analysis cycles through the three relevant ESDA operations.
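Time synchronization across data sources can often be reduced to shifting one stream by a fixed offset computed from a shared anchor event (e.g., the same whiteboard action visible in both the video and the log). The Python sketch below illustrates this; all event data, timestamps, and names are invented for illustration.

```python
from datetime import datetime

def to_common_timeline(events, source_offset):
    """Shift each (timestamp, description) pair by the source's offset
    so all data streams share one reference clock."""
    return [(t + source_offset, desc) for t, desc in events]

# Hypothetical anchor: the same event appears in both streams, letting us
# compute the video clock's offset relative to the log clock.
log_anchor = datetime(2021, 3, 1, 10, 0, 5)      # event time in the log
video_anchor = datetime(2021, 3, 1, 9, 59, 58)   # same event on the video clock
offset = log_anchor - video_anchor               # video runs 7 s behind

video_events = [
    (datetime(2021, 3, 1, 9, 59, 58), "student points at screen"),
    (datetime(2021, 3, 1, 10, 0, 30), "group discusses drawing"),
]
aligned = to_common_timeline(video_events, offset)
```

Once aligned, events from video and log files can be merged into a single chronological content log.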
3.2
Segmentation
Segmentation is the identification of boundaries between adjacent interaction events that together form a sequential structure. A data element at the lowest granularity is an elementary participant interaction (e.g., a conversational turn). Participant actions are sequentially organized within the interaction, creating boundary points for segmentation. These segments may range from short exchanges, such as a reply to a question, to longer structures concerned with, for example, specific topics or problems introduced by the participants. The purpose of this smoothing technique is not to reorder the continuous nature of interaction in its setting, but to identify its elements and structure in a tractable manner. Identified segments, on further analysis, may contain smaller chunks or segments. Figure 1 provides a schematic of this process. Each of the four labeled segments may contain sequential structures within it, identifiable at different granularities.

An important analytic feature that emerges as a result of segmentation is the transition between segments. A transition may be acute, such as the boundary between two separate days of interaction. The gaps between a, b, c, and d in Fig. 1 indicate this kind of boundary. Transitions may occur within particular episodes more subtly, such as a signaled change of topic or focus (e.g., the gaps between the inner shapes in Fig. 1). In general, transitions between segments may dramatically expose the organizational and coordinative work involved in interactional practices (Jordan & Henderson, 1995). In addition to the segmentation of observed interactions in the data set, it is possible to adjust analytic focus on aspects of the data that are of concern for a research study. For example, a segmentation analysis could be conducted on inscriptional activity involving CSCL text or drawing tools.
Focused segmentation, in this case, would result in subsequences of inscriptional activity occurring within longer segments of interaction.
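As a rough illustration of segmentation at a coarse granularity, one might mark a boundary wherever a long pause separates turns. The chat data and the 60-second threshold below are invented, and real boundary judgments are of course analytic rather than purely temporal; the gap heuristic is only a mechanical stand-in.

```python
def segment_by_gaps(turns, gap_threshold):
    """Split a chronological list of (seconds, speaker, text) turns into
    segments wherever the pause between turns exceeds the threshold."""
    segments, current = [], [turns[0]]
    for prev, turn in zip(turns, turns[1:]):
        if turn[0] - prev[0] > gap_threshold:
            segments.append(current)
            current = []
        current.append(turn)
    segments.append(current)
    return segments

# Hypothetical chat log (times in seconds from session start).
chat = [
    (0,   "A", "how do we start the construction?"),
    (6,   "B", "drag point C first"),
    (11,  "A", "ok it moves the circle"),
    (120, "B", "new problem: inscribe a square"),  # long pause: new episode
    (128, "A", "same trick as before?"),
]
episodes = segment_by_gaps(chat, gap_threshold=60)
```

Each resulting episode can itself be segmented further, yielding the nested structure schematized in Fig. 1.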
3.3
Segment Description
A segment is then analyzed in a turn-by-turn approach strongly influenced by techniques used in CA. A turn unit consists of an utterance or chat contribution, gesture, gaze, drawing, or manipulation of the interaction environment. At a fine granularity, we look at the relationship between actions to determine how the prior turn is taken up or handled by the next turn, which it may have elicited. This close inspection typically yields the identification of communicative mechanisms. Microanalysis of a segment is recorded as annotations that might draw on technical terms commonly utilized in CA studies or, alternatively, as emergent vocabularies for describing the interaction structures observed. The result of this phase is a mixture of common technical terms, labels, and terms deemed adequate by the analyst in documenting a segment.
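To suggest what a documented segment might look like as data, the sketch below attaches descriptive labels to turns. The keyword rules are a deliberately crude stand-in for the analyst's interpretive microanalysis, and the label vocabulary is invented for illustration.

```python
# Illustrative only: real CA annotation is interpretive, not keyword-driven.
LABEL_RULES = [
    ("question", lambda text: text.rstrip().endswith("?")),
    ("proposal", lambda text: text.lower().startswith(("let's", "we could"))),
    ("repair",   lambda text: "i mean" in text.lower()),
]

def annotate(turns):
    """Attach zero or more descriptive labels to each (speaker, text) turn."""
    out = []
    for speaker, text in turns:
        labels = [name for name, match in LABEL_RULES if match(text)]
        out.append({"speaker": speaker, "text": text, "labels": labels})
    return out

# Hypothetical segment from a math-chat session.
segment = [
    ("A", "We could drag the midpoint?"),
    ("B", "let's try it"),
    ("A", "no wait, I mean the other point"),
]
coded = annotate(segment)
```

In practice the label set would mix established CA terms with emergent vocabulary, and each label would be assigned by the analyst rather than by pattern matching.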
3.4
Relations Among Segments
The next step in the procedure identifies and describes connections among segments, some of which may extend beyond immediate interaction contexts or may form repeated behavioral patterns or group practices. Figure 1 illustrates how the scheme is utilized to determine connections: arrows between segments indicate relations that emphasize the contingency or relevance of one segment to another. Evidence for drawing connections between segments is based on the following baseline heuristics:
• Uptake of prior resources.
  – Using references to prior elements (“indexicality”).
  – Transporting prior elements into the current context (“temporal bridging”).
• Invocation of a prior (established) sequential structure (a conversational “social practice” or a local “group practice”).
• Anticipatory projection of a future (desired) element (“group agency”).
The microanalysis of segments conducted in steps 1 and 2 above provides an empirical frame in which to observe how the participants orient to and make relevant their talk as well as their action. A critical component for making these observations of sequential structure and its elements is the identification of referents that evidence indexical relations between and within turns. Referents that are under-determined in the immediate interaction but can be located in prior observed situated settings warrant the identification of a connection (e.g., the arrow between d and b in Fig. 1). These “missing” referents provide a demonstration of how prior situated activity is made relevant and consequential for immediate turn-taking sequences (Koschmann, LeBaron, Goodwin, & Feltovich, 2001; Koschmann, Sigley, Zemel, & Maher, 2018; Medina et al., 2009). Another heuristic that is applied to determine connections between segments is based on the identification of procedural consequentiality. Here, we explicitly examine how the contextual setting facilitates, conditions, and constrains immediate actions.
Technology-mediated settings are participant-enacted spaces configured through use, which support the redeployment of discernable actions (Drew & Heritage, 1992; Robinson, 2013). Identifying these actions and their relationship to the setting enables the analyst to form empirically grounded claims about observed group practices.
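The "missing referent" heuristic can be sketched computationally: if a segment mentions a referent it did not itself introduce, link it back to the prior segment that did. The segment ids and referent inventories below are hypothetical, echoing the a–d labels of Fig. 1; in real analysis these inventories would come from the microanalysis of steps 1 and 2.

```python
def uptake_links(segments):
    """Link segment j back to the earliest prior segment that introduced a
    referent which segment j mentions but does not itself introduce.
    `segments` maps segment ids to (introduced, mentioned) referent sets."""
    links = []
    order = list(segments)
    for j, sid in enumerate(order):
        introduced_j, mentioned_j = segments[sid]
        missing = mentioned_j - introduced_j
        for ref in sorted(missing):
            for prior in order[:j]:
                if ref in segments[prior][0]:
                    links.append((sid, prior, ref))
                    break
    return links

# Hypothetical referent inventories for three segments.
segs = {
    "a": ({"point_C", "circle"}, {"point_C", "circle"}),
    "b": ({"hexagon"}, {"hexagon", "circle"}),  # reuses "circle" from a
    "d": (set(), {"hexagon"}),                  # reuses "hexagon" from b
}
links = uptake_links(segs)
```

The resulting links form a directed graph over segments, a data-level counterpart of the arrows drawn between segments in Fig. 1.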
3.5
Identifying Adoption of Group Practices
The methods just reviewed have been applied to the identification of group practices in a number of case studies of mathematics problem-solving by groups in CSCL environments (Çakir, 2009; Koschmann, Stahl, & Zemel, 2009; Öner, 2016; Stahl,
2009; Zemel & Koschmann, 2013). Some of these studies have applied interaction analysis to “longer sequences” of adjacency pairs, as are required for mathematical problem solving (Stahl, 2020, Investigations 23, 24, 25). The analysis of the group interaction must demonstrate how participants make their references relevant and how they establish the procedural consequentiality of their practice within their shared situation. A group practice can be identified as a segment of interaction that a group periodically repeats in response to certain conditions. If a group is learning/acquiring a new practice, sequential analysis may be able to capture group interactions exploring and deciding upon the new behavior to adopt. For instance, a group of math students might develop a geometric construction procedure through considerable exploration and debate and then adopt it as a regular technique in similar future problems. In mathematics, when such practices are accepted into the broader culture, they may be called “theorems”; once proven explicitly, they can be applied without discussion (Husserl, 1936/1989). Knowledge grows through the acceptance and application of practices and their associated artifacts—by individuals, small groups, and communities. A longitudinal study of a small group learning online collaborative dynamic geometry identified the adoption of about 60 group practices, including practices of collaboration, problem-solving, geometric construction, technology usage, and explanatory discourse (Stahl, 2016). Other case studies have applied this approach to rich data sets containing multiple video and screen recordings of small-group interaction in a science classroom (Medina, 2013). These case studies point the way for a new vision of CSCL, centered on the analysis of group practices.
3.6
Computer-Supported Analysis of Group Practices
The above approach to analysis and identification of group practices can be supported by data-driven research agendas that require cataloging segments and annotations and involve linking segments to data in video, log files, or other primary sources (e.g., Dyke, Lund, & Girardot, 2009). For example, if segments are viewed as n-gram data points, opportunities arise for automated pattern detection, feature extraction, and other computational methods for processing and investigating sequential structures. To the extent that computer analysis of group practices can be accomplished in real time, it could contribute to learning analytics, potentially informing teachers about which groups adopted certain targeted practices.
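As a minimal sketch of the n-gram idea, assume segments have already been reduced to a sequence of analyst-assigned codes (the codes below are invented). Recurring n-grams then surface as candidate repeated practices for closer qualitative inspection.

```python
from collections import Counter

def repeated_ngrams(actions, n, min_count=2):
    """Count n-grams over a coded action sequence; n-grams recurring at
    least `min_count` times are candidate group practices."""
    grams = Counter(tuple(actions[i:i + n]) for i in range(len(actions) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Hypothetical codes produced by the segment-description step.
session = ["propose", "drag", "observe", "explain",
           "propose", "drag", "observe", "question",
           "propose", "drag", "observe", "explain"]
candidates = repeated_ngrams(session, n=3)
# ("propose", "drag", "observe") recurs three times: a candidate practice
```

Such automated pattern detection only proposes candidates; establishing that a recurring sequence is an adopted group practice still requires the sequential analysis described above.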
4 The Future: Fostering Group Practices

4.1
Theory: Acquiring Group Practices
CSCL can be reconceptualized as the support of groups of learners to acquire group practices that contribute to their collaborative learning. Collaborative learning itself can be conceived in terms of the adoption of specific group practices, which provide various aspects of the group’s cognitive abilities. Since individual students often adopt for themselves practices that they first acquired as part of a group-cognitive experience, and communities often evolve new social practices through the transmission of these group practices, collaborative learning and group practices can be considered to play a potentially central, foundational role in human learning at all levels. Contemporary theories of practice (such as Bourdieu, 1972/1995; Goodwin, 2000b; Hakkarainen, 2009; Lave & Wenger, 1991; Lipponen, Hakkarainen, & Paavola, 2004; Medina et al., 2009; Polanyi, 1966; Reckwitz, 2002; Schatzki, Knorr-Cetina, & Savigny, 2001; Suchman & Trigg, 1991) reject the traditional rationalist, cognitivist, and individualist views of learning, thinking, and knowing. They reconceptualize the basic processes and products of cognition as largely tacit, habitual practices. For CSCL, with its focus on collaborative meaning making within small groups in computer-mediated contexts, the practice-oriented conceptualizations of these social theories must be shifted to the group unit of analysis. Underlying effective collaborative learning is the maintenance of intersubjectivity, the ability of participants to understand and interact with each other. Intersubjectivity is based on our living in one world as the ultimate context of our understanding (Stahl, 2020, Investigation 18) and is maintained through the establishment of common ground through interactional mechanisms such as repair of misunderstandings (Clark & Brennan, 1991). Mutual understanding is supported by joint attention to the object of consideration (Tomasello, 2014).
Knowledge that contributes to collaborative learning or that results from it is necessarily shared knowledge. Intersubjectivity, joint attention, and shared knowledge are some of the many group-level constructs needed for a theory of CSCL oriented to group practice (Stahl, 2020, Investigations 19, 20, 21).
4.2
Pedagogy: Sequencing Group Practices
Analysis of group practices has been carried out largely with interaction data on virtual math teams engaged in mathematical problem-solving of middle-school combinatorics and dynamic geometry (Stahl, 2009, 2020). This is because interesting usable data were available from these instrumented online sessions. The same approach could be applied to other learning domains if adequate process data are
collected. For instance, a number of CSCL researchers have studied collaborative learning and concluded that group processes played a central role, but they did not have detailed, continuous interaction data to explore how these processes actually unfolded. They only had data demonstrating a change between the two time points they analyzed (e.g., Barron, 2003; Kapur & Kinzer, 2009; Schwartz, 1995), and they had to speculate about the intervening group-cognitive processes. A longitudinal study of dynamic geometry (Stahl, 2016) involved a sequence of eight hour-long sessions, each with a geometry figure to manipulate, discuss, and construct. The collaboration environment included a shared workspace with a geometry application that restricted manipulation of points, lines, and figures based on how they were constructed. There were sample figures to manipulate, textual instructions to guide the session, and a chat interface for group communication. The tasks for the sequence of sessions were carefully planned—based on previous mathematical experience and numerous trials—to encourage the accumulation of specific group practices. Group practices had to be established in roughly this order:
• Be able to use the computer and the collaboration environment.
• Be able to communicate in chat, repair mistakes and misunderstandings, and propose actions.
• Use the dynamic-geometry app; find menu options; create points, lines, and figures.
• Drag geometric objects to observe their behavior.
• Construct figures so they embody desired constraints or dependencies.
• Discuss why a geometric figure behaved the way it did (argumentation, explanation, proof).
Using the methods discussed in this chapter, researchers were able to identify when groups adopted practices such as these, what difficulties they encountered, and when they failed to establish these practices.
4.3
Design: Orchestrating Group Practices
CSCL is not a standalone educational approach. Collaborative learning is not always the best approach, and it is usually more effective when combined with complementary approaches in ways that take into account the interactions among the individual, small group, and community levels of description. However, collaborative learning can be uniquely effective in introducing important practices. In a school context, a teacher may orchestrate CSCL sessions to fit into a sequence of varied learning modes. Perhaps an introductory presentation by the teacher will motivate a new topic. Then individual reading might provide background information. At that point, collaborative exploration can lend a creative and interactive process of discovery, supported by discussion and sociability. Perhaps a homework assignment would open an opportunity for students to adopt recent group
practices as their own individual behaviors. The topic could conclude with a class discussion session and an individual writing of reflections. The written reflection could also be shared with group members, perhaps leading to a group position paper on the topic. Acquired group practices could thereby influence individual and classroom learning.
4.4
Technology: Supporting Group Practices
Computer support for multiple modalities can be used to support specific group practices. For instance, generic text chat or discussion forums can support argumentation, but there can also be designed affordances of special CSCL argumentation environments that foster negotiation or analysis of argumentation structure (Schwarz & Baker, 2017). Pointing and other graphical manipulation tools can represent references from one screen icon to another (Mühlpfordt & Wessner, 2009). Eye-tracking displays can enhance joint attention by indicating where each participant is looking (Schneider & Pea, 2013). A shared workspace can be important for providing a “joint problem space” (Teasley & Roschelle, 1993) and acting as a group memory that can even bridge discontinuities in group presence (Sarmiento, 2007; Sarmiento & Stahl, 2008). The workspace can be taken a step further with simulations or modeling, as with VMT’s dynamic-geometry app or Roschelle’s model of acceleration.
4.5
Methodology: Analyzing Group Practices
The analytic methodology presented in this chapter offers the CSCL researcher a way to discover and document the adoption of group practices as a dynamic view into collaborative learning. Importantly, this view can guide ongoing design iterations. The analysis of group practices opens up a contemporary approach to designing and assessing education. Group practices stand at the center of collaborative learning, which is foundational for human learning.
References
Aristotle. (330 BCE). Metaphysics (H. G. Apostle, Trans.). Bloomington, IN: Indiana University Press.
Barron, B. (2003). When smart groups fail. The Journal of the Learning Sciences, 12(3), 307–359.
Bourdieu, P. (1972/1995). Outline of a theory of practice (R. Nice, Trans.). Cambridge, UK: Cambridge University Press.
Çakir, M. P. (2009). Chapter 7: The organization of graphical, narrative and symbolic interactions. In G. Stahl (Ed.), Studying virtual math teams (pp. 99–140). New York, NY: Springer.
Çakir, M. P., Zemel, A., & Stahl, G. (2009). The joint organization of interaction within a multimodal CSCL medium. International Journal of Computer-Supported Collaborative Learning, 4(2), 115–149.
Clark, H., & Brennan, S. (1991). Grounding in communication. In L. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on socially-shared cognition (pp. 127–149). Washington, DC: APA.
Cobb, P. (1994). Learning mathematics: Constructivist and interactionist theories of mathematical development. Dordrecht, Netherlands: Kluwer.
Descartes, R. (1633). Discourse on method and meditations on first philosophy. New York, NY: Hackett.
Drew, P., & Heritage, J. (1992). Talk at work: Interaction in institutional settings. Cambridge, UK: Cambridge University Press.
Dyke, G., Lund, K., & Girardot, J. J. (2009). Tatiana: An environment to support the CSCL analysis process. In Proceedings of the 9th International Conference of CSCL (pp. 58–67).
Fisher, C., & Sanderson, P. (1996). Exploratory sequential data analysis: Exploring continuous observational data. Interactions, 3(2), 25–34.
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Giddens, A. (1984). The constitution of society: Outline of the theory of structuration. Berkeley, CA: University of California Press.
Goodwin, C. (1994). Professional vision. American Anthropologist, 96(3), 606–633.
Goodwin, C. (2000a). Action and embodiment within situated human interaction. Journal of Pragmatics, 32, 1489–1522.
Goodwin, C. (2000b). Practices of color classification. Mind, Culture, and Activity, 7(1&2), 19–36.
Goodwin, C. (2013). The co-operative, transformative organization of human action and knowledge. Journal of Pragmatics, 46(1), 8–23.
Goodwin, C. (2018). Co-operative action. Cambridge, UK: Cambridge University Press.
Goodwin, C., & Heritage, J. (1990). Conversation analysis. Annual Review of Anthropology, 19, 283–307.
Hakkarainen, K. (2009). A knowledge-practice perspective on technology-mediated learning. International Journal of Computer-Supported Collaborative Learning, 4(2), 213–231.
Hegel, G. W. F. (1807). Phenomenology of spirit (J. B. Baillie, Trans.). New York, NY: Harper & Row.
Husserl, E. (1936/1989). The origin of geometry (D. Carr, Trans.). In J. Derrida (Ed.), Edmund Husserl’s origin of geometry: An introduction (pp. 157–180). Lincoln, NE: University of Nebraska Press.
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103.
Kant, I. (1787). Critique of pure reason. Cambridge, UK: Cambridge University Press.
Kapur, M., & Kinzer, C. K. (2009). Productive failure in CSCL groups. International Journal of Computer-Supported Collaborative Learning, 4(1), 21–46.
Koschmann, T., LeBaron, C., Goodwin, C., & Feltovich, P. (2001). Dissecting common ground: Examining an instance of reference repair. In J. D. Moore & K. Stenning (Eds.), Proceedings of the twenty-third annual conference of the cognitive science society (pp. 516–521). Mahwah, NJ: Lawrence Erlbaum Associates.
Koschmann, T., Sigley, R., Zemel, A., & Maher, C. A. (2018). How the “machinery” of sense production changes over time. In J. Wagner, E. González-Martínez, & S. Pekarek Doehler (Eds.), Longitudinal studies on the organization of social interaction (pp. 173–191). New York, NY: Springer.
Koschmann, T., Stahl, G., & Zemel, A. (2009). “You can divide the thing into two parts”: Analyzing referential, mathematical and technological practice in the VMT environment. In Proceedings of the international conference on Computer Support for Collaborative
Learning (CSCL 2009). Rhodes, Greece: CSCL. Web: http://GerryStahl.net/pub/cscl2009tim. pdf. Latour, B. (2008). The Netz-works of Greek deductions. Social Studies of Science, 38(3), 441–459. Lave, J. (1988). Cognition in practice: Mind, mathematics and culture in everyday life. Cambridge, UK: Cambridge University Press. Lave, J. (1991). Situating learning in communities of practice. In L. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on socially shared cognition (pp. 63–83). Washington, DC: APA. Lave, J. (1996). Teaching, as learning, in practice. Mind, Culture, and Activity, 3(3), 149–164. Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press. Lipponen, L., Hakkarainen, K., & Paavola, S. (2004). Practices and orientations of CSCL. In J.-W. Strijbos, P. Kirschner, & R. Martens (Eds.), What we know about CSCL: And implementing it in higher education (pp. 31–50). Dordrecht, Netherlands: Kluwer Academic Publishers. Medina, R. (2013). Cascading inscriptions and practices: Diagramming and experimentation in the group scribbles classroom. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 291–309). New York, NY: Springer. Medina, R., Suthers, D. D., & Vatrapu, R. (2009). Chapter 10: Representational practices in VMT. In G. Stahl (Ed.), Studying virtual math teams (pp. 185–205). New York, NY: Springer. Mühlpfordt, M., & Wessner, M. (2009). Chapter 15: The integration of dual-interaction spaces. In G. Stahl (Ed.), Studying virtual math teams (pp. 281–293). New York, NY: Springer. Netz, R. (1999). The shaping of deduction in Greek mathematics: A study in cognitive history. Cambridge, UK: Cambridge University Press. Öner, D. (2016). Tracing the change in discourse in a collaborative dynamic-geometry environment: From visual to more mathematical. 
International Journal of Computer-Supported Collaborative Learning, 11(1), 59–88. Öner, D., & Stahl, G. (2015). Tracing the change in discourse from visual to more mathematical. Unpublished manuscript. Web: http://GerryStahl.net/pub/tracing.pdf. Packer, M., & Goicoechea, J. (2000). Sociocultural and constructivist theories of learning: Ontology, not just epistemology. Educational Psychologist, 35(4), 227–241. Plato. (340 BCE). The republic (F. Cornford, Trans.). London, UK: Oxford University Press. Polanyi, M. (1966). The tacit dimension. Garden City, NY: Doubleday. Reckwitz, A. (2002). Toward a theory of social practices: A development in culturalist theorizing. European Journal of Social Theory, 5, 243–263. Robinson, W. P. (Ed.). (2013). Communication in development. London, UK: Academic Press. Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50(4), 696–735. Sarmiento, J. (2007). Bridging: Interactional mechanisms used by online groups to sustain knowledge building over time. In Proceedings of the international conference on Computer-Supported Collaborative Learning (CSCL ‘07). New Brunswick, NJ: CSCL. Web: http://GerryStahl.net/vmtwiki/johann.pdf. Sarmiento, J., & Stahl, G. (2008). Extending the joint problem space: Time and sequence as essential features of knowledge building [nominated for best paper of the conference]. In Proceedings of the International Conference of the Learning Sciences (ICLS 2008). Utrecht, Netherlands: ICLS. Web: http://GerryStahl.net/pub/icls2008johann.pdf. Schatzki, T. R., Knorr-Cetina, K., & Savigny, E. v. (Eds.). (2001). The practice turn in contemporary theory. New York, NY: Routledge. Schegloff, E. (1991). Reflections on talk and social structure. In D. Boden & D. Zimmerman (Eds.), Talk and social structure: Studies in ethnomethodology and conversation analysis (pp. 44–70). Berkeley, CA: University of California Press.
Schegloff, E., & Sacks, H. (1973). Opening up closings. Semiotica, 8, 289–327.
Analysis of Group Practices
Schegloff, E. A. (1990). On the organization of sequences as a source of ‘coherence’ in talk-in-interaction. In B. Dorval (Ed.), Conversational organization and its development (pp. 51–77). Norwood, NJ: Ablex. Schegloff, E. A. (2007). Sequence organization in interaction: A primer in conversation analysis. Cambridge, UK: Cambridge University Press. Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397. Schwartz, D. (1995). The emergence of abstract representations in dyad problem solving. The Journal of the Learning Sciences, 4(3), 321–354. Schwarz, B., & Baker, M. (2017). Dialogue, argumentation and education: History, theory and practice. Cambridge, UK: Cambridge University Press. Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press. Stahl, G. (2009). Studying virtual math teams. New York, NY: Springer. Stahl, G. (2016). Constructing dynamic triangles together: The development of mathematical group cognition. Cambridge, UK: Cambridge University Press. Stahl, G. (2020). Theoretical investigations: Philosophical foundations of group cognition. New York, NY: Springer. Streeck, J. (1996). How to do things with things. Human Studies, 19, 365–384. Suchman, L. A., & Trigg, R. (1991). Understanding practice: Video as a medium for reflection and design. In J. Greenbaum & M. Kyng (Eds.), Design at work: Cooperative design of computer systems (pp. 65–90). Hillsdale, NJ: Lawrence Erlbaum Associates. Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1(3), 315–337. Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction.
International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42. Teasley, S. D., & Roschelle, J. (1993). Constructing a joint problem space: The computer as a tool for sharing knowledge. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools (pp. 229–258). Mahwah, NJ: Lawrence Erlbaum Associates, Inc. Tomasello, M. (2014). A natural history of human thinking. Cambridge, MA: Harvard University Press. Uttamchandani, S., & Lester, J. N. (this volume). Qualitative approaches to language in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Zemel, A., & Çakir, M. P. (2009). Chapter 14: Reading’s work in VMT. In G. Stahl (Ed.), Studying virtual math teams (pp. 261–276). New York, NY: Springer. Zemel, A., & Koschmann, T. (2013). Recalibrating reference within a dual-space interaction environment. International Journal of Computer-Supported Collaborative Learning, 8(1), 65–87.
Further Readings

Medina et al. (2009) Representational practices in VMT analyzes the adoption of several group practices by a team of students discussing geometry problems.

Stahl (2006) Group Cognition provides the initial discussion of group cognition as a central concept for analyzing CSCL interactions. The idea of group cognition arose in the writing of this book and led to the focus on group practice a decade later.
Stahl (2013) Translating Euclid presents multiple perspectives on the Virtual Math Teams project. It includes the first analysis of the adoption of a group practice more fully discussed in the preceding reference.

Stahl (2016) Constructing Dynamic Triangles Together follows the collaborative learning of a team of three girls longitudinally over 8 weeks as they begin to learn dynamic geometry. The book identifies about 60 group practices that the team adopts.

Stahl (2020) Theoretical Investigations brings together many of the past articles in the International Journal of CSCL and recent essays by the journal editor that are most relevant to this chapter. Together, they point in the direction of CSCL theory indicated here for the future.
Dialogism

Stefan Trausan-Matu, Rupert Wegerif, and Louis Major
Abstract Dialogism offers a theoretical framework for understanding computer-supported collaborative learning (CSCL). This framework begins with Mikhail Bakhtin’s claim that meaning making requires the interanimation of more than one ‘voice’, as in polyphonic music. Dialogism offers an approach that leads to understanding through the juxtaposition of multiple perspectives. As well as having implications for how we research CSCL, dialogism also has implications for how we conceptualise the goal of CSCL, suggesting the aim of deepening and widening dialogic space. This chapter reviews research within a dialogic CSCL frame, offers a cutting-edge example and presents predictions and suggestions for the future of dialogism within CSCL.

Keywords Dialogism · Dialogic education · Dialogue · Dialogic · CSCL · Educational technology · Edtech · Digital technology · Polyphonic model
S. Trausan-Matu (*) Department of Computer Science and Engineering, University Politehnica of Bucharest, Bucharest, Romania. e-mail: [email protected]
R. Wegerif · L. Major Faculty of Education, University of Cambridge, Cambridge, UK. e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2021. U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_12

1 Definitions and Scope

Communication is central to CSCL. Therefore, any approach for analysing collaboration in this context needs to draw, explicitly or implicitly, on a philosophical view of language. Dialogism is one such perspective. It was introduced by Mikhail Bakhtin in the mid-twentieth century (Bakhtin, 1984) and has had a significant influence on fields including philosophy (Clark & Holquist, 1984), education (Matusov, 2007), linguistics (Ducrot, 2001; Nølke, 2017), sociology (Markova,
2003), psychology (Shotter, 1995) and cultural studies (Wertsch, 1993). It considers that everything in life is ‘dialogue, that is, dialogic opposition’ (Bakhtin, 1984, p. 42) and that ‘in dialogism there is always more than one meaning . . . it places so much stress on connections between differences’ (Holquist, 2002, p. 40). This emphasis on differences, in addition to the interanimation of ‘voices’, is fundamental for a dialogic understanding of meaning making by CSCL researchers. Bakhtin developed his understanding of dialogism through considering the dialogue between and within texts, in particular the dialogue between characters within Dostoevsky’s novels (Bakhtin, 1984). For Bakhtin, however, dialogism characterises many aspects of our lives, for instance, polyphonic music, which is based on contrapuntal (‘note counter note’) relationships among interanimating voices: ‘contrapuntal relationships in music are only a musical variety of the more broadly understood concept of dialogic relationships’ (Bakhtin, 1984, p. 42). Bakhtin’s dialogism is also considered an important theoretical framework for understanding collaborative learning (Koschmann, 1999; Stahl, 2006; Trausan-Matu, 2010; Wegerif, 2007). In the context of research in CSCL specifically, dialogism may be viewed as one lens for examining collaborative learning among the other existing lenses. Compared with the other lenses, however, the dialogic lens should be at least ‘bifocal’: ‘one must be careful to discriminate between its use as a lens for close-up work and its ability to serve as an optic for seeing at a distance’ (Holquist, 2002, p. 110). This is similar to the analysis of polyphonic music, which requires following each voice individually while, at the same time, following its contribution to the musical piece as a whole.
In general, and in CSCL in particular, ‘voices’ can be conceptualised not only as belonging to participants but also as representing ideas, perspectives and attitudes (Trausan-Matu, 2010), which interanimate. The term dialogic is sometimes used quite loosely in education in a way that makes it seem to be almost synonymous with collaborative learning. We prefer to use dialogic as a technical term referring to the theory of dialogism which, as quoted above, claims that understanding is dialogic and thus that meaning making requires the interanimation of more than one ‘voice’ (where the term ‘voice’ is understood in the extended generalised sense referred to previously). Such a conceptualisation complements other positions in the CSCL field, including trialogical learning and object-oriented collaboration (Paavola & Hakkarainen, this volume) and metacognition (Järvelä, Malmberg, Sobocinski, & Kirschner, this volume). Dialogism offers a challenge to the use of those methods and methodologies for researching CSCL that are monologic in essence. Monologism as a methodological principle seeks to find a single correct viewpoint or ‘true’ perspective. This motivation can seem very useful in practice but, according to dialogic theory, the assumption that there can be only one single true perspective is an illusion. Where there is meaning there is necessarily more than one perspective. One way to make sense of this claim is to understand that for dialogism the meaning of anything is an answer to a question which we are asking either explicitly or implicitly, and questions are always asked within dialogues within contexts. The sign on the wall that says ‘No Smoking’ implies that someone might be thinking about smoking. To understand it
we need to understand that context. The same is true of any claims made in research. Claims to truth are answers to questions raised within shared inquiry. To understand them, we need to understand those questions and the different voices that are in play. Dialogism, therefore, in contrast to monologism, supports a view of research as offering understanding through the juxtaposition and interanimation of multiple ‘voices’ with a view to informing educational design. To claim that something is dialogic implies that to understand it is to participate in a dialogue in which at least two, probably more, voices are in play together. As in polyphonic music, voices may have different features and play different ‘tunes’ (e.g., different ideas), have equal importance (one not being dominant), and enter into a sequence of divergences followed by convergences, on both transverse and longitudinal dimensions (Trausan-Matu, 2010, 2013). The resulting fabric is similar to creative conversations (e.g., brainstorming), the polyphonic model being a lens for analysing creativity in (and also designing) CSCL sessions. Further, to claim that education is dialogic implies not only that it is taught through dialogue, but that it aims at dialogue: that it is education for dialogue as well as education through dialogue (Wegerif, 2019). For an individual learner, this means becoming more dialogic, open to engaging with and learning through others and otherness, better at asking good questions and holding multiple perspectives together in creative tension. At a more collective level of analysis, education for dialogue is about inducting students to participate in larger cultural dialogues in a way that, at the same time, expands and deepens those dialogues. The significance of dialogic theory or dialogism to CSCL research has been noted (e.g., Koschmann, 1999; Stahl, Cress, Ludvigsen, & Law, 2014).
Defined through reference to the interanimation of voices, dialogism is useful as a contrast to other competing paradigms or theoretical frameworks in the field of CSCL. Where much CSCL discourse relies on the metaphor of construction, dialogism uses the foundational metaphor of meaning as a ‘spark’ across difference. As Voloshinov1 comments, meaning ‘is like an electric spark that occurs only when two different terminals are hooked together’ (Voloshinov, 1973, p. 103). This leads to the idea of a productive dialogue as a polyphony, in which several voices contribute in different ways to an overall meaning. Although much literature in the field of collaborative learning claims to value differences, metaphors such as ‘finding common ground’ and ‘co-constructing knowledge’ indicate a possible underlying assumption of an ontology of identity or the idea that ultimately meaning is grounded in definable ‘things’. For dialogism, difference is fundamental to meaning such that dissonances (divergences) are the ‘sparks’ towards creative discourse construction. This underlying ontology of difference distinguishes dialogism as a
1 Voloshinov was one of a group of early twentieth-century Russian scholars that has been called the ‘Bakhtin Circle’ (Lambirth et al., 2016) or, as some have suggested, a name used by Bakhtin for publishing while he was banned by the authorities (Holquist, 2002, pp. 7–8; Clark & Holquist, 1984, pp. 146–147).
paradigm within CSCL even when the practical outcomes of dialogism look similar to those of social constructivism (Wegerif, 2007, Chap. 3). Exposing the tension between dialogism and monologism should not be read as dismissing the monologic side of the argument. This is not a ‘yes/no’ or ‘true/false’ binary. To be consistent with dialogic theory, we have to acknowledge that the meaning and value of dialogism itself depend upon its difference from monologism. The difference between monologism and dialogism should, therefore, rather be read as a potential polyphonic weaving of multiple voices. This includes ‘dialogic’ voices in constructive, if sometimes challenging, tension with more ‘monologic’ voices. Our polyphonic approach to diversity within CSCL differs from the common multiple-lens approach because, for us, the voices in play are not just about epistemology, or different ways in which we look at a single reality, but also about ontology, or different ways of understanding the nature of the reality that we are looking at.
1.1 Unpacking Dialogism
Starting with the claim that the meaning of any utterance (or ‘meaning unit’) is given by its position and role within a dialogue, one way in which dialogism in education can be understood is as offering a theory of meaning. Meaning is not fixed, as it depends both on previous utterances (that are being responded to) and future utterances (that are in some way anticipated) (Linell, 2009; Rommetveit, 1992). In this way, meaning can be conceptualised as always requiring a debate among several ‘voices’, each ‘voice’ keeping its particularities, including divergent positions, in addition to the coordination (‘getting in sync’) with the others (Gee, 2013). Meaning and creative thinking are built not only by coordination but also by divergences in a polyphonic weaving of ‘voices’ (Trausan-Matu, 2010). Through the ‘interanimation’ of different voices (Bakhtin, 1981), dialogism puts the methodological emphasis on the process of the emergence of meaning in the gap between voices, sometimes also referred to as ‘dialogic space’ (Lambirth, Bruce, Clough, Nutbrown, & David, 2016; Wegerif, 2019). On this view, the meanings of words, signs, people, technology and so on are not understood as fixed, but rather as emerging within dialogue and within dialogic space. Knowledge is not understood as fixed or based on ‘facts’. Instead, it is considered to emerge through dialogue. Importantly, what is meant by dialogue is interpreted in an extended sense. As Linell (2009) brings out, there is always a ‘double dialogicality’ in which utterances in any dialogue need to be understood not only in their situation but also as part of a longer term dialogue with their situation, culturally and historically defined.
Thus, in addition to dialogues between situated and physically embodied human voices, an understanding of dialogism extends to also include ‘voices’ conceived as aspects of the larger social context (Wegerif, 2019), or as distinct positions, ideas or threads of reasoning (Trausan-Matu, 2010). For example, an online computer-supported dialogue about mathematics between
two 9-year-old students might invoke and engage with the ‘voice’ of mathematics as a discipline. Related to this is an understanding of dialogism as engaging learners in the longer term dialogues of culture. There is always ‘intertextuality’ between dialogues, so it is appropriate that these long-term dialogues of culture, the dialogue of history, for example, can also sometimes be referred to collectively as the dialogue of humanity or the ‘conversation of mankind’ (Oakeshott, 1959). Computer technology is often used as a means of linking students’ everyday ideas, or the spontaneous and often situated way in which things are understood, with more technical concepts belonging to the long-term cultural dialogue. Dialogues around micro-worlds in science, for instance, allow students to test and develop their understandings, leading them from everyday understandings of concepts like force to the understandings of these concepts that are held in the relevant expert communities of practice (e.g., Roschelle & Teasley, 1995). This approach is common to CSCL, but what makes it more distinctively dialogic is the goal in education of teaching for dialogue as an end in itself. A focus on transmitting knowledge is replaced with the idea that we are teaching students ‘the dialogue so far’ in any area with a view to them joining that dialogue as active participants. All knowledge is taught as questionable, and the business of questioning and constructing knowledge is also taught. An approach to dialogic education as induction into the long-term dialogues of culture perhaps has some overlap with social constructivism; the idea that knowledge is socially constructed and one aim of education is to draw students into the process of social knowledge construction through building ‘knowledge objects’ together (Bereiter, 2005; Paavola & Hakkarainen, this volume).
One difference from social constructivism, however, is that for dialogism there is a reduced emphasis on the importance of constructing ‘knowledge’ when this is understood as taking an objective or material form. Instead, the primary focus is on developing the quality of dialogue. Dialogism is not just about the construction of new knowledge but views drawing learners into dialogue with voices from the past as a key function of education and, indeed, is part of helping learners to find their own voice. Implicitly, this involves learners participating in a dialogue with absent cultural voices. The ideas of drawing learners into dialogue with voices from the past and the ‘intertextuality’ between dialogues are in consonance with the work of Russian scholar Mikhail Bakhtin (1895–1975), commonly associated with the ideas of multivocality, dialogism and polyphony. However, he did not directly apply his insights to the field of education. In addition to being relevant for understanding dialogism as a theory of meaning, it is also worth noting that Bakhtin’s ideas are applicable to an understanding of dialogism as ontology (Markova, 2003; Sidorkin, 1999). This ontological perspective (i.e., the study of being or what is really there) suggests dialogism is about more than the use of dialogue as ‘a tool’ for knowledge construction. Rather, it focuses on dialogue itself as an end in education, perhaps the most important end. This is a distinctive contribution of dialogism to the field of CSCL.
1.2 Dialogism in the Context of CSCL
Dialogism and its polyphonic model are powerful ‘multifocal’ lenses for CSCL research as they enable an examination of how discourse threads in CSCL conversations weave together. There is little doubt that contexts of CSCL, with their technologically mediated forms of discourse and interaction, provide new forms of discussion and offer innovative access for exploring dialogue (Stahl et al., 2014). For example, in CSCL chat sessions, unlike in face-to-face conversations, multiple threads of discussion may occur in parallel, giving birth to a polyphony of voices (Trausan-Matu, 2010). Not all collaborative processes are, however, necessarily dialogic. They may be monologic in essence, driven by only one voice, the others only being its accompaniment (as in monophonic or homophonic music), without divergences and without a debate between independent voices (as in polyphonic music). CSCL is concerned with interactions between learners, specifically collaboration and communication. In this context, it is perhaps unsurprising that ‘dialogue’ may be observed to appear quite often and, therefore, be referred to in the day-to-day language sense as ‘dialogic’. This is, however, a superficial interpretation; ‘real’ dialogue involves much more. It is possible for learners to collaborate to achieve a joint task, including contributing to the process of knowledge construction, without interanimation, without entering into dialogue. A ‘real’ dialogue, on the other hand, implies divergences, negotiations and debates among different points of view, and is as creative as it is critical. In the age of our global ‘Network Society’ (Castells, 2004), the Internet, along with the affordances of digital technology for supporting interaction, highlights the relevance of dialogism for education. ‘Cyberspace’ is an imaginary but nonetheless real world, where the frontiers are blurred and the ‘other’ exists through the inference of communication (Breton, 2003).
Or, conceived another way, cyberspace is a dialogic space supporting the interplay of potentially billions of ‘voices’. Wikipedia2 is one example of a new possibility for peer-to-peer knowledge construction where there are always multiple voices in play and no ultimate certainty or master narrative. Multiple voices obviously also ‘interanimate’ in instant messenger (chat) conversations, forum discussions, microblogs, etc., in a dialogue of participants as well as of ideas. The affordances of CSCL for learning may, thus, be extended to support dialogue not only between physically situated addressees but also with cultural or contextual ‘voices’. Dialogism may be seen as a ‘multifocal’ lens, which allows one to look at the same time both locally, at each voice, and globally, at the coherent whole, as a musicologist analyses a polyphonic piece. Other lenses used in CSCL research focus mainly locally, for identifying and classifying utterances (Chiu, 2013), uptake acts (Suthers & Desiato, 2012), adjacency pairs (Stahl, 2006) or transacts (Gweon, Jain, McDonough, Raj, & Rosé, 2013), and only then going to a ‘global’ level by computing
2 https://en.wikipedia.org/wiki/Main_Page (Accessed 22 Feb 2019).
statistics or identifying patterns, including via machine learning methods. Transacts, uptakes, adjacency pairs and argumentation analysis (Kimmerle, Fischer, & Cress, this volume) consider how pairs of utterances contribute to the dialogue that constructs knowledge. A further method, social network analysis, computes statistics over the number of replies, connections or other relations between various items. Such lenses provide important data for analysing CSCL sessions. In some cases (transacts, for example), lenses tend to be ‘bifocal’. Dialogism, in addition, looks in parallel at local and global levels through a ‘multifocal’ lens, considering how at least two voices (ideas) interanimate in long sequences of divergent and convergent pairs of utterances and may give birth to new ideas (Trausan-Matu, 2013) in a knowledge creation process (Scardamalia & Bereiter, this volume).
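The contrast between a local and a global focus can be made concrete with a small sketch. Everything below is an illustrative assumption of ours, not a feature of any particular CSCL system or of the polyphonic model itself: the message format, the explicit reply_to references and the function names are invented for the example. The sketch first works locally, grouping chat utterances into parallel threads by following reply references (as some chat environments record them), and then globally, computing a simple statistic over how much each participant’s ‘voice’ contributes:

```python
from collections import Counter

def split_into_threads(messages):
    """Local level: group utterances into threads by following explicit
    reply references; a message with no reference starts a new thread."""
    thread_of = {}  # message id -> id of the thread's root message
    threads = {}    # root id -> messages in arrival order
    for msg in messages:
        root = thread_of.get(msg["reply_to"], msg["id"])
        thread_of[msg["id"]] = root
        threads.setdefault(root, []).append(msg)
    return threads

def voice_statistics(messages):
    """Global level: each participant's share of the utterances."""
    spoken = Counter(m["speaker"] for m in messages)
    total = sum(spoken.values())
    return {speaker: n / total for speaker, n in spoken.items()}

chat = [
    {"id": 1, "reply_to": None, "speaker": "Ana", "text": "Is the triangle equilateral?"},
    {"id": 2, "reply_to": None, "speaker": "Ben", "text": "Let's label the vertices first."},
    {"id": 3, "reply_to": 1,    "speaker": "Ben", "text": "Yes, all sides are radii."},
    {"id": 4, "reply_to": 2,    "speaker": "Ana", "text": "OK: A, B, C."},
]
threads = split_into_threads(chat)  # two threads run in parallel
shares = voice_statistics(chat)     # Ana and Ben each contribute half
```

A polyphonic analysis in the sense discussed in this chapter would go much further, tracking divergences and convergences between the threads; the sketch only illustrates the two levels of focus that a ‘multifocal’ lens must hold together at once.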
2 History and Development

2.1 Theoretical Underpinnings
Dialogic education has deep roots in oral education traditions. The Ancient Greek philosopher Socrates, essentially an oral thinker who taught through dialogue, is often credited as being the originator of dialogic education. Some principles of dialogic education are referred to in ancient Indian texts and even feature on the pillars of Asoka that date back to the fourth century BCE (Sen, 2005). Halaqah, the idea of forming circles of learning, is a traditional Islamic approach to education that continues to be used to this day (Ahmed, 2014). Martin Buber outlined the distinction between an objectifying stance, which he called ‘I-It’, and a subjectifying or dialogic stance which he called ‘I-Thou’ (Buber, 1958). Buber’s idea of ‘das Zwischen’ or the ‘space of the “in-between”’ that is entered into in dialogue (Buber, 1958) is foundational for the theory of dialogic space introduced previously. Paulo Freire was the first to explicitly articulate a dialogic theory of education in the context of what he called a ‘pedagogy of the oppressed’ (Freire, 1968/2018). Freire’s concern was with an education which empowered learners to speak for themselves and to be able to name their own world. However, the writings of Bakhtin have been particularly influential for the recent interest in dialogism in CSCL. Bakhtin’s analysis of dialogism is more philosophical and literary than educational, often based on the way in which texts enter into ‘dialogic’ relations (Bakhtin, 1986): ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction’ (Bakhtin, 1984, p. 110). Vygotsky’s model of mediation, drawn from Marx’s account of the use of tools as mediated physical forces acting on objects in the world, is also relevant to the recent development of dialogism in CSCL (Vygotsky, 1978, p. 54). 
While Vygotsky did not directly coordinate his interests in talk and social interaction into an explicit focus on dialogue (Howe & Abedin, 2013), he argued that the acquisition and use of language play an important role in developing learners’ thinking. According to
Vygotsky, thinking is a mediated and internalised form of self-talk, a dialogue with oneself (Stahl et al., 2014). He describes language as both a cultural tool (for the development and sharing of knowledge amongst members of a community or society) and as a psychological tool (for structuring the processes and content of individual thought), proposing that there is a close relationship between these two kinds of use, which can be summed up in the claim that ‘intermental’ (social, interactional) activity forges some of the most important ‘intramental’ (individual, cognitive) capabilities (Mercer, Hennessy, & Warwick, 2017). Vygotsky’s (1986) idea of the zone of proximal development (ZPD) in particular, where learners are drawn beyond their current understanding by working with a teacher, adult or more competent peer (Kazak, Wegerif, & Fujita, 2015), is one that brings the idea of dialogic relations into education. In the ZPD, the teacher has to engage with the perspective of the student (and vice versa) in order to connect the development of ideas in the student to the pre-existing culture (Vygotsky, 1986). This also relates to an understanding of dialogism as drawing learners into dialogue with the voices from the past, as well as with the current global dialogue of humanity on the Internet, in order to help them find their own voice. This is because even as an individual act, the use of language in thought, speech or writing retains the dialogical character of all language as a historically evolved and culturally established medium of communication among people (Stahl et al., 2014). Referring to mathematics, this is an idea Sfard (2007) considers when she states that ‘mathematical discourse learned in school is a modification of children’s everyday discourses, learning mathematics may be seen as transforming these spontaneously learned colloquial discourses rather than as building new ones from scratch’ (Sfard, 2007, p. 575).
While it is fair to say that dialogism emerged under the umbrella of the socio-cultural tradition (Koschmann, 1996, 1999), there is also a strong argument that dialogism can be viewed as a separate paradigm in its own right. The distinction between the two can be brought out through considering the idea of situated learning that is often used to define socio-cultural approaches. While on the one hand dialogues are situated in an empirical sense (i.e., occurring at a certain time, in a certain place, between particular individuals), on the other hand how we understand our situation depends on dialogues in which acts of situating ourselves have to occur. This points to the more original reality of dialogic space as an underlying space of creativity opened up in dialogues; a space within which ideas of space, time, history, self and other are formed and can be unformed. There is therefore always also something unsituated in dialogues, explaining their infinite potential for creativity (Wegerif, 2006). The focus on the social and historical situatedness of cognition and learning that defines the socio-cultural paradigm tends to limit its capacity to offer a full theory of education. The unsituated–situatedness of dialogism enables it to offer a theory of education that is appropriate for the Internet Age. Dialogues on the Internet are not situated in a conventional way. Communication on the Internet can offer a partial instantiation of dialogue with an unbounded horizon. The dialogic theory of education that fits the needs of the Internet Age is the idea of education as the expansion and deepening of dialogic space pulled outwards by the call of the
‘Infinite Other’ (Wegerif, 2013). This theory links education to the more political-sounding aim of ‘global democracy’ (Wegerif, 2017).
2.2 Contemporary Developments
Even a cursory analysis of the titles and abstracts of the articles published in the International Journal of Computer-Supported Collaborative Learning (ijCSCL) reveals how an interest in dialogism has been a key focus over the past decade. Work in this one venue alone has considered dialogism in a range of ways and at varying analytical ‘levels’. Examples include research investigating how a dialogic stance can provide insights into how institutional practices shape the meanings and functions of CSCL tools (Arnseth & Ludvigsen, 2006); a dialogic approach for examining interaction, and how this can help in the design of effective pedagogical approaches related to the use of wikis in education (Pifarré & Kleine Staarman, 2011); how dialogical positions can be used to understand identity trajectories in a collaborative blended university course (Ligorio, Loperfido, & Sansone, 2013); and, how CSCL affordances and dialogic learning can engage disengaged students (Slakmon & Schwarz, 2014). A scoping review of the literature from the year 2000 onwards focusing on the use of technology in supporting dialogue provides insights into the ways digital tools can support, extend and transform dialogue and interaction in the classroom in particular (Major, Warwick, Rasmussen, Ludvigsen, & Cook, 2018). This review identified 72 studies3 published since 2000 across 18 countries, including both small and larger scale analyses. Technology investigated included Computer-Mediated Communication tools, Interactive Whiteboards, subject-specific learning tools, mobile ‘apps’, tablet computers, blogging/microblogging tools, wikis and touch table technology. Three overarching themes, each with several subthemes, were identified. First, ‘dialogue activity’—featuring alternative perspectives (both exposure to and taking into account others’ views); knowledge co-construction; using dialogue to express meta-cognitive learning; and using dialogue to scaffold understanding. 
Second, the more holistic theme of ‘learning environment’—featuring learner autonomy; learner inclusion and participation; classroom atmosphere; interpersonal relationships; and motivation and engagement. Third, a final theme of ‘technological affordances’—featuring the creation of a shared dialogic space; mediating interaction; externalisation of ideas; informing teaching; multimodality; pace; provisionality; representation of content; and temporal factors. While focusing on research undertaken in classrooms, this scoping review provides a useful framing device for reviewing new developments relating to the analysis of dialogism and technology more broadly in other contexts. And in
3 Appendix One of Major et al. (2018) provides the full references for all 72 studies included in this scoping review.
addition to demonstrating how there is global interest in combining dialogic educational approaches and digital technology, it highlights how affordance, interdependency and dialogue itself appear to be key concepts that frame the social situation in which students build knowledge and meaning with and through digital tools.
3 State of the Art: Analysing and Designing for Dialogism

In this section, we examine how the dialogic perspective on CSCL inspired by Bakhtin’s polyphony theory (Bakhtin, 1984) is an appropriate paradigm for analysing the phenomena that appear in discourse building during collaborative learning (Koschmann, 1999; Stahl, 2006; Trausan-Matu, 2010). We do this by considering the example of the polyphonic model and its associated computer-supported analysis method.
3.1 The Polyphonic Model and the Associated Computer-Supported Analysis Method

3.1.1 Introducing the Polyphonic Model
As previously outlined, dialogic knowledge construction in CSCL conversations is a process that implies the interanimation of several voices, in a generalised sense (Trausan-Matu, 2010). It is a source of what Chiu calls micro-creativity (Chiu, 2013), or of knowledge building (Scardamalia & Bereiter, this volume), for instance when learners collaboratively rebuild mathematical proofs in small groups. Like any creative process, it needs sequences of divergences and convergences (Csikszentmihalyi, 1996) among different dialogic positions, reflecting concepts and ideas, which should interanimate, eventually generating a coherent discourse. Such a weaving is also characteristic of polyphonic music, which in fact reveals a fundamental feature of human beings (Bakhtin, 1984; Pesic, 2017), who are able to cope with more than one voice (in the generalised sense considered in this text) at the same time (Pesic, 2017). The sequence of passing dissonances/divergences that induce tension, resolved by consonances/convergences according to contrapuntal relations, reflects a general feature of our life: the tendency towards both variation (novelty, the avoidance of monotony) and unity, which Bakhtin compared to the centrifugal and centripetal forces of physics (Bakhtin, 1981). Repetition and rhythm, essential musical features, are also very important for enabling involvement in conversations (Tannen, 2007), with neurological data demonstrating the importance of these factors in human language (Levitin, 2006; Sacks, 2007). Bakhtin characterises polyphony as ‘different voices singing variously on a single theme [. . .] exposing the diversity of life and the great complexity of human
experience’ (Bakhtin, 1984, p. 42; italics in the original). Quoting Glinka, he also emphasises that ‘Everything in life is counterpoint, that is, opposition’ (Bakhtin, 1984, p. 42; italics in the original). Contrapuntal relationships in polyphonic music ensure divergence/opposition/dialogism among two or more separate tunes (voices) that are played or sung at the same time, while nevertheless achieving a coherent whole. They ‘are only a musical variety of the more broadly understood concept of dialogic relationships’ (Bakhtin, 1984, p. 42). Polyphony can be viewed as the merging of the longitudinal, sequential dimension of voices’ development and the transversal one, that is, the co-occurrence of voices (Trausan-Matu, 2010). Starting from the above considerations, polyphony appears particularly well suited for modelling collaborative knowledge construction in small groups. This is because collaborative learning naturally involves participants with multiple voices (in the generalised sense, ideas), which, in a polyphonic construction, can fully manifest their personalities, bringing out their particularities in order to construct knowledge and achieve CSCL success. Similarly to polyphonic jazz improvisation (Trausan-Matu, 2010) or to polyphonic novels (Bakhtin, 1984), the polyphonic model of discourse considers voices, in an extended sense, as threads of ideas, concepts and even words that enter into dialogic relations. The different particularities of opposing voices/ideas generate divergences, inducing tension that is resolved by convergences, according to interanimation patterns, the final result being a coherent and creative discourse. In many collaborations such a polyphonic weaving may be identified, not only in textual or verbal interaction but also in non-verbal cases, for example in gestures in classrooms (Trausan-Matu, 2013). As an example of polyphonic weaving, we present in the central part of Fig.
1 a fragment of a CSCL chat session in which students had to debate the requirements for an interactive computer application (the thin curly arrows on the left are important references between utterances, explicitly indicated by learners using a facility of the chat environment (Holmer, Kienle, & Wessner, 2006)—the number of the referenced utterance is in the ‘Ref’ column—while the straight lines mark repetitions of words that become voices). The process of knowledge construction involves several threads of concepts (‘topic’, ‘presentation’, ‘reply’), which behave like voices that interanimate through divergences and convergences. For example, at the beginning of the chat the participants identify several divergences among the three concept voices (linking utterances Nr. 18, 23, 27 and 30): they find the reply method cumbersome, and they do not like that replies are linearly represented or how topics are presented. As a resolution, convergences appear at utterances Nr. 24, 27 and 28. A very important one is at Nr. 28, proposing ‘a tree presentation’ for replies, with several other convergences continuing it. However, the ‘but’ discourse marker at utterance Nr. 30 (surrounded by a diamond in Fig. 1) clearly indicates another divergence, now between the ‘reply’ and ‘topic’ voices. This divergence is also resolved by a convergence with the ‘representation’ voice (‘You need also a clever visual representation’), as suggested by the ‘also’ discourse marker.
Fig. 1 Interanimation of concepts in a CSCL chat
The interanimation of the three voices is illustrated from a different perspective on the right side of Fig. 1. The sequence of divergences and convergences is similar to a creative process (Csikszentmihalyi, 1996), being a basis for knowledge creation (Scardamalia & Bereiter, this volume), or to the consonances that resolve dissonances in music (Kolinski, 1962). Gee also uses a music-related image for socially built discourse, which he names ‘Discourse’, with a capital ‘D’: ‘Being in a Discourse means being able to engage in a particular sort of “dance” with other people, words, deeds, values, feelings, . . .’ (Gee, 2013). Gee’s ‘dance’ Discourse might, at first sight, seem similar to the idea of a polyphonic improvisation, because it also involves different personalities that aim to achieve a joint goal. However, the majority of ‘dances’ need synchronised participation, and unexpected divergences/dissonances are not welcome. Gee considers that personalities should align to the social Discourse, while in polyphony discourse is constructed by personalities that manifest their differences, diverge, and enter into debates. In the polyphonic model, there is an equal emphasis on the Discourse as a whole and on voices as individuals that are influenced by existing Discourses, sometimes divergent, but which jointly achieve new Discourses. The relation between polyphony and dialogue in general may be better understood starting from Bakhtin’s remark that ‘authentic polyphony [. . .] did not and could not have existed in the Socratic dialogue’ (Bakhtin, 1984, p. 178), referring probably to the fact that in polyphony all participants (voices) should have equal importance: there should not be an authoritarian, leading voice (as there is, for example, in some dances, where one of the two partners leads).
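The mechanics of this example, explicit ‘Ref’ links between utterances and repeated words that become candidate voices, can be sketched in a few lines of code. The Python sketch below is purely illustrative: the `Utterance` structure, the stop-word list and the mini-chat are invented stand-ins, not the actual session data or the PolyCAFe implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    nr: int                    # utterance number in the chat
    speaker: str
    text: str
    ref: Optional[int] = None  # explicit reference ('Ref' column), if any

STOPWORDS = {"the", "a", "is", "are", "for", "we", "to", "and", "but", "still"}

def voice_threads(utterances):
    """Collect repeated content words: each word that recurs across
    utterances is a candidate voice, its thread the utterance numbers."""
    threads = {}
    for u in utterances:
        for word in set(w.strip(".,!?").lower() for w in u.text.split()):
            if word and word not in STOPWORDS:
                threads.setdefault(word, []).append(u.nr)
    return {w: sorted(nrs) for w, nrs in threads.items() if len(nrs) > 1}

# Invented mini-chat in the spirit of the Fig. 1 excerpt
chat = [
    Utterance(18, "A", "The reply method is cumbersome."),
    Utterance(23, "B", "Replies are shown linearly.", ref=18),
    Utterance(28, "C", "We need a tree presentation for replies.", ref=23),
    Utterance(30, "A", "But the topic list is still hard to read.", ref=28),
]

print(voice_threads(chat))  # {'replies': [23, 28]}
```

A real implementation would add stemming or lemmatisation (so that ‘reply’ and ‘replies’ join one thread) and semantic relatedness between words, as the analysis method described below requires.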
3.1.2 The Polyphonic Analysis Method
There are several approaches to the analysis of CSCL conversations. Some are carried out without computerised support, using, for example, conversation analysis methods (Zemel, Xhafa, & Çakir, 2009). Others use various types of software tools: for statistical analyses (Zemel et al., 2009), as well as for natural language processing and/or social network analysis (Dong, 2005; Rosé et al., 2008; Suthers & Desiato, 2012; Trausan-Matu, Dascalu, & Rebedea, 2014). The polyphonic analysis method for CSCL conversations is grounded in the polyphonic model, starting from the idea that voices may also be threads of concepts and ideas in dialogues, manifested by the repetition of, or by semantically related chains of, words or phrases. The analysis method aims to reveal the collaboration process, considering fundamental features of dialogism: multivocality, divergences and convergences, interanimation, polyphony, authoritarian voices, chronotopes and ventriloquism (Trausan-Matu, 2010; Trausan-Matu et al., 2014). The polyphonic analysis of CSCL conversations starts from ‘the profound dialogism of the word’ (Bakhtin, 1984, p. 292): ‘Dialogue is studied merely as a compositional form in the structuring of speech, but the internal dialogism of the word (which occurs in a monologic utterance as well as in a rejoinder), the dialogism that penetrates its entire structure, all its semantic and expressive layers, is almost entirely ignored’ (Bakhtin, 1981, p. 279). The polyphonic analysis of CSCL conversations may be done manually (as in the example in Fig. 1) or with computerised support (natural language processing and social network analysis), in a sequence of steps that involve the delimitation of utterances, the identification of candidate voices starting from the threads of main concepts occurring in utterances, and the analysis of the interanimation among voices, starting from divergences and convergences (Trausan-Matu et al., 2014).
As a result of this polyphonic analysis, even a numerical measure of interanimation may be computed to characterise the collaboration, together with each learner’s participation degree (Dascalu, Trausan-Matu, McNamara, & Dessus, 2015; Trausan-Matu et al., 2014). Several systems have been implemented starting from this methodology. Among these, the most complex are PolyCAFe (Trausan-Matu et al., 2014) and ReaderBench (Dascalu et al., 2015). Several other researchers have started from Bakhtin’s ideas in order to capture specific dialogic features in conversations, considering their tempos and chronotopes (Ligorio & Ritella, 2010) or repetition and rhythm (Tannen, 2007). However, we believe that a more complex analysis should consider the polyphonic weaving of discourse (Trausan-Matu, 2010), which, additionally, may also benefit from computer support (Trausan-Matu et al., 2014).
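As a deliberately simplified illustration of such a measure (the counting scheme below is invented and far cruder than the cohesion-based scoring of PolyCAFe or ReaderBench), one can count every reuse of a concept first voiced by a different participant as an interanimation link, and derive each learner’s participation degree as their share of those links:

```python
from collections import Counter

def interanimation(utterances):
    """utterances: list of (speaker, set_of_concepts) pairs, in order.
    Each reuse of a concept first voiced by another participant
    counts as one cross-speaker interanimation link."""
    first_voiced = {}  # concept -> speaker who introduced it
    links = Counter()  # speaker -> pick-ups of others' concepts
    total = 0
    for speaker, concepts in utterances:
        for concept in concepts:
            if concept in first_voiced and first_voiced[concept] != speaker:
                links[speaker] += 1
                total += 1
            first_voiced.setdefault(concept, speaker)
    # participation degree: each learner's share of the links
    degrees = {s: n / total for s, n in links.items()} if total else {}
    return total, degrees

# Invented example with concept threads similar to those in Fig. 1
chat = [
    ("A", {"topic", "reply"}),
    ("B", {"reply", "tree"}),         # picks up A's 'reply'
    ("C", {"tree", "presentation"}),  # picks up B's 'tree'
    ("A", {"presentation"}),          # picks up C's 'presentation'
]
score, degrees = interanimation(chat)  # score == 3, each learner holds 1/3
```

The real systems replace exact concept matching with lexical chains and semantic cohesion, but the principle of scoring cross-voice pick-ups is the same.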
3.2 Designing Dialogic CSCL Sessions
In order to enhance knowledge construction, CSCL chat sessions (or discussions on forums) may be designed and (if moderated) conducted so as to induce dialogism (Stahl, 2009) and polyphony (Trausan-Matu, 2010). To this end, for example, the assigned subject of a CSCL chat session might be a debate among some concepts followed by the construction of a solution, with learners instructed to enter into divergences followed by convergences. An excerpt from such a chat is shown in Fig. 1. This approach was also used at the University Politehnica of Bucharest in a more specific way: students were told that each of them should support a technology presented in the course, taking the role of the representative of a company that sells that technology (thus becoming a voice in the extended sense). They were encouraged to take divergent positions but eventually to try to reach convergent achievements (Trausan-Matu et al., 2014). Dialogism holds that the very reason new understanding emerges is the ‘gap’ or ‘space’ between different perspectives. Closely related to the idea of ‘dialogic space’ introduced previously, one of the main causal mechanisms of dialogic learning can be described as the ‘switch’ whereby a student is drawn to see or feel things from a new perspective. For participants in dialogue, the gap opens up into an experienced dialogic space within which various voices are in relationship and able to ‘interanimate’ each other. According to Bakhtin (1986), it is because of this gap that dialogue is possible in the first place. Importantly, dialogic switches do not only occur with physically present voices and physically present tools but also with virtual cultural voices, for example the virtual voice of a ‘generalised other’ (Mead, 1934/1962) or ‘superaddressee’ (Bakhtin, 1984) position, which might be, for example, the point of view of the community of mathematicians (Kazak, Wegerif, & Fujita, 2015).
The affordances of CSCL tools and supporting pedagogy can be ‘engineered’ to support and facilitate this process. For example, in the Metafora project, tools were designed to prompt students to take different perspectives with a series of ‘hats’ representing the attitudes appropriate for different stages in collaborative problem-solving. Here, as often with the use of avatars, the design of online ‘tools’ also became the design of ‘voices’ (Yang, Wegerif, Dragon, Mavrikis, & McLaren, 2013). The dialogic paradigm in CSCL generates the unique pedagogical aim of dialogue as an end in itself, which translates into the idea that the aim of education might not be the construction of shared knowledge so much as the expansion of dialogic space. This is exemplified in a number of Internet-mediated education projects. Empatico,4 for example, is a platform offering links between schools around the world with an associated pedagogy designed to promote understanding and empathy across cultural diversity. It aims to reach children between the ages of 7 and 11. Generation Global5 is a similar project aimed more at young people aged 12–16 with
4 https://empatico.org (Accessed 22 Feb 2019).
5 https://generation.global (Accessed 22 Feb 2019).
the explicit aim of preventing violent extremism through promoting open-mindedness. As well as blogging, Generation Global supports internet-mediated video links between classes around the world, with a particular focus on schools in countries that have some history of conflict. A recent evaluation of the impact of Generation Global over a 1-year period involving over 1000 participants found evidence of increased ‘dialogic open-mindedness’ (Wegerif et al., 2017). This study developed a dialogic research methodology appropriate for the Internet, combining an ‘inside’ perspective, or phenomenology of experience, gained through interviews and online ethnography with a more ‘outside’ measure. The aim of combining an inside view with an outside view here is not to reduce one to the other but to generate new insights and understanding through juxtaposing them in such a way that they interanimate and inter-illuminate each other. Following the dialogic theory of Merleau-Ponty, this methodology is called ‘Chiasm’ (Kershner, Hennessy, Wegerif, & Ahmed, 2020).
4 The Future

Perhaps one big challenge facing CSCL is to support the emergence of a planetary intelligence able to respond to the many global challenges humanity faces (Lévy & Bononno, 1997). This implies a need to develop an entirely new theory and practice of education. Dialogism, and the dialogic theory of education for the Internet Age that stems from it, has the potential to address this challenge. Dialogic education is not just education for already established educational ends; beyond these, it is also education for encouraging creativity, a polyphony of voices and unbounded dialogue, which prepares the conditions for a possible future dialogic democracy, i.e., a democracy which is not so much focussed on voting as on reaching understanding and, where possible, agreement through dialogue. The kind of educational projects required to take this forward include the support of dialogue as an end in itself, exemplified above by the Empatico and Generation Global projects, but also more focussed projects supporting teams of students in classrooms around the world learning how to work together in responding to global challenges. Such projects require the Learning to Learn Together (L2L2) (Yang et al., 2013) and polyphonic approaches to pedagogy developed within dialogic theory and partly described above. Artificially intelligent conversational agents, e.g., Apple’s Siri, are widely available today. Several such agents have been developed and have shown their potential to support online dialogic teaching and learning, either replacing a real tutor or even participating in a CSCL chat (e.g., Graesser, Cai, Morgan, & Wang, 2017; Kumar & Rosé, 2011; Rus, D’Mello, Hu, & Graesser, 2013; Tegos, Demetriadis, Papadopoulos, & Weinberger, 2016; Wegerif & Major, 2018). This is likely to be a growing area of dialogic research in the future, supported also by expected advances in natural language processing. Potentially, these agents could be enhanced
if they were able to identify and generate divergences and to propose convergences in order to induce interanimation and polyphony. Perhaps the biggest challenge that dialogism faces within the CSCL research community is that of misunderstanding. The assumptions of monologism are so ingrained in some scientific research traditions that it seems hard for many to appreciate the dialogic difference: the idea that meaning is never a ‘thing’ but always a spark across difference. Forms of design-based research—such as in some of the projects illustrated above, as well as in the chapter by Kali and Hoadley (this volume)—offer a way to understand and conduct research that is compatible with this dialogic insight. The aim of design-based research into effective educational dialogue ‘online’ is not to reduce the variety of voices in play to a single true representation. The aim of such research is to expand dialogic space, designing in ways that bring more voices into play and that improve the quality of the dialogue through bringing quite diverse perspectives into dialogically creative relationships. Another approach to CSCL research, which challenges the still dominant monologic tradition, is the Chiasm methodology described in the previous section. The idea here is to study online learning through juxtaposing two main perspectives or stances: the inside-out perspective of an interpretation of lived experience and the outside-in perspective of objective measures that attempt to locate and compare instances of learning. What makes this Chiasm approach applicable to researching dialogism is that there is no reduction of these two perspectives to a single representation but rather the recognition that understanding is always a creative act: a spark across difference.
References

Ahmed, F. (2014). Exploring halaqah as research method: A tentative approach to developing Islamic research principles within a critical ‘indigenous’ framework. International Journal of Qualitative Studies in Education, 27(5), 561–583.
Arnseth, H. C., & Ludvigsen, S. (2006). Approaching institutional contexts: Systemic versus dialogic research in CSCL. International Journal of Computer-Supported Collaborative Learning, 1(2), 167–185.
Bakhtin, M. M. (1981). The dialogic imagination: Four essays (C. Emerson & M. Holquist, Trans.). Austin: University of Texas Press.
Bakhtin, M. M. (1984). Problems of Dostoevsky’s poetics (C. Emerson, Ed. & Trans.). Minneapolis: University of Minnesota Press.
Bakhtin, M. M. (1986). Speech genres and other late essays. Austin: University of Texas Press.
Bereiter, C. (2005). Education and mind in the knowledge age. New York, NY: Routledge.
Breton, D. L. (2003). Adeus ao corpo. O Homem-Máquina: a Ciência Manipula o Corpo (pp. 123–137). São Paulo: Companhia das Letras.
Buber, M. (1958). I and thou (R. G. Smith, Trans.). Edinburgh: T & T Clark.
Castells, M. (2004). The network society: A cross-cultural perspective. Northampton, MA: Edward Elgar.
Chiu, M. M. (2013). Social metacognition, micro-creativity and justifications: Statistical discourse analysis of a mathematics classroom conversation. In Productive multivocality in the analysis of collaborative learning (pp. 141–160). New York: Springer.
Clark, K., & Holquist, M. (1984). Mikhail Bakhtin. Cambridge, MA: Harvard University Press.
Csikszentmihalyi, M. (1996). Creativity: Flow and the psychology of discovery and invention. New York: Harper Collins.
Dascalu, M., Trausan-Matu, S., McNamara, D. S., & Dessus, P. (2015). ReaderBench: Automated evaluation of collaboration based on cohesion and dialogism. International Journal of Computer-Supported Collaborative Learning, 10(4), 395–423.
Dong, A. (2005). The latent semantic approach to studying design team communication. Design Studies, 26(5), 445–461.
Ducrot, O. (2001). Quelques raisons de distinguer ‘locuteurs’ et ‘énonciateurs’. Polyphonie—linguistique et littéraire. Documents de Travail, 3, 19–41.
Freire, P. (1968/2018). Pedagogy of the oppressed. London: Bloomsbury Publishing.
Gee, J. P. (2013). Discourse vs. discourse. In The encyclopedia of applied linguistics. Hoboken, NJ: Wiley-Blackwell.
Graesser, A. C., Cai, Z., Morgan, B., & Wang, L. (2017). Assessment with computer agents that engage in conversational dialogues and trialogues with learners. Computers in Human Behavior, 76, 607–616.
Gweon, G., Jain, M., McDonough, J., Raj, B., & Rosé, C. (2013). Measuring prevalence of other-oriented transactive contributions using an automated measure of speech-style accommodation. International Journal of Computer-Supported Collaborative Learning, 8(2), 245–265.
Holmer, T., Kienle, A., & Wessner, M. (2006). Explicit referencing in learning chats: Needs and acceptance. In W. Nejdl & K. Tochtermann (Eds.), First European Conference on Technology Enhanced Learning, EC-TEL 2006 (pp. 170–184). Crete, Greece: Springer.
Holquist, M. (2002). Dialogism: Bakhtin and his world (2nd ed.). London: Routledge.
Howe, C., & Abedin, M. (2013).
Classroom dialogue: A systematic review across four decades of research. Cambridge Journal of Education, 43(3), 325–356.
Järvelä, S., Malmberg, J., Sobocinski, M., & Kirschner, P. A. (this volume). Metacognition in collaborative learning. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Kali, Y., & Hoadley, C. (this volume). Design-based research methods in CSCL: Calibrating our epistemologies and ontologies. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Kazak, S., Wegerif, R., & Fujita, T. (2015). The importance of dialogic processes to conceptual development in mathematics. Educational Studies in Mathematics, 90(2), 105–120.
Kershner, R., Hennessy, S., Wegerif, R., & Ahmed, A. (2020). Research methods for educational dialogue. London: Bloomsbury Publishing.
Kimmerle, J., Fischer, F., & Cress, U. (this volume). Argumentation and knowledge construction. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Kolinski, M. (1962). Consonance and dissonance. Ethnomusicology, 6(2), 66–74.
Koschmann, T. (Ed.). (1996). CSCL: Theory and practice of an emerging paradigm. Mahwah, NJ: Lawrence Erlbaum Associates.
Koschmann, T. (1999). Toward a dialogic theory of learning: Bakhtin’s contribution to learning in settings of collaboration. In Proceedings of the Computer Supported Collaborative Learning Conference (pp. 308–313). Palo Alto, CA.
Kumar, R., & Rosé, C. P. (2011). Architecture for building conversational agents that support collaborative learning. IEEE Transactions on Learning Technologies, 4(1), 21–34.
Lambirth, A., Bruce, T., Clough, P., Nutbrown, C., & David, T. (2016). Dialogic space theory. In The Routledge international handbook of philosophies and theories of early childhood education and care (pp. 165–175). London: Routledge.
Levitin, D. (2006). This is your brain on music: The science of a human obsession. London: Penguin Books.
Lévy, P., & Bononno, R. (1997). Collective intelligence: Mankind’s emerging world in cyberspace. New York: Perseus Books.
Ligorio, M. B., Loperfido, F. F., & Sansone, N. (2013). Dialogical positions as a method of understanding identity trajectories in a collaborative blended university course. International Journal of Computer-Supported Collaborative Learning, 8(3), 351–367.
Ligorio, M. B., & Ritella, G. (2010). The collaborative construction of chronotopes during computer-supported collaborative professional tasks. International Journal of Computer-Supported Collaborative Learning, 5(4), 433–452.
Linell, P. (2009). Rethinking language, mind, and world dialogically. Charlotte, NC: Information Age Publishing.
Major, L., Warwick, P., Rasmussen, I., Ludvigsen, S., & Cook, V. (2018). Classroom dialogue and digital technologies: A scoping review. Education and Information Technologies, 23(5), 1995–2028. https://doi.org/10.1007/s10639-018-9701-y.
Markova, I. (2003). Dialogicality and social representations: The dynamics of mind. Cambridge: Cambridge University Press.
Matusov, E. (2007). Applying Bakhtin scholarship on discourse in education: A critical review essay. Educational Theory, 57(2), 215–237.
Mead, G. H. (1934/1962). Mind, self and society. Chicago: University of Chicago Press.
Mercer, N., Hennessy, S., & Warwick, P. T. (2017). Dialogue, thinking together and digital technology in the classroom: Some educational implications of a continuing line of inquiry. International Journal of Educational Research, 97, 187–199. https://doi.org/10.1016/j.ijer.2017.08.007.
Nølke, H. (2017). Linguistic polyphony: The Scandinavian approach: ScaPoLine (Studies in Pragmatics, 16). Leiden: Brill.
Oakeshott, M. (1959). The voice of poetry in the conversation of mankind: An essay. Cambridge: Bowes & Bowes.
Paavola, S., & Hakkarainen, K. (this volume).
Trialogical learning and object-oriented collaboration. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Pesic, P. (2017). Polyphonic minds: Music of the hemispheres. Cambridge, MA: MIT Press.
Pifarré, M., & Kleine Staarman, J. (2011). Wiki-supported collaborative learning in primary education: How a dialogic space is created for thinking together. International Journal of Computer-Supported Collaborative Learning, 6(2), 187–205.
Rommetveit, R. (1992). Outlines of a dialogically based social-cognitive approach to human cognition and communication. In The dialogical alternative: Towards a theory of language and mind (pp. 19–44). Oslo: Scandinavian University Press.
Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In Computer supported collaborative learning (pp. 69–97). Berlin, Heidelberg: Springer.
Rosé, C. P., Wang, Y. C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., & Fischer, F. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 3(3), 237–271.
Rus, V., D’Mello, S., Hu, X., & Graesser, A. (2013). Recent advances in conversational intelligent tutoring systems. AI Magazine, 34(3), 42–54.
Sacks, O. (2007). Musicophilia: Tales of music and the brain. New York, NY: Vintage Books.
Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Sen, A. (2005). The argumentative Indian: Writings on Indian history, culture and identity. New York: Farrar, Straus & Giroux.
Sfard, A. (2007). When the rules of discourse change, but nobody tells you: Making sense of mathematics learning from a commognitive standpoint. Journal of the Learning Sciences, 16(4), 567–615.
Shotter, J. (1995). Dialogical psychology. In J. A. Smith, R. Harre, & L. van Langenhove (Eds.), Rethinking psychology (pp. 160–178). London: Sage.
Sidorkin, A. M. (1999). Beyond discourse: Education, the self and dialogue. New York: State University of New York Press.
Slakmon, B., & Schwarz, B. B. (2014). Disengaged students and dialogic learning: The role of CSCL affordances. International Journal of Computer-Supported Collaborative Learning, 9(2), 157–183.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press.
Stahl, G. (2009). Studying virtual math teams. New York, NY: Springer.
Stahl, G., Cress, U., Ludvigsen, S., & Law, N. (2014). Dialogic foundations of CSCL. International Journal of Computer-Supported Collaborative Learning, 9(2), 117–125.
Suthers, D., & Desiato, C. (2012). Exposing chat features through analysis of uptake between contributions. In 45th Hawaii International Conference on System Sciences (pp. 3368–3377). Maui, HI: IEEE.
Tannen, D. (2007). Talking voices: Repetition, dialogue, and imagery in conversational discourse (2nd ed.). Cambridge, UK: Cambridge University Press.
Tegos, S., Demetriadis, S., Papadopoulos, P. M., & Weinberger, A. (2016). Conversational agents for academically productive talk: A comparison of directed and undirected agent interventions. International Journal of Computer-Supported Collaborative Learning, 11(4), 417–440.
Trausan-Matu, S. (2010). The polyphonic model of hybrid and collaborative learning. In F. Wang, L. J. Fong, & R. C. Kwan (Eds.), Handbook of research on hybrid learning models: Advanced tools, technologies, and applications (pp. 466–486). Hershey, PA: Information Science Publishing.
Trausan-Matu, S. (2013).
Collaborative and differential utterances, pivotal moments, and polyphony. In D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions. Vol. 15, Computer-supported collaborative learning series (pp. 123–139). New York, NY: Springer.
Trausan-Matu, S., Dascalu, M., & Rebedea, T. (2014). PolyCAFe—automatic support for the polyphonic analysis of CSCL chats. International Journal of Computer-Supported Collaborative Learning, 9(2), 127–156.
Voloshinov, V. N. (1973). Marxism and the philosophy of language. London: Seminar Press.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: MIT Press.
Vygotsky, L. S. (1986). Thought and language. Cambridge, MA: MIT Press.
Wegerif, R. (2006). A dialogic understanding of the relationship between CSCL and teaching thinking skills. International Journal of Computer-Supported Collaborative Learning, 1(1), 143–157.
Wegerif, R. (2007). Dialogic education and technology: Expanding the space of learning (Vol. 7). Berlin: Springer Science & Business Media.
Wegerif, R. (2013). Dialogic: Education for the internet age. London: Routledge.
Wegerif, R. (2017). Introduction. Education, technology and democracy: Can internet-mediated education prepare the ground for a future global democracy? Civitas Educationis. Education, Politics, and Culture, 6(1), 17–35.
Wegerif, R. (2019). Dialogic education. In Oxford encyclopedia of research on education. Oxford: Oxford University Press.
Wegerif, R., Doney, J., Richards, A., Mansour, N., Larkin, S., & Jamison, I. (2017). Exploring the ontological dimension of dialogic education through an evaluation of the impact of internet-mediated dialogue across cultural difference. Learning, Culture and Social Interaction, 20, 80–89.
238
S. Trausan-Matu et al.
Wegerif, R., & Major, L. (2018). Buber, educational technology, and the expansion of dialogic space. AI & Society, 34, 109–119.
Wertsch, J. (1993). Voices of the mind: A sociocultural approach to mediated action. Cambridge, MA: Harvard University Press.
Yang, Y., Wegerif, R., Dragon, T., Mavrikis, M., & McLaren, B. (2013). Learning how to learn together (L2L2): Developing tools to support an essential complex competence for the Internet age. In Proceedings of the Computer Supported Collaborative Learning Conference.
Zemel, A., Xhafa, F., & Çakir, M. P. (2009). Combining coding and conversation analysis of VMT chats. In G. Stahl (Ed.), Studying virtual math teams. Computer-supported collaborative learning series (Vol. 11). Boston, MA: Springer.
Further Readings

Koschmann, T. (1999). Toward a dialogic theory of learning: Bakhtin’s contribution to understanding learning in settings of collaboration. In Proceedings of the 1999 conference on computer support for collaborative learning (p. 38). Indiana: International Society of the Learning Sciences.
The author proposes the dialogic theory of M. M. Bakhtin as a theoretical framework for computer-supported collaborative learning. Multivocality, polyphony, heteroglossia, and intertextuality, fundamental concepts of dialogism introduced by Bakhtin, are presented and discussed as a basis for considering collaborative learning as essentially based on dialog, seen as a third metaphor, learning by transaction, in addition to learning as acquisition and learning as participation.

Major, L., Warwick, P., Rasmussen, I., Ludvigsen, S., & Cook, V. (2018). Classroom dialogue and digital technologies: A scoping review. Education and Information Technologies, 23(5), 1995–2028.
A scoping review of the literature on the use of technology to support classroom dialogue. It identifies 72 studies published since 2000 across 18 countries, including both small-scale and larger-scale analyses. Three overarching themes are identified, each consisting of a number of sub-themes. The review provides a useful framing device for reviewing new developments relating to the analysis of dialogue and technology.

Stahl, G., Cress, U., Ludvigsen, S., & Law, N. (2014). Dialogic foundations of CSCL. International Journal of Computer-Supported Collaborative Learning, 9(2), 117–125.
The paper introduces a special issue of the International Journal of Computer-Supported Collaborative Learning. It considers the dialogical perspective as an important theoretical framework for CSCL and presents the roots and influences of this approach (the ideas of Vygotsky, Bakhtin, Dewey, and Mead).
The four papers in the special issue are discussed and classified into two categories: the first two papers consider the group and the interactions among participants as the subjects of analysis, and the next two focus on individual opinions, actions, and behaviours.

Trausan-Matu, S. (2010). The polyphonic model of hybrid and collaborative learning. In F. Wang, L. J. Fong, & R. C. Kwan (Eds.), Handbook of research on hybrid learning models: Advanced tools, technologies, and applications (pp. 466–486). Hershey, PA: Information Science Publishing.
The paper presents in detail the polyphonic model of discourse, with emphasis on CSCL conversations and with consideration of blended (hybrid) learning as well. It discusses in detail basic concepts of the model, such as utterances, voices, interanimation, and polyphony. A classification and examples of interanimation patterns are provided. Several visualisations of the interactions in CSCL chats, generated by an implemented computer application, illustrate how elements of the polyphonic weaving may be analysed.

Wegerif, R. (2007). Dialogic education and technology: Expanding the space of learning (Vol. 7). Berlin: Springer Science & Business Media.
The program of research reported in this book reveals key characteristics of learning dialogues and demonstrates ways in which computers and networks can deepen, enrich and expand such dialogues. It develops a dialogic perspective by
drawing upon work in communications theory, psychology, computer science and philosophy. This perspective foregrounds the creative space opened up by authentic dialogues. The central argument of the book is that there is a convergence between this dialogic perspective in education and the affordances of new information and communications technology.
Trialogical Learning and Object-Oriented Collaboration

Sami Paavola and Kai Hakkarainen
Abstract This chapter delineates different approaches to technology-mediated learning that emphasize “object-oriented” collaboration. The chapter introduces, more specifically, trialogical learning, as distinguished from individual knowledge acquisition (“monological”) or from participation in social interaction and meaning making (“dialogical” approaches, see Trausan-Matu, Wegerif, & Major, this volume). We briefly introduce object-oriented collaboration and the trialogical approach, in which human learning and activity are targeted at jointly developed knowledge artifacts and related knowledge practices. As objects and object-orientedness have become centrally important for understanding collaboration in modern knowledge work, the facilitation of trialogical processes of collaborative learning is crucial in educational contexts. Several approaches focusing on object-oriented collaboration are analyzed, including those that use different terminology. The trialogical approaches appear to form a continuum with the dialogical theories and meaning-making traditions often highlighted in CSCL research. Finally, we anticipate future uses of trialogical learning and object-oriented collaboration.

Keywords Object-oriented collaboration · Trialogical learning · Knowledge-creation metaphor · Knowledge practices
S. Paavola (*) · K. Hakkarainen
Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
e-mail: sami.paavola@helsinki.fi; kai.hakkarainen@helsinki.fi

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_13

1 Definitions and Scope

The significance of artifacts and objects as central elements in human interaction and collaborative activity is nowadays highlighted in relation to learning theories (Paavola & Hakkarainen, 2009), design research (Ewenstein & Whyte, 2009), maker-centered learning (Papert & Harel, 1991; Riikonen, Seitamaa-Hakkarainen, & Hakkarainen, 2018), science and technology studies (Latour, 1996), and
organization studies (Engeström & Blackler, 2005). Artifact mediation plays an important role in several theoretical frameworks, such as distributed cognition (Pea, 1993), knowledge building with conceptual artifacts (Bereiter, 2002), cultural–historical activity theory (Engeström, 2015), actor-network theory focusing on the heterogeneous networks of humans and nonhumans (Latour, 1996), and recent discussions on sociomateriality (Leonardi, Nardi, & Kallinikos, 2012; Orlikowski, 2009). Sociomateriality is a posthumanist framework maintaining that object-oriented technology-mediated activity entangles emergent configurations between material (technology) and social (technology use in social contexts) elements. Sociocultural approaches have long emphasized that tools and concepts mediate human interaction with the world and between humans (Miettinen & Paavola, 2018). The central role of artifacts and objects has been emphasized in technology-mediated collaborative learning from the very beginning, starting with Papert’s constructionism (learning by making) (Papert & Harel, 1991) and moving through learning by design (Kolodner, 2002). Object-oriented technology-mediated collaboration goes a step further beyond mere intersubjectivity (human–human interaction) to consider the role of artifacts and objects in human learning and development. Objects and artifacts do not solely have a mediating role, but (knowledge) objects, artifacts, and practices can themselves be seen as targets for collaborative development and modification in educational and professional contexts. The sustained pursuit of the advancement of jointly developed artifacts and practices not only orients collaboration but at the same time requires new ways of organizing collaborative learning and working. Object-driven activities then require new theoretical conceptualizations of different aspects of collaborative learning. Trialogical learning builds on the idea of object-orientedness.
Rather than being a mere intersubjective process, collaboration is embedded in heterogeneous networks of artifacts and objects, characterized by Latour (1996) as interobjectivity (see Medina & Stahl, this volume). Trialogical learning refers to those forms of collaborative learning in which people collaboratively and systematically develop shared, tangible “objects” (conceptual or material artifacts, practices, ideas). The term has been developed in relation to technology-mediated learning, as new technology makes collaborative knowledge processes durable and provides new affordances and means for trialogical efforts of creating knowledge-laden artifacts and related practices. It is not a specific pedagogical model but rather a framework to facilitate, support, and develop object-oriented collaboration and knowledge creation in different contexts. The terms “trialogical learning” and “trialogical approach” are neologisms that were originally presented as a main characteristic of the theories representing the knowledge-creation metaphor of learning (Paavola & Hakkarainen, 2005). The knowledge-creation metaphor of learning is distinguished from the metaphors of acquisition (the more or less straightforward transmission or acquisition of conceptual or factual knowledge from textbooks or from a teacher to learners) and participation (growing up with the prevailing practices of a specific community) (Paavola, Lipponen, & Hakkarainen, 2004). The well-known distinction between the acquisition (AM) and the participation metaphors (PM) of learning was suggested by Anna
Sfard (1998). At that time, theories on the meaning of participation in cultural practices and communities (PM) were challenging traditional “cognitivist” approaches (AM) to learning and human cognition. The cognitivist approaches defined individuals as the site of learning and the human mind as a container of factual and/or conceptual knowledge. The knowledge-creation metaphor is inspired by classic theorists such as Peirce, Popper, and Vygotsky and by the educational and organizational theories of Engeström, Nonaka, and Bereiter. It refers to theories of collaborative work and learning which emphasize dynamic processes for transforming prevailing knowledge and practices. The knowledge-creation metaphor also highlights the importance of cultural practices along with individual initiatives in learning, and ways of transforming traditional dichotomies of human activity and learning (concepts vs. practices, individuals vs. collectives, subjects vs. objects, humans vs. nonhumans). We have ourselves long argued that CSCL would benefit from focusing on theories and models relevant to the modern knowledge society that aim at understanding how people collaboratively advance knowledge or transform their communities (Paavola et al., 2004). A common characteristic of these theories is that they concentrate neither on processes of knowledge acquisition by individual learners (a “monological” approach) nor just on processes of participation in social interaction (a “dialogical” approach), but on understanding those processes where common objects of activity are developed collaboratively (both individually and collectively). Interaction is trialogical when it happens through developing common objects (or objects of activity), not just between people or between people and their environment.
From the trialogical perspective, collaboration is not only a matter of sharing meaning and understanding but involves shared efforts of advancing envisioned epistemic objects (e.g., artifacts and practices) that are given tangible (i.e., materially embodied) form through writing, visualization, prototyping, or other means. Concrete artifacts produced in collaborative processes are often generated when seeking to reach envisioned epistemic objects (Knorr Cetina, 2001) at the edge of knowing. Intermediary artifacts are stepping stones that enable advancing the inquiry toward epistemic objects that themselves become more complex and open up new questions when pursued. Although collaboration is a social process, individual agents have an important role in it, assuming fertile support provided by community-level practices. The role of mediating artifacts as tools has been emphasized in CSCL research throughout its history. Vygotsky’s cultural–historical theory highlighting the role of artifacts in cultural mediation has influenced CSCL in many ways. Activity theory assists in understanding the dual role of artifacts and tools in relation to objects (Engeström, 2015; Miettinen & Virkkunen, 2005). CSCL environments function as tools that mediate participants’ learning and knowledge-creation processes in various institutional and cultural settings. The participants are engaged in solving complex problems, building and creating, and sharing and advancing knowledge in terms of constructing epistemic artifacts. A growing network of such artifacts mediates, as tools, the learners’ subsequent learning activities and associated inquiries. It can be maintained that digital tools have long supported mainly either “the information genre” or “the communication genre” (Enyedy & Hoadley, 2006; Lakkala et al., 2009) rather than communal knowledge creation, or trialogical
processes and actual work with shared objects. The pursuit of object-oriented collaboration is a socially emergent and nonlinear process where the generated artifacts direct and guide the advancement of inquiry in unpredictable ways, affecting future trajectories of collaborative activity. Integrating external tools as instruments of learning activity is a developmental process of its own. Both individuals and communities must undergo a process of gradually transforming artifacts into instruments of their activity (Béguin & Rabardel, 2000). Appropriating CSCL tools for remediating learning activity requires adapting and transforming both the external tools (instrumentalization) and the participants’ cognitive–cultural schemas (instrumentation). Because CSCL environments tend to offer a wide variety of tools, it is critical to investigate “which tools are actually picked up and appropriated by learners and how they put them to use for object-oriented endeavors” (Lund & Rasmussen, 2008). Appropriating novel collaborative instruments as tools of collective activity in education or at workplaces is not a trivial matter but requires extensive effort to develop support for social practices. Accordingly, using CSCL environments to foster object-oriented collaboration with knowledge artifacts requires cultivating social practices that support expansive working with knowledge, i.e., knowledge practices (Hakkarainen, 2009). By knowledge practices, we refer to the personal and social practices related to working with knowledge.
The term “knowledge” is used in the broadest sense, to include what is explicit or stated in official discourse (e.g., approved texts); what is implicit, informing one’s habits; and the knowledge that underlies the competencies of experts, for example, “procedural knowledge.” Central characteristics of object-oriented knowledge practices are the deliberate transformation of prevailing practices in relation to unfolding knowledge objects (Knorr Cetina, 2001), the systematic pursuit of novelty, and constant work at the edge of competence (Bereiter & Scardamalia, 1993). Trialogical learning is then linked to “the practice turn” in social theory (Schatzki, Knorr Cetina, & Von Savigny, 2001), which helps educational researchers theorize the tangible social transformations needed to make educational innovations happen.
2 History and Development

The background for trialogical learning and object-oriented collaboration can be traced to various theoretical approaches. They build on classic approaches emphasizing mediation as a basis for understanding human activities, from Hegel and Marx to Vygotsky and subsequent generations of sociocultural researchers. They emphasize “augmentationist” frameworks, according to which human intelligence is augmented and develops through the evolution of external symbolic artifacts and their systems rather than within a human head (Donald, 1991; Skagestad, 1993). Charles Peirce’s emphasis on broadly conceived semiotic mediation and Karl Popper’s theory of objective knowledge provide the philosophical background for this kind of approach. Human beings can control their behavior from
the outside, that is, culturally by using signs and tools (Vygotsky, 1978). As an extension of this, Wartofsky maintained in his historical epistemology that an “[a]rtifact is to cultural evolution what the gene is to biological evolution” (Wartofsky, 1979, p. 205). More recently, distributed cognition (Hutchins, 1995; Pea, 1993) has challenged the idea that human learning takes place mainly in people’s minds or inside their skin, maintaining that learning is materially distributed between minds and cultural–historically developed tools, practices, and environments. The emergence of literacy transformed human cognitive architecture as profoundly as earlier leaps in biological evolution. It opened various external memory fields for writing and visualization that assist in solving significantly more complex problems than can be done with the unaided human mind (Donald, 1991). The other aspect is the social distribution of intelligence. Human beings are by nature ultra-social and hyper-collaborative, able to merge and fuse intellectual efforts and create collective cognitive systems together. CSCL capitalizes on these human capabilities of materially, socially, historically, and culturally distributed cognition (Pea, 1993; Vygotsky, 1978). In the social sciences and organizational learning, the “practice turn” has been discussed for some time (Schatzki et al., 2001). There are different kinds of practice theories (see Miettinen, Paavola, & Pohjola, 2012), but mainly they aim at transcending traditional dichotomies of human and nonhuman entities by emphasizing materially mediated and/or embodied activities. The initial theories of CSCL were strongly rooted in cognitive–psychological learning research and, accordingly, foregrounded conceptual and mental aspects of learning (e.g., research on intentional learning, inquiry learning, and conceptual change).
Although such research assisted in understanding various aspects of personal learning and the development of expertise, it ran into difficulties when attempting to implement CSCL practices in educational institutions. This difficulty arose because the notion of social practices was undertheorized by most of the cognitively inclined researchers. Yet, implementing CSCL in educational practices calls for transforming technology-mediated social systems and institutional practices that were invisible to researchers of information and communication technologies. Although information was conveyed and communication was established, the established systemic practices of learners and teachers did not tend to change beyond a superficial level. Practice-related considerations regarding situated and participatory aspects of learning (Lave & Wenger, 1991) assisted the theoretical and methodological advancement of CSCL research. As important as the practice turn and situated learning theories have been, such approaches run into difficulties in properly addressing knowledge-related aspects of learning in educational and professional contexts. Digital technologies make knowledge processes a durable and visible part of personal and collaborative learning (Nerland, 2012). Emerging CSCL practices, as well as CSCW practices, engage participants in sustained efforts of building, sharing, and creating knowledge. The trialogical approach foregrounds sustained work with epistemic artifacts and practices as a central aspect of collaborative learning and knowledge-intensive work. Consequently, Jensen, Lahn, and Nerland (2012) address future challenges of
knowledge work and state that we need a “knowledge turn” after the practice turn. Knowledge must again return to considerations of professional learning so as to adequately address the epistemification or scientification of digitalized professional practices currently dealing with increasingly complex problems, multi-professional collaboration, international quality systems, and global standards (Hakkarainen, Palonen, Paavola, & Lehtinen, 2004). Following Knorr Cetina (1999), the trialogical framework understands knowledge as a practice to be socio-materially embodied in digital collaborative technologies and the practices of groups, communities, organizations, and networks. The practice-based considerations of knowledge work are critical because the objects of professional work are becoming increasingly complex and messy and require advanced problem-solving, expansive learning, and the creation of new knowledge, anchored in collectively shared knowledge practices. The notion of objects has become, in different forms, more prevalent in current social scientific research (Ewenstein & Whyte, 2009). Objects have intrigued, among others, philosophers (Harman, 2018), science and technology researchers (Knorr Cetina, 1999; Latour, 2005), organizational researchers (Engeström & Blackler, 2005), engineering and architectural design researchers (Ewenstein & Whyte, 2009; Paavola & Miettinen, 2018), and CSCL researchers (Bereiter, 2002). The notion of “boundary objects,” which allow collaboration across different communities without consensus, is nowadays widely used (Star & Griesemer, 1989). “Epistemic objects” are the partially understood, open-ended objects of research and development pursued at the edge of knowledge and understanding; they guide the inquiry while becoming constantly more complex as they are studied (Knorr Cetina, 1999, 2001).
“Intermediary objects” refer to evolving versions and different phases of the objects to be constructed (Paavola & Miettinen, 2018; Vinck, 2011). As stated above, objects are not important solely as mediating artifacts. Object-orientedness highlights the dynamic and motivating role of objects in collaboration. Activity theory has highlighted that the object of activity indicates the motive of each activity (Engeström, 2015); “the object of an activity is its true motive” (Leontjev, 1978, p. 62). Investigators can make “sense” of an activity by analyzing the nature of the objects pursued. In an educational context, for example, it makes a difference whether learning assignments are done as mere “schoolwork” or oriented toward solving vital community problems, contributing to society, and creating something that is useful for others. Modern knowledge work is more complex and interdisciplinary than before, and the expansion of the object poses theoretical and methodological challenges for activity theory (Spinuzzi, 2011). Spinuzzi has highlighted that changes in work activity mean that objects are nowadays often representational objects (such as models, applications, plans) and more multidimensional than before: “more broadly circulated, shared, and interpreted in different activities” (Spinuzzi, 2011, p. 463). Digitalization has reshaped many aspects of object-oriented learning and working in terms of digital instruments, which enable constructing the digitally augmented objects being inquired about, representing and sharing objects, engaging in real-time virtual interaction around
artifacts in-the-making, intermixing the digital and material features of objects, and building extended networks for the pursuit of inquiry and knowledge creation. At the same time, the modern world challenges traditional ways of understanding the nature of practices and objects as a part of knowledge work. Creative knowledge work requires a dynamic, creative, and reflective notion of practices instead of standard procedures, recurrent processes, and rule-based routines. Knorr Cetina (2001) has highlighted the motivating force of “epistemic objects” in such expert work as the pursuit of innovations or academic research. The targeted epistemic objects are open-ended and “have the capacity to unfold indefinitely” (Knorr Cetina, 2001, p. 181). The pedagogic models being investigated by learning scientists may, accordingly, be understood as epistemic objects. Epistemic objects function as motivating factors in learning and social activity, and direct and guide personal and collaborative activity both in education and in professional work (Jensen et al., 2012; Nerland & Jensen, 2012; also Hakkarainen et al., 2004). A central aspect of CSCL has been to engage students in a research-like, progressive inquiry process guided by their own questions, working theories, and other knowledge objects (Hakkarainen et al., 2004). Following the dominant tradition in the philosophy of science, progressive inquiry was initially understood to be mainly a conceptual process (influenced by theories close to the acquisition metaphor of learning). In accordance with Bereiter’s (2002) knowledge-building theory, the objects pursued in collaborative learning were understood as conceptual artifacts. Sustained efforts of implementing CSCL in education made us aware of the importance of embedding epistemic activities into deliberately cultivated social practices.
The parallel pursuit of investigations in knowledge-intensive organizations has assisted in understanding how expert knowledge is distributed and stretched over concepts and instruments, methods and procedures, embodied arrangements of laboratory spaces, and networks of peers and experts (Knorr Cetina, 1999; Latour & Woolgar, 1986; Pickering, 1995), instead of arising from the mere rational process of individual minds. Knowledge creation and invention appear to rely on collectively cultivated epistemic practices that guide and channel the participants’ intellectual efforts in creative and expansive ways, instead of representing mysterious individual gifts or creative talents. The trialogical framework emerged from associated efforts of expanding Bereiter’s (2002) knowledge-building approach in a practice-based direction by taking epistemic, social, and material practices into account (Hakkarainen, 2009). Collaborative knowledge advancement presupposes the transformation of related knowledge practices, which requires time and sustained efforts from teachers, students, and researchers. A set of design principles has been developed to support trialogical knowledge practices (Hakkarainen & Paavola, 2009; Paavola, Lakkala, Muukkonen, Kosonen, & Karlgren, 2011). They have a dual nature: (1) they point out characteristics that can be called “trialogical” and (2) they give broad guidelines for enhancing the trialogical features of the learning settings in question. The set of design principles (DPs) of trialogical learning was developed as:
DP1: Organizing activities around advancing shared objects.
DP2: Supporting the integration of personal and collective agency and work (through developing shared objects).
DP3: Fostering long-term processes of knowledge advancement with shared objects, whether artifacts or practices.
DP4: Emphasizing development and creativity in shared objects through transformations and reflection.
DP5: Promoting the cross-fertilization of various knowledge practices and artifacts across communities and institutions.
DP6: Providing flexible tools for developing artifacts and practices.

CSCL research has traditionally emphasized interactional and dialogic theories of learning and human cognition and shared meaning making (Koschmann, 1999; Stahl, Koschmann, & Suthers, 2006; Suthers, 2006). The role of artifacts and objects in CSCL research has, however, recently become clearly more prominent (see e.g., Ludvigsen, Stahl, Law, & Cress, 2015; Stahl, Ludvigsen, Law, & Cress, 2014). The meaning-making and dialogical traditions highlight issues such as meaning, intersubjectivity, dialogues, language, communication, different voices and perspectives, and common ground, whereas trialogical approaches highlight jointly produced artifacts, (shared or intermediary) objects, collaboration, tools, indexicality, and (knowledge) practices. There is no sharp contrast but rather a continuum between meaning-making approaches on the one hand and trialogical approaches with object-orientedness on the other (see Paavola & Hakkarainen, 2009). Further, intermediate forms exist between dialogical and trialogical interaction, such as “anchored discussion” or “object-oriented discussion,” where the discussion is focused on a specific theme or on parts of a document. All in all, dialogical and trialogical approaches bring forth different kinds of emphases. Trialogical learning focuses on collaborative processes of developing artifacts and on deliberate efforts to transform prevailing practices.
It typically focuses on the activities of small groups and the organization of their collaboration (see the chapter concerning group practices). However, object-orientedness brings forward the need to broaden the perspective and the unit of analysis. Work with knowledge artifacts requires sustained efforts of developing related social practices (Hakkarainen, 2009). By relying on Bakhtin’s theory of chronotopes, it may be maintained that object-orientedness has certain temporal and spatial implications (Ritella & Hakkarainen, 2012). The temporal structure of activity is transformed by changing participants’ intangible ideas into shared epistemic artifacts and, thereby, bringing the results of past inquiries to the present. The spatial transformation involves (a) sharing object-driven inquiries regardless of location and making remote knowledge resources immediately accessible and (b) working with objects through qualitatively different semiotic spaces organized in multiple ways. These spatial and temporal processes may be fused to create a novel chronotope of technology-mediated collaborative learning. Further, cultural–historical activity theory has been used in CSCL research to highlight the complex social, cultural, and historical dynamics that influence CSCL practices (Timmis, 2014).
Trialogical Learning and Object-Oriented Collaboration
3 State of the Art

The trialogical approach is not a specific pedagogic model but rather a metalevel framework for identifying, examining, and fostering learning in line with the knowledge-creation metaphor of learning, going beyond mere individual knowledge acquisition or social participation. This kind of learning is by its nature interventionist and transformative: it aims at transforming current practices of learning by taking into account, and developing, theories, pedagogical practices, and technologies in line with object-oriented collaboration. It can be argued that human learning and productive cultural activity are inherently trialogical in nature, in that they involve collaborative efforts to create and extend shared objects of activity. At the same time, trialogical learning requires conscious effort and deliberate ways of organizing collaboration to be successful. The trialogical approach has been and can be developed in various directions depending on the context and methodological choices; in this chapter, we present some of these directions. Trialogical learning and object-oriented collaboration are quite broad notions (as is “dialogical learning”) which can be interpreted in many ways. An important starting point for the trialogical framework has been learning taking place in educational institutions. With the assistance of the trialogical design principles, the focus has been on ways of promoting knowledge-creating learning from elementary to higher education. Elements of trialogical learning can be found in many pedagogic models, such as problem-based learning, project-based learning, flipped learning, game-based learning, learning by design, and learning by making. The emphasis is, however, on supporting certain kinds of learning processes. The trialogical approach highlights sustained efforts to pursue ambitious epistemic objectives by modifying and developing tangible outcomes.
Pedagogic models often highlight practical aspects of scaffolding learning processes, whereas the trialogical approach focuses on combining practical support with ambitious epistemic and creative efforts. The trialogical design principles help make the pedagogic implementation of these approaches more “trialogical” in terms of involving systemic efforts to create and develop shared artifacts and to cultivate associated knowledge practices. In different contexts, the design principles are implemented in varying ways. Bereiter and Scardamalia’s knowledge-building approach has highlighted collaboration with conceptual artifacts (Bereiter, 2002; Scardamalia & Bereiter, 2014b). Some newer approaches have developed knowledge building in connection with practice-based research and especially with cultural-historical activity theory (Zhang et al., 2018). This kind of “knowledge creation” approach (Tan, So, & Yeo, 2014) provides one kind of trialogical framework in which work with conceptual artifacts is central. Several researchers have analyzed collaborative knowledge-creation processes and ways of promoting them in different educational contexts. Damşa and Ludvigsen (2016) have analyzed the co-construction of knowledge objects, elaborating the variety of interactions around concepts, ideas, and knowledge
S. Paavola and K. Hakkarainen
objects, some of which materialize into texts or related knowledge objects. Muukkonen, Lakkala, and Paavola (2011) have studied knowledge creation in university courses and analyzed the pedagogical infrastructures promoting object-oriented inquiry. The trialogical design principles have been used to analyze and develop trialogical learning, especially in the context of higher education (Moen, Morch, & Paavola, 2012). Teachers’ ways of organizing and redesigning courses using the trialogical design principles have been studied in different educational contexts (see Ilomäki, Lakkala, Toom, & Muukkonen, 2017; Lakkala, Toom, Ilomäki, & Muukkonen, 2015). Students’ competencies required in knowledge work have been analyzed using the knowledge-creation metaphor of learning, with work on shared objects as a basic framework (Muukkonen, Lakkala, Toom, & Ilomäki, 2017). Cress and Kimmerle (2008) have analyzed “collaborative knowledge building” around Wikipedia articles, developing a model based on Luhmann’s systems theory and Piaget’s cognitive theory to analyze such processes. Wikipedia is an interesting case because its articles are usually produced not just through knowledge sharing but through collaborative modification of the articles themselves. Cress and Kimmerle do not use the term “trialogical learning” in their article, but the processes are quite similar (see also Cress & Kimmerle, 2018). It should be remarked, however, that some trialogical processes are epistemically more ambitious than others. Bereiter (2010) has maintained that the stated Wikipedia aim of reaching consensus and presenting impartial views is different from the disciplined pursuit of idea improvement that characterizes knowledge building.
The rationale for having academic investigators take an active part in designing, assessing, and improving trialogical learning processes at educational institutions is to provide students and teachers with access to the knowledge practices of research and creative knowledge work, thereby cross-fertilizing educational and creative professional knowledge practices. The trialogical approach aims at anchoring knowledge-creating learning experiments at schools in expert-like practices of co-inquiry and co-design, capitalizing on research on expertise as well as on the newest practice-based standards of science education (Duschl & Bismack, 2016; Osborne, 2014). The present knowledge-practice-driven approach relies on pedagogic applications of three mutually supporting lines of knowledge-creating activity: (1) scientific practices, (2) engineering practices, and (3) practices of learning through collaborative design.
3.1 Inquiry Learning and Engineering Practices
Scientific practices engage students in working with epistemic objects: posing questions, generating working hypotheses, carrying out experiments, analyzing results, visualizing and modeling them, presenting evidence-based arguments, and reporting (Osborne, 2014). The knowledge-creation potential of such practices, even at the elementary level of education, has been revealed through various studies on inquiry learning, investigative learning, and knowledge building. Engineering practices, in
turn, focus on applying scientific knowledge to investigate complex, open-ended challenges as objects: envisioning potential solutions, determining criteria for them, constructing and iteratively testing solutions, modeling them, comparing their strengths and weaknesses, and building and communicating results (Krajcik & Shin, 2014). Scientific and engineering practices provide critical resources for understanding and integrating the knowledge and methods required for trialogical learning. Together they allow integrating inventive activities with cross-cutting curricular challenges across many subject domains.
3.2 Collaborative Design and Maker Culture
Collaborative designing involves a team effort to find and construct solutions to a design challenge (Kangas, Seitamaa-Hakkarainen, & Hakkarainen, 2013; Koh, Chai, Wong, & Hong, 2015; Seitamaa-Hakkarainen, Kangas, Raunio, & Hakkarainen, 2012). The design process involves iteratively and nonlinearly developing design objects through ideation (coming up with design ideas); studying users and their needs; analyzing constraints; exploring and testing various aspects of the design; creating mock-ups and prototypes; getting feedback from peers, users, and experts; and constructing and manufacturing the design object. Design objects are instantiated in a series of successively more refined artifacts and productions, which enable finding novel perspectives and going beyond the information given. Many investigators, from Piaget to Papert and Bruner, have emphasized the importance of learning by constructing and inventing artifacts. Present fabrication technology allows bringing the practices of the maker culture to schools in the form of feasible knowledge-creation projects of hitherto unforeseen complexity, intellectual challenge, and aesthetic appeal (Blikstein, 2013; Halverson & Sheridan, 2014; Kafai, Fields, & Searle, 2014). In Finland, compulsory craft studies and associated laboratory spaces enable integrating such activities into regular school work (Korhonen & Lavonen, 2017). Students use the traditional instruments of laboratories in art, craft, technology, and science education (STEAM). Further, students are being introduced to digital fabrication technologies, such as 3D CAD and 3D printing, the construction and programming of robots, the design and construction of circuits, and wearable computing (e-textiles), with which one may create multifaceted, complex artifacts. Such trialogical practices enable young children to construct complex, controllable artifacts with hybrid physical, digital, and virtual features (Riikonen et al., 2018).
Maker-centered knowledge practices entail participation in such modes of invention as generating questions, building working theories, solving complex problems, formulating and pursuing promising ideas, sketching, prototyping, and making. Learning by making involves interaction between ideas, instruments, socio-material spaces, and embodied experiences in creative externalization, material explorations, prototyping, and the reciprocal, continued refinement of intangible and tangible design ideas.
4 The Future

Research on object-oriented collaboration, trialogical learning, and knowledge creation has the potential to be developed in different educational, professional, and academic research contexts. Pursuing these lines of research is important because productive participation in the emerging knowledge society, which is oriented toward building a sustainable future, will require the cultivation of novel competencies and practices by all citizens. Instead of merely promoting intellectual elites, all citizens need sophisticated innovation competencies, related knowledge practices, and associated identities as potential developers and creators of knowledge. This requires an improved understanding of the cultural creative practices involved in learning, in gaining or deepening knowledge, and in organizing knowledge work. The different socio-cultural dimensions (technological, social, institutional) that affect how learning is organized and understood also need to be taken into account. Building a productive culture of object-oriented collaboration at educational institutions is a socially emergent process (Sawyer, 2005) that requires deliberate long-term orchestration (Viilo, Seitamaa-Hakkarainen, & Hakkarainen, 2016). Different time scales and layers of activity affect the development of collaborative learning (see Stahl, 2013). This requires building research–practice partnerships that engage both researchers and practitioners in solving persistent problems of practice related to school transformation, that is, working from a long-term developmental perspective starting from real-world practices and problems (Coburn & Penuel, 2016). The trialogical framework emerged from efforts to foster object-oriented collaborative learning and knowledge-creation processes through CSCL environments.
The research objects and practices have reflected the nature of the data that various kinds of environments (i.e., discussion forums, knowledge-building environments) provided. Novel digital fabrication technologies have enabled engaging students in co-inventing complex artifacts with hybrid physical and digital features. Students are also engaged in learning by creative game design, and corresponding co-design projects with virtual reality (VR) and augmented reality (AR) are not far in the future. All of these investigations involve potential trialogical features in terms of sustained collaborative object-driven activity. Collaborative work with digital objects involves “virtual materiality” or “digital materiality” with novel forms of tangibility (see Paavola & Miettinen, 2018), which changes the dynamics and ways of collaboration. Digital artifacts build on a “dubious ontology” (Ekbia, 2009) that challenges existing theories of human activity and learning in many ways. This requires novel conceptualizations of the role of virtual reality and hybrid practices as part of trialogical processes. Research on CSCL is mediated by the evolving ecology of socio-digital tools. From the perspective of expanding trialogical learning practices across educational fields, it is important that CSCL is no longer confined to monolithic learning environments. The emerging ecology of socio-digital technologies involves
integrated systems of mobile and wireless technologies, enabling any place to be potentially transformed into a learning space within and outside educational institutions (Hakkarainen, Hietajärvi, Alho, Lonka, & Salmela-Aro, 2015). There are thousands of applications that can, at least potentially, be used to facilitate object-oriented collaboration processes. With new cohorts of teachers and students comfortable with various socio-digital technologies, this provides new affordances for eliciting teachers’ and students’ epistemic agency. Without the guidance of educational theories and innovative pedagogies, however, such grassroots educational efforts may serve narrow rather than in-depth epistemic objectives. Trialogical learning aims at supporting sustained and focused object-driven inquiry, in contrast to conventional social media practices, which are often fragmented and shallow in nature. The trialogical emphasis on object-driven collaboration does not mean that human development and personal learning are not important. As indicated by Stetsenko (2005), object-orientedness (or object-relatedness) means that individual and collective processes are interrelated and coevolving rather than separate. When participants are working with the object, the object is, so to speak, molding the participants by fostering learning and development. The development of human subjectivity and agency should be seen as a central part of the development of collaborative and collective processes. How the combination of individual and collective agency can be analyzed requires more theoretical and empirical research; this is also a methodological challenge for the future of CSCL research. Digital ecologies of learning and schooling enable tracking students’ personal, social, and object-oriented learning processes in novel ways (Larusson & White, 2014).
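To make the idea of tracking object-oriented learning processes concrete, the following is a minimal sketch, not taken from the chapter or from any particular CSCL environment, of following the trajectory of one evolving shared object: given timestamped revisions of a co-authored text, it reports how much each revision changed the object and who made the change. The data format and function name are hypothetical illustrations; real learning-analytics pipelines would of course work with richer log data.

```python
# Hypothetical sketch: tracing the revision trajectory of a shared text object.
from difflib import SequenceMatcher

def revision_trajectory(revisions):
    """revisions: list of (author, text) tuples in chronological order.
    Returns (author, change_ratio) pairs, where change_ratio estimates the
    share of the new version that differs from the previous one (0.0-1.0)."""
    trajectory = []
    prev = ""
    for author, text in revisions:
        similarity = SequenceMatcher(None, prev, text).ratio()
        trajectory.append((author, round(1 - similarity, 2)))
        prev = text
    return trajectory

# Toy revision history of a jointly developed "working theory" text.
revisions = [
    ("ana", "Plants need light."),
    ("ben", "Plants need light and water to grow."),
    ("ana", "Plants need light, water, and nutrients to grow."),
]
for author, change in revision_trajectory(revisions):
    print(author, change)
```

Even this toy trace shows the kind of information such analytics can surface: who is advancing the shared object, and whether its development is sustained or stalls, which is precisely the object-level perspective the trialogical approach calls for.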
Knowledge-creation processes cannot be rigidly scripted but should involve self-organized and socially emergent processes (Scardamalia & Bereiter, 2014a) that may be hard to anticipate and complex to analyze. Research on learning analytics is currently developing instruments and methods for tracing personal and social learning processes (Buckingham Shum & Deakin Crick, 2016; Chen & Zhang, 2016). From the trialogical perspective, it is important to develop instruments and methods for following the trajectories of the evolving objects that learners are working with. In this regard, epistemic network analysis, which enables examining the interconnectedness of ideas in qualitative discourse data, may provide useful resources (Shaffer et al., 2009). Novel instruments and tools, such as the Idea Thread Mapper (Zhang et al., 2018), have been developed for visually representing the collective progression of an object-driven learning process. The aim is to empower learners to use their own learning data and to foster their ownership and shared regulation of the knowledge-creation process. It appears to us that, beyond institutional and structural reasons, CSCL practices have not penetrated strongly into the educational system because CSCL researchers have underestimated the in-depth challenges associated with instrumental genesis at the personal and collective levels, as well as the associated transformation of social and cultural practices. Investigators have often reported one-shot experiments in which participants have appropriated a collaborative learning pedagogy with collaborative technologies. Alternatively, mature CSCL cultures have been analyzed that appear to
miraculously enact sophisticated learning practices without comprehensive developmental trajectories leading to advanced knowledge practices. It can be maintained that all successful cultures of CSCL are simultaneously also expansive learning communities (Engeström, 2015) focused on problematizing current practices, envisioning changes, and gradually, step by step, consolidating novel inquiry practices. Consequently, we presume that knowledge-practice-oriented, sociomaterial investigations of working with knowledge artifacts, which take different “layers” of activity into account, will become more prevalent in the future of CSCL research. Implementing knowledge-creation practices in education requires the appropriation of a developmental perspective in which the focus is on the epistemological, cognitive, social, and technological infrastructures of CSCL (Bielaczyc, 2013; Lakkala, Muukkonen, Paavola, & Hakkarainen, 2008), that is, on designing synergistic scaffolding for knowledge-creating learning cultures (Tabak, 2004). Successful implementation of CSCL practices entangles the epistemic and sociomaterial aspects of learning (see, e.g., Fenwick & Edwards, 2010). A growing body of interventionist approaches, from design experiments (Bielaczyc, 2013) to design-based implementation research (Penuel, Fishman, Cheng, & Sabelli, 2011), and from educational improvement science (Bryk, Gomez, Grunow, & LeMahieu, 2015) to social design experiments (Gutierrez & Jurow, 2016), has been developed to address the various aspects of the research–practice partnership needed for transforming education through CSCL.
References

Béguin, P., & Rabardel, P. (2000). Designing for instrument-mediated activity. Scandinavian Journal of Information Systems, 12, 173–190.
Bereiter, C. (2002). Education and mind in the knowledge age. Hillsdale, NJ: Erlbaum.
Bereiter, C. (2010). Can children really create knowledge? Canadian Journal of Learning and Technology, 36(1).
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves. Chicago, IL: Open Court.
Bielaczyc, K. (2013). Informing design research: Learning from teachers’ design of social infrastructure. The Journal of the Learning Sciences, 22, 258–311.
Blikstein, P. (2013). Digital fabrication and “making” in education: The democratization of innovation. In J. Walter-Herrmann & C. Büching (Eds.), FabLabs: Of machines, makers, and inventors. Bielefeld: Transcript.
Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.
Buckingham Shum, S., & Deakin Crick, R. (2016). Learning analytics for 21st century competencies. Journal of Learning Analytics, 3, 6–21.
Chen, B., & Zhang, J. (2016). Analytics for knowledge creation: Towards epistemic agency and design mode thinking. Journal of Learning Analytics, 3, 139–163.
Coburn, C. E., & Penuel, W. R. (2016). Research–practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher, 45, 48–54.
Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 3(2), 105.
Cress, U., & Kimmerle, J. (2018). Collective knowledge construction. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences. London: Routledge.
Damşa, C. I., & Ludvigsen, S. (2016). Learning through interaction and co-construction of knowledge objects in teacher education. Learning, Culture and Social Interaction, 11, 1–18.
Donald, M. (1991). Origins of the modern mind. Cambridge, MA: Harvard University Press.
Duschl, R., & Bismack, A. S. (Eds.). (2016). Reconceptualizing STEM education: The central role of practices. London: Routledge.
Ekbia, H. R. (2009). Digital artifacts as quasi-objects: Qualification, mediation, and materiality. Journal of the American Society for Information Science and Technology, 60(12), 2554–2566.
Engeström, Y. (2015). Learning by expanding: An activity-theoretical approach to developmental research (first published 1987). Cambridge: Cambridge University Press.
Engeström, Y., & Blackler, F. (2005). On the life of the object. Organization, 12(3), 307–330.
Enyedy, N., & Hoadley, C. M. (2006). From dialogue to monologue and back: Middle spaces in computer-mediated learning. International Journal of Computer-Supported Collaborative Learning, 1(4), 413–439.
Ewenstein, B., & Whyte, J. (2009). Knowledge practices in design: The role of visual representations as ‘epistemic objects’. Organization Studies, 30(1), 7–30.
Fenwick, T., & Edwards, R. (2010). Actor-network theory in education. London: Routledge.
Gutierrez, K. D., & Jurow, S. (2016). Social design experiments: Toward equity by design. The Journal of the Learning Sciences, 25, 565–595.
Hakkarainen, K. (2009). A knowledge-practice perspective on technology-mediated learning. International Journal of Computer-Supported Collaborative Learning, 4, 213–231.
Hakkarainen, K., Hietajärvi, L., Alho, K., Lonka, K., & Salmela-Aro, K. (2015). Socio-digital revolution: Digital natives vs digital immigrants. In J. D.
Wright (Editor-in-Chief), International encyclopedia of the social and behavioral sciences (Vol. 22, 2nd ed., pp. 918–923). Amsterdam: Elsevier.
Hakkarainen, K., & Paavola, S. (2009). Toward a trialogical approach to learning. In B. Schwarz, T. Dreyfus, & R. Hershkowitz (Eds.), Transformation of knowledge through classroom interaction (pp. 65–80). London: Routledge.
Hakkarainen, K., Palonen, T., Paavola, S., & Lehtinen, E. (2004). Communities of networked expertise: Professional and educational perspectives (Advances in Learning and Instruction Series). Amsterdam: Elsevier.
Halverson, E., & Sheridan, K. M. (2014). The maker movement in education. Harvard Educational Review, 84(4), 495–504.
Harman, G. (2018). Object-oriented ontology: The new theory of everything. New York: Penguin.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Ilomäki, L., Lakkala, M., Toom, A., & Muukkonen, H. (2017). Teacher learning within a multinational project in an upper secondary school. Education Research International, 2017, 1614262. https://doi.org/10.1155/2017/1614262
Jensen, K., Lahn, L. C., & Nerland, M. (2012). Introduction: Professional learning in new knowledge landscapes: A cultural perspective. In K. Jensen, L. C. Lahn, & M. Nerland (Eds.), Professional learning in the knowledge society (pp. 1–24). Rotterdam, The Netherlands: Sense.
Kafai, Y. B., Fields, D. A., & Searle, K. A. (2014). Electronic textiles as disruptive designs: Supporting and challenging maker activities in schools. Harvard Educational Review, 84(4), 532–556.
Kangas, K., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2013). Figuring the world of designing: Expert participation in elementary classroom. International Journal of Technology and Design Education, 23, 425–442.
Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.
Knorr Cetina, K. (2001). Objectual practices. In T. Schatzki, K. Knorr Cetina, & E.
Von Savigny (Eds.), The practice turn in contemporary theory (pp. 175–188). London: Routledge.
Koh, J. H. L., Chai, C. S., Wong, B., & Hong, H. Y. (2015). Design thinking for education: Conceptions and applications in teaching and learning. London: Springer.
Kolodner, J. (2002). Facilitating the learning of design practices: Lessons learned from an inquiry into science education. Journal of Industrial Teacher Education, 39(3), 1–31.
Korhonen, T., & Lavonen, J. (2017). A new wave of learning in Finland: Get started with innovation! In S. Choo, D. Sawch, A. Villanueva, & R. Vinz (Eds.), Educating for the 21st century: Perspectives, policies and practices from around the world (pp. 447–467). Singapore: Springer.
Koschmann, T. D. (1999). Toward a dialogic theory of learning: Bakhtin’s contribution to understanding learning in settings of collaboration. In C. M. Hoadley & J. Roschelle (Eds.), Proceedings of the Computer Support for Collaborative Learning (CSCL) 1999 Conference (pp. 308–313). Mahwah, NJ: LEA.
Krajcik, J. S., & Shin, N. (2014). Project-based learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 275–297). New York: Cambridge University Press.
Lakkala, M., Muukkonen, H., Paavola, S., & Hakkarainen, K. (2008). Designing pedagogical infrastructures in university courses for technology-enhanced collaborative inquiry. Research and Practice in Technology Enhanced Learning, 3, 33–64.
Lakkala, M., Paavola, S., Kosonen, K., Muukkonen, H., Bauters, M., & Markkanen, H. (2009). Main functionalities of the Knowledge Practices Environment (KPE) affording knowledge creation practices in education. In C. O’Malley, D. Suthers, P. Reimann, & A. Dimitracopoulou (Eds.), Computer supported collaborative learning practices: CSCL2009 conference proceedings (pp. 297–306). Rhodes, Greece: International Society of the Learning Sciences (ISLS).
Lakkala, M., Toom, A., Ilomäki, L., & Muukkonen, H. (2015). Re-designing university courses to support collaborative knowledge creation practices.
Australasian Journal of Educational Technology, 31(5), 521–536.
Larusson, J. A., & White, B. (Eds.). (2014). Learning analytics: From research to practice. London: Springer.
Latour, B. (1996). On interobjectivity. Mind, Culture, and Activity, 3(4), 228–245.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Latour, B., & Woolgar, S. (1986). Laboratory life: The construction of scientific facts. Princeton, NJ: Princeton University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.
Leonardi, P. M., Nardi, B. A., & Kallinikos, J. (2012). Materiality and organizing: Social interaction in a technological world. Oxford: Oxford University Press.
Leontjev, A. N. (1978). Activity, consciousness and personality. Englewood Cliffs: Prentice Hall.
Ludvigsen, S., Stahl, G., Law, N., & Cress, U. (2015). From the editors: Collaboration and the formation of new knowledge artifacts. International Journal of Computer-Supported Collaborative Learning, 10(1), 1–6.
Lund, A., & Rasmussen, I. (2008). The right tool for the wrong task? Match and mismatch between first and second stimulus in double stimulation. International Journal of Computer-Supported Collaborative Learning, 3(4), 387.
Medina, R., & Stahl, G. (this volume). Analysis of group practices. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Miettinen, R., & Paavola, S. (2018). Beyond the distinction between tool and sign: Objects and artefacts in human activity. In A. Rosa & J. Valsiner (Eds.), The Cambridge handbook of sociocultural psychology (pp. 148–162). Cambridge: Cambridge University Press.
Miettinen, R., Paavola, S., & Pohjola, P. (2012). From habituality to change: Contribution of activity theory and pragmatism to practice theories. Journal for the Theory of Social Behaviour, 42(3), 345–360.
Miettinen, R., & Virkkunen, J. (2005). Epistemic objects, artefacts and organizational change. Organization, 12(3), 437–456.
Moen, A., Morch, A., & Paavola, S. (Eds.). (2012). Collaborative knowledge creation: Practices, tools, concepts. Rotterdam: Sense Publishers.
Muukkonen, H., Lakkala, M., & Paavola, S. (2011). Promoting knowledge creation and object-oriented inquiry in university courses. In S. Ludvigsen, A. Lund, I. Rasmussen, & R. Säljö (Eds.), Learning across sites: New tools, infrastructures and practices (New Perspectives on Learning and Instruction, pp. 172–189). Oxon, UK: Routledge.
Muukkonen, H., Lakkala, M., Toom, A., & Ilomäki, L. (2017). Assessment of competences in knowledge work and object-bound collaboration during higher education courses. In E. Kyndt, V. Donche, K. Trigwell, & S. Lindblom-Ylänne (Eds.), Higher education transitions: Theory and research (EARLI Book Series: New Perspectives on Learning and Instruction, pp. 288–305). New York: Routledge.
Nerland, M. (2012). Professions as knowledge cultures. In K. Jensen, L. C. Lahn, & M. Nerland (Eds.), Professional learning in the knowledge society (pp. 27–48). Rotterdam, The Netherlands: Sense.
Nerland, M., & Jensen, K. (2012). Epistemic practices and object relations in professional work. Journal of Education and Work, 25(1), 101–120.
Orlikowski, W. J. (2009). The sociomateriality of organisational life: Considering technology in management research. Cambridge Journal of Economics, 34(1), 125–141.
Osborne, J. (2014). Teaching scientific practices. Journal of Science Teacher Education, 25, 177–196.
Paavola, S., & Hakkarainen, K. (2005). The knowledge creation metaphor—An emergent epistemological approach to learning. Science & Education, 14, 537–557.
Paavola, S., & Hakkarainen, K. (2009).
From meaning making to joint construction of knowledge practices and artefacts: A trialogical approach to CSCL. In C. O’Malley, D. Suthers, P. Reimann, & A. Dimitracopoulou (Eds.), Computer supported collaborative learning practices: CSCL2009 conference proceedings (pp. 83–92). Rhodes, Greece: International Society of the Learning Sciences (ISLS).
Paavola, S., Lakkala, M., Muukkonen, H., Kosonen, K., & Karlgren, K. (2011). The roles and uses of design principles for developing the trialogical approach on learning. Research in Learning Technology, 19(3), 233–246.
Paavola, S., Lipponen, L., & Hakkarainen, K. (2004). Modeling innovative knowledge communities: A knowledge-creation approach to learning. Review of Educational Research, 74, 557–576.
Paavola, S., & Miettinen, R. (2018). Dynamics of design collaboration: BIM models as intermediary digital objects. Computer Supported Cooperative Work (CSCW), 27(3–6), 1113–1135.
Papert, S., & Harel, I. (1991). Constructionism. Norwood, NJ: Ablex.
Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 47–87). Cambridge: Cambridge University Press.
Penuel, W. R., Fishman, B. J., Cheng, B. H., & Sabelli, N. (2011). Organizing research and development at the intersection of learning, implementation, and design. Educational Researcher, 40, 331–337.
Pickering, A. (1995). The mangle of practice: Time, agency, and science. Chicago: University of Chicago Press.
Riikonen, S., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2018). Bringing practices of co-design and making to basic education. In J. Kay & R. Luckin (Eds.), Proceedings of the 13th International Conference on the Learning Sciences: “Rethinking learning in the digital age: Making the learning sciences count” (pp. 248–255). London, UK: Institute of Education, University College London.
Ritella, G., & Hakkarainen, K. (2012). Instrument genesis in technology-mediated learning: From double stimulation to expansive knowledge practices. International Journal of Computer-Supported Collaborative Learning, 7, 239–258.
Sawyer, R. K. (2005). Social emergence: Societies as complex systems. Cambridge: Cambridge University Press.
Scardamalia, M., & Bereiter, C. (2014a). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 397–417). New York, NY: Cambridge University Press.
Scardamalia, M., & Bereiter, C. (2014b). Smart technology for self-organizing processes. Smart Learning Environments, 1, 1.
Schatzki, T. R., Knorr Cetina, K., & von Savigny, E. (Eds.). (2001). The practice turn in contemporary theory. London: Routledge.
Seitamaa-Hakkarainen, P., Kangas, K., Raunio, A.-M., & Hakkarainen, K. (2012). Collaborative design practices in technology-mediated learning. Design and Technology Education: An International Journal, 17, 54–65.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27, 4–13.
Shaffer, D. W., Hatfield, D., Svarovsky, G. N., Nash, P., Nulty, A., Bagley, E., Frank, K., Rupp, A. A., & Mislevy, R. (2009). Epistemic network analysis: A prototype for 21st-century assessment of learning. International Journal of Learning and Media, 1, 33–53.
Skagestad, P. (1993). Thinking with machines: Intelligence augmentation, evolutionary epistemology, and semiotic. The Journal of Social and Evolutionary Systems, 16(2), 157–180.
Spinuzzi, C. (2011). Losing by expanding: Corralling the runaway object. Journal of Business and Technical Communication, 25(4), 449–486.
Stahl, G. (2013). Learning across levels. International Journal of Computer-Supported Collaborative Learning, 8, 1–12.
Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning: An historical perspective. In R. K.
Sawyer (Ed.), Cambridge handbook of the learning sciences (pp. 409–426). New York: Cambridge University Press. Stahl, G., Ludvigsen, S., Law, N., & Cress, U. (2014). CSCL artifacts. International Journal of Computer-Supported Collaborative Learning, 9(3), 237–245. Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, ‘Translations’ and boundary objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science, 19(3), 387–420. Stetsenko, A. (2005). Activity as object-related: Resolving the dichotomy of individual and collective planes of activity. Mind. Culture, and Activity, 12(1), 70–88. Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning (IJCSCL), 1(3), 315–337. Tabak, I. (2004). Synergy: A complement to emerging patterns of scaffolding. The Journal of the Learning Sciences, 13, 305–335. Tan, S. C., So, H. J., & Yeo, J. (Eds.). (2014). Knowledge creation in education. New York, NY: Springer. Timmis, S. (2014). The dialectical potential of Cultural Historical Activity Theory for researching sustainable CSCL practices. International Journal of Computer-Supported Collaborative Learning, 9(1), 7–32. Trausan-Matu, S., Wegerif, R., & Major, L. (this volume). Dialogism. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Viilo, M., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2016). Teacher’s long-term orchestration of technology-mediated collaborative inquiry project. Scandinavian Journal of Educational Research, 62(3), 407–432. Vinck, D. (2011). Taking intermediary objects and equipping work into account in the study of engineering practices. Engineering Studies, 3(1), 25–44.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wartofsky, M. (1979). Models: Representation and scientific understanding. Dordrecht: Reidel.
Zhang, J., Tao, D., Chen, M. H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with Idea Thread Mapper. Journal of the Learning Sciences, 27(3), 390–430.
Further Readings

Damşa, C. I., & Ludvigsen, S. (2016). Learning through interaction and co-construction of knowledge objects in teacher education. Learning, Culture and Social Interaction, 11, 1–18.
The article presents an empirical study, employing a design-based research approach, of student teachers’ learning through collaborative, small-group projects and work on shared knowledge objects. The aim was to understand how knowledge objects, e.g., teaching and learning materials, emerge through students’ interaction, how they are developed through iterative co-construction, and how they play a role in the learning process. Interaction data and knowledge objects generated by groups were analyzed through qualitative methods, with a focus on the types of interaction, the uptake of ideas and concepts, and their co-elaboration.

Kangas, K., Seitamaa-Hakkarainen, P., & Hakkarainen, K. (2013). Figuring the world of designing: Expert participation in elementary classroom. International Journal of Technology and Design Education, 23, 425–442.
The article examines elementary school students’ participation in knowledge-creating learning that involves collaborative design and making of artifacts. With the support of a professional designer, students were engaged in the figured world of designing and guided to appropriate associated knowledge practices.

Paavola, S., & Hakkarainen, K. (2009). From meaning making to joint construction of knowledge practices and artefacts: A trialogical approach to CSCL. In C. O’Malley, D. Suthers, P. Reimann, & A. Dimitracopoulou (Eds.), Computer supported collaborative learning practices: CSCL2009 conference proceedings (pp. 83–92). Rhodes, Greece: International Society of the Learning Sciences (ISLS).
The article presents the basics of the trialogical approach to learning. The use of this notion is explained, as well as theoretical backgrounds for the approach in line with the knowledge-creation metaphor of learning (Paavola et al. 2004; Hakkarainen et al. 2004). The paper also makes a comparison between trialogical and dialogical theories of learning and their uses in CSCL (computer-supported collaborative learning) research.

Ritella, G., & Hakkarainen, K. (2012). Instrumental genesis in technology-mediated learning: From double stimulation to expansive knowledge practices. International Journal of Computer-Supported Collaborative Learning, 7(2), 239–258.
This article addresses the theoretical foundations of CSCL. It elaborates the concepts of epistemic mediation, chronotope, double stimulation, instrumental genesis, and knowledge practices, and their interrelations in the context of promoting educational transformations in the digital age.

Zhang, J., Tao, D., Chen, M. H., Sun, Y., Judson, D., & Naqvi, S. (2018). Co-organizing the collective journey of inquiry with Idea Thread Mapper. Journal of the Learning Sciences, 27(3), 390–430.
The article addresses the role of technology-mediated knowledge practices in socially organizing collective inquiry processes within two CSCL classrooms. The study revealed that promising directions of object-driven (trialogical) inquiry can be monitored with Idea Thread Mapper (ITM). Moreover, practices of reflective structuration supported long-term advancement of inquiry in terms of active participation, interconnected contributions, and coherent scientific understanding.
Knowledge Building: Advancing the State of Community Knowledge

Marlene Scardamalia and Carl Bereiter
Abstract “Knowledge Building” may be understood as synonymous with “knowledge creation,” as that term is used in organizational science and innovation networks, amplified by a concern with educational benefit and well-being of participants, knowledge for public good, and complex systems conceptions of knowledge creation. Thus, knowledge-building classrooms and networks function in design mode, with “design thinking” as a basic mode of thought. Although one among many constructivist approaches, Knowledge Building is distinguished by an emphasis on advancing the state of community knowledge (comparable to advancing the “state of the art”) and on “epistemic agency”: students’ collective responsibility for idea improvement. Knowledge Forum technology is designed to support knowledge-creating discourse within and between communities and to provide feedback tools that students themselves can use in exercising epistemic agency. Pragmatic epistemological issues are discussed, including prescribed activity structures, external scripts, and conceptual and material artifacts, as these issues relate to self-organization and creative knowledge work.

Keywords Knowledge Building community · Knowledge Building analytics · Knowledge Forum · Epistemic agency · Conceptual artifacts · Activity structures · Constructivism
M. Scardamalia (*) · C. Bereiter
University of Toronto, Toronto, ON, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_14

1 Definitions and Scope

As recently as the turn of the century, the expression “knowledge building” was to be found only in the writings of a small circle of researchers. It has since made its way into ordinary language and can be found in well over a million web documents, with variations in meaning ranging upward from its being merely an ornamental synonym
for “learning” to its representing something much more akin to “knowledge creation” as practiced in the research disciplines and innovative organizations. Accordingly, we will capitalize “Knowledge Building” to refer to the educational approach that has grown out of the original nucleus, which has expanded greatly in the learning sciences (Yoon & Hmelo-Silver, 2017) and become institutionalized in Knowledge Building International—a membership organization that conducts annual Knowledge Building Summer Institutes. These Summer Institutes have several times been held concurrently with CSCL meetings. The present chapter is thus limited to a focus on Knowledge Building (upper case) and does not presume to embrace the variety of other approaches that make use of the term “knowledge building” (here used in lower case) with greater or lesser similarity to the meaning of “Knowledge Building” as elaborated in this chapter.

What, then, distinguishes upper-case Knowledge Building from various other approaches that adopt the label “knowledge building” or at least use the term in their self-descriptions? It is impossible to draw sharp boundaries, but how close an educational approach is to Knowledge Building can be judged by answers to a few key questions:

• How much of the focus is on the production of community knowledge—that is, knowledge comparable to the “state of the art” in a discipline, profession, or industry—that goes beyond collaborative construction of individual knowledge (van Aalst, 2009)?
• How much “epistemic agency” (Damşa, Kirschner, Andriessen, Erkens, & Sins, 2010) is turned over to the students, and how consistently does the teacher work to enable students to take on higher levels of agency? Higher levels, traditionally reserved for the teacher, include goal setting, problem and research formulation and revision, and assessment of collective progress (Messina & Reeve, 2006; Toth & Ma, 2018).
• How much attention is given to conceptual artifacts—e.g., explanations, theories, problem formulations, interpretation of results from experiments, and application—as objects of inquiry and knowledge creation beyond the production of tangible products such as documents and “knowledge-embodying” material artifacts (Stahl, 2008)?
• How engaged are students in advancing community knowledge, and how much pleasure, satisfaction, and sense of community identity do they gain from it? Idea generation, as in brainstorming, comes naturally to young students, but sustained creative work with ideas, aimed at developing useful conceptual and material artifacts, may be largely or totally absent from the educational experience of many students.
• How much allowance and support are given for self-organization at cognitive and social levels (Jörg, 2016), as distinct from adherence to prescribed (and sometimes ritualized) activity structures, as proposed by Brown and Campione (1994) for communities of learners?
• Overall, how much does life in the educational setting resemble life in a knowledge-creating organization, such as a research laboratory or an innovative
company or network (Chen & Hong, 2016)? Do community members share goals, a progressive refinement mindset, and a commitment to coherence-producing efforts, with doing and making, repairing, and finding ways around dead ends a norm of engagement?

Knowledge-creating organizations differ considerably in different contexts—and not all are committed to knowledge for public good. A knowledge-creating organization in an educational context is different from knowledge-creating organizations in other contexts; nonetheless, there are degrees of resemblance in goals and processes, ranging from hardly any to deep, despite observable differences.

Knowledge Building may be most readily understood as synonymous with “knowledge creation,” as that term is used in organizational science and knowledge management (e.g., Fischer & Fröhlich, 2013)—amplified by a concern with educational benefit and well-being of participants, knowledge for public good, and complex systems conceptions of knowledge creation. Bielaczyc and Collins (2006) itemized distinguishing characteristics of historically important knowledge-creating communities and found that these could be mapped closely onto the practices they observed in two schools engaged in educational Knowledge Building. Knowledge-Building classrooms, in line with other knowledge-creating organizations and networks, function in a mode that gives priority to “design thinking,” with the more common “critical/analytic thinking” playing an important supporting role. Knowledge Forum is also shaped to give “design mode” a central place.
2 History and Development

Knowledge Building as a program of theoretical, pedagogical, and technological development originated in the 1980s with the present authors and early collaborators. Several reviews of its history have been published, the most extensive being by Chan (2013) and Chen and Hong (2016). It was preceded by research on written composition, expertise, and intentional learning. Its intellectual roots, in addition to the Deweyan, Piagetian, and Vygotskian roots common to constructivist approaches, were in philosophy of knowledge—particularly, Popper’s (1972) concept of World 3, in which ideas have a semiautonomous status, Lakatos’s (1970) concept of progressive research programs, and Thagard’s (1989, 2007) theory of explanatory coherence. Other researchers have brought different foundational perspectives such as activity theory (Paavola, Lipponen, & Hakkarainen, 2004; van Aalst & Hill, 2006), semiotics (Wells, 2002), Nonaka’s SECI model of knowledge creation (Tan & Tan, 2014), and a socio-cognitive perspective encompassing a number of theoretical views (Stahl, 2000). We see the Popper–Lakatos–Thagard constellation as especially pertinent to Knowledge Building in that it makes knowledge creation an observable form of cultural practice, not a metaphor (cf. Paavola et al., 2004), and gives conceptual artifacts a standing along with physical artifacts such as texts, art objects, and hardware (cf. Stahl, 2008). This seems essential if Knowledge Building/
knowledge creation is to play a central role in school subjects that are crucially concerned with the conceptual. Furthermore, the theories of Popper, Lakatos, and Thagard are highly compatible with and help to flesh out a complex systems account of how the creation of new knowledge is even possible (Gabora & Kauffman, 2016; Li & Kettinger, 2006)—a basic question that theories of organizational knowledge creation such as Nonaka’s fail to address (Gourlay, 2006). Kimmerle, Moskaliuk, Oeberst, and Cress (2015) provide a detailed argument for taking a complex systems approach to the construction of both community and personal knowledge, treating these as similar processes at two different levels: the social and the cognitive. We would add that if, following Popper, we recognize conceptual artifacts as real things that can be worked on at both social and personal levels, the separation between levels gives way in Knowledge Building to knowledge work conducted as a social process.
3 State of the Art

As indicated by the 135,000 Google Scholar documents currently referencing it, knowledge building is a very diverse research field. There are at least ten active research groups around the world. The annual meetings held by Knowledge Building International, referred to earlier, have served to foster cross-site collaborations (e.g., Laferrière, Law, & Montané, 2012), and Knowledge Building Research International is in the process of being organized as a charitable organization to support more integrated research. Conceptually, a thread that ties many knowledge-building research projects together is a set of 12 interconnected knowledge-building principles, originally formulated by Scardamalia (2002). The six questions posed in the introduction to this chapter reflect these principles. A fuller account of the principles from a teacher perspective may be found in the Knowledge Building Gallery (Resendes & Dobbie, 2017).

One empirical question that has been the object of a variety of studies is “Is Knowledge Building happening?” The 12 knowledge-building principles have served as points of reference for much of this research. Law and Wong (2003) examined the extent to which these principles were manifested in the online work of about 250 students in various school grades engaged in various knowledge-building projects. The investigators distinguished principles that were consistently manifested in students’ notes, others that were characteristic of deeper engagement in an investigation, principles that appeared to depend on teacher guidance, and two principles of which there was no evidence in students’ notes: “pervasive knowledge building” (which by definition would lie outside the work on specific projects) and “symmetric knowledge advancement” (which is an interpersonal/intercommunity construct and therefore not discoverable within the work of individual students).
However, in research that examined changes in social–semantic networks over time, symmetric knowledge advancement in the form of rotating leadership based on influential contributions was found to be common in knowledge-building
classrooms (Ma, Tan, Teo, & Kamsan, 2017). Perhaps the most direct way of addressing the “Is it happening?” question is by portfolio assessment. Using a condensed set of knowledge-building principles, van Aalst and Chan (2007) had students document both collective knowledge-building accomplishments and their own contributions to them. A less direct but potentially illuminating kind of evidence comes from having students themselves explain knowledge-building principles (Toth & Ma, 2018). Teachers’ own understanding of knowledge-building principles is an important but sensitive issue addressed in case studies (Law, Yuen, & Tse, 2012; Messina & Reeve, 2006).

Other research questions address the effectiveness of knowledge building and its potential as education for knowledge creation: “How realistic is it to think of students as knowledge creators?” “How can knowledge building address the many changes to school practices required to make it a reality?” “Does knowledge building enable advances across social, cognitive, and emotional dimensions?” “Is it possible to establish effective and scalable professional development with teachers themselves operating in Knowledge Building communities?” “Do online interactions within student and teacher online communities demonstrate patterns found in open innovation networks?” These issues and many more are addressed through knowledge-building research. Generally, studies spanning the curriculum, elementary to tertiary levels, and across achievement levels, nations, and in- and out-of-school contexts provide a favorable response to these questions (see Further Readings suggested at the end of this chapter, especially the issue of the Canadian Journal of Learning and Technology devoted to Knowledge Building).
3.1 Knowledge-Building Technology
From its earliest days in the 1980s, Knowledge Building has been closely tied to technology specifically designed to support it. The intervening years have brought a rich diversity of software and hardware that can be put to creative use in knowledge building, but this diversity does not eliminate, and in many cases increases, the need for technology that supports knowledge-creating thought and discourse. Originally called CSILE (Computer Supported Intentional Learning Environment) (Scardamalia, Bereiter, McLean, Swallow, & Woodruff, 1989) and in more recent versions called Knowledge Forum, this free technology aims to make social and knowledge processes transparent so that students are able to successfully take on high levels of agency in advancing community knowledge. Thus, for example, it provides epistemic markers (familiarly known as “scaffolds”) that suggest actions relevant to various knowledge-building purposes—common ones being “My theory,” “I need to understand,” “A better theory,” “This theory does not explain,” “New information,” and “Putting our ideas together.” Unlike other educational software that has adopted such supports, Knowledge Forum does not require students to use the markers or to use them in a determined order. They can be turned off, used during composition, or used afterward for reflective review. For instance, Grade 2 students, aided by analytic
tools in Knowledge Forum, reviewed their community profile and noted that while they had many theories they had very little new information to support them. They redirected their work to find more information without teacher suggestion. Furthermore, the markers are fully editable, and it is common for teacher and students to codesign them, by mutual agreement modifying them or introducing a whole new set or multiple sets for different contexts. Knowledge Forum provides two-dimensional “views” (three-dimensional views are under development) where icons representing student notes can be organized, with student teams designing backgrounds and curating views. Ongoing Knowledge Forum developments by an international team include advances with note types (e.g., audio, video, graphic, mathematical, multilingual); video annotation; accessibility; and cross-community search and visualizations of semantic profiles. The goal is to provide a way into Knowledge Building for everyone and to support boundary crossing that engages the world beyond the classroom. Features can be turned on or off to support local needs. Teams are extending and refining student-usable analytic and feedback tools and integrating software to enhance knowledge-building discourse, as elaborated below.

Teachers often ask if it is possible to do Knowledge Building without Knowledge Forum. The answer is obviously yes (the Copernican Revolution made do with quill pens and paper), and there are examples of teacher practices that represent significant starts using ordinary classroom materials (e.g., Haneda & Wells, 2000; Resendes & Dobbie, 2017). However, purely oral discussion is not sufficient. It is fine for idea generation, but there needs to be a way of keeping ideas alive and making them objects of inquiry and further development.
This often means either that the teacher serves as the collective memory, which takes an essential part of knowledge building away from the students, or that the class relies on makeshift technology, such as sticky notes or pouches on the bulletin board. Such low-tech methods require considerable teacher management if they are to work, and even so they lack the affordances of Knowledge Forum described above—notably analytic tools to provide feedback as work proceeds and to support increasingly high levels of reflection and self-organization.

Others have advocated collaborative authoring software such as Google Docs (Nevin, Melton, & Loertscher, 2010) and wikis (Cress & Kimmerle, 2008) as more familiar technology for use in knowledge building. Although these have advantages in terms of producing a cumulative product, they lack Knowledge Forum’s affordances for Knowledge Building and in general are not designed to promote idea diversity, the free flow of ideas, reconceptualization, and imaginative leaps. Knowledge Building developers have preferred to incorporate the potential of familiar technology into an environment optimized for knowledge creation. Toward this end, Google Docs and video annotation (Matsuzawa, 2019) have been incorporated into Knowledge Forum as note types; Hypothes.is annotation software and open-source software for data exploration, modeling, and so forth are incorporated with links back to source material to bring new information into the discourse of the knowledge-building community (B. Chen et al., 2019). More generally, Knowledge Forum is engineered so that it can incorporate third-party software and add a reflective and synthesizing layer to it. A current example of this is the incorporation of LightSIDE (Mayfield & Rosé, 2013), a powerful machine-learning platform for
text analysis, employed as a feedback tool to support evaluation of note content in Knowledge Forum (Zhang & Chen, 2019).
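The epistemic markers described in this section are, in effect, small editable data structures whose use can itself be tallied and fed back to students. The sketch below is purely illustrative and is not Knowledge Forum's actual data model; all class names, field names, and sample data are invented for exposition. It shows one way a co-designed, editable scaffold set and a marker-usage count (of the kind a feedback bar graph could be drawn from) might be represented:

```python
# Hypothetical sketch only -- NOT Knowledge Forum's implementation.
# Models two ideas from the text: (1) scaffold sets that teachers and
# students can edit by mutual agreement, and (2) tallying marker usage
# across a community's notes for feedback purposes.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class ScaffoldSet:
    """An editable, named set of epistemic markers for one context."""
    name: str
    markers: list[str] = field(default_factory=list)

    def edit(self, old: str, new: str) -> None:
        """Replace one marker with a co-designed alternative."""
        self.markers[self.markers.index(old)] = new


# The common marker set named in the chapter.
theory_building = ScaffoldSet("theory building", [
    "My theory", "I need to understand", "A better theory",
    "This theory does not explain", "New information",
    "Putting our ideas together",
])


def marker_usage(notes: list[tuple[str, str]]) -> Counter:
    """Tally marker use across (marker, note_text) pairs."""
    return Counter(marker for marker, _ in notes)


# Invented sample data: three notes in a community space.
notes = [
    ("My theory", "plants eat soil"),
    ("My theory", "plants make food from light"),
    ("New information", "leaves contain chlorophyll"),
]
usage = marker_usage(notes)  # e.g. {'My theory': 2, 'New information': 1}
```

A class reviewing such a tally could notice, as the Grade 2 example above describes, that "theories" far outnumber "new information" contributions and redirect its work accordingly.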
3.2 Knowledge-Building Pedagogy
A challenge and opportunity faced by pedagogical research and invention is to implement effective education based on knowledge-building principles rather than on prescribed procedures or activity structures (Hong & Sullivan, 2009). Thus, raising the level of students’ epistemic agency figures in much of the pedagogical research. Messina and Reeve (2006) describe an elementary grade teacher’s shift over 3 years from allowing limited to greatly increased student agency. Results showed that students not only took readily to higher levels of epistemic agency but produced higher quality discourse and demonstrated deeper content mastery. At the university level, Cacciamani (2010), comparing student work with and without instructor guidance, found that instructor guidance produced a higher output of Knowledge Forum notes, but that more reading of others’ notes and more linking of ideas and experiences occurred in the self-organized approach. In a university course in research methods, Siqin, van Aalst, and Chu (2015) had students work in fixed groups during the early part of the course and then shift to flexible, opportunistic grouping. They found this shift produced a change to more constructive progressive discourse. The general finding from both formal and informal studies is that, as more epistemic agency is granted, students show more capacity for it than their teachers had anticipated, accompanied by deeper inquiry into the subject they are studying. Ma’s research on rotating leadership, cited earlier (Ma et al., 2017), suggests that levels of student epistemic agency need to be assessed over time, as different students introduce influential ideas or relevant information at different times.

The knowledge-building principle of “constructive use of authoritative sources” has become increasingly important as the internet takes over as the main source of information in academic courses.
“Constructive” use implies more than reasoned judgment about the trustworthiness of information sources; it implies using authoritative sources along with other sources, such as personal experience and experimental results, to create and improve knowledge products of value. Yeo and Tan (2010) presented a detailed analysis of five high school physics students’ efforts to build an explanation of an actual rollercoaster accident. The most significant “authoritative source” in this case was the teacher, who introduced the concept of energy change and provided a formula for calculating it. The students then gathered additional concepts related to energy change from internet sources, which then underwent a series of transformations and additions that led finally to a coherent explanation of the accident. Yeo and Tan generalized this process into a model of progressive development of authoritative information into a problem solution. Using Knowledge Forum notes from a university psychology course studying bullying, F-C. Chen, Chang, and Yang (2013) separated notes into ones that referred to authoritative sources and ones that did not, then traced efforts at idea improvement
over a semester of study. Although a general trend was observed similar to that found by Yeo and Tan—from uncritical registering of information to inferential use of it—there were major differences, which the authors attributed to the difference between social science and physical science knowledge. The main sources of information in the discussions of bullying were not authoritative texts (which accounted for only about 20% of references) but movies, interviews with adolescents, and personal experiences. Interviews with the students indicated, however, that they had little sense of idea improvement, instead perceiving a mere proliferation of opinions. More recent work with media annotations, mentioned earlier, suggests that increasingly effective means for bringing authoritative source material into the discourse will greatly enhance its effective use.

Along with many other constructivist approaches, knowledge building recognizes the fundamental importance of dialogue, but with emphasis on discourse that fosters knowledge creation. Thus, knowledge-building discourse is not primarily argumentative or expository. It is a variety of what Mercer (1995/2000) calls “exploratory talk” and what Tsoukas (2009) refers to as “self-distanciating” discourse through which new distinctions are identified. Priority is given to improving ideas and the material artifacts in which they are embedded. Community members are engaged in problem-solving that advances the state of community knowledge, working in design mode rather than simply sharing or evaluating ideas (van Aalst, 2009).

There is a need for more detailed descriptions and analyses of knowledge-building episodes such as those of Caswell and Bielaczyc (2002), Moss and Beatty (2006), Messina and Reeve (2006), and Yeo and Tan (2010) to provide insights into what it means to take a principles-based approach, as well as to offer prototypes that beginning teachers may use in crafting their own principled approach.
More detailed descriptions and analyses are also needed regarding student action surrounding UN sustainability goals (Toth & Ma, 2018), work in innovation networks (Ma et al., 2017), and design and production of material artifacts—as Seitamaa-Hakkarainen and her collaborators have done with respect to weaving (e.g., Seitamaa-Hakkarainen, Viilo, & Hakkarainen, 2010), to convey pervasive knowledge building—a way of engaging the world not confined to the classroom.
3.3 Knowledge-Building Analytics and Analytical Tools
Whereas learning management systems provide tests of course content or personalized paths through it, Knowledge Forum provides feedback tools that students can use to reflect on their community’s advances and their own contributions to them, and to determine future directions: for instance, a bar graph showing the class’s use of different epistemic markers, a tool that allows students to compare their own use of domain vocabulary with the vocabulary of curriculum guidelines or authoritative sources, a tool for mapping sequences of topically related notes (Zhang & Chen, 2019), and a tool for identifying and assembling promising ideas to be considered in planning further knowledge building (B. Chen, 2017). Results reported by Resendes
(see Further Reading) show that students as young as grade 2 can use analytic tools to obtain feedback rather than relying on the teacher to provide it, and on their own initiate action based on the feedback. More than 20 analytic tools are available for use in Knowledge Forum, providing a wealth of data for analytics relevant to knowledge building.

B. Chen and Zhang (2016) examine the challenges in using learning analytics to advance knowledge creation and review developments that have shown positive effects in knowledge building at the school level. Among the distinctive characteristics of knowledge-building analytics are:

• Group-level assessment. Assessment to support knowledge building should go beyond the level of individual or aggregated individual data to assess variables that only exist at the supra-individual level: for instance, collaboration and the state of the art or the state of knowledge in a community.
• Usability. Assessments and analytical tools should be simple and accessible enough that students even in the early school grades can use them to gain insight into their collective knowledge practices (Wise, 2014) and initiate assessment rather than waiting for others to evaluate them.
• Transparency (Resnick, Berg, & Eisenberg, 2000). For the benefit of teachers as well as students, information in the form of easy-to-interpret comparisons and event sequences is preferred over information generated by complex algorithms with the “black box” character of neural net processes. The latter are not ruled out, but users should have opportunities to experiment and play with them to gain a sense of control (like the sense of control they may have with a cell phone, which is also a “black box” as far as ordinary users are concerned).
• Visualizations. Clear visualizations can partly offset the opacity of underlying computations (Ma, 2018).
• Replays and trends.
Information fed back to students should allow them to see change over time and indicate trends or directions, rather than representing only one point in time. For instance, KBDeX (Oshima, Oshima, & Matsuzawa, 2012), incorporated into Knowledge Forum, can display animations of social and semantic networks as they grow and change shape over time.

An important part of group self-assessment is assessing knowledge progress: Are we getting anywhere? Are our ideas improving? How am I helping? Existing automated tools assess knowledge progress by using an expert corpus or syllabus as a reference, with progress indicated by increasing closeness to the reference set. This approach can be informatively applied to change over time in a Knowledge Forum database (Velazquez, Ratté, & de Jong, 2017), but in a fundamental way it is at odds with knowledge creation. Judging creativity by its resemblance to something else is anomalous; going beyond the expert corpus, or deviating from it in promising ways, will likely be scored as falling short. There may be no solution to this particular problem, but we plan experiments to map the emerging work of a student community onto the emerging work of an expert community so that convergence or divergence over time can be visualized and made an object of discussion. Also, training a neural net on expert ratings of ideational creativity should produce a tool that is at least as
M. Scardamalia and C. Bereiter
good as the nonexpert evaluations normally available in educational settings, and the ratings can become objects of investigation in their own right.
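The reference-corpus approach to assessing progress can be illustrated with a minimal sketch: represent each time slice of community discourse and the expert or curriculum corpus as bags of words, then track similarity over time. This is only an illustration of the general idea, not the method used by any of the tools cited above; the tokenizer, the sample reference vocabulary, and the sample student notes are all invented for demonstration.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words (0.0 if either is empty)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def tokenize(text: str) -> Counter:
    # Naive tokenization; a real tool would filter stop words, stem, etc.
    return Counter(text.lower().split())

# Expert/curriculum reference vocabulary (invented example).
reference = tokenize("photosynthesis chlorophyll pigment light energy glucose")

# Community notes grouped into successive time windows (invented example).
windows = [
    "leaves turn colors in fall because it gets cold",
    "the green pigment hides other colors in the leaf",
    "chlorophyll is the pigment that captures light energy for photosynthesis",
]

# One similarity score per time window: the "closeness to reference" trend.
trend = [cosine_similarity(tokenize(w), reference) for w in windows]
```

Here a rising trend would be scored as progress, which is exactly the limitation discussed above: a community that diverges from the reference corpus in a promising new direction would register as falling short.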
3.4 Epistemological Issues
We have already suggested some philosophical differences among people who are trying to advance knowledge building. The following are three much-discussed issues that are, broadly speaking, epistemological in that they may reflect differences in what counts as knowledge, but at a more pragmatic level they represent different views of how knowledge development may best be supported.
3.4.1 Activity Structures
An assumption that now finds agreement in all corners of educational thought is that learning depends on activity of the learner at both cognitive and physical levels. Differences arise concerning the teacher's role as regulator or facilitator of that activity. The knowledge-building principle of "epistemic agency" calls for teachers to help students take responsibility for knowledge building/knowledge creation at the highest possible levels. Teachers advancing their practices in line with knowledge-building principles often engage students in principle-based classroom discussion. Democratizing knowledge: "What can we do to be sure we hear from everyone?" Improvable ideas: "What an interesting idea! I never thought of it that way. What can we do to find out?" Collective responsibility for community knowledge: "What do the data (from a Knowledge Forum analytic tool) show about ideas from the curriculum that we are not addressing, and ideas we are addressing that are not in the curriculum?" We might think of this reflection-on-practice as principles-based rather than procedures-based.

The prototype for an educational approach composed of clearly defined and orchestrated activity structures and procedures having epistemic purposes was Brown and Campione's (1994) Communities of Learners. The activity or "participant" structures—some borrowed, some original—included reciprocal teaching, cross-age tutoring, jigsaw, and benchmark lessons. These were frankly represented by Brown and Campione as rituals, to be practiced repeatedly:

The repetitive, indeed, ritualistic nature of these activities is an essential aspect of the classroom ... As soon as students recognize a participant structure, they understand the role expected of them. Thus, although there is room for individual agendas and discovery in these classrooms, they are highly structured to permit students and teachers to navigate between repetitive activities as effortlessly as possible. (p. 236)
While the advantages claimed for prescribed activity structures are persuasive, there is a problem applying them in Knowledge Building. It is revealed in the claim that fixed activity structures enable students to “understand the role expected of
them." What if the role expected of students is that of knowledge creators? Can there be a ritualistic activity structure for creating new knowledge and improving it? Some of the popular literature on knowledge creation and design thinking suggests a sequence of activities, but studies of the functioning of actual knowledge-creating research groups (e.g., Dunbar, 1997) suggest a much more self-organizing and opportunistic process. None of the Knowledge Building implementations we know of comes anywhere near the level of structuration found in Communities of Learners. In contrast, engaging teachers in principles-based design has led to powerful teacher–researcher innovations (Resendes & Dobbie, 2017). For example, Messina and Reeve (2006) found higher levels of all kinds of knowledge-building activity in the self-organized community approach, along with greater content mastery and depth of inquiry. Furthermore, research in progress by Gaoxia Zhu indicates that increased student agency in knowledge building is associated with increased feelings of well-being.
3.4.2 Scripts and Scripting
Scripted and unscripted aspects of creative work with knowledge were the subject of a 2017 CSCL conference colloquium. With the Script Theory of Guidance ("SToG") (Fischer, Kollar, Stegmann, & Wecker, 2013) positing both external and internal scripts, macro- as well as micro-scripts, it becomes possible to view any educational process as scripted. Thus, the epistemic markers used in Knowledge Forum could be regarded as external scripts, despite many non-script-like features such as those mentioned above. SToG generally calls for external scripting in the early stages of an educational process, gradually diminished as scripts become internalized. While a good case can be made for an external-to-internal process in a conventional teacher-led classroom, Knowledge Building raises the question of the extent to which the student community itself can function as a system that adapts to its members' cognitive and emotional needs while advancing community knowledge. Jianwei Zhang and his team have developed what they call "reflective structuration," in which teacher and students collaboratively design activities in a particular area (Tao & Zhang, 2017), with positive effects on level of participation and quality of contributions in a before-and-after experiment. The "structuration" may be seen as a kind of scripting, the "reflective" part as self-organization.

Adaptation to students' varying needs is found in all modern education systems. Knowledge-building principles of student agency and community knowledge, however, take adaptation in quite a different direction. The helpful intervention of a teacher or other agent may sometimes be needed, but Plan A is always to help the students deal with the problem as a community. If discussions are flat, scattered, and leave some students out, this becomes a problem for the students themselves to investigate and solve collectively.
If an important Knowledge Forum view is turning into a mess, students can appoint classmates to curate the view, clear up redundancies, and organize the space to reflect knowledge advances. If an authoritative text is proving incomprehensible, students can marshal their resources to make sense of it
or they may decide to flag it as something to be dealt with later, when they understand more. All these kinds of actions might be described in terms of scripts, but this may not be a very illuminating way to capture the essence of what is going on.

The whole area of controversy surrounding structure, scripting, guidance, and control has come to center on the question "How much?" (see, for instance, the oft-discussed paper by Kirschner, Sweller, & Clark, 2006). To us, this is not the right question. The conventional answer, "Enough but not too much," is hard to dispute but presages settlement on a mediocre middle ground rather than any innovative leap forward. A potentially more fruitful way of looking at these issues is through the lens of chaos theory, which sees the optimal condition for creativity and self-organization as occurring at "the edge of chaos" (Bilder & Knudsen, 2014). No one is advocating chaos, but activity structures and scripts seem designed to keep students from getting close to the edge. The edge is where unanticipated opportunities arise and evolution is most rapid. The educational challenge is to optimize these conditions for all students. Issues of structure and guidance are sure to arise, but Plan A, as suggested earlier, is to strengthen the ability of the community to identify and solve its own problems. That will require more than encouraging participants to design their own scripts.
3.4.3 Material Artifacts
Producing tangible or material artifacts is an essential part of many constructivist approaches (e.g., Paavola & Hakkarainen, this volume) and will be found throughout examples of knowledge building. Stahl (2008) described such material artifacts as "knowledge-embodying" and declared them to be essential in knowledge building. Setting aside philosophical issues concerning the nature of knowledge, the practical issue for Knowledge Building concerns productive interaction, as conceptual and material actions are intertwined. We agree that sustained collaborative work with ideas requires some semi-permanent representation of the ideas, and that ideas are improved, lost, distorted, or in other ways changed as the representation or embodying object takes form. It follows that material and conceptual production must continuously interact. Paavola and Hakkarainen (2014) can be read to assert that the focus of attention and effort should be steadily directed toward the collaborative production of a shared material object, which is the essential third party in a trialogue, along with the human participants who without it form only a dialogue. However, in an earlier paper, Paavola and Hakkarainen (2004) allowed that conceptual artifacts could in some cases serve as the third party. If you grant this, then in practical terms the distinction between conceptual and material artifacts reduces to one of relative emphasis. It is true that the Trialogical research program puts emphasis on the creation of material artifacts—especially in the work of Seitamaa-Hakkarainen and collaborators on weaving as a knowledge-creating process (Seitamaa-Hakkarainen et al., 2010). This work fits well within a knowledge-building paradigm and has often been presented at knowledge-building summer institutes. Other knowledge-building research has dealt with medical illustration as a
form of knowledge-embedding artifact (Lax, Russell, Nelles, & Smith, 2009). Knowledge builders typically design the research and produce the models or artifacts that lead to conceptual advances (e.g., Grade 1 students placing leaves in freezers and microwaves to test their theories regarding effects of cold and heat on leaves changing color; Grade 5 students drawing diagrams of gravitational pull to explore Newton’s thought experiment about shooting a cannon from a high mountain peak with enough force and speed to travel around the earth). Knowledge-building descriptions frequently focus on students’ conceptual artifacts—on their ability to treat ideas as real things—because we see this as the most sorely neglected and difficult kind of creative activity in general education (possibly neglected because it is difficult). This emphasis also follows from our earlier research on the psychology of writing, in which we argued that producing knowledge-representing documents can have a transformative effect on knowledge, but only if the process involves coordinated work in two problem spaces—the space of content or knowledge problems and the space of rhetorical or presentational problems (Scardamalia, Bereiter, & Steinbach, 1984). Immature writers focus too much on the writing task (or in other contexts the media production task) and too little on its conceptual content, with the result that authorship produces little development of ideas and understanding. We have seen examples of this in many classrooms, where attention to a written product, model, or display took up all the cognitive oxygen in the room, leaving none for the knowledge supposedly being created. Not to be overlooked in discussions of artifact construction, material or conceptual, is that play with ideas comes naturally to children, much the same as play with physical objects. 
Playing with words and numbers can be easily observed; play with explanatory ideas—that is, with what children like to call “theories”—is common in Knowledge Building classrooms from prekindergarten on up (see, for instance, Caswell & Bielaczyc, 2002; Moss & Beatty, 2006).
4 The Future: Prospects for a Knowledge-Building Culture

Expecting children and naive students to create knowledge exceeds most school objectives, both in terms of cognitive capabilities and in terms of level of agency or responsibility. Is this heightened expectation realistic? Many of our colleagues in CSCL, in informal conversation, have expressed support for Knowledge Building as an ideal, but one they regard as utopian—in other words, "You can't get there from here." In view of the severe systemic constraints on change in education systems, doubts about feasibility are quite understandable. We are optimistic because we see examples of substantial progress toward Knowledge Building in school systems globally, with different kinds and degrees of constraint: from urban to rural, from schools with high-stakes testing to schools freed from pressure to conform to curriculum guidelines and testing, from new teachers to veterans, from single schools trying to implement a change in pedagogy to
government–school–university alliances, from kindergarten to postsecondary education, and from schools to cross-sector enterprises in health care.

Undocumented, but evident in the remarks of many teachers who have taken up Knowledge Building, is the fact that they do not find themselves at a loss as to what to do when applying knowledge-building principles in the absence of prescribed activities. These are usually teachers already familiar with some more generic type of constructivist teaching, such as inquiry or project-based learning. For them, knowledge-building principles present not forbidding challenges but suggestions of how they can move from lower-case to upper-case knowledge building. At a deeper level than success stories is our observation that any shift in the direction of Knowledge Building—that is, a shift toward more positive answers to the questions at the beginning of this chapter—has generally had positive results: teacher and students are energized and the quality of students' work reaches new heights.

Our optimism is evidently shared by the hundreds who have taken part in annual Knowledge Building Summer Institutes. Whether researchers or practitioners, they have seen instances encouraging enough about the prospects for knowledge building to make them willing to cross oceans to get involved with it. Knowledge Building is part of a long-term trend toward making knowledge itself an object of deliberate, creative work (Loo, 2017). Working with ideas in design mode, which is what we mean by creative knowledge work, is spreading rapidly throughout modern societies. The profound changes that will eventually bring education into the knowledge age are not likely to come in response to new regulations. They will come about as teachers, parents, and policy makers begin to assimilate the idea of creating and working with knowledge into their worldviews.
In its fullest realizations, Knowledge Building both engages students in authentic knowledge creation and socializes them into an increasingly integrated worldwide culture of knowledge creation. Toward this end, a new international Knowledge Building research institute is being formed to complement Knowledge Building International.
References

Bielaczyc, K., & Collins, A. (2006). Fostering knowledge-creating communities. In A. O'Donnell, C. Hmelo-Silver, & G. Erkens (Eds.), Collaborative learning, reasoning, and technology (pp. 37–60). Erlbaum.
Bilder, R. M., & Knudsen, K. S. (2014). Creative cognition and systems biology on the edge of chaos. Frontiers in Psychology, 5, 1104. https://doi.org/10.3389/fpsyg.2014.01104.
Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners. In K. McGilley (Ed.), Classroom lessons: Integrating cognitive theory and classroom practice (pp. 229–270). MIT Press.
Cacciamani, S. (2010). Towards a Knowledge Building community: From guided to self-organized inquiry. Canadian Journal of Learning and Technology, 36(1). https://doi.org/10.21432/T27599.
Caswell, B., & Bielaczyc, K. (2002). Knowledge Forum: Altering the relationship between students and scientific knowledge. Education, Communication & Information, 1(3), 281–305. https://doi.org/10.1080/146363102753535240.
Chan, C. K. K. (2013). Collaborative knowledge building: Towards a knowledge creation perspective. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. O'Donnell (Eds.), The international handbook of collaborative learning (pp. 437–461). New York: Routledge.
Chen, B. (2017). Fostering scientific understanding and epistemic beliefs through judgments of promisingness. Educational Technology Research and Development, 65(2), 255–277.
Chen, B., Chang, Y.-H., Groos, D., Chen, W., Batu, W., Costa, S., Peebles, B., & Ma, L. (2019). IdeaMagnets: Bridging Knowledge Building in schools with public discourse. Institute for Knowledge Innovation and Technology. Retrieved September 9, 2019, from https://bodong.me/talk/2019-kbsi-ideamagnets/featured.png.
Chen, B., & Hong, H.-Y. (2016). Schools as knowledge-building organizations: Thirty years of design research. Educational Psychologist, 51(2), 266–288.
Chen, B., & Zhang, J. (2016). Analytics for knowledge creation: Towards epistemic agency and design-mode thinking. Journal of Learning Analytics, 3(2), 139–163.
Chen, F.-C., Chang, C.-H., & Yang, C.-Y. (2013). Constructive use of authoritative sources among collaborative knowledge builders in social science classroom. In N. Rummel, M. Kapur, M. Nathan, & S. Puntambekar (Eds.), To see the world and a grain of sand: Learning across levels of space, time, and scale: CSCL 2013 conference proceedings, Volume 1—Full papers & symposia (pp. 73–80). New York: International Society of the Learning Sciences.
Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 3, 105–122. https://doi.org/10.1007/s11412-007-9035-z.
Damşa, C. I., Kirschner, P. A., Andriessen, J. E. B., Erkens, G., & Sins, P. H. M. (2010). Shared epistemic agency: An empirical study of an emergent construct. Journal of the Learning Sciences, 19(2), 143–186. https://doi.org/10.1080/10508401003708381.
Dunbar, K. (1997). How scientists think: Online creativity and conceptual change in science. In T. B. Ward, S. M. Smith, & S. Vaid (Eds.), Conceptual structures and processes: Emergence, discovery and change (pp. 461–493). American Psychological Association.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005.
Fischer, M. M., & Fröhlich, J. (Eds.). (2013). Knowledge, complexity and innovation systems. Springer Science & Business Media.
Gabora, L., & Kauffman, S. (2016). Toward an evolutionary-predictive foundation for creativity: Commentary on "Human creativity, evolutionary algorithms, and predictive representations: The mechanics of thought trials" by Arne Dietrich and Hilde Haider. Psychonomic Bulletin and Review, 23(2), 632–639. https://doi.org/10.3758/s13423-015-0925-1.
Gourlay, S. (2006). Conceptualizing knowledge creation: A critique of Nonaka's theory. Journal of Management Studies, 43(7), 1415–1436.
Haneda, M., & Wells, G. (2000). Writing in knowledge-building communities. Research in the Teaching of English, 34(3), 430–457.
Hong, H. Y., & Sullivan, F. R. (2009). Towards an idea-centered, principle-based design approach to support learning as knowledge creation. Educational Technology Research and Development, 57(5), 613–627.
Jörg, T. (2016). Opening the wondrous world of the possible for education: A generative complexity approach. In M. Koopmans & D. Stamovlasis (Eds.), Complex dynamical systems in education: Concepts, methods and applications (pp. 59–92). Springer.
Kimmerle, J., Moskaliuk, J., Oeberst, A., & Cress, U. (2015). Learning and collective knowledge construction with social media: A process-oriented perspective. Educational Psychologist, 50(2), 120–137. https://doi.org/10.1080/00461520.2015.1036273.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimally guided instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.
Laferrière, T., Law, N. W. Y., & Montané, M. (2012). An international knowledge building network for sustainable curriculum and pedagogical innovation. International Education Studies, 5(3), 148–160.
Lakatos, I. (1970). The methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91–195). Cambridge University Press.
Law, N., & Wong, E. (2003). Developmental trajectory in knowledge building: An investigation. In B. Wasson, S. Ludvigsen, & U. Hoppe (Eds.), Designing for change in networked learning environments. Computer-supported collaborative learning (Vol. 2, pp. 57–66). Springer.
Law, N., Yuen, J., & Tse, H. (2012). A teacher's journey in knowledge building pedagogy. In J. van Aalst, K. Thompson, M. J. Jacobson, & P. Reimann (Eds.), The future of learning: Proceedings of the 10th international conference of the learning sciences (ICLS) 2012 (Vol. 1, pp. 212–219). Sydney, Australia: ISLS.
Lax, L. R., Russell, M. L., Nelles, L. J., & Smith, C. M. (2009). Scaffolding knowledge building in a web-based communication and cultural competence program for international medical graduates. Academic Medicine, 84(10), S5–S8. https://doi.org/10.1097/ACM.0b013e3181b37b4d.
Li, Y., & Kettinger, W. J. (2006). An evolutionary information-processing theory of knowledge creation. Journal of the Association for Information Systems, 7(9), 593–617.
Loo, S. (2017). Creative working in the knowledge economy. Routledge.
Ma, L. (2018). Designs for visualizing emergent trends in ideas during community knowledge advancement. In Knowledge building: A place for everyone in a knowledge society: Proceedings of the 22nd annual knowledge building summer institute (pp. 158–162). Institute for Knowledge Innovation and Technology. Retrieved from http://ikit.org/KBSI2018/KBSI2018-Proceedings.pdf.
Ma, L., Tan, S., Teo, C., & Kamsan, M. (2017). Using rotating leadership to visualize students' epistemic agency and collective responsibility for knowledge advancement. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th international conference on computer supported collaborative learning (CSCL) 2017 (Vol. 1, pp. 455–462). Philadelphia, PA: International Society of the Learning Sciences.
Matsuzawa, Y. (2019). Knowledge Forum video annotation to advance community knowledge. Institute for Knowledge Innovation and Technology. Retrieved September 9, 2019, from https://macc704.github.io/www/VideoAnnotaionTool.html.
Mayfield, E., & Rosé, C. P. (2013). LightSIDE: Open source machine learning for text. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 124–135). Routledge.
Mercer, N. (1995/2000). The guided construction of knowledge: Talk amongst teachers and learners. Multilingual Matters Ltd.
Messina, R., & Reeve, R. (2006). Knowledge building in elementary science. In K. Leithwood, P. McAdie, N. Bascia, & A. Rodrigue (Eds.), Teaching for deep understanding: What every educator should know (pp. 110–115). Corwin Press.
Moss, J., & Beatty, R. (2006). Knowledge building in mathematics: Supporting collaborative learning in pattern problems. International Journal of Computer-Supported Collaborative Learning, 1(4), 441–465.
Nevin, R., Melton, M., & Loertscher, D. V. (2010). Google Apps for education: Building knowledge in a safe and free environment. Hi Willow Research & Pub.
Oshima, J., Oshima, R., & Matsuzawa, Y. (2012). Knowledge building discourse explorer: A social network analysis application for knowledge building discourse. Educational Technology Research and Development, 60(5), 903–921.
Paavola, S., & Hakkarainen, K. (2004). "Trialogical" processes of mediation through conceptual artefacts. A paper presented at the Scandinavian Summer Cruise at the Baltic Sea (theme:
Motivation, Learning and Knowledge Building in the 21st Century), June 18–21, 2004. https://www.academia.edu/533526/_Trialogical_processes_of_mediation_through_conceptual_artifacts.
Paavola, S., & Hakkarainen, K. (2014). Trialogical approach for knowledge creation. In S. C. Tan, H. J. So, & J. Yeo (Eds.), Knowledge creation in education (pp. 53–73). Springer.
Paavola, S., & Hakkarainen, K. (this volume). Trialogical learning and object-oriented collaboration. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Paavola, S., Lipponen, L., & Hakkarainen, K. (2004). Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 74(4), 557–576.
Popper, K. R. (1972). Objective knowledge: An evolutionary approach. Clarendon Press.
Resendes, M., & Dobbie, K. (2017). Knowledge Building gallery: Teaching for deep understanding and community knowledge creation (A collection of foundational KB practices and teacher innovations). Leading Student Achievement: Networks for Learning Project.
Resnick, M., Berg, R., & Eisenberg, M. (2000). Beyond black boxes: Bringing transparency and aesthetics back to scientific investigation. The Journal of the Learning Sciences, 9, 7–30.
Scardamalia, M. (2002). Collective cognitive responsibility for the advancement of knowledge. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 67–98). Open Court.
Scardamalia, M., Bereiter, C., McLean, R. S., Swallow, J., & Woodruff, E. (1989). Computer-supported intentional learning environments. Journal of Educational Computing Research, 5, 51–68.
Scardamalia, M., Bereiter, C., & Steinbach, R. (1984). Teachability of reflective processes in written composition. Cognitive Science, 8(2), 173–190.
Seitamaa-Hakkarainen, P., Viilo, M., & Hakkarainen, K. (2010). Learning by collaborative designing: Technology-enhanced knowledge practices. International Journal of Technology and Design Education, 20(2), 109–136.
Sigin, T., van Aalst, J., & Chu, S. K.-W. (2015). Fixed group and opportunistic collaboration in a CSCL environment. International Journal of Computer-Supported Collaborative Learning, 10(2), 161–181.
Stahl, G. (2000). A model of collaborative knowledge-building. In B. Fishman & S. O'Connor-Divelbiss (Eds.), Fourth international conference of the learning sciences (pp. 70–77). Erlbaum.
Stahl, G. (2008). Chat on collaborative knowledge building. Qwerty, 3(1), 67–78. Retrieved from http://GerryStahl.net/pub/qwerty08.pdf.
Tan, S. C., & Tan, Y. H. (2014). Perspectives of knowledge creation and implications for education. In S. C. Tan, H. J. So, & J. Yeo (Eds.), Knowledge creation in education (pp. 11–34). Springer.
Tao, D., & Zhang, J. (2017). Reflective structuration of knowledge building practices in grade 5 science: A two-year design-based research. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th international conference on computer supported collaborative learning (CSCL) 2017 (Vol. 1, pp. 644–647). Philadelphia, PA: International Society of the Learning Sciences.
Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435–502.
Thagard, P. (2007). Coherence, truth, and the development of scientific knowledge. Philosophy of Science, 74(1), 28–47.
Toth, P., & Ma, L. (2018). Fostering student voice and epistemic agency through knowledge building. In Knowledge building: A place for everyone in a knowledge society: Proceedings of the 22nd annual knowledge building summer institute (pp. 96–104). Toronto, Canada: Knowledge Building International. https://www.researchgate.net/publication/329842125_Fostering_Student_Voice_and_Epistemic_Agency_through_Knowledge_Building.
Tsoukas, H. (2009). A dialogical approach to the creation of new knowledge in organizations. Organization Science, 20, 941–957.
van Aalst, J. (2009). Distinguishing knowledge-sharing, knowledge-construction, and knowledge-creation discourses. International Journal of Computer-Supported Collaborative Learning, 4(3), 259–287.
van Aalst, J., & Chan, C. K. K. (2007). Student-directed assessment of knowledge building using electronic portfolios. Journal of the Learning Sciences, 16(2), 175–220. https://doi.org/10.1080/10508400701193697.
van Aalst, J., & Hill, C. M. (2006). Activity theory as a framework for analysing knowledge building. Learning Environments Research, 9(1), 23–44. https://doi.org/10.1007/s10984-005-9000-6.
Velazquez, E., Ratté, S., & de Jong, F. (2017). Analyzing students' knowledge building skills by comparing their written production to syllabus. In M. Auer, D. Guralnick, & J. Uhomoibhi (Eds.), Interactive collaborative learning: Proceedings of the 19th ICL 2016 conference. Advances in Intelligent Systems and Computing (Vol. 1, pp. 345–352). New York: Springer.
Wells, G. (2002). Dialogue about knowledge building. In B. Smith (Ed.), Liberal education in a knowledge society (pp. 111–138). Open Court.
Wise, A. (2014). Designing pedagogical interventions to support student use of learning analytics. In LAK 2014: 4th international conference on learning analytics and knowledge (pp. 203–211). New York: Association for Computing Machinery. https://doi.org/10.1145/2567574.2567588.
Yeo, J., & Tan, S. C. (2010). Constructive use of authoritative sources in science meaning making. International Journal of Science Education, 32(13), 1739–1754.
Yoon, S., & Hmelo-Silver, C. (2017). What do learning scientists do? A survey of the ISLS membership. Journal of the Learning Sciences, 26(2), 167–183.
Zhang, J., & Chen, M.-H. (2019). Idea thread mapper: Designs for sustaining student-driven knowledge building across classrooms. In C. Hmelo-Silver et al. (Eds.), Proceedings international conference of computer-supported collaborative learning (CSCL 2019).
Lyon, France: International Society of the Learning Sciences.
Further Readings

Bereiter, C., & Scardamalia, M. (2006). Education for the knowledge age: Design-centered models of teaching and instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed., pp. 695–713). Mahwah, NJ: Erlbaum. Provides a more in-depth treatment of certain conceptual issues raised in the present chapter: e.g., the nature of conceptual artifacts, the distinctive character of thinking in design mode, comparison of Knowledge Building to skills approaches to 21st-century needs and to other constructivist approaches that aim to bring design thinking into mainline education.
Canadian Journal of Learning and Technology. (2010). Special issue on Knowledge Building, 36(1). Published online at http://www.cjlt.ca/index.php/cjlt/issue/view/1771. A collection of research articles and background papers showing the diversity of educational settings, from kindergarten to medical school, in which Knowledge Building is being developed and applied.
Resendes, M., Scardamalia, M., Bereiter, C., Chen, B., & Halewood, C. (2015). Group-level formative feedback and metadiscourse. International Journal of Computer-Supported Collaborative Learning, 10(3), 309–336. https://doi.org/10.1007/s11412-015-9219-x. Original research on the effects of metadiscourse and feedback tools on Knowledge Building at the primary school level.
Smith, B. (Ed.). (2002). Liberal education in a knowledge society. Chicago, IL: Open Court. Papers, both critical and supportive, presented at an interdisciplinary conference on Knowledge Building as a 21st-century approach to liberal education. The book also contains the original formulation of 12 knowledge-building principles and a paper on "intentional cognition" that
had a formative role leading up to the development of Knowledge Building theory and pedagogy.
Tan, S. C., So, H. J., & Yeo, J. (Eds.). (2014). Knowledge creation in education. Singapore: Springer Science + Business Media. The identification of Knowledge Building with knowledge creation runs through most of the contributions to this volume, many of which are forward-looking, toward pedagogical and technological designs that more fully support knowledge creation by students. Commentaries by Timothy Koschmann and Peter Reimann view Knowledge Building/knowledge creation in a broader context of educational theory and philosophy.
Metacognition in Collaborative Learning Sanna Järvelä, Jonna Malmberg, Marta Sobocinski, and Paul A. Kirschner
Abstract Research has shown that metacognition plays a role in collaborative learning. We view metacognition as a central process supporting all modes of regulation (i.e., self-regulation, shared regulation, and co-regulation), as it enables learners to control and adapt their cognition, motivation, emotion, and behavior at both the individual and group levels. Our claim is that metacognitive monitoring and regulation of collaborative learning can help reduce the collaborative/transactive costs in collaboration and, therefore, contribute to success in computer-supported collaborative learning (CSCL). In this chapter, we discuss the role of metacognition in CSCL and broaden the discussion to regulation. Since regulation in CSCL has been increasingly studied, we review the current state of the art in that research and conclude with how technological and digital tools could be implemented for studying and supporting metacognition and regulation in CSCL.
Keywords Metacognition · Monitoring · Self-regulated learning · Collaborative learning · CSCL
1 Definitions and Scope While collaborative learning has been of interest for a number of decades, it is often the case that learning teams function poorly. Effective collaboration is more than working in groups or completing a task assignment (Jeong & Hartley, 2018). The
S. Järvelä (*) · J. Malmberg · M. Sobocinski Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland e-mail: sanna.jarvela@oulu.fi; jonna.malmberg@oulu.fi; marta.sobocinski@oulu.fi P. A. Kirschner Department of Educational Sciences and Teacher Education, University of Oulu, Oulu, Finland Open University of the Netherlands, Heerlen, The Netherlands e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_15
group, for example, may expend more effort than necessary to effectively or efficiently work together and learn, may encounter emotional and social problems that impede group processes and learning (Kreijns, Kirschner, & Jochems, 2003; Näykki, Järvenoja, Järvelä, & Kirschner, 2017), and/or may not be able to coordinate and execute the necessary learning activities between team members for proper learning (Rogat & Adams-Wiggins, 2014; Zambrano, Kirschner, & Kirschner, 2019). These problems are rooted in the fact that team members often cannot effectively regulate what they do, what they feel, and how they act (Järvelä & Hadwin, 2013). Research has shown that team members do not recognize challenging learning situations and their need for regulation, which restricts their activation of strategic adaptation behavior (Järvenoja, Järvelä, & Malmberg, 2017; Rogat & Adams-Wiggins, 2014). This is where metacognition plays a role in collaboration. We view metacognition as a central process supporting all modes of regulation (i.e., self-regulation, shared regulation, and co-regulation). Metacognition enables learners to control and adapt their cognition, motivation, emotion, and behavior at both the individual and group levels (Hadwin, Järvelä, & Miller, 2017). Only recently have we received evidence that it is not only an individual's but also a group's shared processes that matter (Järvelä et al., 2016; Järvelä, Malmberg, & Koivuniemi, 2016), and for this reason, regulating collaborative learning is critical for success. In this chapter, we discuss the role of metacognition in CSCL and especially point out how understanding and supporting metacognition and regulation in learning can make CSCL more successful. We first introduce the critical processes in collaboration for CSCL and focus on the important role of metacognition in learning in general and for CSCL specifically. After that, we broaden the discussion to regulation.
Since regulation in CSCL has been increasingly studied, we review the current state of the art in that research. Finally, we conclude by discussing how technological and digital tools could be implemented for studying and supporting metacognition and regulation in CSCL. It has long been clear that what matters in collaborative learning is the effort required to construct shared knowledge, that is, the "motor" of collaborative learning (Schwartz, 1995). This includes the intensity of interactions required for detecting and repairing misunderstandings. Early research on collaborative learning (e.g., Dillenbourg, 1999) was especially interested in questions such as "Under which conditions do specific interactions occur?" or "Which interactions are predictive of learning outcomes?" Later research produced guidelines for designing collaborative learning and computer-supported collaborative learning by creating conditions and tools in which effective group interactions are expected to occur (e.g., Jeong & Hmelo-Silver, 2016). Many studies have reported increased knowledge-building activities in CSCL (Zhang, Scardamalia, Reeve, & Messina, 2009), but also that learners share surprisingly little knowledge after collaboration, even though they quickly adapt mutually in interaction (Fischer & Mandl, 2005; Jeong & Chi, 2007). The question "How do learners build a shared understanding of the task to be achieved?" has received more attention, as it points to co-construction of shared understanding (Roschelle & Teasley, 1995) and mental effort (Kirschner, Paas, & Kirschner, 2009; Kirschner, Paas, Kirschner, & Janssen, 2011). Bringing together a group of learners does not guarantee that they will work and learn properly, either as
a group or individually. They must develop a shared mental model and/or a collective scheme of cognitive interdependence on how to effectively communicate and coordinate their actions so as to share group knowledge, appropriately distribute available task information, and exploit the quality of participation of each group member in the solution of the problem at hand (Fransen, Weinberger, & Kirschner, 2013). Transactive activities (Kirschner, Sweller, Kirschner, & Zambrano, 2018) play a crucial role in collaborative learning. These synchronous and asynchronous activities enable groups to acquire collective knowledge of who their other group members are, how they can deal with the task, the group's accuracy and willingness to carry out the task, and how all group members should coordinate what they are doing with each other to accomplish the task together by mediating the acquisition of individual and group domain-specific knowledge and shared generalized knowledge (Kalyuga, 2013; Prichard & Ashleigh, 2007). As stated by Popov, van Leeuwen, and Buis (2017), "learning is particularly likely to occur when the collaborating students engage in transactive discourse (i.e., critique, challenging of positions, and attainment of synthesis via discussion), because this form of discourse gives rise to cognitive activities that stimulate knowledge construction" (p. 426). As is clear, collaborative learning effort is influenced by how well students coordinate their activities across time and transact with each other's ideas. For example, Popov et al. (2017) studied whether the simultaneous alignment of student activities (i.e., temporal synchronicity) and students successively building on each other's reasoning (i.e., transactivity) predict the quality of collaborative learning products in computer-supported collaborative learning.
They found that neither temporal synchronicity nor transactivity was directly related to the quality of group products, which pointed to a need for sociocognitive support for collaboration among group members. In the short run, collaborative learning results in group members trying to successfully perform a certain learning task or solve a specific problem together. In the long run, as an instructional method, it is very important that all members of the group develop effective experience working together that facilitates every member in acquiring domain-specific knowledge from this combined effort (Kirschner et al., 2018). In all, whatever this “effort” or “effective experience” is, the issue is open to debate, highlighting that there are critical aspects of the success of collaborative learning that are still unknown.
2 History and Development Metacognitive monitoring and regulation of collaborative learning can help reduce the collaborative/transactive costs in collaboration. In 1986, Kruger and Tomasello (1986) suggested distinguishing between three types of transacts: transactive statements (critique, refinement, or extension of an idea), transactive questions (requests for clarification, justification, or elaboration of ideas), and transactive responses (justification of ideas or proposals). Furthermore, these transacts can be self-oriented or other-oriented. As such, these activities help create shared task spaces and social
spaces (Fransen et al., 2013) where teams can function more effectively, efficiently, and enjoyably, not only in terms of communication and coordination but also between interactive minds and with those fine-grained socioemotional processes that make collaboration and actions in joint learning activities happen in CSCL. Metacognition is a term originally defined by Flavell (1979) that refers to the conscious awareness of cognitive processes; it also includes knowledge of self and of different types of tasks, strategies, and metacognitive experiences, all of which influence learning alone or in groups. Self-regulation refers to the ways that learners systematically activate, sustain, and regulate their cognitions, motivations, behaviors, and affects toward attaining their learning goals. When metacognition is viewed from the perspective of self-regulated learning (SRL), it is considered an engine for regulating cognition, motivation, emotion, or behavior in any phase of the SRL process (Wolters & Won, 2017). This is to say, metacognition is not the same as SRL. SRL has a broader focus than metacognition (Dinsmore, Alexander, & Loughlin, 2008), and the foundation of metacognition is in the mind of the individual, whereas SRL emphasizes the reciprocal interaction between personal factors, the environment, and behavior, which cannot be separated (Bandura, 1989). Thus, when metacognition is viewed from the perspective of SRL, the focus lies on metacognitive monitoring, which concerns the individual student's qualities of thoughts and thinking (Winne, 2018) and goes beyond metacognition. Metacognitive monitoring activities are the core processes in individual learning, but we also have an increasing understanding of how metacognition plays a role among peers (Volet, Summers, & Thurman, 2009) and on a group level (Malmberg, Järvelä, & Järvenoja, 2017).
Metacognition, as such, is invisible, but it becomes visible via cognitive activities (e.g., using strategies) or via externalizing thoughts to other group members (e.g., "I don't understand what this task is about") (Winne & Hadwin, 1998). Metacognitive monitoring is always an internal individual mental process, but when it is externalized in the context of collaborative learning, it can trigger co-shared and socially shared regulation of learning. In collaborative learning, this still does not mean that learners necessarily change the course of their activities or externalize their thoughts to other group members even when a need arises. Empirically and methodologically, we still cannot answer questions such as "What is the threshold at which metacognitive monitoring triggers regulation in collaborative learning?" Theoretically, metacognitive monitoring can occur in any phase of regulated learning (Winne & Hadwin, 1998); it does not have a clear position in terms of when it occurs, but it can be activated after each regulated learning phase (Sonnenberg & Bannert, 2015). Nevertheless, empirical studies have consistently shown that, if a focus on metacognitive monitoring causes collaborating learners to neglect or ignore group members' utterances, this can lead to dysfunctional group work and hinder the possibility of effective collaboration (Rogat & Linnenbrink-Garcia, 2011). For example, Volet, Vauras, Salo, and Khosa (2017) argue that it is not only individual contributions that make a difference in collaborative learning but also how group members react to their peers' contributions. They also suggest that, "if questions, doubts and hesitations are not addressed, but instead either ignored or treated as unreasonable, individuals tend to withdraw their participation or stop managing their uncertainty" (Volet et al., 2017,
p. 90). Therefore, it is not only a case of metacognitive monitoring but also of when and how peers react to it. Because metacognition plays an important role in group learning, technologies have been developed that focus specifically on supporting metacognition in the context of CSCL. The support is not only for individual metacognitive processes but also for raising the group's metacognitive awareness in terms of shared regulatory activities, such as planning, task performance, monitoring, and reflection. Typically, the support has been provided in the form of scripts or prompts that are used to facilitate learners' awareness of their metacognitive processes at the individual and group levels. For example, Wang, Kollar, and Stegmann (2017) investigated the effectiveness of adaptable collaboration scripts in an asynchronous text-based CSCL environment in the higher education context. The adaptable script allowed students to modify parts of the script based on their self-perceived needs while solving complex cases related to psychological theory. They found that the adaptable script increased students' use of monitoring and reflection activities, but it did not have an effect on planning. They concluded that the adaptable script actually decreased the need for planning but, at the same time, provided more opportunities for monitoring the progress of the task. Thus, the results showed that the adaptable script facilitated learners' use of self-regulation through the promotion of co-regulation processes. Iiskala, Volet, Lehtinen, and Vauras (2015) studied how socially shared metacognition manifested during asynchronous CSCL science inquiry among 12-year-old primary school students. The results showed that socially shared metacognition was present in all the phases of the inquiry activities, and its main function was to maintain the perceived appropriate direction of the ongoing cognitive process.
In their study, Su, Li, Hu, and Rosé (2018) investigated college students' regulatory behaviors in CSCL. Participants completed wiki-supported collaborative reading activities in the context of learning English as a foreign language over a semester. The analysis consisted of content analysis and sequential analysis of the students' chat logs. Results showed that high-performing groups demonstrated more instances of content monitoring (defined as checking, elaborating, revising, and improving group members' task responses) and a higher proportion of evaluation. The sequential analysis revealed that high-performing groups showed a pattern of content monitoring, organizing, and process monitoring. Low-performing groups instead showed a pattern of organizing and a limited set of regulatory skills, highlighting the necessity of adaptive scripts in CSCL that would facilitate groups' co-shared and socially shared regulation of learning.
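To make the proportion-based comparison described above concrete, the following Python sketch computes the share of each coded regulatory behavior in a group's chat log. The group labels, codes, and event streams are invented for illustration; they are not data from Su et al. (2018).

```python
from collections import Counter

# Invented coded chat-log events for two hypothetical groups
# (codes loosely echo those described in the text; values are made up).
groups = {
    "high_performing": ["content_monitoring", "organizing", "content_monitoring",
                        "process_monitoring", "evaluation", "evaluation"],
    "low_performing": ["organizing", "organizing", "organizing",
                       "content_monitoring", "organizing"],
}

# Proportion of each regulatory code within each group's event stream.
for name, events in groups.items():
    counts = Counter(events)
    proportions = {code: n / len(events) for code, n in counts.items()}
    print(name, {c: round(p, 2) for c, p in proportions.items()})
```

In actual studies the codes come from human-coded transcripts; once coding is done, the quantitative comparison itself is this simple.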
3 State of the Art We have been working with a concept of socially shared regulation in learning for understanding what constitutes “sharing” cognition, motivation, and emotions among members in collaborative learning groups. Collaborating requires negotiating beliefs and perceptions regarding the collaborative goals and plans about how to
achieve the task. This is a complex process of co-construction of goals and plans, where metacognition and regulation processes of collaborating individuals need to be exchanged, negotiated, and aligned to achieve shared or joint regulation (Järvelä & Hadwin, 2013). Effective collaboration requires group members to ensure that they work toward the shared goals and reveal to each other when they become aware that their collaboration is not heading toward the shared goals. This means that, during collaboration, learners need to negotiate shared goals to ensure they all work toward the same outcome (e.g., Järvelä, Malmberg, Haataja, Sobocinski, & Kirschner, 2019), maintain a positive socioemotional atmosphere to ensure fluent collaboration (e.g., Lajoie et al., 2015), and finally, coordinate and ensure that each member is responsible for the joint outcome of their collaborative task (Rogat & Linnenbrink-Garcia, 2011). During collaboration, learners engage in metacognitive monitoring, focusing on their cognition ("Am I understanding this?"), motivation and emotions ("Are my feelings or thoughts disturbing my learning progress?"), behavior ("Do I have all the things I need to perform this task?"), and finally, coordination of the collaboration ("Is my group progressing with this task?"). Regulation can be realized in CSCL through different types of regulation. Within collaborative groups, co-regulated learning occurs when learners' regulatory activities are guided, supported, shaped, or constrained by others in the group (Volet et al., 2009). Co-regulation plays a role in shifting groups toward more productive learning, and it can create affordances and constraints for productive SRL and shared regulation of learning in the forthcoming learning situations (Hadwin et al., 2017). For example, in CSCL, support can come from one person, group members, or affordances from the technological tools.
Then, co-regulation serves as a mechanism for shifting regulatory ownership to an individual or group, implying that regulatory expertise is distributed and shared across individuals and evoked when necessary by and for whom it is appropriate (Hadwin et al., 2017). When groups engage in socially shared regulation, they extend their regulatory activity from the "I" to the "we" level to regulate their collective activity in agreement (Järvelä & Hadwin, 2013). Shared regulation is a collectively agentic process in which group members adopt joint goals and standards. They work together to complement and negotiate shared perceptions and goals for the task; they coordinate strategic enactment of the task and collectively monitor group progress and products; and they make changes when needed to optimize collaboration in and across tasks. Socially shared regulation differs from co-regulation to the extent that joint regulation emerges through a series of transactive exchanges amongst group members; therefore, it contributes to the transactive costs of collaborative learning. In all, both regulation types, combined with individuals' self-regulation, play a part in collaborative learning. CSCL involves multiple people sharing responsibility for a collective task and, ideally, simultaneously shifting between self-regulation, co-regulation, and shared regulation in time (Järvelä & Hadwin, 2013). Through metacognitive monitoring and shared evaluation, learners recognize how the learning process is progressing, and through these regulatory acts,
learners are able to respond to new situations and challenges by optimizing the collaborative processes, standards, and products (Hadwin et al., 2017). Regulation in CSCL has been increasingly studied, especially in high school and higher education CSCL contexts, and the research evidence points to the most critical aspects of regulation in collaboration. This evidence deals with learners' awareness of the need for regulation; time and adaptation; and recognition of the motivational and social aspects of collaboration. Learners often do not recognize the opportunities for socially shared regulation (Malmberg, Järvelä, Järvenoja, & Panadero, 2015; Miller & Hadwin, 2015). For example, Malmberg et al. (2015) used planning tools asking learners what type of challenge they confronted as a group and what regulatory strategy they used to tackle the challenge. The researchers found that the groups that outperformed the others identified a variety of cognitive and motivational challenges and also invented strategies to tackle those challenges, whereas the groups who were not as successful repeated the same types of "superficial" challenges, such as time management or challenges with technology, and were not able to identify either cognitive or motivational challenges in their collaboration. Similarly, the study by Sobocinski, Malmberg, and Järvelä (2017) showed that, when the temporal order of regulatory phases and types of interaction were compared between groups participating in high- and low-challenge sessions, high-challenge session groups switched between the forethought and performance phases more often, which is a sign of metacognitive monitoring. There is research evidence showing that time and adaptation are important in the progress of regulation (Järvelä, Järvenoja, Malmberg, & Hadwin, 2013; Malmberg et al., 2015). Järvelä et al. (2013) used a qualitative lens to explore how groups progress, or do not progress, in their socially shared regulation.
Their detailed analysis revealed that regulation develops over time and may relate to the degree of collaborative success, as measured by the quality of a collaborative product. Malmberg et al.'s (2017) study suggests similar findings. The study examined the temporal sequences of regulated learning processes of groups collaborating over 2 months. The temporal analysis showed that collaborative interactions focusing on task execution led to socially shared planning and that metacognitive monitoring facilitated task execution. The conclusion is that, in order for socially shared regulation to occur, there needs to be a distributed regulated learning process (co-regulation) and joint understanding of a task (knowledge construction) before group members can set the stage for socially shared regulated learning. It has also been noticed that recognizing the social and motivational aspects of collaboration and successfully regulating those challenges is favorable for learning performance (Järvelä, Malmberg, & Koivuniemi, 2016; Näykki, Järvelä, Kirschner, & Järvenoja, 2014). For example, Järvelä et al. (2013) conducted a temporal analysis of log data and chat discussions in CSCL. Their study focused on both individual and group regulation processes. The study revealed that individual student SRL activities focus on the metacognitive aspects of learning (e.g., task understanding and monitoring), whereas socially shared regulation involves the coordinative activities of collaboration, such as planning and strategy choices. It was also found that the socially shared regulation of motivation is important in maintaining productive
collaboration. Bakhtiar, Webster, and Hadwin (2017) conducted a cross-case analysis of two groups collaborating on an online text-based assignment. The findings underline the importance of emotion regulation during planning to achieve a positive socioemotional climate and point to negative emotions serving as a constraint for shared adaptation in the face of challenges.
4 The Future The main interest in CSCL is supporting the fine-grained processes that make collaboration and actions in joint learning activities happen. CSCL asks how technological and digital tools can be designed in order to support learners’ cognition and transactional activities in such a way that they mutually influence each other (Cress, Stahl, Rosé, Law, & Ludvigsen, 2018). Looking at the major problems encountered when using CSCL in practice, one can conclude that many of them might be solved if we had tools at our disposal that could help the participants in CSCL groups regulate learning within the group. In the future, technology can also play a major role both in helping researchers understand the complex invisible processes of metacognition and in regulating the learning and supporting of groups for more efficient collaboration. Our approach has been to develop technological supports and tools for the acquisition of regulation skills (see Järvelä, Kirschner, et al., 2016, for a review). However, the problem is that these tools are not enough for achieving lasting skills in SRL, coRL, and SSRL. Support can be designed that enable learners to increase awareness of their own learning processes and that of others to enhance metacognitive awareness. We emphasize three design principles for supporting SSRL: (1) increasing learners’ awareness of their own and others’ learning processes, (2) supporting the externalization of students’ and others’ learning processes in a social plane and helping in sharing and interaction, and (3) prompting the acquisition and activation of regulatory processes. 
Additional support for the process of collaboration has come from implementing group awareness tools (Buder, Bodemer, & Ogata, this volume) as well as from designing learning environments so that they can support or provoke metacognition, for example, by fostering group awareness, implementing scripting for CSCL (see Vogel, Weinberger, & Fischer, this volume), or structuring groups for collaboration (see De Wever & Strijbos, this volume). Since metacognitive monitoring is an internal process, analytical methods that focus on learners' externalization of metacognition can, at this moment, only partly capture metacognition (e.g., Malmberg et al., 2017). Thus, our current understanding of collaborative processes is based on research focusing on limited objective or subjective measures of collaboration (e.g., self-reports, video, and chat logs, and then analyzing the discourse and/or looking at the products of group work). Iiskala et al. (2015), for example, used qualitative content analysis to identify learners' externalized metacognition related to regulatory activities and then focused on
detailed analysis of the discourse. Similarly, Su et al. (2018) used chat logs and analyzed them qualitatively to identify various aspects of metacognitive monitoring along with regulatory activities. At present, this is the main data source for identifying how and when regulation of learning—along with metacognitive monitoring in collaborating groups—takes place. It is still human-powered, time-consuming qualitative content analysis. Understanding how CSCL technology can aid in data collection can help. Since the data are often collected in the context of CSCL, learner interactions (focusing on regulation or metacognitive monitoring) are often time-stamped. This provides opportunities to explore not only the qualities of regulation but also how regulatory activities associated with metacognitive monitoring are temporally sequenced (Molenaar & Järvelä, 2014). This is achieved through the use of, for example, lag sequential analysis (Malmberg et al., 2017; Su et al., 2018), process discovery (Malmberg et al., 2015; Sobocinski et al., 2017), or social network analysis (Järvelä et al., 2013) to investigate how a group of students collaboratively build their regulation as collaboration proceeds. Methodologically, the analytical approaches described in this chapter are time-consuming and labor-intensive. Temporal and sequential analytical methods along with qualitative content analysis methods have been effective in describing the functional and interactive aspects of collaboration, but they often lack the power to explain how the integrated metacognitive or socially shared acts play a role in the collaborative learning process. Recent advances in technology and computation are now making it possible to add a set of new process-oriented instruments, real-time measures, and physiological indicators to the data sets (Järvelä et al., 2019), as well as computational analytical methods for data mining, analytics, and visualization (Rosé, 2018).
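As a simplified illustration of the lag sequential analysis mentioned above, the sketch below builds a lag-1 transition matrix over a coded sequence of regulation events, the basic ingredient of such analyses. The event codes and the sequence are invented for the example; published analyses also test whether observed transition frequencies exceed chance.

```python
from collections import Counter

# Invented codes for regulation events in one group's coded chat log
# (labels are illustrative, not taken from the cited studies).
events = ["planning", "monitoring", "task_execution", "monitoring",
          "evaluation", "planning", "task_execution", "monitoring"]

# Count lag-1 transitions: how often code A is immediately followed by code B.
transitions = Counter(zip(events, events[1:]))

codes = sorted(set(events))
# Row-normalize counts into conditional probabilities P(next = B | current = A).
matrix = {}
for a in codes:
    total = sum(transitions[(a, b)] for b in codes)
    matrix[a] = {b: transitions[(a, b)] / total if total else 0.0 for b in codes}

for a in codes:
    print(a, {b: round(p, 2) for b, p in matrix[a].items()})
```

The resulting rows show, for each regulatory code, which code tends to follow it, which is what reveals patterns such as frequent switching between forethought and performance phases.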
Triangulation of these kinds of data, such as videos, physiological measures, eye tracking, log data, or facial expression detection, can reveal information heretofore unavailable when studying metacognition and regulation in collaboration and CSCL. These new data sources have the potential to reveal hidden processes underlying collaboration, such as engagement (e.g., eye tracking), emotions (facial expression data), or physiological synchrony between the group members (physiological measures). The advantage of this type of temporal data is that it can simultaneously trace a range of cognitive and noncognitive processes that are parallel and overlap. Yet, combining these data sources requires sophisticated analysis methods and theoretical understanding of what the critical processes are in CSCL and how these new unobtrusive data sources can reflect regulation of learning. Since we already have an emerging understanding of how to use such multimodal data to understand the role of metacognition in CSCL (Haataja, Malmberg, & Järvelä, 2018; Malmberg et al., 2018), future research may bring us new ways to capture internal and partly invisible processes of metacognition and make it visible for the students, for example, via dashboards.
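One of the new data sources mentioned above, physiological synchrony, can be approximated in its simplest form as the windowed Pearson correlation of two group members' signals. The sketch below is a minimal illustration: the heart-rate-like sample values and the window size are invented, and published work uses more elaborate synchrony indices.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def windowed_synchrony(sig_a, sig_b, window):
    """Correlation of two physiological signals in non-overlapping windows."""
    return [pearson(sig_a[i:i + window], sig_b[i:i + window])
            for i in range(0, len(sig_a) - window + 1, window)]

# Invented heart-rate-like samples for two collaborators.
a = [72, 74, 75, 73, 80, 84, 86, 83]
b = [70, 71, 73, 72, 78, 83, 85, 82]
print(windowed_synchrony(a, b, 4))  # one synchrony value per 4-sample window
```

Values near 1 in a window suggest the two members' signals rise and fall together in that period, which is the kind of hidden process such measures are meant to surface.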
References
Bakhtiar, A., Webster, E. A., & Hadwin, A. F. (2017). Regulation and socio-emotional interactions in a positive and a negative group climate. Metacognition and Learning, 13, 57–90. https://doi.org/10.1007/s11409-017-9178-x.
Bandura, A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–1184.
Buder, J., Bodemer, D., & Ogata, H. (this volume). Group awareness. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Cress, U., Stahl, G., Rosé, C., Law, N., & Ludvigsen, S. (2018). Forming social systems by coupling minds at different levels of cognition: Design, tools, and research methods. International Journal of Computer-Supported Collaborative Learning, 13, 235–240. https://doi.org/10.1007/s11412-018-9284-z.
De Wever, B., & Strijbos, J.-W. (this volume). Roles for structuring groups for collaboration. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Dillenbourg, P. (1999). Introduction: What do you mean by "Collaborative learning"? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 1–19). Amsterdam: Pergamon.
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on metacognition, self-regulation, and self-regulated learning. Educational Psychology Review, 20(4), 391–409.
Fischer, F., & Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: The role of external representation tools. Journal of the Learning Sciences, 14(3), 405–441. https://doi.org/10.1207/s15327809jls1403_3.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911.
Fransen, J., Weinberger, A., & Kirschner, P. A. (2013). Team effectiveness and team development in CSCL.
Educational Psychologist, 48(1), 9–24. Haataja, E., Malmberg, J., & Järvelä, S. (2018). Monitoring in collaborative learning: Co-occurrence of observed behavior and physiological synchrony explored. Computers in Human Behavior, 87, 337–347. https://doi.org/10.1016/j.chb.2018.06.007. Hadwin, A. F., Järvelä, S., & Miller, M. (2017). Self-regulation, co-regulation and shared regulation in collaborative learning environments. In D. H. Schunk & J. A. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp. 83–106). London: Routledge. Iiskala, T., Volet, S. E., Lehtinen, E., & Vauras, M. (2015). Socially shared metacognitive regulation in asynchronous CSCL in science: Functions, evolution and participation. Frontline Learning Research, 3(1), 78–111. https://doi.org/10.14786/flr.v3i1.159. Järvelä, S., & Hadwin, A. F. (2013). New Frontiers: Regulating Learning in CSCL. Educational Psychologist, 48(1), 25–39. https://doi.org/10.1080/00461520.2012.748006. Järvelä, S., Järvenoja, H., Malmberg, J., & Hadwin, A. F. (2013). Part 1 : Underexplored Contexts and Populations in Self-Regulated Learning. Exploring Socially Shared Regulation. Journal of Cognitive Education and Psychology, 12(3), 267–286. Järvelä, S., Kirschner, P. A., Järvenoja, H., Malmberg, J., Miller, M., & Laru, J. (2016). Socially shared regulation of learning in CSCL: Understanding and prompting individual- and grouplevel shared regulatory activities. International Journal of Computer-Supported Collaborative Learning, 11(3), 263–280. https://doi.org/10.1007/s11412-016-9238-2. Järvelä, S., Malmberg, J., Haataja, E., Sobocinski, M., & Kirschner, P. (2019). What multimodal data can tell us about the self-regulated learning process? Learning and Instruction. https://doi. org/10.1016/j.learninstruc.2019.04.004.
Metacognition in Collaborative Learning
Järvelä, S., Malmberg, J., & Koivuniemi, M. (2016). Recognizing socially shared regulation by using the temporal sequences of online chat and logs in CSCL. Learning and Instruction, 42, 1–11. https://doi.org/10.1016/j.learninstruc.2015.10.006
Järvenoja, H., Järvelä, S., & Malmberg, J. (2017). Supporting groups’ emotion and motivation regulation during collaborative learning. Learning and Instruction, 70, 101090. https://doi.org/10.1016/j.learninstruc.2017.11.004
Jeong, H., & Chi, M. T. H. (2007). Knowledge convergence and collaborative learning. Instructional Science, 35(4), 287–315. https://doi.org/10.1007/s11251-006-9008-z
Jeong, H., & Hartley, K. (2018). Theoretical and methodological frameworks for computer-supported collaborative learning. In F. Fischer, C. E. Hmelo-Silver, P. Reimann, & S. R. Goldman (Eds.), International handbook of the learning sciences (pp. 330–339). New York: Taylor & Francis.
Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educational Psychologist, 51(2), 247–265. https://doi.org/10.1080/00461520.2016.1158654
Kalyuga, S. (2013). Enhancing transfer by learning generalized domain knowledge structures. European Journal of Psychology of Education, 28(4), 1477–1493.
Kirschner, F., Paas, F., & Kirschner, P. A. (2009). A cognitive-load approach to collaborative learning: United brains for complex tasks. Educational Psychology Review, 21, 31–42.
Kirschner, F., Paas, F., Kirschner, P. A., & Janssen, J. (2011). Differential effects of problem-solving demands on individual and collaborative learning outcomes. Learning and Instruction, 21, 587–599.
Kirschner, P. A., Sweller, J., Kirschner, F., & Zambrano, J. (2018). From cognitive load theory to collaborative cognitive load theory. International Journal of Computer-Supported Collaborative Learning, 13, 213–233.
Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: A review of the research. Computers in Human Behavior, 19(3), 335–353. https://doi.org/10.1016/S0747-5632(02)00057-2
Kruger, A. C., & Tomasello, M. (1986). Transactive discussions with peers and adults. Developmental Psychology, 22, 681–685.
Lajoie, S. P., Lee, L., Poitras, E., Bassiri, M., Kazemitabar, M., Cruz-Panesso, Hmelo-Silver, C., Wiseman, J., Chan, L., & Lu, J. (2015). The role of regulation in medical student learning in small groups: Regulating oneself and others’ learning and emotions. Computers in Human Behavior, 52, 601–616. https://doi.org/10.1016/j.chb.2014.11.073
Malmberg, J., Järvelä, S., Holappa, J., Haataja, E., Huang, X., & Siipo, A. (2018). Going beyond what is visible: What multichannel data can reveal about interaction in the context of collaborative learning? Computers in Human Behavior, 96, 235–245. https://doi.org/10.1016/j.chb.2018.06.030
Malmberg, J., Järvelä, S., & Järvenoja, H. (2017). Capturing temporal and sequential patterns of self-, co-, and socially shared regulation in the context of collaborative learning. Contemporary Educational Psychology, 49, 160–174. https://doi.org/10.1016/j.cedpsych.2017.01.009
Malmberg, J., Järvelä, S., Järvenoja, H., & Panadero, E. (2015). Promoting socially shared regulation of learning in CSCL: Progress of socially shared regulation among high- and low-performing groups. Computers in Human Behavior, 52, 562–572. https://doi.org/10.1016/j.chb.2015.03.082
Miller, M., & Hadwin, A. (2015). Scripting and awareness tools for regulating collaborative learning: Changing the landscape of support in CSCL. Computers in Human Behavior, 52, 573–588. https://doi.org/10.1016/j.chb.2015.01.050
Molenaar, I., & Järvelä, S. (2014). Sequential and temporal characteristics of self and socially regulated learning. Metacognition and Learning, 9(2), 75–85. https://doi.org/10.1007/s11409-014-9114-2
Näykki, P., Järvelä, S., Kirschner, P. A., & Järvenoja, H. (2014). Socio-emotional conflict in collaborative learning: A process-oriented case study in a higher education context. International Journal of Educational Research, 68, 1–14. https://doi.org/10.1016/j.ijer.2014.07.001
Näykki, P., Järvenoja, H., Järvelä, S., & Kirschner, P. (2017). Monitoring makes a difference: Quality and temporal variation in teacher education students’ collaborative learning. Scandinavian Journal of Educational Research, 61(1), 31–46. https://doi.org/10.1080/00313831.2015.1066440
Popov, V., van Leeuwen, A., & Buis, S. C. A. (2017). Are you with me or not? Temporal synchronicity and transactivity during CSCL. Journal of Computer Assisted Learning, 33, 424–442.
Prichard, J. S., & Ashleigh, M. J. (2007). The effects of team-skills training on transactive memory and performance. Small Group Research, 38(6), 696–726.
Rogat, T. K., & Adams-Wiggins, K. R. (2014). Other-regulation in collaborative groups: Implications for regulation quality. Instructional Science, 42(6), 879–904. https://doi.org/10.1007/s11251-014-9322-9
Rogat, T. K., & Linnenbrink-Garcia, L. (2011). Socially shared regulation in collaborative groups: An analysis of the interplay between quality of social regulation and group processes. Cognition and Instruction, 29(4), 375–415. https://doi.org/10.1080/07370008.2011.607930
Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In C. O’Malley (Ed.), Computer supported collaborative learning (pp. 69–97). Berlin: Springer.
Rosé, C. (2018). Learning analytics in the learning sciences. In F. Fischer, C. E. Hmelo-Silver, P. Reimann, & S. R. Goldman (Eds.), International handbook of the learning sciences (pp. 511–519). New York: Taylor & Francis.
Schwartz, D. L. (1995). The emergence of abstract representations in dyad problem solving. The Journal of the Learning Sciences, 4(3), 321–354. https://doi.org/10.1207/s15327809jls0403_3
Sobocinski, M., Malmberg, J., & Järvelä, S. (2017). Exploring temporal sequences of regulatory phases and associated interactions in low- and high-challenge collaborative learning sessions. Metacognition and Learning, 12, 275–294. https://doi.org/10.1007/s11409-016-9167-5
Sonnenberg, C., & Bannert, M. (2015). Discovering the effects of metacognitive prompts on the sequential structure of SRL-processes using process mining techniques. Journal of Learning Analytics, 2(1), 72–100. https://doi.org/10.18608/jla.2015.21.5
Su, Y., Li, Y., Hu, H., & Rosé, C. P. (2018). Exploring college English language learners’ self and social regulation of learning during wiki-supported collaborative reading activities. International Journal of Computer-Supported Collaborative Learning, 13, 35–60. https://doi.org/10.1007/s11412-018-9269-y
Vogel, F., Weinberger, A., & Fischer, F. (this volume). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Volet, S., Summers, M., & Thurman, J. (2009). High-level co-regulation in collaborative learning: How does it emerge and how is it sustained? Learning and Instruction, 19(2), 128–143.
Volet, S., Vauras, M., Salo, A. E., & Khosa, D. (2017). Individual contributions in student-led collaborative learning: Insights from two analytical approaches to explain the quality of group outcome. Learning and Individual Differences, 53, 79–92. https://doi.org/10.1016/j.lindif.2016.11.006
Wang, X., Kollar, I., & Stegmann, K. (2017). Adaptable scripting to foster regulation processes and skills in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 12(2), 153–172. https://doi.org/10.1007/s11412-017-9254-x
Winne, P. H. (2018). Cognition and metacognition in self-regulated learning. In D. Schunk & J. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp. 36–48). New York, NY: Routledge.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). London: Routledge.
Wolters, C., & Won, S. (2017). Validity and the use of self-report questionnaires to assess self-regulated learning. In D. Schunk & J. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp. 307–322). New York: Routledge.
Zambrano, J. R., Kirschner, F., & Kirschner, P. (2019). How cognitive load theory can be applied to collaborative learning: Collaborative cognitive load theory. In S. Tindall-Ford, S. Agostinho, & J. Sweller (Eds.), Advances in cognitive load theory: Rethinking teaching (pp. 30–40). London: Routledge. https://doi.org/10.4324/9780429283895-3
Zhang, J., Scardamalia, M., Reeve, R., & Messina, R. (2009). Designs for collective cognitive responsibility in knowledge-building communities. Journal of the Learning Sciences, 18(1), 7–44. https://doi.org/10.1080/10508400802581676
Further Readings

Haataja, E., Malmberg, J., & Järvelä, S. (2018). Monitoring in collaborative learning: Co-occurrence of observed behavior and physiological synchrony explored. Computers in Human Behavior, 87, 337–347. https://doi.org/10.1016/j.chb.2018.06.007
This empirical study is one of the first to examine monitoring in collaborative learning through video and physiological data. It investigated how students in a group monitor their cognitive, affective, and behavioral processes during collaboration, as well as how observed monitoring co-occurs with physiological synchrony during the collaborative learning session. The results indicate not only the role metacognitive monitoring plays in regulating the collaborative learning process but also that physiological synchrony could potentially shed light on the joint regulation processes of collaborative learning groups.

Järvelä, S., Hadwin, A. F., Malmberg, J., & Miller, M. (2018). Contemporary perspectives of regulated learning in collaboration. In F. Fischer, C. E. Hmelo-Silver, P. Reimann, & S. R. Goldman (Eds.), International handbook of the learning sciences (pp. 127–136). New York: Taylor & Francis.
Grounded in decades of self-regulated learning research, this chapter shows that in order to succeed in solo and collaborative learning tasks, students need to develop skills and strategies for regulating learning on their own, with peers, and in groups. It introduces the three forms of regulation (self-regulation, co-regulation, and shared regulation), explains the critical processes in regulation, and clarifies what is and what is not regulated learning. The chapter also offers recommendations for design principles and technologies to support regulated learning.

Järvelä, S., Kirschner, P. A., Hadwin, A., Järvenoja, H., Malmberg, J., Miller, M., & Laru, J. (2016). Socially shared regulation of learning in CSCL: Understanding and prompting individual- and group-level shared regulatory activities. International Journal of Computer-Supported Collaborative Learning, 11(3), 263–280. https://doi.org/10.1007/s11412-016-9238-2
This paper argues that few studies examine the effectiveness and efficiency of CSCL with respect to cognitive, motivational, emotional, and social issues, despite the fact that regulatory processes are critical for the quality of students’ engagement in collaborative learning settings. The authors review four earlier lines of work on developing support in CSCL and show that there has been a lack of work supporting individuals in groups to engage in, sustain, and productively regulate their own and the group’s collaborative processes. The paper discusses how socially shared regulation of learning (SSRL) contributes to effective and efficient CSCL, what tools are presently available, and what the implications of research on these tools are for future tool development.
Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educational Psychologist, 51(2), 247–265. https://doi.org/10.1080/00461520.2016.1158654
This paper discusses the critical processes in collaboration for CSCL. It introduces seven core affordances of technology for collaborative learning based on theories of collaborative learning and CSCL practices. Technology affords learners opportunities to (1) engage in a joint task, (2) communicate, (3) share resources, (4) engage in productive collaborative learning processes, (5) engage in co-construction, (6) monitor and regulate collaborative learning, and (7) find and build groups and communities. The proposed framework is illustrated with in-depth explorations of how technologies are actually used to support collaborative learning in CSCL research, identifying representative design strategies and technology examples.

Kirschner, P. A., Sweller, J., Kirschner, F., & Zambrano, J. (2018). From cognitive load theory to collaborative cognitive load theory. International Journal of Computer-Supported Collaborative Learning, 13, 213–233.
This paper discusses how cognitive load theory bears not only on individual learning but also on the efficiency and effectiveness of (computer-supported) collaborative learning. While the theory has informed instructional design for individual learning, it has often not been considered when designing collaborative learning situations or CSCL. The paper illustrates how and why cognitive load theory can shed light on collaborative learning and generate principles specific to the design and study of collaborative learning.
Group Awareness

Jürgen Buder, Daniel Bodemer, and Hiroaki Ogata
Abstract  In CSCL contexts, group awareness refers to the state of being informed about cognitive and social attributes of group members, and being informed about the products that group members create. This chapter traces the historical development of research on group awareness, and it provides a classification of group awareness tools that distinguishes between two types of group awareness tools (cognitive and social) and five functional levels of group awareness tools (framing, displaying, feedback, problematizing, and scripting). Selected group awareness tools and selected empirical findings are reported, showing how research on group awareness is theoretically motivated. The chapter concludes by discussing future directions both for the development of group awareness tools and for theoretical progress.

Keywords  CSCL · Group awareness · Feedback · Social comparison · Learning analytics
J. Buder (*)
Knowledge Exchange Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
e-mail: [email protected]

D. Bodemer
Media-Based Knowledge Construction Lab, University of Duisburg-Essen, Duisburg, Germany
e-mail: [email protected]

H. Ogata
Academic Center for Computing and Media Studies, Kyoto University, Kyoto, Japan
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_16

1 Definitions and Scope

Monica is giving a talk at her institute. During the presentation, she scans her audience. She can give a fair assessment of the gender, age, and ethnicity of a given audience member. She might know many members by name, some might be
slightly familiar, others are totally unknown. She can further see what members are doing—some audience members are listening intently while others talk to their neighbors or check their e-mail. There are other things that Monica cannot directly see or hear, but can infer. For instance, her new colleague Heather asked great questions after Monica’s talk, so Monica believes Heather to be an exceptionally bright person. Monica can not only perceive and infer things about individuals in the audience but also about the audience as a whole. Asked by friends later on, she is likely to assess how many people were present and how attentive or friendly they were. All these are examples showing that Monica possesses some kind of group awareness.

In its simplest form, group awareness can be defined as having access to knowledge or information about group members and their related attributes. This knowledge and information about others might be mentally stored and retrieved from memory, but it may also be information that can be gleaned from environmental cues.

Now imagine that Monica uses digital technologies in order to make her talk available to a wider audience. To this end, she records her talk as an audio podcast plus slides so that online viewers can access her presentation at any time that is convenient to them. In contrast to giving her talk in front of a live audience, Monica will never really know how many people are going to watch her presentation, who is going to watch it, what people are doing while watching it, and so on. In that case, Monica would lack group awareness.

That being said, group awareness can be defined as the state of being informed about a group and its members. With such a broad definition, group awareness can refer to quite different scenarios: the “group” in group awareness could consist of another person, a small group of people that one is interacting with, or a whole community of thousands of others. The information about the group and its members could refer to what other people do, what other people know, or what other people think or feel. Group awareness can also extend over time: for instance, a person could be informed about what others have done in the past, what they are doing right now, or even what they intend to do in the future.

The example of Monica giving a talk in front of a live audience versus recording her talk as a podcast also illustrates another key issue of group awareness, namely its relative absence when people communicate via digital technologies. Only through the development of communication technologies did it become clear that mediated communication often lacks something that we always take for granted in face-to-face scenarios—and this certain something was subsequently coined “group awareness.” Consequently, in order to enrich computer-mediated communication, several attempts have been made to create group awareness even in situations where people interact and collaborate at a distance. This led to the development of so-called group awareness tools—mostly visualizations that provide individuals with relevant information about their group and its members.

Visualizing information about groups has a long history in CSCL research. However, these visualizations can have very different “addressees.” For instance, patterns of CSCL communication can be visualized so that researchers can better
understand the dynamics of collaboration (e.g., Trausan-Matu, Dascalu, & Rebedea, 2014). Visualizations of groups and their members can also be used to assist teachers in monitoring classroom progress (e.g., Berland, Davis, & Smith, 2015). However, group awareness tools differ from these approaches, as the addressees are the interactants themselves—it is the group members themselves who receive visual feedback about a group and its members, and they can put this information to use in order to better regulate their interaction with the group.

The present chapter seeks to chart the territory of group awareness research. In the following section, we describe how the field has developed from its early origins to its current state. We then provide a classification of group awareness tools based on two dimensions—the type of information that they focus on and the functions they support. We describe selected examples of group awareness tools that have been the focus of empirical research, and also provide insights into the theoretical rationales that have driven researchers on group awareness. In the final section, we integrate these findings and speculate about future directions of the field.
2 History and Development

The historical development of research on group awareness can be described in five overlapping stages.

The first stage traces the origins of group awareness research in the area of computer-supported cooperative work (CSCW). As organizations started to work on a global scale, it became common for people to interact at a distance using various communication technologies. However, communicating at a distance often involves a lack of social and physical cues (Kiesler, Siegel, & McGuire, 1984; Lauwers & Lantz, 1990). Therefore, early group awareness research in CSCW was spurred by the idea of recreating face-to-face-like conditions at a distance. Gutwin and Greenberg (2002) referred to this as providing information about the “who,” “where,” and “what.” Returning to the example of Monica’s talk from the beginning of this chapter, Monica can see who is present in the environment, where other members are located in space, where they are looking, and what they are doing. Having this information at one’s disposal is important for coordinating taskwork and social exchange with others. Consequently, technological support for group awareness started out with very literal attempts to recreate physical space (e.g., the “media space” metaphor of installing video cameras in offices; Bly, Harrison, & Irwin, 1993), but this soon gave way to more abstract means of transmitting “who,” “where,” and “what” information. For instance, an updated list of participants who are currently logged into a workspace conveys “who” information. In order to transmit “where” and “what” information, early suggestions involved the display of multiple cursors in a document (“where”) or small inserted displays that showed the current content of a partner’s screen (“what”; Gutwin, Stark, & Greenberg, 1995). The key idea of group awareness tools in CSCW was that in virtual communication and collaboration something got lost, and the goal of
designers was to bring these lost elements back to the fore, with the richness of co-located face-to-face interaction as the gold standard.

In the second stage of group awareness research, the notion of group awareness was adapted to the field of CSCL. Some of the most prominent figures in CSCW applied their version of group awareness (dubbed “workspace awareness”) to educational scenarios and presented their work at the first CSCL conference (Gutwin et al., 1995). Again, the focus was on making sure that learners would have access to the “who,” “where,” and “what” information regarding other learners when navigating through a CSCL environment. The first dedicated CSCL application of a group awareness tool was introduced with the Sharlok system (Ogata, Matsuura, & Yano, 1996). This system not only captured “who” and “what” information about people who were engaged with a learning object, but also notified users about changes (e.g., it indicated when a piece of knowledge created by a given learner was edited by a peer). Another early adoption of CSCW ideas into CSCL was carried out by Kreijns and Kirschner (2001). They noted that many CSCL designs focused on task-related information but provided few social affordances. In order to make group awareness about social interactions and relations salient, they proposed a group awareness widget that logged learner activities in an environment over time (“who” information) and displayed this information in a visualization that was available to the entire group. In the wake of these early CSCL developments, more and more investigators within CSCL began to show an interest in group awareness. Tools were developed to show maps of knowledge pieces that learners were interacting with (Ogata & Yano, 2003), to support collaborative writing (Lowry & Nunamaker, 2003), or to combine group awareness with physical objects (El-Bishouty, Ogata, Rahman, & Yano, 2010).
What all these developments had in common was a focus on information that would have been visible had learners been in a face-to-face context—essentially, the technology worked as a crutch to compensate for something that learners might miss from conventional learning contexts.

The third stage in the development of the field went beyond imitating face-to-face scenarios and instead searched for the added value that digital communication technologies could offer (Buder, 2007). The idea was to no longer confine the notions of group awareness and group awareness tools to physical relations (who, where, what), but to extend research and development to “mental” content. As a consequence, more recent group awareness tools typically provide information about things that are not directly or easily observable (e.g., the knowledge of group members, their attitudes, or their social relations). This focus on “hidden variables” made group awareness research in CSCL quite different from group awareness research in CSCW.

While many publications in the first three stages of the development of group awareness research described prototypes, use cases, or best-practice examples, a fourth stage of research was (and still is) dominated by testing group awareness tools in experimental scenarios. The first experimental investigation of group awareness might have been made by Gutwin and Greenberg (1999), who compared two interface variants in terms of usability. However, only from the mid-2000s onward,
experimental tests of group awareness became more widespread. Some of the main findings will be covered in the next section.

Finally, in a fifth stage of research on group awareness, investigators have tried to integrate technical developments and experimental findings into theoretical approaches to group awareness and to experimentally disentangle the functions of group awareness tools. While there have been some theoretical accounts of group awareness in the CSCW literature (e.g., Carroll, Rosson, Convertino, & Ganoe, 2006; Gross, Stary, & Totter, 2005; Schmidt, 2002), most conceptual papers on group awareness have emerged in the field of CSCL (e.g., Bodemer, Janssen, & Schnaubert, 2018; Engelmann, Dehler, Bodemer, & Buder, 2009; Fransen, Kirschner, & Erkens, 2011; Janssen & Bodemer, 2013; Järvelä & Hadwin, 2013; Kreijns & Kirschner, 2001; Soller, Martínez, Jermann, & Muehlenbrock, 2005). Some key ideas of these papers will be covered in the following section.
3 State of the Art

There is a huge range of technological support mechanisms that have been developed to foster group awareness. For instance, an early CSCW conference paper (Christiansen & Maglaughlin, 2003) already distinguished between 41 different types of awareness. Consequently, a lot of theoretical work in CSCL has been devoted to classifying concrete technological group awareness tools. In this chapter, we provide a classification that builds on earlier developments but updates them with more recent trends and findings. The classification categorizes group awareness tools on two dimensions (see Table 1). The first dimension categorizes group awareness tools by the type of information that they provide; it builds on the distinction between cognitive group awareness tools and social group awareness tools that was introduced in a comprehensive account of group awareness research in CSCL (Janssen & Bodemer, 2013). The second dimension of the classification categorizes group awareness tools by the psychological functions they perform in collaboration. Thinking about functions goes back at least to early CSCL work (Soller et al., 2005), but has recently gained a lot of traction in theoretical work (e.g., Bodemer et al., 2018; Järvelä & Hadwin, 2013).

Table 1  Classification of group awareness tools, organized by functional level and type of information

Level              Cognitive group awareness tools      Social group awareness tools
1 Framing          Preparing knowledge exchange         Preparing for group interaction
2 Displaying       Showing partner knowledge            Showing social attributes
3 Feedback         Enabling comparisons of knowledge    Enabling comparisons of productivity
4 Problematizing   Indicating cognitive conflicts       Indicating conflicting issues
5 Scripting        Structuring knowledge exchange       Structuring social reflection

We propose a scheme that
distinguishes between five functional levels of group awareness tools (framing, displaying, feedback, problematizing, and scripting). In the following subsections, we walk through the two types of information and the five functional levels shown in Table 1. In each of those subsections, we provide selected examples of how group awareness tools support the five functions for the two types of information, present empirical findings, and discuss theoretical underpinnings.
3.1 Group Awareness Tools by Type of Information: Going Beyond the Physical
A key development in early CSCL research on group awareness was to go beyond the idea of using group awareness as a mere stand-in for physically observable things. Rather, the idea was to provide information that is not directly observable even in face-to-face settings (e.g., the knowledge and opinions of others, or the social relationships among them). This new idea led to two different foci of awareness, mostly referred to as cognitive and social group awareness. While cognitive group awareness tools focus on cognitive and metacognitive aspects, social group awareness tools focus on emotional, motivational, and behavioral aspects (cf. Bodemer & Dehler, 2011).
3.1.1 Type of Information: Cognitive Group Awareness Tools
Around 2005, the idea that group awareness tools could be more than just a poor stand-in for the richness of face-to-face situations took hold among some CSCL researchers. Consequently, these investigators tried to think about the new affordances that group awareness tools may create if they no longer focus on physical variables (who, where, what), but on mental variables. One of the key ideas was to make cognitive variables visible, asking questions like: How does collaboration benefit if group members know which knowledge their peers hold? How does it impact collaboration if learners are aware of their peers’ opinions and attitudes? This has led to the development of a number of tool solutions that provide information about cognitive variables like the knowledge or attitudes of other group members. The spirit of these developments was outlined in a conceptual paper by Engelmann et al. (2009) on “knowledge awareness,” which linked group awareness to other theoretical constructs and provided a simple model of how group awareness information is exchanged.

Two examples of cognitive group awareness tools are the Partner Knowledge Awareness tool (PKA; Dehler, Bodemer, Buder, & Hesse, 2009) and the Knowledge and Information Awareness tool (KIA; Engelmann & Hesse, 2011). The PKA tool requests learners in a dyad to individually rate their understanding of multimedia text passages by clicking on an adjacent box if they sufficiently
Group Awareness
understood a given passage. Clicking on a box turns the color of the box from white to green. However, the PKA tool shows not only white or green boxes for a given learner, but also displays white or green partner ratings in an adjacent box. In this way, dyadic communication and collaboration are naturally and smoothly structured by indicating shared knowledge, shared ignorance, and knowledge differences. Empirically, the PKA tool led to improved elaboration of learning content (Dehler et al., 2009), and it particularly increased learning for those dyad members who provided explanations to their partner (Dehler Zufferey, Bodemer, Buder, & Hesse, 2010). The KIA tool gives learners direct access to the knowledge of co-learners. Here, members individually visualize their knowledge of a given domain as concept maps, and the group is given the task of creating a shared concept map in order to solve a problem. Empirical investigations comparing KIA with a tool that does not display partner maps revealed that KIA led to better problem-solving performance (Schreiber & Engelmann, 2010), to a focus on unshared information (Engelmann & Hesse, 2011), and to collaborative maps that are more similar to an expert solution (Clariana, Engelmann, & Yu, 2013). KIA also counteracted detrimental effects of too much trust in the group’s abilities (Engelmann, Kolodziej, & Hesse, 2014). More recent examples of cognitive group awareness tools were developed in the context of knowledge-building activities within the Knowledge Forum environment (see Scardamalia & Bereiter, this volume). For instance, children as young as 8 years old were able to rate how promising the ideas they generated within Knowledge Forum were, and benefited from this awareness (Chen, Scardamalia, & Bereiter, 2015).
Another cognitive group awareness tool visualized word clouds about frequently used words in Knowledge Forum, and also indicated how often certain Knowledge Forum scaffolds were used, thus improving several indicators of learning (Resendes, Scardamalia, Bereiter, Chen, & Halewood, 2015).
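The mapping logic behind a dyadic understanding display of the PKA kind can be sketched in a few lines of code. This is a hypothetical illustration of the classification step only (the actual tool is described in Dehler et al., 2009); the function name and data layout are our own assumptions.

```python
def classify_passages(own_understood, partner_understood, passages):
    """Label each passage as shared knowledge, shared ignorance,
    or a knowledge difference, given the sets of passages each
    dyad member marked as understood."""
    labels = {}
    for p in passages:
        own, partner = p in own_understood, p in partner_understood
        if own and partner:
            labels[p] = "shared knowledge"
        elif not own and not partner:
            labels[p] = "shared ignorance"
        else:
            # One member understood the passage, the other did not:
            # a natural prompt for peer explanation.
            labels[p] = "knowledge difference"
    return labels

classify_passages({"p1", "p2"}, {"p2", "p3"}, ["p1", "p2", "p3", "p4"])
# p2 -> shared knowledge, p4 -> shared ignorance,
# p1 and p3 -> knowledge difference
```

The "knowledge difference" cases are exactly the ones that, according to the studies above, structure the dyad's communication by prompting explanations.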
3.1.2 Type of Information: Social Group Awareness Tools
Work on social group awareness was motivated by the fact that participation levels in many CSCL environments are relatively low. It was argued that the lack of social affordances is responsible for these shortcomings (Kreijns & Kirschner, 2001). Researchers on social group awareness proposed that lack of participation and motivation in CSCL could be overcome by tools that create a “social space,” thus fostering group cohesion, trust, belongingness, and mutual attraction. Later theoretical work has suggested that social interaction in CSCL depends on factors such as social space (relational networks among learners), social and educational affordances of the environment, and the experienced social presence of the learners (Kirschner, Kreijns, Phielix, & Fransen, 2015; Kreijns, Kirschner, & Vermeulen, 2013). Researchers in this area also investigated how group awareness can have long-term effects by creating networks of mutual trust and fostering the development of taskwork and teamwork mental models (Fransen et al., 2011).
J. Buder et al.
Quite a number of social group awareness tools provide information about the participation rates of group members (Cress & Kimmerle, 2007; Janssen, Erkens, Kanselaar, & Jaspers, 2007; Jermann & Dillenbourg, 2008; Lin, Mai, & Lai, 2015; Michinov & Primois, 2005; see also Hod & Teasley, this volume). Empirical investigations of these tools typically reported that displaying to learners how much they and others contributed to the collaboration led to increased participation rates. However, some of these studies either did not measure or did not find improvements in subsequent learning outcomes. Another social group awareness tool that has been at the center of many empirical investigations is the Radar tool (Phielix, Prins, & Kirschner, 2010), which requires learners to rate themselves and their peers on six dimensions that cover relational aspects (e.g., friendliness) and task-related aspects (e.g., productivity). The tool aggregates these ratings and visualizes them in a radar-type diagram. Empirical investigations (e.g., Malmberg, Järvelä, Järvenoja, & Panadero, 2015; Phielix, Prins, Kirschner, Erkens, & Jaspers, 2011) revealed a number of positive influences of Radar on relational variables (e.g., team development, perceptions of conflict), but less impact on learning outcomes.
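The aggregation step of a Radar-style tool can be illustrated with a short sketch. Every member rates every member (including themselves) on several dimensions, and the tool averages the ratings each member received. Only "friendliness" and "productivity" are named in the text above; the function name, data layout, and rating scale here are illustrative assumptions, not the actual implementation of Phielix et al. (2010).

```python
from statistics import mean

def radar_profile(ratings, target, dimensions=("friendliness", "productivity")):
    """Average the ratings that `target` received on each dimension.

    `ratings` maps (rater, ratee) pairs to {dimension: score} dicts;
    self-ratings are included like any other rating.
    """
    received = [scores for (_, ratee), scores in ratings.items() if ratee == target]
    return {d: mean(s[d] for s in received) for d in dimensions}

ratings = {
    ("ann", "bob"): {"friendliness": 4, "productivity": 2},
    ("bob", "bob"): {"friendliness": 5, "productivity": 4},  # self-rating
}
radar_profile(ratings, "bob")  # {'friendliness': 4.5, 'productivity': 3.0}
```

The averaged profile per member is what the tool would then plot as one polygon in the radar diagram, making self-other discrepancies visible.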
3.2 Group Awareness Tools by Function: Going Beyond the Outcome
In recent years, there has been a continuing shift toward thinking about the function of group awareness tools rather than about distinctions between different types of awareness. This development has been fueled by the extensive conceptual and empirical work of Järvelä and colleagues (Järvelä & Hadwin, 2013; Järvelä et al., 2015; Järvelä et al., 2016; M. Miller & Hadwin, 2015). A conceptual paper by Järvelä and Hadwin (2013) laid the groundwork for a functional approach by arguing that CSCL research has focused too much on learning outcomes and too little on group processes. In particular, the metacognitive regulation of collaboration has been neglected in much prior work (see also Järvelä, Malmberg, Sobocinski, & Kirschner, this volume). Regulating collaboration, it was further argued, requires three activities: individuals have to regulate their own learning process (self-regulation), have to regulate the learning processes of their partners (co-regulation), and have to mutually regulate joint learning (socially shared regulation). For all three types of process regulation, group awareness tools can play a vital part. Based on these distinctions and other functional conceptualizations (Bodemer et al., 2018; Soller et al., 2005), the present classification distinguishes between five functional levels (framing, displaying, feedback, problematizing, and scripting) that group awareness tools afford. As a rule, tools that support functions on a higher level (e.g., problematizing) also support functions on lower levels (framing, displaying, and feedback).
3.2.1 Functional Level 1: Framing
By providing information about the group, its processes, and its products, group awareness tools provide some kind of general framing. For instance, providing information about degree of understanding will make learners think about their degree of understanding, and providing information about participation rates will make learners reflect on their participation. But is mere framing of cognitive or social information enough to improve learning? Empirical results are inconclusive. For example, in an experiment using a Twitter-like scenario (Rudat, Buder, & Hesse, 2014), participants were informed that many of their “followers” were interested in educational topics. This general framing about one’s audience had some influence on the decision to “retweet” messages, but other factors like the controversy or unexpectedness of tweets had a much higher impact. In another study (Schnaubert & Bodemer, 2019), a cognitive group awareness tool displayed the confidence with which learners in a dyad held pieces of knowledge. This framing prompted learners to subsequently focus on pieces that they felt uncertain about. As for social group awareness tools, a study notified participants that the arguments they individually generated would later be made publicly visible to a group (Tsovaltzi, Puhl, Judele, & Weinberger, 2014). Simply preparing students for subsequent group interaction actually decreased performance in a knowledge test. Taken together, these studies suggest that while framing has some positive effects on collaboration, additional functionalities are needed to fully harness the power of group awareness tools.
3.2.2 Functional Level 2: Displaying
Most group awareness tools in CSCL go beyond mere framing of collaboration. Rather, they display visualizations that are shared by all group members. The aforementioned KIA tool (e.g., Engelmann & Hesse, 2011) is an example of a cognitive group awareness tool that provides such a shared external representation. Recall that this tool requests learners to individually externalize their knowledge about a problem-solving context by constructing concept maps. A shared external representation of the individual concept maps as well as of the jointly constructed, collaborative concept map provides a means for learners to solve complex problems. While cognitive group awareness tools display the knowledge of others, social group awareness tools show social attributes of the group members. For instance, the aforementioned Radar tool (Phielix et al., 2010) displays how group members rated their peers on social dimensions like friendliness and productivity, thus providing a shared external representation that informs subsequent collaboration. A key benefit of displaying shared external representations is that they provide a common ground (Clark, 1996; Clark & Brennan, 1991) for learners that implicitly structures collaboration. If learners have the shared external representation permanently available, it provides a convenient means to verbally or nonverbally refer to elements of the representation.
3.2.3 Functional Level 3: Feedback
One of the key characteristics of many group awareness tools is that they provide learners with important feedback about their cognitive and social behavior. Typically, this feedback is coupled with comparisons that can lead to adjustments in regulatory behavior. The comparisons could be between oneself and others (social comparisons), or between different others. The comparisons might also be made between “products” (e.g., written statements) created by oneself or by others. For instance, the augmented group awareness tool (Buder & Bodemer, 2008) requires learners to rate written discussion posts of other group members on two dimensions (agreement and novelty). The tool then visualizes each discussion post in a two-dimensional external representation. In this way, learners receive feedback on how different discussion posts compare to each other on the rating dimensions. Empirically, it was shown that the tool increased the salience of posts written by learners who held a minority view, thus leading to better group decisions. A number of social group awareness tools also make comparisons highly salient. For instance, the participation meter (Janssen et al., 2007) visualizes the number and average length of discussion posts, thus providing a means to compare participation behavior between persons (including oneself) and even between different groups. It was found that the participation meter increased participation and was partially linked to equality of participation, but had no direct impact on group products (Janssen et al., 2007; Janssen, Erkens, & Kirschner, 2011). More details about the mechanisms of socially salient comparisons emerged from a series of studies using tools that visualize the number of contributions that users made to a shared database.
It was found that participation behavior could be raised both by providing actual numbers of contributions and providing externally set standards of how much a user should contribute (Cress & Kimmerle, 2007). Moreover, it was found that seeing the individual performance of each group member had a stronger effect than just seeing the average performance of the group (Kimmerle, Cress, & Hesse, 2007) and that a cumulative visualization of all contributions over time was superior to a visualization that displayed contributions over separate time slots (Kimmerle & Cress, 2009). A study by Lin et al. (2015) directly compared cognitive and social components of group awareness tools. Learners received a display of peers who either had a high level of knowledge (cognitive group awareness) or who sent and received lots of messages (social group awareness). The study measured to whom the learners directed requests for help, finding that social group awareness led to better outcomes. The power of social comparisons with others that goes along with the feedback function has been a hallmark of social psychological research (Festinger, 1954). Social comparisons help to validate one’s opinions and one’s performance with regard to a group and can instigate behavioral change. Moreover, performance feedback makes group norms visible, thus shaping the social identity of its members. However, it should also be noted that social comparisons can backfire. Evidence for this comes from a study showing that participants who scored high on Social
Comparison Orientation used awareness information to strategically withhold some information (Ray, Neugebauer, Sassenberg, Buder, & Hesse, 2013).
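The statistics behind a participation meter of the kind described above can be sketched compactly. This is hypothetical code, not the implementation of Janssen et al. (2007): for each member, it counts discussion posts and computes the average post length in words, the two quantities such meters visualize for comparison.

```python
from collections import defaultdict

def participation_stats(posts):
    """Compute per-author post counts and average post length (in words).

    `posts` is a list of (author, text) pairs.
    """
    counts = defaultdict(int)
    words = defaultdict(int)
    for author, text in posts:
        counts[author] += 1
        words[author] += len(text.split())
    return {a: {"posts": counts[a], "avg_words": words[a] / counts[a]}
            for a in counts}

posts = [("ann", "I agree with the proposal"),
         ("bob", "Why?"),
         ("ann", "See section two")]
participation_stats(posts)
# {'ann': {'posts': 2, 'avg_words': 4.0}, 'bob': {'posts': 1, 'avg_words': 1.0}}
```

Displaying these numbers side by side for all members (and for other groups) is what makes the social comparison salient.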
3.2.4 Functional Level 4: Problematizing
Performance feedback ensures that individual learners in a group are provided with powerful cues on how to adapt to their peers. In the terminology of Järvelä and Hadwin (2013), feedback can lead to self-regulation. In contrast, some group awareness tools go one step beyond and provide information that leads to socially shared regulation. In particular, these tools problematize issues of collaboration, for instance by showing meaningful differences between learners, or by signaling problematic relational aspects. As for cognitive group awareness, the aforementioned PKA tool (Dehler, Bodemer, Buder, & Hesse, 2011) visualizes learner differences with regard to the understanding of learning passages. Displaying those differences provides powerful cues to both dyad members (e.g., when A understood a passage, and B didn’t, a natural division of labor is prompted where A explains the passage to B). Another way to problematize differences between learners is to make cognitive conflicts visible. For instance, the cognitive group awareness tool by Gijlers and de Jong (2009) uses so-called shared proposition tables where learners indicate the degree to which they believe in certain hypotheses with regard to a physics simulation. Thus, the shared proposition table makes it salient when two learners disagree about the truth value of hypotheses, and indeed it was shown that such a representation led to more discussion about conflicting views, more overall discussion, and better learning outcomes compared to a condition where learners jointly constructed propositions. Similar findings were reported in a study by Bodemer (2011) who developed the collaborative integration tool. This tool is embedded in a dyadic problem-solving task in the statistics domain, and it requires learners to individually externalize their solution attempts. The collaborative integration tool thus displays where dyad members agree or disagree. 
Empirically, seeing similarities and cognitive conflicts yielded stronger attempts to reconcile conflicting assignments in the group awareness condition, thus leading to higher learning gains in a knowledge test. Indicating cognitive conflicts is also at the heart of a cognitive group awareness tool which indicates resolved and unresolved controversies on Wikipedia talk pages (Heimbuch & Bodemer, 2017). Having such information at one’s disposal led to a more focused editing behavior and better learning. The S-REG environment (Järvelä et al., 2016; Laru, Malmberg, Järvenoja, Sarenius, & Järvelä, 2015) supports the problematizing function in a social group awareness tool. Here, learners’ self-ratings of various cognitive, motivational, and emotional states are aggregated on a group level and displayed as “traffic lights.” The group-level information is determined by the lowest individual rating. For instance, if two group members indicate being moderately motivated (yellow light), and a third member indicates low motivation (red light), the group receives
a red light for motivation. In this way, critical issues in socially shared regulation (cognitive, motivational, emotional) are problematized. Taken together, group awareness tools that support the problematizing function provide information that is meaningful to more than one learner, and thus are likely to instigate corrective processes to reconcile differences and problematic issues among learners.
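The S-REG aggregation rule described above, where the group-level "traffic light" is determined by the lowest individual self-rating, can be sketched as follows. The three-point scale and the mapping to colors are illustrative assumptions for this sketch, not the tool's actual parameters.

```python
def traffic_light(ratings):
    """Map individual self-ratings of a state (e.g., motivation) to a
    group-level traffic light via the minimum rating.

    Assumed scale: 1 = low (red), 2 = moderate (yellow), 3 = high (green).
    """
    lowest = min(ratings)
    if lowest <= 1:
        return "red"
    if lowest == 2:
        return "yellow"
    return "green"

# Two moderately motivated members plus one low-motivation member:
# the group light is red, as in the example in the text.
traffic_light([2, 2, 1])  # -> 'red'
```

Because a single low rating colors the whole light, the display problematizes the weakest state in the group rather than averaging it away.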
3.2.5 Functional Level 5: Scripting
A commonality of the group awareness tools described so far is that they provide “tacit guidance” (Bodemer, 2011). That is, they do not explicitly instruct CSCL groups on what to do or on how to make use of the group awareness information. However, recently some attempts were made to combine group awareness tools with more explicit types of instruction by merging them with collaboration scripts (see also De Wever & Strijbos, this volume). A cognitive group awareness tool by Puhl, Tsovaltzi, and Weinberger (2015) uses a two-dimensional visualization in which each dot represents the attitude of a learner. This tool alone led to better learning outcomes (knowledge gain, attitude change). Moreover, combining the awareness tool with a collaboration script that fostered reflection on these attitudes, and that prompted discussion between learners with different attitudes, increased both the amount of participation and learning outcomes. An example of combining a social group awareness tool with a script is the aforementioned S-REG environment (Järvelä et al., 2016; Laru et al., 2015), which provides a sequence on how to create and build upon group awareness information. Learners first individually rate their perceived cognitive, motivational, and emotional state. These ratings are then aggregated into three “traffic lights.” Subsequently, the script prompts learners to discuss reasons for the color of the traffic lights. In later stages of the script, learners can pick from a list of preformulated reasons for having a red traffic light, and are then provided with action prompts on how to regulate their behavior. A somewhat similar approach was taken in a study that had learners rate and later discuss their group performance on both cognitive variables like the presence of high-quality proposals and on social variables such as verbal equity (Borge, Ong, & Rosé, 2018).
Groups who were instructed to focus their discussions on evidence for their ratings exhibited higher quality reflection than groups who were instructed to focus on ideas for how to improve their ratings. In sum, the combination of group awareness tools with collaboration scripts is a promising approach, as it scaffolds collaborative learning processes in a way that lets groups make the most of group awareness.
3.3 Conclusions
The classification of CSCL group awareness tools proposed in this chapter is based on two dimensions. The first dimension refers to the type of information (cognitive vs. social) that a tool is built on. A commonality of both cognitive and social group awareness tools is that they focus on information that is not directly observable (knowledge, attitudes, friendliness, exact rates of participation). This focus on unobservable entities sets group awareness tools in CSCL clearly apart from group awareness tools in CSCW, which typically build on observable constructs (who, where, what). The second dimension of the classification refers to the different functional levels that are built into the design of group awareness tools. We concur with Järvelä and Hadwin (2013), who argued that group awareness tools should focus on the metacognitive and socially shared regulation of collaborative processes more than on directly maximizing learning outcomes. Seen from this perspective, the fact that the direct impact of many group awareness tools on learning outcomes is rather mixed should not discourage researchers from using these tools. The litmus test for an effective group awareness tool is whether it assists in the regulation of collaborative learning processes, and we believe there is abundant empirical evidence that this is exactly what such tools do. First, group awareness tools can help learners to focus on particular aspects of collaboration (framing the collaboration, displaying shared external representations), thus helping learners to stay “on track.” Second, group awareness tools can assist learners in adapting their behavior to improve group processes (via feedback, and by problematizing conflicting issues). And third, the implicit guidance of group awareness tools can be combined with more explicit instructional sequences (scripting). In all these cases, the focus is on learning processes rather than learning outcomes.
4 The Future

What’s in store for future developments in research on group awareness? First, we believe that researchers will continue developing group awareness tools, and one could well imagine that some of these tools will tap into types of information that haven’t been covered by our proposed classification (e.g., awareness about physiological states of group members; for a step in this direction see Schneider & Pea, 2013). Second, another future trend might lie in the idea to combine group awareness tools with other approaches to structure CSCL processes (like scripts, prompts, or conversational agents). Third, another trend that is already underway is to (partially) automate the extraction of group awareness information. Most of the tools described above rely on explicit actions or ratings that the learners provide. However, recent advances in learning analytics may be successful in mining important information about collaborative processes much more unobtrusively. While most learning analytics approaches are currently used to inform researchers, or to inform teachers
about the progress of their students (e.g., Berland et al., 2015), it would make a lot of sense to relay crucial information about a group directly to its members, thus fostering agency (Wise & Schwarz, 2017). One example of such an approach is the grouping and representing tool (Erkens, Bodemer, & Hoppe, 2016), which uses text mining to support teachers by suggesting conceptual categories and discussion groups on the basis of written essays, but which also supports students by providing easily comparable cognitive group awareness visualizations. Apart from technological developments, we would also embrace theoretical and methodological progress in research on group awareness. First, surprisingly few attempts have been made to directly measure group awareness by asking learners about their retention of information presented via group awareness tools (Bodemer et al., 2018). It would be interesting to see how far group awareness actually mediates between tool provision and collaborative processes. Second, there has not been much investigation of the micro-processes involved in collaborative regulation through group awareness (but see Dillenbourg, Lemaignan, Sangin, Nova, & Molinari, 2016; Engelmann et al., 2009; Schmidt, 2002, for exceptions). Third, while there have been several attempts to link group awareness to concepts in social and organizational psychology (e.g., social comparison, transactive memory systems, shared mental models), there are several other concepts in these fields that might warrant further attention. As an example, one could link group awareness more generally to the field of social influence (Cialdini & Goldstein, 2004; Smith & Mackie, 2016) and behavioral contagion (Aarts, Gollwitzer, & Hassin, 2004). Similarly, the potential role of group awareness tools in the building, maintenance, and change of group norms and social identities (Ashmore, Deaux, & McLaughlin-Volpe, 2004; D. T. Miller & Prentice, 2016) could be explored.
Research on group awareness has come a long way, from early attempts to instantiate face-to-face-like conditions in spatially distributed CSCW settings to the current crop of CSCL tools that try to uncover and harness the hidden potentialities of groups. In spite of all the accomplishments and progress made along this journey, there is still so much unexplored territory that one should best conceive of it as a road half traveled.
References

Aarts, H., Gollwitzer, P. M., & Hassin, R. R. (2004). Goal contagion: Perceiving is for pursuing. Journal of Personality and Social Psychology, 87(1), 23–37. https://doi.org/10.1037/0022-3514.87.1.23
Ashmore, R. D., Deaux, K., & McLaughlin-Volpe, T. (2004). An organizing framework for collective identity: Articulation and significance of multidimensionality. Psychological Bulletin, 130(1), 80–114. https://doi.org/10.1037/0033-2909.130.1.80
Berland, M., Davis, D., & Smith, C. P. (2015). AMOEBA: Designing for collaboration in computer science classrooms through live learning analytics. International Journal of Computer-Supported Collaborative Learning, 10(4), 425–447. https://doi.org/10.1007/s11412-015-9217-z
Bly, S. A., Harrison, S. R., & Irwin, S. (1993). Media spaces: Bringing people together in a video, audio, and computing environment. Communications of the ACM, 36, 28–46. https://doi.org/10.1145/151233.151235
Bodemer, D. (2011). Tacit guidance for collaborative multimedia learning. Computers in Human Behavior, 27(3), 1079–1086. https://doi.org/10.1016/j.chb.2010.05.016
Bodemer, D., & Dehler, J. (2011). Group awareness in CSCL environments. Computers in Human Behavior, 27(3), 1043–1045. https://doi.org/10.1016/j.chb.2010.07.014
Bodemer, D., Janssen, J., & Schnaubert, L. (2018). Group awareness tools for computer-supported collaborative learning. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International Handbook of the Learning Sciences (pp. 351–358). New York, NY: Routledge/Taylor & Francis.
Borge, M., Ong, Y. S., & Rosé, C. P. (2018). Learning to monitor and regulate collective thinking processes. International Journal of Computer-Supported Collaborative Learning, 13(1), 61–92. https://doi.org/10.1007/s11412-018-9270-5
Buder, J. (2007). Net-based knowledge communication in groups. Zeitschrift für Psychologie / Journal of Psychology, 215(4), 209–217. https://doi.org/10.1027/0044-3409.215.4.209
Buder, J., & Bodemer, D. (2008). Supporting controversial CSCL discussions with augmented group awareness tools. International Journal of Computer-Supported Collaborative Learning, 3(2), 123–139. https://doi.org/10.1007/s11412-008-9037-5
Carroll, J. M., Rosson, M. B., Convertino, G., & Ganoe, C. H. (2006). Awareness and teamwork in computer-supported collaborations. Interacting with Computers, 18(1), 21–46. https://doi.org/10.1016/j.intcom.2005.05.005
Chen, B., Scardamalia, M., & Bereiter, C. (2015). Advancing knowledge-building discourse through judgments of promising ideas. International Journal of Computer-Supported Collaborative Learning, 10(4), 345–366. https://doi.org/10.1007/s11412-015-9225-z
Christiansen, N., & Maglaughlin, K.
(2003). Crossing from physical workplace to virtual workspace: Be AWARE! In D. Harris (Ed.), Proceedings of the 10th international conference on human–computer interaction (pp. 1128–1132). Hillsdale, NJ: Lawrence Erlbaum.
Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55, 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015
Clariana, R. B., Engelmann, T., & Yu, W. (2013). Using centrality of concept maps as a measure of problem space states in computer-supported collaborative problem solving. Educational Technology Research and Development, 61(3), 423–442.
Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). Washington, DC: American Psychological Association.
Cress, U., & Kimmerle, J. (2007). Guidelines and feedback in information exchange: The impact of behavioral anchors and descriptive norms in a social dilemma. Group Dynamics: Theory, Research, and Practice, 11(1), 42–53. https://doi.org/10.1037/1089-2699.11.1.42
De Wever, B., & Strijbos, J.-W. (this volume). Roles for structuring groups for collaboration. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Dehler, J., Bodemer, D., Buder, J., & Hesse, F. W. (2009). Providing group knowledge awareness in computer-supported collaborative learning: Insights into learning mechanisms. Research and Practice in Technology Enhanced Learning, 4(2), 111–132. https://doi.org/10.1142/S1793206809000660
Dehler, J., Bodemer, D., Buder, J., & Hesse, F. W. (2011). Guiding knowledge communication in CSCL via group knowledge awareness. Computers in Human Behavior, 27(3), 1068–1078. https://doi.org/10.1016/j.chb.2010.05.018
Dehler Zufferey, J., Bodemer, D., Buder, J., & Hesse, F. W. (2010). Partner knowledge awareness in knowledge communication: Learning by adapting to the partner. The Journal of Experimental Education, 79(1), 102–125. https://doi.org/10.1080/00220970903292991
Dillenbourg, P., Lemaignan, S., Sangin, M., Nova, N., & Molinari, G. (2016). The symmetry of partner modelling. International Journal of Computer-Supported Collaborative Learning, 11(2), 227–253. https://doi.org/10.1007/s11412-016-9235-5
El-Bishouty, M. M., Ogata, H., Rahman, S., & Yano, Y. (2010). Social knowledge awareness map for computer supported ubiquitous learning environment. Journal of Educational Technology & Society, 13(4), 27–37.
Engelmann, T., Dehler, J., Bodemer, D., & Buder, J. (2009). Knowledge awareness in CSCL: A psychological perspective. Computers in Human Behavior, 25(4), 949–960. https://doi.org/10.1016/j.chb.2009.04.004
Engelmann, T., & Hesse, F. W. (2011). Fostering sharing of unshared knowledge by having access to the collaborators’ meta-knowledge structures. Computers in Human Behavior, 27(6), 2078–2087. https://doi.org/10.1016/j.chb.2011.06.002
Engelmann, T., Kolodziej, R., & Hesse, F. W. (2014). Preventing undesirable effects of mutual trust and the development of skepticism in virtual groups by applying the knowledge and information awareness approach. International Journal of Computer-Supported Collaborative Learning, 9(2), 211–235. https://doi.org/10.1007/s11412-013-9187-y
Erkens, M., Bodemer, D., & Hoppe, H. U. (2016). Improving collaborative learning in the classroom: Text mining based grouping and representing. International Journal of Computer-Supported Collaborative Learning, 11(4), 387–415. https://doi.org/10.1007/s11412-016-9243-5
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140. https://doi.org/10.1177/001872675400700202
Fransen, J., Kirschner, P. A., & Erkens, G. (2011). Mediating team effectiveness in the context of collaborative learning: The importance of team and task awareness. Computers in Human Behavior, 27(3), 1103–1113. https://doi.org/10.1016/j.chb.2010.05.017
Gijlers, H., & de Jong, T. (2009).
Sharing and confronting propositions in collaborative inquiry learning. Cognition and Instruction, 27(3), 239–268. https://doi.org/10.1080/07370000903014352
Gross, T., Stary, C., & Totter, A. (2005). User-centered awareness in computer-supported cooperative work-systems: Structured embedding of findings from social sciences. International Journal of Human–Computer Interaction, 18(3), 323–360. https://doi.org/10.1207/s15327590ijhc1803_5
Gutwin, C., & Greenberg, S. (1999). The effects of workspace awareness support on the usability of real-time distributed groupware. ACM Transactions on Computer-Human Interaction, 6(3), 243–281. https://doi.org/10.1145/329693.329696
Gutwin, C., & Greenberg, S. (2002). A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work, 11, 411–446. https://doi.org/10.1023/A:1021271517844
Gutwin, C., Stark, G., & Greenberg, S. (1995). Support for workspace awareness in educational groupware. In J. L. Schnase & E. L. Cunnius (Eds.), The first international conference on computer support for collaborative learning (pp. 147–156). Hillsdale, NJ: Lawrence Erlbaum. https://doi.org/10.3115/222020.222126
Heimbuch, S., & Bodemer, D. (2017). Controversy awareness on evidence-led discussions as guidance for students in wiki-based learning. The Internet and Higher Education, 33, 1–14. https://doi.org/10.1016/j.iheduc.2016.12.001
Hod, Y., & Teasley, S. D. (this volume). Communities and participation. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Janssen, J., & Bodemer, D. (2013). Coordinated computer-supported collaborative learning: Awareness and awareness tools. Educational Psychologist, 48(1), 40–55. https://doi.org/10.1080/00461520.2012.749153
Group Awareness
Janssen, J., Erkens, G., Kanselaar, G., & Jaspers, J. (2007). Visualization of participation: Does it contribute to successful computer-supported collaborative learning? Computers & Education, 49(4), 1037–1065. https://doi.org/10.1016/j.compedu.2006.01.004.
Janssen, J., Erkens, G., & Kirschner, P. A. (2011). Group awareness tools: It’s what you do with it that matters. Computers in Human Behavior, 27(3), 1046–1058. https://doi.org/10.1016/j.chb.2010.06.002.
Järvelä, S., & Hadwin, A. F. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39. https://doi.org/10.1080/00461520.2012.748006.
Järvelä, S., Kirschner, P. A., Hadwin, A., Järvenoja, H., Malmberg, J., Miller, M., & Laru, J. (2016). Socially shared regulation of learning in CSCL: Understanding and prompting individual- and group-level shared regulatory activities. International Journal of Computer-Supported Collaborative Learning, 11(3), 263–280. https://doi.org/10.1007/s11412-016-9238-2.
Järvelä, S., Kirschner, P. A., Panadero, E., Malmberg, J., Phielix, C., Jaspers, J., et al. (2015). Enhancing socially shared regulation in collaborative learning groups: Designing for CSCL regulation tools. Educational Technology Research and Development, 63(1), 125–142. https://doi.org/10.1007/s11423-014-9358-1.
Järvelä, S., Malmberg, J., Sobocinski, M., & Kirschner, P. A. (this volume). Metacognition in collaborative learning. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Jermann, P., & Dillenbourg, P. (2008). Group mirrors to support interaction regulation in collaborative problem solving. Computers & Education, 51(1), 279–296. https://doi.org/10.1016/j.compedu.2007.05.012.
Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39(10), 1123–1134. https://doi.org/10.1037/0003-066X.39.10.1123.
Kimmerle, J., & Cress, U. (2009). Visualization of group members’ participation: How information-presentation formats support information exchange. Social Science Computer Review, 27(2), 243–261. https://doi.org/10.1177/0894439309332312.
Kimmerle, J., Cress, U., & Hesse, F. W. (2007). An interactional perspective on group awareness: Alleviating the information-exchange dilemma (for everybody?). International Journal of Human-Computer Studies, 65(11), 899–910. https://doi.org/10.1016/j.ijhcs.2007.06.002.
Kirschner, P. A., Kreijns, K., Phielix, C., & Fransen, J. (2015). Awareness of cognitive and social behaviour in a CSCL environment. Journal of Computer Assisted Learning, 31(1), 59–77. https://doi.org/10.1111/jcal.12084.
Kreijns, K., & Kirschner, P. A. (2001). The social affordances of computer-supported collaborative learning environments. In Proceedings—Frontiers in Education Conference (Vol. 1, pp. T1F/12–T1F/17). Reno, NV: IEEE Computer Society. https://doi.org/10.1109/FIE.2001.963856.
Kreijns, K., Kirschner, P. A., & Vermeulen, M. (2013). Social aspects of CSCL environments: A research framework. Educational Psychologist, 48(4), 229–242. https://doi.org/10.1080/00461520.2012.750225.
Laru, J., Malmberg, J., Järvenoja, H., Sarenius, V.-M., & Järvelä, S. (2015). Designing simple tools for socially shared regulation: Experiences of using Google Docs and mobile SRL tools in mathematics education. In O. Lindwall, P. Häkkinen, T. Koschmann, P. Tchounikine, & S. Ludvigsen (Eds.), Exploring the material conditions of learning: The computer supported collaborative learning (CSCL) conference 2015 (Vol. 1, pp. 403–410). Gothenburg, SE: International Society of the Learning Sciences, Inc. https://doi.org/10.22318/cscl2015.367.
Lauwers, J. C., & Lantz, K. A. (1990). Collaboration awareness in support of collaboration transparency: Requirements for the next generation of shared window systems. In J. Chew & J. Whiteside (Eds.), Proceedings of the SIGCHI conference on human factors in computing systems (pp. 303–311). New York, NY: ACM. https://doi.org/10.1145/97243.97301.
Lin, J.-W., Mai, L.-J., & Lai, Y.-C. (2015). Peer interaction and social network analysis of online communities with the support of awareness of different contexts. International Journal of
J. Buder et al.
Computer-Supported Collaborative Learning, 10(2), 139–159. https://doi.org/10.1007/s11412-015-9212-4.
Lowry, P. B., & Nunamaker, J. F. (2003). Using Internet-based, distributed collaborative writing tools to improve coordination and group awareness in writing teams. IEEE Transactions on Professional Communication, 46(4), 277–297. https://doi.org/10.1109/TPC.2003.819640.
Malmberg, J., Järvelä, S., Järvenoja, H., & Panadero, E. (2015). Promoting socially shared regulation of learning in CSCL: Progress of socially shared regulation among high- and low-performing groups. Computers in Human Behavior, 52, 562–572. https://doi.org/10.1016/j.chb.2015.03.082.
Michinov, N., & Primois, C. (2005). Improving productivity and creativity in online groups through social comparison process: New evidence for asynchronous electronic brainstorming. Computers in Human Behavior, 21(1), 11–28. https://doi.org/10.1016/j.chb.2004.02.004.
Miller, D. T., & Prentice, D. A. (2016). Changing norms to change behavior. Annual Review of Psychology, 67(1), 339–361. https://doi.org/10.1146/annurev-psych-010814-015013.
Miller, M., & Hadwin, A. (2015). Scripting and awareness tools for regulating collaborative learning: Changing the landscape of support in CSCL. Computers in Human Behavior, 52, 573–588. https://doi.org/10.1016/j.chb.2015.01.050.
Ogata, H., Matsuura, K., & Yano, Y. (1996). Sharlok: Bridging learners through active knowledge awareness. In Proceedings of the IEEE international conference on systems, man and cybernetics (Vol. 1, pp. 601–606). Beijing, CN: Tsinghua University.
Ogata, H., & Yano, Y. (2003). How ubiquitous computing can support language learning. In Proceedings of KEST (pp. 1–6). Honjo, JP.
Phielix, C., Prins, F. J., & Kirschner, P. A. (2010). Awareness of group performance in a CSCL-environment: Effects of peer feedback and reflection. Computers in Human Behavior, 26(2), 151–161. https://doi.org/10.1016/j.chb.2009.10.011.
Phielix, C., Prins, F. J., Kirschner, P. A., Erkens, G., & Jaspers, J. (2011). Group awareness of social and cognitive performance in a CSCL environment: Effects of a peer feedback and reflection tool. Computers in Human Behavior, 27(3), 1087–1102. https://doi.org/10.1016/j.chb.2010.06.024.
Puhl, T., Tsovaltzi, D., & Weinberger, A. (2015). Blending Facebook discussions into seminars for practicing argumentation. Computers in Human Behavior, 53, 605–616. https://doi.org/10.1016/j.chb.2015.04.006.
Ray, D. G., Neugebauer, J., Sassenberg, K., Buder, J., & Hesse, F. W. (2013). Motivated shortcomings in explanation: The role of comparative self-evaluation and awareness of explanation recipient’s knowledge. Journal of Experimental Psychology: General, 142(2), 445–457. https://doi.org/10.1037/a0029339.
Resendes, M., Scardamalia, M., Bereiter, C., Chen, B., & Halewood, C. (2015). Group-level formative feedback and metadiscourse. International Journal of Computer-Supported Collaborative Learning, 10(3), 309–336. https://doi.org/10.1007/s11412-015-9219-x.
Rudat, A., Buder, J., & Hesse, F. W. (2014). Audience design in Twitter: Retweeting behavior between informational value and followers’ interests. Computers in Human Behavior, 35, 132–139. https://doi.org/10.1016/j.chb.2014.03.006.
Scardamalia, M., & Bereiter, C. (this volume). Knowledge building: Advancing the state of community knowledge. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Schmidt, K. (2002). The problem with awareness. Computer Supported Cooperative Work, 11(3), 285–298. https://doi.org/10.1023/A:1021272909573.
Schnaubert, L., & Bodemer, D. (2019). Providing different types of group awareness information to guide collaborative learning. International Journal of Computer-Supported Collaborative Learning, 14, 7–51.
Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397. https://doi.org/10.1007/s11412-013-9181-4.
Schreiber, M., & Engelmann, T. (2010). Knowledge and information awareness for initiating transactive memory system processes of computer-supported collaborating ad hoc groups. Computers in Human Behavior, 26(6), 1701–1709. https://doi.org/10.1016/j.chb.2010.06.019.
Smith, E. R., & Mackie, D. M. (2016). Representation and incorporation of close others’ responses: The RICOR model of social influence. Personality and Social Psychology Review, 20(4), 311–331. https://doi.org/10.1177/1088868315598256.
Soller, A., Martínez, A., Jermann, P., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290.
Trausan-Matu, S., Dascalu, M., & Rebedea, T. (2014). PolyCAFe—automatic support for the polyphonic analysis of CSCL chats. International Journal of Computer-Supported Collaborative Learning, 9(2), 127–156. https://doi.org/10.1007/s11412-014-9190-y.
Tsovaltzi, D., Puhl, T., Judele, R., & Weinberger, A. (2014). Group awareness support and argumentation scripts for individual preparation of arguments in Facebook. Computers & Education, 76, 108–118. https://doi.org/10.1016/j.compedu.2014.03.012.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467. https://doi.org/10.1007/s11412-017-9267-5.
Further Readings

Engelmann, T., Dehler, J., Bodemer, D., & Buder, J. (2009). Knowledge awareness in CSCL: A psychological perspective. Computers in Human Behavior, 25(4), 949–960. https://doi.org/10.1016/j.chb.2009.04.004. This early review distinguishes group awareness from similar constructs (common ground, shared mental models, transactive memory), provides a process model of how group awareness is created from content and context information, and introduces a number of cognitive group awareness tools.
Heimbuch, S., & Bodemer, D. (2017). Controversy awareness on evidence-led discussions as guidance for students in wiki-based learning. The Internet and Higher Education, 33, 1–14. https://doi.org/10.1016/j.iheduc.2016.12.001. A recent empirical study showing how group awareness information about the controversiality of topics guides the behavior of learners in a wiki.
Janssen, J., & Bodemer, D. (2013). Coordinated computer-supported collaborative learning: Awareness and awareness tools. Educational Psychologist, 48(1), 40–55. https://doi.org/10.1080/00461520.2012.749153. A review of empirical CSCL studies on group awareness. Introduces the distinction between cognitive and social group awareness tools.
Järvelä, S., & Hadwin, A. F. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39. https://doi.org/10.1080/00461520.2012.748006. Introduces a distinction between self-regulation, co-regulation, and socially shared regulation in CSCL, and argues how regulation could be supported in CSCL through appropriate tools.
Soller, A., Martínez, A., Jermann, P., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290. Introduces a distinction between mirroring tools, metacognitive tools, and guidance tools in CSCL that informed the distinction between functional levels introduced in the present chapter.
Roles for Structuring Groups for Collaboration

Bram De Wever and Jan-Willem Strijbos
Abstract The emergence of productive collaboration benefits from support for group interaction. Structuring is a broad term for such support, and as part of it roles have become a boundary object in computer-supported collaborative learning. The term structuring is related to—yet distinct from—other approaches to support, such as scaffolding, structured interdependence, and scripting. Roles can be conceived as a specific (set of) behavior(s) that can be taken up by an individual within a group. They can be assigned in advance or emerge during group interaction. Roles raise individual group members’ awareness of their own and their fellow group members’ responsibilities, and they make an individual’s responsibilities toward the group’s functioning visible to all group members. Future research should address pedagogical issues with respect to role design, assignment, and rotation, as well as the automated detection and visualization of emergent roles.

Keywords Roles · Structuring · CSCL · Scripting · Scaffolding · Regulating
B. De Wever (*)
Tecolab Research Unit, Department of Educational Studies, Ghent University, Ghent, Belgium
e-mail: [email protected]

J.-W. Strijbos
Faculty of Behavioural and Social Sciences, Department of Educational Sciences, University of Groningen, Groningen, The Netherlands
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_17

1 Definitions and Scope

It is well established that groups, in particular ad hoc groups without a common history, do not automatically develop productive ways of collaborating, and that there is a need to structure them to enhance collaborative learning in face-to-face and computer-supported settings (Cohen, 1994a, 1994b; De Wever, Van Keer, Schellens, & Valcke, 2010a; Johnson & Johnson, 1994, 2009; Kagan, 1994; Slavin,
1995; Strijbos, Martens, Jochems, & Broers, 2004b, 2007). The term structuring can be broadly defined as a pedagogical approach by which the enactment of the collaborative process is organized (typically by the teacher or by computer technology) in order to guide the unfolding group interaction in such a way that the envisioned learning benefits are most likely achieved. In this chapter, we specifically focus on structuring by means of roles and how this can be used to foster computer-supported collaborative learning (CSCL).

There are many types of roles that have been applied and studied in face-to-face and CSCL settings. Following Strijbos and Weinberger (2010), roles can be defined as more or less stated functions or responsibilities that guide individual behavior and regulate group interaction (see also Hare, 1994). Roles can furthermore promote individual responsibility and group cohesion (see also Forsyth, 1999; Mudrack & Farrell, 1995) as well as positive interdependence and individual accountability (see also Brush, 1998), which are central support factors in collaborative learning arrangements (De Hei, Strijbos, Sjoer, & Admiraal, 2016; Slavin, 1996; Strijbos, Martens, & Jochems, 2004a). Roles can also facilitate group members’ awareness of overall group performance and of peer contributions (Mudrack & Farrell, 1995; Strijbos et al., 2007; Strijbos, Martens, Jochems, & Broers, 2004b), and they are most relevant for distributing, coordinating, and integrating subtasks to attain a shared goal. Not surprisingly, roles were of interest early on in research on collaborative learning (Cohen, 1994a; Johnson & Johnson, 1994). With the advent of networking technologies in universities and schools in the late 1990s and early 2000s, as well as the increased availability of computers in regular classrooms and at students’ homes, opportunities for CSCL increased.
In these computer-supported environments, pedagogical approaches were brought in from earlier face-to-face classroom practices. As a result, roles quite naturally became an important focus for research on structuring in CSCL. Over time, many different roles have been introduced. Strijbos and De Laat (2010) conducted a scoping review of roles used in CSCL research and distinguished three levels of the role concept: (a) role as a specified activity focused on the collaborative product or process (role as a task; micro level), (b) role as multiple tasks focused on the product, process, or a combination (role as a pattern; meso level), and (c) role as an individual’s participative pattern based on their attitude towards the task and collaborative learning (role as a stance; macro level). Typical micro-level roles in a CSCL discussion environment are, for example, Starter and Wrapper (Hara, Bonk, & Angeli, 2000); examples of meso-level roles are Moderator or Summarizer (De Wever et al., 2010a); and examples of macro-level roles are Communicative Learner and Quiet Learner (Hammond, 1999). In the remainder of this chapter, we first elaborate on the history and development of structuring, before moving on to the history and development of roles for structuring. This is followed by a summary of the current state of the art, and we conclude with trends for future research.
2 History and Development

2.1 History and Development of Structuring
In our introduction, we defined structuring very broadly, and indeed structuring is mostly used as a general term. As a result, it has been used in several ways, sometimes as a synonym for different individual concepts, sometimes as an overarching term for multiple concepts. In addition, the term structuring is often used without an explicit definition or description. This makes it hard to see the forest for the trees; in this section, however, we provide an overview of the different interpretations that we have found, together with their history and development. We first focus on structuring as scaffolding, structured interdependence, scripting, pre-intervention decisions, and as an overall term encompassing all the previous ones. Second, we delve more deeply into the use of roles for structuring (computer-supported) collaborative learning.

The first way in which structuring is interpreted is as a synonym for scaffolding. The concept of scaffolding is often associated with Vygotsky’s zone of proximal development, in which scaffolding is the activity of guiding students from their actual developmental level to the level of their potential development (Vygotsky, 1978; see also De Wever et al., 2010a). Following Pata, Sarapuu, and Lehtinen (2005), scaffolding means providing assistance to students on an “as-needed” basis, with such assistance fading out as their competence or mastery increases. The “fading out” aspect in particular can be considered typical of scaffolding. Regarding its history, it is hard to pinpoint an exact date for the development of scaffolding in (CS)CL.
However, Pinantoan (2013) argues that the term scaffolding was first coined in 1976 by Wood, Bruner, and Ross, and that it was subsequently refined in 1978 after the release of the collected works of Vygotsky (1978) in the book “Mind in Society.” Since the late 1970s and early 1980s, scaffolding has received growing attention in the field of learning and instruction, and thus also in (computer-supported) collaborative learning. As we will discuss later on, roles can be used as scaffolds to support group interaction and learning.

A second way of interpreting structuring is to see it as originating from Aronson and Bridgeman’s (1979) concept of structured interdependence. They deliberately “structured” group interaction in such a way that students were “forced” to work together, and thus developed an intervention that focused on structuring group work. The “Jigsaw” structure they proposed, which is nowadays broadly known in the field of the learning sciences, improved not only the social inclusion of minority students but also their achievement. In this way, the impact of structuring group interaction on learning can be considered a side effect of Aronson and Bridgeman’s (1979) research on stereotype reduction and the inclusion of minorities in classrooms.

A third way in which the term structuring is interpreted is as a synonym for scripting. The term “script” was borrowed from the theatre context, emphasizing how specific roles are assigned and specific activities are sequenced, and was already
used in the 1990s (O’Donnell & Dansereau, 1992). Within some research in the 1990s, scripts were used to specify roles and the specific nature and timing of cognitive activities, such as elaborating (e.g., O’Donnell & Dansereau, 1992), explaining (e.g., King, 1997), and arguing (e.g., Kuhn, Shaw, & Felton, 1997). The idea was that scripts stimulate students to engage in specific cognitive activities that were proven to be important for learning. In the early 2000s, scripting was increasingly used in CSCL settings, as technological advances (e.g., the growing use of asynchronous discussion groups and, later on, wikis, together with more advanced CSCL tools) expanded its possibilities. The term “script” also quickly became a boundary term in CSCL research, around which researchers from diverse backgrounds (psychology, education, and computer science) could gather, and which they could use as a shared term with mostly (but not completely) shared understanding. There were some differences in how scripting was defined: (1) computer scientists knew the term “script” as indicating a list of commands to be executed sequentially, (2) educational scientists and educational psychologists knew the scripted cooperation approach (O’Donnell & Dansereau, 1992), while (3) within cognitive psychology scripts are often conceived—in line with Schank and Abelson (1977)—as a term “to refer to culturally shared knowledge about the world that provides information about the conditions, processes, and consequences of particular everyday situations” (Kollar, Fischer, & Hesse, 2006, p. 161). Several efforts have been made to further define and conceptualize CSCL scripts (see e.g., Kobbe et al., 2007; Kollar et al., 2006) and to develop a theoretical framework for them (e.g., Fischer, Kollar, Stegmann, & Wecker, 2013). Following Kollar et al. (2006) and Fischer et al.
(2013), collaboration scripts aim to scaffold the interactive processes between collaborators in a face-to-face or computer-mediated learning environment. In this way, these authors separate such interaction scaffolds from scaffolds that provide support at the content or conceptual level. Collaboration scripts are conceived as “scaffolds that structure the interactive processes of collaborative learning [and] shape collaboration by specifying different roles and associated activities to be carried out by the collaborators” (Kollar et al., 2006, p. 160), and according to Fischer et al. (2013) they generally consist of at least four components: (a) play level (i.e., knowledge about the learning and collaboration setting), (b) scene level (i.e., knowledge about types of activities within the setting), (c) scriptlet level (i.e., knowledge of sequences of activities within the setting), and (d) role level (i.e., knowledge of roles that organize activities of specific participants within the setting). A more extensive discussion of scripting and how it has been employed in the field of CSCL can be found in Sect. 3.1 of this handbook (Vogel, Weinberger, & Fischer, this volume).

A fourth interpretation of structuring is based on the timing of the instructional intervention, i.e., structuring as an overarching term for pre-intervention decisions. In this interpretation, structuring is about prescribing and prespecifying the collaboration processes: structuring is something that is done in advance to favor the emergence of productive interactions (De Hei et al., 2016; Strijbos, Martens, & Jochems, 2004a), and it is different from regulating interactions, which is something
that happens during the interactive process (Dillenbourg, 2002). According to Dillenbourg, interventions done before the collaboration are termed structuring (e.g., the design and selection of structured communication tools and specific collaboration scripts), and interventions done during the collaboration are termed regulating (e.g., tools that allow students to regulate their contribution to collaboration, co-regulation with the help of a tutor/teacher, and/or tools that foster groups’ shared regulation; Järvelä & Hadwin, 2013; Järvelä et al., 2015; see also the reflective supports for student-directed regulation and structuration mentioned by Law, Zhang, & Peppler, this volume).

A fifth and final interpretation is to conceptualize structuring as a broader, more overarching term that incorporates different types of activities and interventions, such as the abovementioned scripting and regulating. This is, for example, done by De Wever et al. (2010a), who compared two ways of structuring collaboration in asynchronous discussion groups: scripting, by assigning roles to students in those discussion groups, or regulating, by assigning cross-age peer tutors to each discussion group. This is also how we defined structuring at the start of this chapter, i.e., as a broad concept encompassing multiple specific approaches, with the aim of guiding the unfolding group interaction in such a way that collaborative learning can take place.

What is clear from these five interpretations is that structuring is not as well-defined as often assumed in the field of CSCL, and it is used in several ways by different (groups of) authors. We are fully aware that this is complex—especially for researchers new to the field of CSCL. However, rather than oversimplifying what structuring is and where it came from, we decided to show the complex nature of its definition and history.
Regarding the five interpretations, we can conclude that they overlap or even encompass each other. A case in point is the fourth interpretation, following Dillenbourg (2002), in which scripting is a part of structuring, but structuring is something different from regulating. In fact, if we use structuring as an overarching term (fifth interpretation), then all scripting, scaffolding, and regulating are structuring, but not necessarily the other way around. Moreover, all scripting and regulating can be conceived as (part of) structuring, but not all structuring (and regulating) has the characteristics that are attributed to scripting. Finally, in the way that scripting of interactive processes (third interpretation) is typically defined, scaffolding (first interpretation) seems to be an integral part of it. In other words, most definitions of collaboration scripts share the inherent idea that these scaffolds should be removed after a while (and somehow be internalized by the collaborators). However, such a “fading out” principle is not always present in scripting research in the field of CSCL. In all, this multiplicity of interpretations of “structuring” clearly signals a future task for the CSCL community to clarify the terminology, for example, in a systematic review.

Regarding the history, the whole idea of structuring CSCL—in its broadest sense—could be traced back to either scaffolding, scripting (in both the O’Donnell and Dansereau (1992) and the Schank and Abelson (1977) sense), or structured interdependence (Aronson & Bridgeman, 1979). However, we
consider it more likely that these developments started independently of each other. Regardless, it is undisputed that in the 1980s and 1990s many structuring approaches were developed. Some have their origins more strongly in social psychology, utilizing group dynamics to foster individual learning, e.g., “Jigsaw” (Aronson & Bridgeman, 1979), “Student Teams Achievement Divisions” (Slavin, 1995), “Learning Together” (Johnson & Johnson, 1994), and the “Structural Approach” (Kagan, 1994); or to foster intrinsic motivation, such as the “Group Investigation” approach (Sharan & Sharan, 1992; Sharan, Sharan, & Tan, 2013). Others have their origins more strongly in cognitive psychology, emphasizing specific cognitive activities such as elaborating, explaining, and arguing to foster student learning, e.g., “Scripted Cooperation” (O’Donnell & Dansereau, 1992) and “Complex Instruction” (Cohen, 1994a, 1994b), as well as the research by Webb on “student helping behavior” (e.g., Webb, 1989, 2013). Over time, the central tenet of structuring approaches has transcended these origins, and it is nowadays acknowledged that their effects operate on the cognitive, social, and motivational planes of collaborative learning.
2.2 History and Development of Roles
In the broadest sense, a role within a team (or group) can be conceived as a specific (set of) behavior(s) that can be taken up by an individual within a group (Mudrack & Farrell, 1995), and according to Driskell, Driskell, Burke, and Salas (2017), these roles can vary along the dimensions of dominance, sociability, and task orientation. Likewise, within the field of (computer-supported) collaborative learning, roles are always conceived as embedded within the context of the group process, and thus in relation to other individuals and their roles. Such roles can either be specifically assigned to individuals or emerge in more naturalistic settings (Strijbos & Weinberger, 2010). When used for structuring collaboration, roles are often (but not necessarily) a specific case of a priori structuring, meaning that the roles are usually assigned in advance to several participants in a group, and roles are more or less well defined (i.e., the activities that should be taken up by the participant who was assigned a role are communicated prior to collaborating).

Assigning roles to students with a view to supporting them in their collaboration is not new. As with most pedagogical approaches, it first existed in regular (non-computer-supported) collaboration practices in classroom settings (Cohen, 1994a; Johnson & Johnson, 1994). Assigning roles to structure collaboration did, however, receive renewed attention when CSCL environments were set up. Cohen (1994a, 1994b) already discussed a number of roles in the context of face-to-face classroom discussions and distinguished between “how” roles and “what” roles. The “how” roles are more general roles indicating how students could tackle a specific collaborative task. Examples of “how” roles are resource person, materials manager, cleanup person, facilitator, reporter, recorder, spokesperson, synthesizer or summarizer, safety officer, and checker (Cohen, 1994a, 1994b; see also De Wever,
Schellens, Van Keer, & Valcke, 2008). The “what” roles are more content-specific roles, indicating what students need to tackle when dividing tasks. Examples of “what” roles in a specific context are camera person, director, storywriter, and actor (De Wever et al., 2008). This distinction is related to the one that Strijbos, Martens, Jochems, and Broers (2004b) made in the context of CSCL: they distinguished process-based roles, on the one hand, focusing on individual responsibilities for the coordination of the group process, and content-based roles, on the other hand, focusing on differences in individual responsibility regarding the content and task activities. In a later stage of CSCL research, Wise, Saghafian, and Padmanabhan (2012) suggested focusing on conversational functions as a conceptual tool for role design in the context of asynchronous discussions. They reviewed role descriptions using the constant comparative method and identified seven functions that roles can fulfill during asynchronous discussions: motivate, give direction, add new ideas, bring in source, use theory, respond, and summarize. It is clear that the use of roles in CSCL research developed over time. In the state of the art below, we discuss the conceptualization of roles in CSCL further, before discussing the effects of roles.
3 State of the Art: What We Know about Roles for Structuring CSCL

3.1 Conceptualization of Roles in CSCL Research
During the past decade, some efforts have been undertaken to conceptualize roles. Strijbos and De Laat (2010) developed a framework to analyze the broad spectrum of available roles in the current CSCL literature. This framework distinguished roles along three dimensions. The first dimension distinguishes between a priori assigned roles and emergent roles. The former are assigned by a teacher with the aim of structuring the collaborative learning process, while the latter “emerge spontaneously or are negotiated spontaneously by group members without interference by the teacher” (p. 496). The second dimension distinguishes between product-oriented roles and process-oriented roles. Product-oriented roles focus on developing or delivering a (part of a) product or performance, for example, a group member with the role of summarizer writing a summary at the end of a discussion. In contrast, process-oriented roles focus more on facilitating the group processes, for example, a moderator making sure that all group members are encouraged to participate in the discussion. The third dimension concerns the granularity of the role concept, resulting in roles that are conceptualized differently at the micro, meso, or macro level. We already briefly introduced these levels at the start of this chapter, but provide some more elaboration in the next paragraph.
322
B. De Wever and J.-W. Strijbos
At the micro level, roles are conceived as a task and/or a “specified activity focused on the collaborative product or process” (p. 496). Strijbos and De Laat (2010) claim that in most of the CSCL literature roles essentially consist of single tasks, and they attribute this to the fact that the idea of structuring collaboration with the help of roles originated in primary education. At the meso level, roles are conceptualized as multiple tasks focused on the product, the process, or a combination of both. At this level, the one-to-one relationship between a role and a specific task is abandoned, and a role can be related to multiple tasks. De Wever et al. (2008) observed that students with a specific role assigned to them enact the associated role behaviors comparatively more often than students who do not have that role, but students also showed behavior that was associated with the other assigned roles in the group. Likewise, Wise et al. (2012) observed in relation to their concept of conversational functions that several of those functions are indeed combined in one role. Finally, roles at the macro level are understood as individuals’ participative stances, which are behavioral patterns based on their general attitude toward the task and the collaborative learning setting. Strijbos and De Laat (2010) distinguished eight participative stances, depending on group size (large group vs. small group), students’ orientation (individual vs. group orientation), and students’ effort investment in the collaborative assignment (low vs. high).

Apart from the three dimensions identified by Strijbos and De Laat (2010) and discussed in the first paragraph of this section, there is a fourth dimension that should be considered: the concept of role as a way to induce students to approach a problem by acting in line with a specific perspective. Such perspective induction is what role-playing typically tries to achieve.
For example, Arvaja, Rasku-Puttonen, Häkkinen, and Eteläpelto (2003) used meso-level roles based on occupational or social roles representing British and Indian society during the nineteenth century. The purpose of the role-play was for students to study (and experience) imperialism and social status by acting out distinct societal roles or occupations. The students chose an occupational or social role and learned about its function and social responsibility before acting it out in the discussion forum. The students composed messages while keeping in mind the perspective of their own role character during that period in history, which prevented, for example, a farmer from contacting a British bishop.
3.2 Effects of Roles in CSCL Research
There is broad consensus that roles (a) raise individual group members’ awareness of their own and fellow group members’ responsibilities and (b) make an individual’s responsibilities toward the group’s functioning visible to their fellow group members. As such, roles either promote (in the case of a priori assigned roles) or uncover (in the case of emergent roles) the degree to which individual accountability and positive interdependence exist and/or are enacted. The enhanced awareness is more salient when roles are assigned, but in the event that emergent roles become
observable (e.g., with the help of activity and/or behavior visualization) awareness can be enhanced as well. Assigned roles can enhance participation, coordination, performance, and learning, but this depends on the collaborative context and the specific group task at hand. Earlier research showed that roles can certainly enhance participation and coordination during group projects in online education (Gu, Shao, Guo, & Lim, 2015; Strijbos et al., 2007; Strijbos, Martens, Jochems, & Broers, 2004b). Furthermore, assigning roles to students in collaborative groups has been shown to be beneficial for enhancing knowledge construction processes with bachelor-level students involved in online discussions (Schellens, Van Keer, De Wever, & Valcke, 2007), compared to collaborative groups of bachelor students without assigned roles. Yet, students’ learning benefits can vary according to the role they were assigned and enacted. For example, a study comparing the levels of knowledge construction in students’ online discussion messages (De Wever, Van Keer, Schellens, & Valcke, 2010b) showed that students with the role of moderator, theoretician, or summarizer reached significantly higher levels of knowledge construction (compared to students in groups without role assignment), whereas this was not the case for students who were assigned the role of starter or source searcher. And while assigning roles was beneficial for enhancing knowledge construction, groups with additional cross-age peer tutor regulation outperformed groups with assigned roles only (De Wever et al., 2010a).
More recently, research by Ouyang and Chang (2019) identified six social participatory emerging roles—leader, starter, influencer, mediator, regular, and peripheral—that were critical indicators of knowledge inquiry and knowledge construction contributions: students enacting the roles of leader, starter, and influencer made more contributions to knowledge inquiry and knowledge construction than students enacting the other three roles. Regarding motivation, group cohesion, and learning performance, Zheng, Huang, and Yu (2014) compared a condition with assigned roles (i.e., information searcher, explainer, coordinator, and summarizer) with a condition without role assignment and found significant differences in motivation and task cohesion; however, no significant differences in social cohesion and learning performance were found. In sum, we can conclude that assigned or emergent roles can positively affect both the processes and outcome(s) of (computer-supported) collaborative learning, but there are no definitive guarantees.
4 The Future: Pedagogical Approaches and Technological Evolutions as Two Tracks for Future Development

Over the past decades, there have been significant advances in the use and refinement of preexisting approaches to structuring group interaction—including roles—as well as understanding their effects in CSCL settings. Likewise, initial steps have been
made to expand our understanding of the nature of assigned and emergent roles, how these can be differentiated, and used to guide the collaborative process. However, notwithstanding these accomplishments, there are in our view two major tracks for future research. First, more insight into the pedagogical approach of roles for structuring collaborative group practices is still wanting. This research can be done both within and outside the field of CSCL; however, we think that research within the field of CSCL can utilize existing technology to assist teachers in the design of structuring in general, as well as in their decisions as to whether and when to assign roles. Second, more insight is needed as to how technology in CSCL environments can be leveraged to automate or facilitate working with roles. Regarding the first track, the pedagogical approach of roles to structure collaboration, many decisions need to be made by teachers (or instructional designers). We will list some of those here, together with four issues for teachers that we believe are important to further investigate, in order to make more informed decisions on roles in future pedagogical design. The first issue is whether to assign roles a priori or not; and in the latter case, how to deal with emerging roles. When a teacher decides to assign roles, a second issue is at which level the roles need to be introduced, i.e., at the micro-, meso-, or macro level. Thus far, most studies have predominantly focused on one level only—typically the micro level. Hence, future research could investigate whether roles on different levels can be combined; and if so, whether such a combination of levels would require different ways of monitoring and technological support. A third issue is what kind of roles a teacher can or should assign given the collaborative settings. As argued previously, roles can focus more on the collaborative processes, or on the collaborative product. 
In addition, roles can also be used to have students adopt different perspectives by introducing role-play. In this respect, more research is needed in view of building a stronger theory for role design and role assignment. While some attempts have been made to conceptualize roles and role assignment (see e.g., Strijbos & De Laat, 2010; Wise et al., 2012), there is currently no overarching theoretical framework to describe (or even prescribe) the implementation of roles as a way to structure collaboration. Reflecting on our own studies, we noticed that some of the roles we implemented to structure collaboration were based on earlier research evidence and on the techniques described therein. For example, the starter and summarizer roles of De Wever et al. (2008) were based on the starter–wrapper technique described by Hara et al. (2000). In contrast, other roles arose from a need to stimulate students in specific activities. For example, the theoretician role of De Wever et al. (2008) was specifically introduced to stimulate activities that would not occur in an unstructured collaborative environment. Likewise, the roles described in Strijbos, Martens, Jochems, and Broers (2004b)—project planner, communicator, editor, and data collector—were partly based on the functional role perspective outlined by Mudrack and Farrell (1995) and on activities that are typical for project work. Some roles in our own studies were thus (initially) in part (or to a considerable extent) based on a combination of existing techniques and frameworks, existing approaches to the structuring of collaboration, and (to some extent) our intuition—rather than extensive research evidence. This is in line with what Hoadley
(2010, based on Simon, 1969) describes as design science, where designers:

use processes to solve problems where there is no closed solution. They explore problems as part of solving them, they iterate, and they apply metaknowledge and craft to create solutions that work, even though the science is insufficient to predict the outcomes of the designer’s choices. (p. 552)
In light of (a) the mix of evidence-based and intuitive design practices, as well as (b) the lack of an overarching design framework, a systematic and integrative review of both the design and effects of roles in (computer-supported) collaborative learning would be highly welcome. A fourth pedagogical issue is whether or not roles should be rotated, and if so, when and how. Role rotation means that roles are “rotated” or “switched” between individual participants throughout a group’s collaboration. Spada (2010) suggests “that it would not be wise to wait for role rotation to emerge, but instead script the rotation of roles [. . .]” (p. 549). As previously stated, assigning roles can support and stimulate students in undertaking specific activities that they would otherwise not engage in. The moderator role, for instance, is often assigned to ensure that all opinions are heard in online discussions (see e.g., De Wever et al., 2010a). However, assigning roles can also make students feel uncomfortable: whereas one student may feel empowered by the moderator role and perceive the assigned role as supportive, another student may feel uneasy or anxious when asked to take up such a center-stage role. How assigned roles interact with student characteristics has, thus far, not been given attention in CSCL research. Related considerations are whether students should be assigned roles that they are comfortable with, or whether they should also be required to take up roles that they are uncomfortable with. The same applies to their level of competence: should students only perform roles they are already competent in (some seem to be natural-born leaders), or should they also perform the other roles?
Stempfle, Hübner, and Badke-Schaub (2001) showed that roles can be assigned to student software developers to maximize their group performance; but although such performance maximization might mirror the typical role and task distribution in a workplace context, such role assignment also precludes learning the roles, tasks, and activities that a student has not yet mastered. These issues are also related to the questions of whether students can choose the roles that they will perform, whether the roles are simply preassigned by the instructor or the technical system, and whether roles will rotate or not. To answer these questions, more research attention should be devoted to the purpose of role assignment in the first place. Are roles assigned to ensure that the collaboration is fluent and the end product meets the required standard? Are roles assigned to ensure that students learn from enacting them? What is the importance of the collaborative and learning processes versus the quality of the final product or outcomes of the group and/or individual group members? Are roles fixed for the entire duration of the collaborative process or will they rotate during the collaboration? Are roles to be faded, and if so, when?

Regarding the second track, how technology in CSCL environments can be leveraged to automate or facilitate working with roles, it is hard to predict what
future technological developments will bring us with respect to roles for structuring collaboration. However, we assume that the evolving technology will at least support teachers, instructional designers, and researchers in view of the pedagogical issues discussed. At a lower level, technology can be preprogrammed to automatically and randomly assign roles to participants in groups, and foster rotation of these roles. In asynchronous discussion groups, for example, software could create small groups of five students, assign each student one of five predefined roles, and inform them about role rotation after 2 weeks of discussion. Technology could also be used for group formation based on student characteristics. A case in point is the ArgueGraph script of Dillenbourg and Jermann (2007), in which a questionnaire collected students’ views on a contentious topic and two students with opposing views were subsequently and automatically grouped into a dyad. However, at a higher level, future technological developments could enable more adaptive regulation of collaborative group processes. In this respect, the advancement of learning analytics, in combination with pedagogical design for learning environments, offers interesting opportunities. Automated detection of role behaviors (such as those identified by Wise et al., 2012)—irrespective of whether these behaviors are due to assigned or emergent roles—can be facilitated with the help of dedicated network learning analytics that identify (adherence to) role behaviors in online courses (Gasevic, Joksimovic, Eagan, & Shaffer, 2019). This kind of data, extracted from learning analytics, could be directly processed and made immediately available for several purposes. We will briefly discuss four purposes that come to mind. First, learning analytics can facilitate automatic group formation based on previous behavior in online courses.
Thus, instead of group formation based on questionnaires, collaborative groups could be formed based on previously assigned roles and/or the quality of students’ enactment of those roles. Second, assigned roles could be rotated based on information from these analytics. For example, students could be instructed to switch roles once automated analysis shows that each student performs their role well, such that the learning effect due to role enactment is “saturated” and students can move on to another role. At that point, a student could be assigned a role that they had not yet taken up spontaneously; for example, a student who never recapitulated parts of the online discussion could be assigned the role of summarizer. Third, the analytics could be used to stimulate students to enact their role(s) more actively. This could be done at the group level, for example, if no one is summarizing the discussion, an analytics-based agent can remind the entire group that a summary is needed and/or assign the role of summarizer to a group member. At the individual level, an analytics-based agent could remind participants to pay attention to their roles, based on their actual performance. Even just being aware of what others in a group are doing might improve group collaboration (see also the group awareness chapter by Buder, Bodemer, & Ogata, this volume). Fourth, the analytics could be visualized to assist teachers, as well as the group and/or individual group members. These visualizations could stimulate students to regulate their behavior, or inform teachers in view of making decisions on role assignment or rotation.
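The lower-level automation described above—random group formation with predefined roles, ArgueGraph-style pairing of opposing views, and scripted role rotation—can be illustrated in a few lines of code. This is a minimal sketch for illustration only: the role labels and function names are ours, not taken from the systems cited in this chapter.

```python
import random
from itertools import cycle

# Illustrative role labels; any set of predefined roles could be substituted.
ROLES = ["moderator", "theoretician", "summarizer", "starter", "source searcher"]

def form_groups(students, group_size=5, seed=None):
    """Randomly partition students into groups of (at most) group_size."""
    rng = random.Random(seed)
    shuffled = list(students)
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]

def assign_roles(group, roles=ROLES):
    """Give each group member one of the predefined roles."""
    return dict(zip(group, cycle(roles)))

def rotate_roles(assignment, roles=ROLES):
    """Scripted role rotation: every member moves on to the next role."""
    return {student: roles[(roles.index(role) + 1) % len(roles)]
            for student, role in assignment.items()}

def pair_opposing(scored_students):
    """ArgueGraph-style dyad formation: sort students by their questionnaire
    score on a contentious topic and pair the most opposing views."""
    ordered = sorted(scored_students, key=lambda pair: pair[1])
    n = len(ordered)
    return [(ordered[i][0], ordered[n - 1 - i][0]) for i in range(n // 2)]
```

Here, `rotate_roles` realizes a fixed rotation schedule (e.g., triggered after 2 weeks of discussion); an analytics-driven variant would instead trigger rotation once a student’s role enactment is judged “saturated.”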
With the rapid expansion of online and distance learning environments, such as virtual classrooms, Massive Open Online Courses, or other large-scale collaborations (Chen, Håklev, & Rosé, this volume), and the increased interest in collaborative learning in such settings, roles for structuring collaboration—especially the automated analysis of role behavior in view of developing scalable collaborative learning experiences—may well drive future technological developments. Although learning analytic procedures are promising (Wise, Knight, & Buckingham Shum, this volume), applications of this kind are still in their infancy. Implementing automatic role assignment, role rotation, role stimulation, or role visualization based on self-reported behavior—such as asking participants to tag their contributions in online discussions (see e.g., Schellens, Van Keer, De Wever, & Valcke, 2009)—is rather straightforward. Doing so on the basis of automated coding of written or spoken discussion contributions and the role-based communicative acts therein is a step further, although initial steps have been taken in that direction (see e.g., Erkens & Janssen, 2008; Lämsä et al., 2019; Mu, Stegmann, Mayfield, Rosé, & Fischer, 2012). However, for automated analysis of collaborative talk in face-to-face or virtual classrooms, much is yet to be done. Nevertheless, some preliminary analyses showed that prosodic features of talk could be aligned with the content of talk. More specifically, vocal characteristics—such as pitch variation and stress pattern, pausing, tempo, mean pitch and loudness, and vocal quality—could be related to specific types of talk, such as cumulative, promotive, or disputational talk (Hämäläinen, De Wever, Waaramaa, Laukkanen, & Lämsä, 2018), and subsequently related to roles and specific role behavior.
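To illustrate the “rather straightforward” self-report route, the sketch below assumes students tag their own contributions (in the spirit of Schellens et al., 2009) and flags group members whose tags do not yet show the behavior expected of their assigned role. The tag vocabulary and the role–tag mapping are hypothetical, not taken from the cited studies.

```python
from collections import Counter

# Hypothetical mapping from assigned roles to the self-reported tags
# that count as enacting that role.
ROLE_TAGS = {
    "summarizer": {"summary"},
    "theoretician": {"theory"},
    "source searcher": {"source"},
}

def tag_counts(contributions):
    """Count self-reported tags per student from (student, tag) pairs."""
    counts = {}
    for student, tag in contributions:
        counts.setdefault(student, Counter())[tag] += 1
    return counts

def role_reminders(assignment, contributions, minimum=1):
    """Return a reminder for each student whose tagged contributions fall
    short of the behavior expected of their assigned role."""
    counts = tag_counts(contributions)
    notes = []
    for student, role in assignment.items():
        expected = ROLE_TAGS.get(role, set())
        enacted = sum(counts.get(student, Counter())[t] for t in expected)
        if enacted < minimum:
            notes.append(f"{student}: no '{role}' contributions tagged yet")
    return notes
```

The same reminder logic could feed a group-level agent or a teacher-facing visualization; replacing the self-reported tags with automatically coded contributions is the harder step the text describes.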
While more research is needed into learning analytics and their application to the structuring of collaboration in general, and to roles in particular, we are confident that current and future technological developments can be leveraged to develop sophisticated support for roles when structuring CSCL.
References

Aronson, E., & Bridgeman, D. (1979). Jigsaw groups and the desegregated classroom: In pursuit of common goals. Personality and Social Psychology Bulletin, 5(4), 438–446. https://doi.org/10.1177/014616727900500405
Arvaja, M., Rasku-Puttonen, H., Häkkinen, P., & Eteläpelto, A. (2003). Constructing knowledge through a role play in a web-based learning environment. Journal of Educational Computing Research, 28(4), 319–341. https://doi.org/10.2190/4FAV-EK1T-XV4H-YNXF
Brush, T. A. (1998). Embedding cooperative learning into the design of integrated learning systems: Rationale and guidelines. Educational Technology Research and Development, 46(3), 5–18. https://doi.org/10.1007/BF02299758
Buder, J., Bodemer, D., & Ogata, H. (this volume). Group awareness. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Chen, B., Håklev, S., & Rosé, C. P. (this volume). Collaborative learning at scale. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Cohen, E. G. (1994a). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64(1), 1–35. https://doi.org/10.3102/00346543064001001
Cohen, E. G. (1994b). Designing groupwork: Strategies for the heterogeneous classroom (2nd ed.). New York: Teachers College Press.
De Hei, M., Strijbos, J. W., Sjoer, E., & Admiraal, W. F. (2016). Thematic review of approaches to design group learning activities in higher education: The development of a comprehensive framework. Educational Research Review, 18, 33–45. https://doi.org/10.1016/j.edurev.2016.01.001
De Wever, B., Schellens, T., Van Keer, H., & Valcke, M. (2008). Structuring asynchronous discussion groups by introducing roles: Do students act in line with assigned roles? Small Group Research, 39(6), 770–794. https://doi.org/10.1177/1046496408323227
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2010a). Structuring asynchronous discussion groups: Comparing scripting by assigning roles with regulation by cross-age peer tutors. Learning and Instruction, 20(5), 349–360. https://doi.org/10.1016/j.learninstruc.2009.03.001
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2010b). Roles as a structuring tool in online discussion groups: The differential impact of different roles on social knowledge construction. Computers in Human Behavior, 26(4), 516–523. https://doi.org/10.1016/j.chb.2009.08.008
Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL? (pp. 61–91). Heerlen, the Netherlands: Open Universiteit Nederland.
Dillenbourg, P., & Jermann, P. (2007). Designing integrative scripts. In F. Fischer, I. Kollar, H. Mandl, & J. Haake (Eds.), Scripting computer-supported collaborative learning: Cognitive, computational and educational perspectives (pp. 277–303). New York: Springer.
Driskell, T., Driskell, J. E., Burke, C. S., & Salas, E. (2017). Team roles: A review and integration. Small Group Research, 48(4), 482–511. https://doi.org/10.1177/1046496417711529
Erkens, G., & Janssen, J. (2008). Automatic coding of dialogue acts in collaboration protocols. International Journal of Computer-Supported Collaborative Learning, 3(4), 447–470. https://doi.org/10.1007/s11412-008-9052-6
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005
Forsyth, D. R. (1999). Group dynamics (3rd ed.). Belmont, CA: Wadsworth.
Gasevic, D., Joksimovic, S., Eagan, B. R., & Shaffer, D. (2019). SENS: Network analytics to combine social and cognitive perspectives of collaborative learning. Computers in Human Behavior, 92, 562–577. https://doi.org/10.1016/j.chb.2018.07.003
Gu, X., Shao, Y., Guo, X., & Lim, C. P. (2015). Designing a role structure to engage students in computer-supported collaborative learning. The Internet and Higher Education, 24, 13–20. https://doi.org/10.1016/j.iheduc.2014.09.002
Hämäläinen, R., De Wever, B., Waaramaa, T., Laukkanen, A.-M., & Lämsä, J. (2018). It’s not only what you say, but how you say it: Investigating the potential of prosodic analysis as a method to study teacher’s talk. Frontline Learning Research, 6(3), 204–227. https://doi.org/10.14786/flr.v6i3.371
Hammond, M. (1999). Issues associated with participation in online forums—the case of the communicative learner. Education & Information Technologies, 4, 353–367. https://doi.org/10.1023/A:1009661512881
Hara, N., Bonk, C. J., & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115–152. https://doi.org/10.1023/A:100376472
Hare, A. P. (1994). Types of roles in small groups: A bit of history and a current perspective. Small Group Research, 25(3), 433–448. https://doi.org/10.1177/1046496494253005
Hoadley, C. (2010). Roles, design, and the nature of CSCL. Computers in Human Behavior, 26(4), 551–555. https://doi.org/10.1016/j.chb.2009.08.012
Järvelä, S., & Hadwin, A. F. (2013). New frontiers: Regulating learning in CSCL. Educational Psychologist, 48(1), 25–39. https://doi.org/10.1080/00461520.2012.748006
Järvelä, S., Kirschner, P. A., Panadero, E., Malmberg, J., Phielix, C., Jaspers, J., Koivuniemi, M., & Järvenoja, H. (2015). Enhancing socially shared regulation in collaborative learning groups: Designing for CSCL regulation tools. Educational Technology Research and Development, 63(1), 125–142. https://doi.org/10.1007/s11423-014-9358-1
Johnson, D. W., & Johnson, R. T. (1994). Learning together and alone: Cooperative, competitive and individualistic learning (4th ed.). Needham Heights, MA: Allyn & Bacon.
Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38(5), 365–379. https://doi.org/10.3102/0013189X09339057
Kagan, S. (1994). Cooperative learning. San Juan Capistrano: Kagan Cooperative Learning.
King, A. (1997). ASK to THINK-TEL WHY®©: A model of transactive peer tutoring for scaffolding higher-level complex learning. Educational Psychologist, 32(4), 221–235. https://doi.org/10.1207/s15326985ep3204_3
Kobbe, L., Weinberger, A., Dillenbourg, P., Harrer, A., Hämäläinen, R., Häkkinen, P., & Fischer, F. (2007). Specifying computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(2–3), 211–224. https://doi.org/10.1007/s11412-007-9014-4
Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts: A conceptual analysis. Educational Psychology Review, 18(2), 159–185. https://doi.org/10.1007/s10648-006-9007-2
Kuhn, D., Shaw, V., & Felton, M. (1997). Effects of dyadic interaction on argumentative reasoning. Cognition and Instruction, 15(3), 287–315. https://doi.org/10.1207/s1532690xci1503_1
Lämsä, J., Espinoza, C., Araya, R., Viiri, J. G., Abelino, J., Gormaz, R., & Hämäläinen, R. (2019). Automatic content analysis in collaborative inquiry-based learning. Paper presented at the 13th Biennial Conference of the European Science Education Research Association (ESERA), Bologna, Italy.
Law, N., Zhang, J., & Peppler, K. (this volume). Sustainability and scalability of CSCL innovations. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Mu, J., Stegmann, K., Mayfield, E., Rosé, C., & Fischer, F. (2012). The ACODEA framework: Developing segmentation and classification schemes for fully automatic analysis of online discussions. International Journal of Computer-Supported Collaborative Learning, 7(2), 285–305. https://doi.org/10.1007/s11412-012-9147-y
Mudrack, P. E., & Farrell, G. M. (1995). An examination of functional role behaviour and its consequences for individuals in group settings. Small Group Research, 26(4), 542–571. https://doi.org/10.1177/1046496495264005
O’Donnell, A. M., & Dansereau, D. F. (1992). Scripted cooperation in student dyads: A method for analyzing and enhancing academic learning and performance. In R. Hertz-Lazarowitz & N. Miller (Eds.), Interaction in cooperative groups: The theoretical anatomy of group learning (pp. 120–141). New York: Cambridge University Press.
Ouyang, F., & Chang, Y. (2019). The relationships between social participatory roles and cognitive engagement levels in online discussions. British Journal of Educational Technology, 50(3), 1396–1414. https://doi.org/10.1111/bjet.12647
Pata, K., Sarapuu, T., & Lehtinen, E. (2005). Tutor scaffolding styles of dilemma solving in network-based role-play. Learning and Instruction, 15(6), 571–587. https://doi.org/10.1016/j.learninstruc.2005.08.002
Pinantoan, A. (2013). Instructional scaffolding: A definitive guide. Retrieved from https://www.opencolleges.edu.au/informed/teacher-resources/scaffolding-in-education-a-definitive-guide/
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals and understanding. Hillsdale, NJ: Erlbaum.
Schellens, T., Van Keer, H., De Wever, B., & Valcke, M. (2007). Scripting by assigning roles: Does it improve knowledge construction in asynchronous discussion groups? International Journal of Computer-Supported Collaborative Learning, 2(2-3), 225–246. https://doi.org/10.1007/s11412-007-9016-2.
Schellens, T., Van Keer, H., De Wever, B., & Valcke, M. (2009). Tagging thinking types in asynchronous discussion groups: Effects on critical thinking. Interactive Learning Environments, 17(1), 77–94. https://doi.org/10.1080/10494820701651757.
Sharan, S., Sharan, Y., & Tan, I. G.-C. (2013). The group investigation approach to cooperative learning. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. O’Donnell (Eds.), The international handbook of collaborative learning (pp. 351–369). New York: Routledge.
Sharan, Y., & Sharan, S. (1992). Expanding cooperative learning through group investigation. New York: Teachers College Press.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Slavin, R. E. (1995). Cooperative learning: Theory, research and practice (2nd ed.). Needham Heights, MA: Allyn & Bacon.
Slavin, R. E. (1996). Research on cooperative learning and achievement: What we know, what we need to know. Contemporary Educational Psychology, 21(1), 43–69. https://doi.org/10.1006/ceps.1996.0004.
Spada, H. (2010). Of scripts, roles, positions, and models. Computers in Human Behavior, 26(4), 547–550. https://doi.org/10.1016/j.chb.2009.08.011.
Stempfle, J., Hübner, O., & Badke-Schaub, P. (2001). A functional theory of task role distribution in work groups. Group Processes & Intergroup Relations, 4(2), 138–159. https://doi.org/10.1177/1368430201004002005.
Strijbos, J. W., & De Laat, M. F. (2010). Developing the role concept for computer-supported collaborative learning: An explorative synthesis. Computers in Human Behavior, 26(4), 495–505. https://doi.org/10.1016/j.chb.2009.08.014.
Strijbos, J. W., Martens, R. L., & Jochems, W. M. G. (2004a). Designing for interaction: Six steps to designing computer-supported group-based learning. Computers and Education, 42(4), 403–424. https://doi.org/10.1016/j.compedu.2003.10.004.
Strijbos, J. W., Martens, R. L., Jochems, W. M. G., & Broers, N. J. (2004b). The effect of functional roles on group efficiency: Using multilevel modeling and content analysis to investigate computer-supported collaboration in small groups. Small Group Research, 35(2), 195–229. https://doi.org/10.1177/1046496403260843.
Strijbos, J. W., Martens, R. L., Jochems, W. M. G., & Broers, N. J. (2007). The effect of functional roles on perceived group efficiency during computer-supported collaborative learning: A matter of triangulation. Computers in Human Behavior, 23(1), 353–380. https://doi.org/10.1016/j.chb.2004.10.016.
Strijbos, J. W., & Weinberger, A. (2010). Emerging and scripted roles in computer-supported collaborative learning. Computers in Human Behavior, 26(4), 491–494. https://doi.org/10.1016/j.chb.2009.08.006.
Vogel, F., Weinberger, A., & Fischer, F. (this volume). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Webb, N. (1989). Peer interaction and learning in small groups. International Journal of Educational Research, 13(1), 21–39. https://doi.org/10.1016/0883-0355(89)90014-1.
Webb, N. M. (2013). Information processing approaches to collaborative learning. In C. E. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. O’Donnell (Eds.), The international handbook of collaborative learning (pp. 19–40). New York: Routledge.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wise, A. F., Saghafian, M., & Padmanabhan, P. (2012). Towards more precise design guidance: Specifying and testing the functions of assigned student roles in online discussions. Educational Technology Research and Development, 60(1), 55–82. https://doi.org/10.1007/s11423-011-9212-7.
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. The Journal of Child Psychology and Psychiatry, 17(2), 89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x.
Zheng, L., Huang, R., & Yu, J. (2014). The impact of different roles on motivation, group cohesion, and learning performance in computer-supported collaborative learning (CSCL). In Proceedings of the IEEE 14th International Conference on Advanced Learning Technologies (pp. 294–296). New York: IEEE. https://doi.org/10.1109/ICALT.2014.91.
Further Readings

De Hei, M., Strijbos, J. W., Sjoer, E., & Admiraal, W. F. (2016). Thematic review of approaches to design group learning activities in higher education: The development of a comprehensive framework. Educational Research Review, 18, 33–45. https://doi.org/10.1016/j.edurev.2016.01.001.
This paper describes a thematic review of collaborative learning design that resulted in the Group Learning Activities Instructional Design (GLAID) framework, comprising eight components: (1) interaction, (2) learning objectives and outcomes, (3) assessment, (4) task characteristics, (5) structuring, (6) guidance, (7) group constellation, and (8) facilities.
De Wever, B., Schellens, T., Van Keer, H., & Valcke, M. (2008). Structuring asynchronous discussion groups by introducing roles: Do students act in line with assigned roles? Small Group Research, 39(6), 770–794. https://doi.org/10.1177/1046496408323227.
This study investigated to what extent first-year bachelor students enacted assigned roles (source searcher, theoretician, summarizer, moderator, and starter) in an online asynchronous discussion environment. Quantitative content analysis showed that all participants enacted the roles they were assigned and did not neglect other activities while discussing.
Hoadley, C. (2010). Roles, design, and the nature of CSCL. Computers in Human Behavior, 26(4), 551–555. https://doi.org/10.1016/j.chb.2009.08.012.
This commentary, part of the special issue on “Scripted and emergent roles” (Strijbos & Weinberger), reflects not only on the included studies but also on the concept of roles in general, as well as on their specific potential as a boundary object for CSCL research.
Strijbos, J. W., & De Laat, M. F. (2010). Developing the role concept for computer-supported collaborative learning: An explorative synthesis. Computers in Human Behavior, 26(4), 495–505. https://doi.org/10.1016/j.chb.2009.08.014.
This paper reports a framework to synthesize and conceptualize roles by discerning three dimensions: assigned versus emergent roles, product-oriented versus process-oriented roles, and the granularity of roles in terms of micro (role as task), meso (role as pattern), and macro (role as stance).
Wise, A. F., Saghafian, M., & Padmanabhan, P. (2012). Towards more precise design guidance: Specifying and testing the functions of assigned student roles in online discussions. Educational Technology Research and Development, 60(1), 55–82. https://doi.org/10.1007/s11423-011-9212-7.
This paper explored assigned student roles in online discussions and identified a set of seven common functions. Based on this literature review, a targeted set of role descriptions was created, together with a content analysis scheme to assess the fulfillment of the functions. Role assignment was then implemented and analyzed in an empirical study.
Part III
Technologies
Collaboration Scripts: Guiding, Internalizing, and Adapting

Freydis Vogel, Armin Weinberger, and Frank Fischer
Abstract Research and practical experience show that successful collaborative learning requires learners to be willing and able to engage in particular activities. Learners rarely reach this state when left to collaborate on their own. Thus, collaborative learning may need to be set up with particular instructions for learning together effectively. In this chapter, we introduce the Script Theory of Guidance (SToG) to explain how individual learners obtain, adapt, and use cognitive schemas (i.e., internal scripts) about collaborative learning scenarios. The theory further explains how external collaboration scripts can scaffold collaborative learning processes when learners do not spontaneously activate functional internal scripts for collaborative learning. We report on evidence that shows how scripts may help learners engage in transactive group processes that are conducive to joint knowledge construction. Moving beyond currently used scripts, future scripting may focus on the facilitation of interdisciplinary collaboration and the scaffolding of learners’ mutual regulation throughout collaborative learning processes.

Keywords Collaboration script · Socio-cognitive scaffolding · Transactivity · Interdisciplinary collaboration
F. Vogel (*) Learning Sciences Research Institute, University of Nottingham, Nottingham, UK e-mail: [email protected]
A. Weinberger Department of Educational Technology, Saarland University, Saarbrücken, Germany e-mail: [email protected]
F. Fischer Department of Psychology and Munich Center of the Learning Sciences, Ludwig-Maximilians-Universität München, Munich, Germany e-mail: [email protected]
© Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_18
1 Definitions and Scope

Learning together builds on our innate abilities to share ideas and collaborate with each other, not only to acquire conceptual knowledge but also to challenge and advance perspectives on complex matters. Collaborative learning among peers can also disrupt classroom practices in which learners are mostly driven into passiveness. There are well-established theories backing up the expectations about the effectiveness of collaborative learning, but they also specify its prerequisites and conditions. Some theories define social embeddedness as an indispensable requirement for learning in general (Wenger 1998). Other theories emphasize the role of others in facilitating learning processes, either by challenging a learner’s limited perspective with other viewpoints or by supporting the learner with more advanced knowledge to enable the learner’s development of knowledge and skills (Andriessen et al. 2003; Vygotsky 1980). Research has shown that collaborative learning works with incentive structures that reward the group as a whole for the achievements of individual members of the group (Johnson and Johnson 2009; Slavin 1995). More recently, specific discourse qualities have been found to foster cognitive elaboration and learning, which can be scaffolded in a targeted manner (Asterhan and Schwarz 2009; Weinberger and Fischer 2006). Although these theories build upon humans’ capacity for and orientation toward collaboration, they mostly agree that specific conditions of effective collaborative learning must be designed and determined to enable learners to benefit from it. The main reason for learners not engaging in productive interactions themselves may be that they interpret social situations differently than intended. While designers of collaborative learning scenarios may try to convey a notion of joint exploration of a topic, students may instead orient toward minimizing the workload by distributing it.
While teachers may think that students should focus on the topic and construct arguments to inquire about a matter, students may seek to win the debate or land the best joke instead. Hence, different interpretations of the collaborative learning scenario can be at cross-purposes with the intentions not only of teachers and students but also of the interacting peers themselves. The collaboration script approach addresses the question of how to understand and foster the specific beneficial conditions and interaction patterns of computer-supported collaborative learning (CSCL). The approach originally built on cognitive theory related to the understanding of text and discourse (revised dynamic memory model; Schank 1999) and instructional theory on learning strategies and cooperative learning (cooperation scripts; O’Donnell and Dansereau 1992). Confronted with the ideas from early research on CSCL in the 1990s (Pea 2004; Scardamalia and Bereiter 1994; see Hoadley 2018), the original concept has been advanced substantially. However, current approaches to collaboration scripts still preserve the core notion that effective external guidance for collaboration needs to be designed in a way that activates specific cognitive structures. Internal collaboration scripts are cognitive representations of collaboration that are already at the disposal of learners. External collaboration scripts are guidance for collaborative interaction, including prompts
and interaction rules that are represented within a CSCL environment. In the following, we will unfold the theoretical principles of internal and external scripts for successful collaborative learning. Then, we will review empirical evidence that helps to understand the mechanisms of learning with collaboration scripts. Finally, the future of designing and researching scripts for CSCL will be discussed.
2 History and Development: A Theoretical Perspective on Learning with Collaboration Scripts

The Script Theory of Guidance (SToG) provides an explanation of what and how people know about collaborating with others. Building on this explanation, the theory suggests ways to support collaboration. In what follows, we elaborate on these two aspects of the theory.
2.1 Internal Collaboration Scripts: What and How Do People Know About Collaboration?
SToG is a schema theory of how people learn about collaboration and how their knowledge is structured and represented in their memories. A central assumption is that individuals develop internal representations of collaborative situations, their goals, roles, and appropriate actions through engaging in these situations. These representations help them to understand and engage in similar situations in the future. Even more importantly, these representations are considered to be modular and thereby, in principle, highly flexible. Thus, parts of them can be reused to understand and engage in different and new collaborative situations that share certain characteristics, such as phases in which people introduce themselves, jointly brainstorm on possible causes of a problem, or collect ideas for the problem solution. How is this flexibility possible? SToG considers an internal collaboration script to be a configuration of knowledge components about a collaborative practice on different hierarchically organized levels. This configuration is constituted dynamically from existing knowledge components while attending to or engaging in an instance of collaboration (Fischer et al. 2013). The knowledge components are (see Fig. 1): (1) The play component contains knowledge about the “piece” collaborators are engaging in or performing. An example of a prototypical play in collaborative learning would be a dialectical argumentation with knowledge building as the main goal (see Chap. 10 about Argumentation and Knowledge Construction). The play component also entails the corresponding knowledge about the roles and the sequence of scenes that belong to the play. (2) The scene component entails knowledge about different phases or situations that constitute the play, including knowledge about the roles and activities that
Fig. 1 Schematic model of the hierarchical relationship between the different script components within an activated script. The learners run through the sequence of scriptlets, starting in the first scene. Then they move on to the next scene, moving through this scene’s sequence of scriptlets, and so on. The flexibility of internal scripts results, among other things, from the adaptation of script components by using components originally associated with other scripts, or from changes in the script used in a specific situation
can reasonably be enacted in a scene. A scene typically covers the activities and relationships of the different actors that are relevant. In the aforementioned play example about a dialectical argumentation, typical scenes would be argument construction, counterargument construction, or synthesis. (3) The role component contains knowledge on sets of activities that can be grouped and assigned to a specific actor within a scene. In our example, we would need at least two roles to
be carried out, one for the actor arguing for a specific position and one for the actor arguing against. (4) The scriptlet component includes knowledge on activities that can be part of collaborative practices. In contrast to the scene component, one scriptlet is typically associated with the activities that are carried out by only one specific role within a scene. The scriptlets for the role expected to argue for a specific position could contain, for instance, formulating a claim or finding data to support the claim. The idea is that our knowledge about collaboration is represented in configurations of these components and that these configurations help us to understand and engage in collaborative situations, including new CSCL environments. When people participate in CSCL practices, they are guided by dynamically configured internal collaboration scripts consisting of play, scene, scriptlet, and role components. Even if they participate in a completely unfamiliar practice, they will be able to understand and act by activating knowledge components that they have been using in other situations that share characteristics with the new CSCL practice. SToG proposes that the configuration of the script components is influenced by the characteristics of the situation as perceived by an individual as well as the goals this individual pursues. SToG further explains how internal collaboration scripts develop and change through participation in CSCL practices. Coming back to the unfamiliar CSCL practice, individuals “recruit” existing script components in a new configuration. With repeated participation in the initially unfamiliar practice, new higher-level components develop that organize the components on the lower levels (script induction). In that way, the configuration stabilizes further. Another way of learning is through modification of existing configurations. According to the theory, this is likely to happen if certain configurations fail to work in a CSCL practice.
This can (but does not need to) be an explicit, reflective process. In many situations, this modification will be a fast and implicit process of activating different components that may better help to understand the situation. For instance, a learner could enter a discourse in which the participants share the goal of knowledge building. Even though the learner might not share the same script with other participants in this social situation, a script for a similar situation could be activated and components of this script could be used. The learner, unfamiliar with the current social situation, could activate a script for a debate with the goal of convincing the others of the superiority of one’s own position. By using script components that are common to both situations, such as formulating an argument or scrutinizing others’ arguments, a new, more appropriate script can be configured. This would leave out components that are not common to both social situations and add components that are specific to the new situation, such as synthesizing each other’s arguments and finding a joint solution. The successfully modified configurations will be more likely activated in similar future situations (script reconfiguration). SToG also prioritizes certain forms of interaction over others with respect to learning new domain knowledge through collaboration. More specifically, it predicts that the transactive application of knowledge in collaboration will lead to better domain learning of the participating individuals. Transactivity in collaborative
learning processes means that learners not only add independent individual contributions, but build on each other’s contributions. In doing so, they construct knowledge that has not been available to any of the participants at the beginning of the collaborative learning process (Chi and Wylie 2014; Teasley 1997). Instantiations of transactivity include scrutinizing each other’s arguments, finding joint solutions for opposing positions, or extending others’ explanations (Teasley 1997). In contrast, explaining matters of fact to each other or collecting individual ideas without connecting them cannot be regarded as transactive. There are ways to measure learners’ internal scripts and their reconfiguration, as well as tests showing the impact of transactive script components. Building on these, Fischer et al. (2013) report evidence for the claims of SToG. Still, studies about internal collaboration scripts remain rare (e.g., Kollar et al. 2007). The majority of script studies are concerned with external collaboration scripts. These external scripts function as scaffolding for learners who do not autonomously activate appropriate internal scripts in collaborative learning settings.
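Although SToG is a cognitive theory rather than a software specification, the hierarchical relationship between its components (play, scene, role, scriptlet) can be illustrated with a short data-model sketch. The class names, fields, and the concrete play content below are our illustrative choices under the chapter's dialectical-argumentation example, not part of the theory's formal apparatus:

```python
from dataclasses import dataclass

# Illustrative data model of SToG's hierarchy of internal script
# components (play > scene > role > scriptlet). Names are our own;
# SToG itself does not prescribe any data format.

@dataclass
class Scriptlet:
    activity: str   # e.g., "formulate a claim"
    role: str       # the role that carries out this activity

@dataclass
class Scene:
    name: str
    scriptlets: list  # ordered scriptlets enacted within the scene

@dataclass
class Play:
    name: str
    roles: list
    scenes: list      # ordered sequence of scenes

# The worked example from the text: dialectical argumentation for
# knowledge building, with roles arguing for and against a position.
play = Play(
    name="dialectical argumentation for knowledge building",
    roles=["proponent", "opponent"],
    scenes=[
        Scene("argument construction",
              [Scriptlet("formulate a claim", "proponent"),
               Scriptlet("find data to support the claim", "proponent")]),
        Scene("counterargument construction",
              [Scriptlet("scrutinize the argument", "opponent"),
               Scriptlet("formulate a counterargument", "opponent")]),
        Scene("synthesis",
              [Scriptlet("integrate argument and counterargument", "proponent"),
               Scriptlet("agree on a joint conclusion", "opponent")]),
    ],
)

def traverse(play):
    """Run through scenes and their scriptlets in order (cf. Fig. 1)."""
    return [(scene.name, s.activity, s.role)
            for scene in play.scenes for s in scene.scriptlets]

for scene, activity, role in traverse(play):
    print(f"{scene}: {role} -> {activity}")
```

The flexibility SToG emphasizes corresponds, in this toy model, to recombining existing `Scene` and `Scriptlet` objects into a new `Play` configuration when a familiar configuration fails.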
2.2 External Collaboration Scripts: How Can Configurations of Internal Script Components Be Shaped?
External collaboration scripts are sets of scaffolds for collaborative learning. Given that learners are guided in CSCL practices by the flexible configuration of internal script components, external collaboration scripts mainly serve to stimulate and shape this process. External scripts are thus aimed at activating particular internal script components and, sometimes, they also reduce the likelihood of other components becoming part of the internal script configuration that is activated in a specific social learning situation. According to the type of script component the scaffolds are targeted at, SToG distinguishes play scaffolds, scene scaffolds, role scaffolds, and scriptlet scaffolds. An example of a play scaffold for the play “dialectical argumentation for knowledge building” could be a prompt directed to a group of learners reminding them “the goal of your collaboration is to jointly identify the best solution and not just to convince the other of your initial position.” Within the same script, scaffolding for the different scenes could be designed in a way that the learners first have to accomplish all scriptlets from one scene to be able to see what they are prompted to do in the subsequent scene. The scene scaffold could ask them to enter all information about their arguments in text boxes on the computer and then navigate them to the next scene in which they are asked to enter all information about their counterarguments. The corresponding prompt could be: “Please start your collaboration with each of you stating their position, then click the green button to move on to the argument scrutinizing phase.” In CSCL environments, external scripts are often embedded in the communication interface to facilitate a certain sequence of scenes, to induce engagement in specified scriptlets or activities, or to assign distinctive roles to learners in one group.
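The gating logic described above, where learners must accomplish all scriptlets of one scene before the environment reveals the next scene's prompt, can be sketched as a minimal scene sequencer. The prompts and scriptlet labels below paraphrase the chapter's example and are purely illustrative, not taken from any real CSCL system:

```python
# Minimal sketch of scene-level scaffolding in a CSCL interface:
# the next scene's prompt becomes visible only after all required
# scriptlets of the current scene have been completed.

class SceneSequencer:
    def __init__(self, scenes):
        # scenes: list of (prompt, required_scriptlets) pairs
        self.scenes = scenes
        self.index = 0
        self.done = set()

    def current_prompt(self):
        prompt, _ = self.scenes[self.index]
        return prompt

    def complete(self, scriptlet):
        """Record a finished scriptlet; advance when the scene is done."""
        _, required = self.scenes[self.index]
        if scriptlet in required:
            self.done.add(scriptlet)
        if self.done >= set(required) and self.index < len(self.scenes) - 1:
            self.index += 1
            self.done = set()

seq = SceneSequencer([
    ("Each of you states their position and enters the arguments "
     "for it in the text boxes.", {"state position", "enter arguments"}),
    ("Now scrutinize each other's arguments and enter your "
     "counterarguments.", {"enter counterarguments"}),
    ("Synthesize the arguments and agree on a joint solution.",
     {"enter synthesis"}),
])

seq.complete("state position")
seq.complete("enter arguments")   # first scene finished -> advance
print(seq.current_prompt())       # prompt for the counterargument scene
```

In an actual environment, `complete` would be triggered by interface events (e.g., submitting a text box), and the "green button" from the example would simply call it.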
Scaffolds for scriptlets and roles, like writing a well-reasoned argument, have also been successfully implemented through prestructuring the text body of messages in online discussions (e.g., Rummel et al. 2009; Stegmann et al. 2012). SToG goes further in specifying which of the script scaffolds will be more effective. First, scaffolds are considered to be more effective for learning when they enable individuals to engage in a CSCL practice somewhat beyond what they would be able to do without the scaffolding (script guidance principle). With this assumption, based on the Zone of Proximal Development (Vygotsky 1980), SToG is clearly positioned as a scaffolding theory. However, SToG additionally specifies this general assumption by referring to the internal script components. External collaboration scripts are most effective for learning if they target the highest hierarchical level possible at present. But what is the highest level that is possible at a given point in time? It is the component that is positioned highest in the hierarchy for which subordinate components are already at the disposal of the collaborating individual (optimal scripting level principle). For instance, if learners already possess scriptlet-level knowledge on how to compose and structure a proper argument, it might be sufficient to provide prompts on the scene level, guiding them through the sequence of argument, critique, and synthesis. A script on the play level might not include enough information, since learners would not know at which point in time and in which sequence to process their argument, critique, and synthesis. A script including prompts on the scriptlet level, in contrast, might be an obstacle: learners already know what to do in each scene, so naturally emerging, appropriate behavior might be overwritten or suppressed by the scriptlet-level prompts.
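One possible reading of the optimal scripting level principle can be condensed into a small selection rule: scaffold at the highest level whose subordinate components the learner already possesses. The function below is our simplification, assuming a binary "mastered / not mastered" model per level and omitting the role component, which the chapter's examples also leave aside:

```python
# Illustrative reading of SToG's "optimal scripting level" principle:
# scaffold at the highest level whose subordinate levels are already
# mastered. The binary mastery model is a deliberate simplification.

LEVELS = ["scriptlet", "scene", "play"]  # low -> high

def optimal_scaffold_level(mastered):
    """Return the highest level whose subordinate levels are mastered."""
    level = "scriptlet"  # default: learner lacks even scriptlet knowledge
    for i, name in enumerate(LEVELS[:-1]):
        if name in mastered:
            level = LEVELS[i + 1]
        else:
            break
    return level

# A learner who can compose arguments (scriptlet level) but does not
# know when to sequence them receives scene-level prompts:
print(optimal_scaffold_level({"scriptlet"}))           # -> "scene"
print(optimal_scaffold_level(set()))                   # -> "scriptlet"
print(optimal_scaffold_level({"scriptlet", "scene"}))  # -> "play"
```

In practice, estimating what a learner has "mastered" is exactly the open measurement problem for internal scripts noted in the chapter; the rule only formalizes the selection step once such an estimate exists.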
After being criticized for not addressing the very process of internalization (i.e., appropriation between external and internal scripts; Tchounikine 2016), SToG has advanced with regard to learners’ interpretation and appropriation of external script scaffolds (Stegmann et al. 2016). Depending on the degrees to which scaffolds inform, instruct, and/or constrain learners to engage in specific interaction patterns, learners act based on their personal interpretations of any given scaffold in CSCL. Accordingly, effects of collaboration scripts may build on different mechanisms, from direct regulation of activities to knowledge construction (Weinberger 2011).
3 State of the Art: Empirical Evidence for Learning with CSCL Scripts

CSCL research explores instructional technology designed to support collaborative learning processes. Research on learning with scripts represents a large share of these efforts. There is a great number of experimental studies comparing collaborative learning with and without the support of an external script across various disciplines, age groups, and locations. The meta-analysis by Vogel et al. (2017) synthesized 34 experimental comparisons about the effectiveness of computer-
supported collaborative learning with external scripts on learners’ acquisition of domain-specific knowledge (e.g., in physics, medicine, social sciences) and collaboration skills (e.g., argumentation and reasoning skills, storytelling, communication skills). The meta-analysis shows a small positive effect on domain-specific knowledge (d = 0.20) and a large effect on collaboration skills (d = 0.95). This general outcome supports the assumption that external scripts can help learners benefit more from a collaborative learning scenario compared to unguided collaborative learning. However, the rather small effect on domain-specific knowledge and the great variability between the effect sizes of the included comparisons open up more questions: Why do effect sizes differ so much between the different learning outcomes? How can the potential of scripts for supporting collaboration skills transfer to domain-specific knowledge? What are the specific features of scripts that lead to better learning outcomes? The difference between the effect sizes on the two learning outcomes might result from the distance between the activities prompted by the script and the respective measures. Collaboration skills, as operationalized in the script studies, are mostly very close to the practices the script guides the learners to engage in (e.g., Rummel et al. 2009). In studies that guide learners through the process of argumentation and reasoning, for instance, the collaboration skills are represented by the formal quality of the learners’ argumentation. The activation and repeated practice of argumentation and reasoning might have led to the acquisition of argumentation as a collaboration skill (Stegmann et al. 2012). The acquisition of domain-specific knowledge is a result of being engaged in the learning activities demanded by the script. Yet, the domain knowledge to be learned is typically not identical with these activities.
For instance, engaging in a discourse about a scientific theory will lead to learning this theory, yet the argumentative features of the discourse are not equivalent to the theory. Here, the learning mechanism is rather based on the cognitive elaboration of the theory and resolving socio-cognitive conflicts with other learners, while the script scaffolds the discourse only in a generic way (e.g., Noroozi et al. 2012). However, it remains an open question if and how the impact of learning with external scripts on domain-specific knowledge could be increased up to an effect size similar to the one on collaboration skills. A way to find an answer to this question is to reveal which specific features and mechanisms of external scripts are responsible for their effectiveness and might possibly be tweaked to raise the impact of scripts. Guided mainly by the principles of SToG, the meta-analysis by Vogel et al. (2017) took a closer look at three mechanisms of external scripts and their impact on the effectiveness of learning with scripts. First, the meta-analysis revealed that scripts prompting transactivity have a significant positive overall effect on domain-specific knowledge, while the overall effect of scripts not prompting transactivity is not significant. These results support the claim that transactivity is a major characteristic of functional collaboration scripts, as postulated in SToG. The main reason for the expected importance of promoting transactivity in collaborative learning is that transactivity comprises activities within
the collaborative learning process in which learners elaborate on each other’s contributions (e.g., critique a peer’s argument, clarify the meaning of a peer’s statements). Thus, all learners within a group are cognitively engaged and benefit from diverging peer knowledge, the different perspectives emerging within a group, and the subsequent socio-cognitive conflicts to be resolved (Chi and Wylie 2014; Fischer et al. 2013; Teasley 1997). For instance, Noroozi et al. (2013) studied the effectiveness of learning with a transactive discussion script on multidisciplinary domain learning in higher education. The script asked learners to engage in transactive activities, such as critiquing each other’s arguments, formulating counterarguments, or finding joint syntheses of arguments while solving interdisciplinary problems about water management. The experimental study showed that, indeed, learners supported by the transactive discussion script outperformed learners collaborating without the script regarding their individual domain learning. The cognitive processes activated by scripts prompting transactivity affected learning domain-specific knowledge rather than learning collaboration skills. Second, the meta-analysis explored the differences in the effectiveness of scripts depending on the script level that is being addressed. As outlined before, SToG assumes that an external script will be most effective when its scaffolds are focused on the highest hierarchical level possible, given the learners’ background and the specificity of the collaborative scenario (optimal scripting level principle). An important issue related to the script level is the repeatedly raised critique of “over-scripting” learners (Dillenbourg 2002).
This critique states that supporting learners with scripts that rigidly prescribe activities to be executed during collaborative learning would lead to a decrease in motivation and would impede learners’ spontaneous use of beneficial collaborative activities. Hence, learning with collaboration scripts would lead to lower learning gains than less rigidly structured collaboration (Wise and Schwarz 2017). Following this argumentation, scripts on a less specific, higher hierarchical level (play-level scripts) should generally be more beneficial than scripts on a more specific, lower hierarchical level (scriptlet level). The data of the meta-analysis do not support this critique. For learning both domain-specific knowledge and collaboration skills, scripts on the least rigid play level were the least effective. For domain-specific knowledge, in contrast, the more rigid scene-level scripts were the most effective, and for collaboration skills, the most rigid scriptlet-level scripts were the most effective ones. Moreover, a more recent meta-analysis analyzed the effect of learning with CSCL scripts on motivation. Six studies reporting motivation measures were synthesized. The overall effect size of learning with collaboration scripts on motivation in these studies was close to zero (g = 0.07) and not significant (Radkowitsch et al. 2020b). This result does not support the claim that learning with CSCL scripts has a negative impact on motivation. Yet, there is still a need for research connecting measures of internal scripts with analyses of the effectiveness of external scripts on different script levels. In particular, the design of scripts for learners with already partially developed internal scripts, who might benefit more from play-level scripts, should be considered.
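Effect sizes such as d = 0.20, d = 0.95, or g = 0.07 are standardized mean differences between a scripted and an unscripted condition. As a reminder of what these numbers express, the basic two-group Cohen's d can be computed as below; the scores are invented for illustration, and actual meta-analyses such as Vogel et al. (2017) additionally weight and correct effect sizes across studies (e.g., Hedges' g):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference with pooled standard deviation.

    Basic two-group Cohen's d. Meta-analyses aggregate such effect
    sizes across studies with further weighting and small-sample
    corrections (Hedges' g); this sketch shows only the core formula.
    """
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-test scores for a scripted and an unscripted group.
scripted = [14, 16, 15, 17, 18, 15]
unscripted = [13, 15, 14, 14, 16, 14]
print(round(cohens_d(scripted, unscripted), 2))
```

By convention, d around 0.2 is read as a small effect and d around 0.8 or larger as a large one, which is why the chapter describes the effects on domain-specific knowledge and collaboration skills as small and large, respectively.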
F. Vogel et al.
Third, the meta-analysis investigated the relevance of the availability of additional scaffolds structuring the domain-specific content to be learned. Collaboration scripts focus on the social interactions during collaborative learning, but they do not contain any information about the content to be learned. For instance, a collaboration script might support students in discussing biology content, but the content itself must be gathered from additional resources such as a textbook or webpages (e.g., Laru et al. 2012). Access to the content can be more or less scaffolded, and the combination of the external script scaffold and the domain-specific content scaffold should ideally produce synergies between both (Tabak 2004). Results from the meta-analysis showed that scripts had a significant positive effect, even higher than the general effect of scripts, when additional scaffolds for the domain-specific content were provided in the learning environment (for both the scripted and the control group). In a study by Gijlers and de Jong (2009), for instance, students engaged in inquiry learning with simulations about physical phenomena (e.g., how friction and acceleration relate to the velocity of a motorbike). The domain-specific content scaffolds in this study supported students in formulating hypotheses that could be investigated by running the simulation. The results argue clearly for designing script scaffolds along with scaffolds for the domain-specific content to be learned in order to increase the effectiveness of learning with collaboration scripts. Computers seem particularly well suited to integrating both script and content scaffolds into one learning environment (Gijlers and de Jong 2009). In summary, the meta-analysis revealed that the effects of learning with CSCL scripts were greater for learning to collaborate than for collaborating to learn.
Regarding the transactivity principle, the meta-analysis revealed that building on each other's ideas seems important for learning about domain concepts and procedures but less important for learning to engage in collaboration. The same meta-analysis also found differing "optimal" scripting levels, with the scene level being optimal for learning domain-specific knowledge, whereas for learning to collaborate, the scriptlet level was the most effective. This pattern of findings may inspire further research and a revision of the SToG to specify the mechanisms of collaborative domain learning.
4 The Future: Research on Collaboration Scripts

The assumption is that learners develop their interaction skills and their interpretation of script scaffolds over extended trajectories of learning together. Aiming at an optimal scripting level, scaffolds need to be adapted to learners' needs as these change over time. Indeed, it is part of the very definition of scaffolding that the help ultimately becomes redundant and is faded out or replaced by more challenging types of scaffolding. This learning dynamic demands that future external collaboration scripts become flexible. Several questions emerge from this: How can we design CSCL scripts more flexibly, so that they serve the diverse needs of learners with various prerequisites? How can CSCL scripts assign the teachers and learners of
Collaboration Scripts: Guiding, Internalizing, and Adapting
a collaborative learning scenario more responsibility and agency for the script to be used? Which generic skills for collaborative learning can be successfully scaffolded by CSCL scripts?
4.1 Flexibility of CSCL Scripts
The most prevalent approaches to making CSCL scripts flexible are fading and adaptivity. Fading means that a CSCL script would first offer scaffolding on the most specific scriptlet level, then gradually reduce the scaffolding to the scene level, and then to the play level. After finally developing internal script components on the highest hierarchical level, learners no longer need the CSCL script. The fading approach accommodates the assumption that learners gradually develop components of a script while being scaffolded by a CSCL script and thus need less scaffolding after they have developed internal script components. However, this approach is rather static, as the fading is predetermined in the same way for all learners, neglecting their individual learning pace and their actual development of script components. Thus, it is not surprising that studies on fading CSCL scripts reveal little effectiveness of fading itself but report other factors, such as peer monitoring, to be relevant during faded scaffolding with CSCL scripts (Bouyias and Demetriadis 2012; Wecker and Fischer 2011). To achieve a better fit, the fading of the CSCL script may need to respond to the current state and needs of each learner to match the optimal scripting level. The approach of adaptive CSCL scripts seems better suited here. Adaptive CSCL scripts reduce or increase the amount of scaffolding and the scripting level based on an assessment of the learners' current skills. The assessed skills should concern the script components the learners develop internally or, more broadly, the collaboration skills scaffolded by the script. These skills become most apparent in the social exchange between learners; thus, a thorough analysis of this exchange, such as the discourse between two learners, is needed. Moreover, to be adaptive, the CSCL script must react to learners' needs just in time, so the social exchange must be analyzed instantly.
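The static character of predetermined fading can be illustrated with a minimal sketch (a hypothetical illustration, not taken from the cited studies; the script levels follow the chapter's hierarchy, but the session thresholds are invented):

```python
# Hypothetical fading schedule: the same predetermined level changes
# apply to every learner, regardless of individual progress.
FADING_SCHEDULE = [
    (0, "scriptlet"),  # sessions 0-2: most specific scaffolding
    (3, "scene"),      # sessions 3-5: intermediate scaffolding
    (6, "play"),       # sessions 6-8: least specific scaffolding
    (9, None),         # from session 9 on: scaffolding faded out
]

def script_level(session: int):
    """Return the script level offered in a given session."""
    level = None
    for start, lvl in FADING_SCHEDULE:
        if session >= start:
            level = lvl
    return level
```

The sketch makes the limitation visible: the schedule ignores each learner's actual development of internal script components, which is exactly why adaptive approaches respond to assessed skills instead.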
Computer technology for automatic discourse analysis seems best suited to serve these demands. Indeed, studies show that adaptive scripting can be successfully implemented to effectively support collaborative learning (Rau et al. 2017). Some promising ways to guide collaborative learners have been achieved with conversational agents and are to be explored further (Tegos et al. 2016). Here, learners' talk is automatically analyzed, typically building on a domain model consisting of relevant terms and their relations, as well as a list of prompts the agent puts forward in response to the learners' use of relevant terms. Whenever collaborative learners mention a relevant term covered by the domain model, the conversational agent is triggered to prompt the learners accordingly; e.g., learners talking about "photosynthesis" may activate the conversational agent to prompt them to elaborate further on this central concept, for example: "Can you specify what is needed for photosynthesis to happen and what it produces?" Yet, the drawback of adaptive scripting technology is that each instance requires a
highly sophisticated automatic assessment tool, which prevents teachers from developing and using a script spontaneously in the classroom. The future of adaptive CSCL scripts will depend strongly on how well the designers of adaptive CSCL scripts adopt what is known about the mechanisms of collaborative learning with scripts (Rummel et al. 2016).
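The term-triggered prompting described above can be sketched minimally as follows (a hypothetical illustration, not the implementation of any cited system; the domain model and prompts are invented, and real agents use far richer discourse analysis):

```python
# Hypothetical domain model: relevant terms mapped to follow-up prompts.
DOMAIN_MODEL = {
    "photosynthesis": "Can you specify what is needed for photosynthesis "
                      "to happen and what it produces?",
    "chlorophyll": "How does chlorophyll relate to the process you "
                   "just described?",
}

def agent_prompt(utterance: str):
    """Return a prompt if the utterance mentions a relevant term, else None."""
    text = utterance.lower()
    for term, prompt in DOMAIN_MODEL.items():
        if term in text:
            return prompt
    return None  # the agent stays silent
```

Even this toy version shows why each instance is costly: a usable agent needs a carefully built domain model and prompt set for every new topic.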
4.2 Learners' and Teachers' Agency When Learning with CSCL Scripts
Adaptive CSCL scripts in particular are suspected of taking over control of a collaborative scenario to a large degree. The collaborative learning environment would then be "running on autopilot," leaving little room for the autonomy and self-determination of learners and teachers (Ryan and Deci 2000). Given the high investment needed to develop (auto-)adaptive CSCL scripts and the possible decrease in learners' and teachers' motivation, the approach of adaptable scripts might be better suited. With adaptable scripts, learners or teachers are placed in the cockpit and adjust the script according to their needs themselves. This approach might best serve both the demand for flexible CSCL scripts and the learners' and teachers' need for autonomy. Adaptable CSCL scripts are flexible regarding the amount of scaffolding and the scripting level, as is the case for adaptive scripts. In contrast to adaptive scripts, however, the changes in adaptable CSCL scripts are controlled by the learners, the teachers, or both (Hanisch and Straßer 2003). Adaptability could be realized through various design aspects of a CSCL script. The expertise of teachers could be included in the design of an adaptable CSCL script used in the classroom: script flexibility could be realized by inviting teachers to adapt the script components at their own discretion, based on the perceived needs of their students. This not only offers the chance that teachers' motivation will increase, but also provides the opportunity for teachers and researchers to develop new instantiations of CSCL scripts in a design-based research approach (Bielaczyc 2013). Learners, too, can take control of how much scaffolding the script should offer them. To do so, the learners' knowledge and skills would be regularly (self-)assessed. Based on this assessment, the optimal level of CSCL script support would be suggested, and the learner could adjust the level of script support accordingly (e.g., Wang et al.
2017). Adaptable CSCL scripts seem to be an effective and efficient approach to designing flexible scripts that better fit learners' changing needs and increase autonomy and self-determination. However, it is questionable to what extent learners can realistically assess their learning progress in order to regulate their own and their learning partners' learning. Another line of research seeks to combine and balance the guidance that scripts provide to learners with the feedback that awareness tools (for more insights about group awareness, see Buder et al. this volume) provide about group processes and states (Miller and Hadwin 2015). The idea is to make available information about the learners' respective group processes and individual
prerequisites. CSCL scripts are then designed to simultaneously scaffold groups to process and use the additional group information productively, which learners may not be capable of on their own (Tsovaltzi et al. 2015, 2014).
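The division of labor in adaptable scripts, in which a (self-)assessment suggests a scripting level but the learner or teacher keeps the final choice, can be sketched as follows (a hypothetical illustration; the levels follow the chapter's hierarchy, but the score thresholds are invented and not taken from Wang et al. 2017):

```python
# Script levels from most to least specific; "none" means no scaffolding.
LEVELS = ["scriptlet", "scene", "play", "none"]

def suggest_level(skill_score: float) -> str:
    """Map an assessed collaboration-skill score (0..1) to a suggested level."""
    if skill_score < 0.25:
        return "scriptlet"
    if skill_score < 0.5:
        return "scene"
    if skill_score < 0.75:
        return "play"
    return "none"

def choose_level(skill_score: float, learner_choice=None) -> str:
    """Adaptable (not adaptive): a valid learner choice overrides the suggestion."""
    if learner_choice in LEVELS:
        return learner_choice  # the learner retains agency
    return suggest_level(skill_score)
```

The override in `choose_level` marks the difference from adaptive scripts: the system only suggests, while control over the change remains with the learner or teacher.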
4.3 Generic Skills to be Scaffolded by CSCL Scripts
A great number of studies on the effectiveness of CSCL scripts have used argumentation as the generic collaboration skill scaffolded by the script (Noroozi et al. 2012). In these studies, scripts mostly guide learners through the sequence of a discourse, such as the sequence of argument, counterargument, and synthesis. Moreover, they provide more specific scaffolding on how to construct an argument and how to critically assess arguments (e.g., Stegmann et al. 2012). Other CSCL scripts that scaffold argumentation as a generic collaboration skill use learners' diverse prior knowledge and opinions to match learners with the most divergent viewpoints, initializing the need for a critical discourse. These scripts then scaffold learners' discourse moves by, for instance, offering sentence openers (e.g., Clark et al. 2009). CSCL scripts using argumentation usually pursue both goals: improving learners' argumentation skills and using argumentation as a means to learn domain-specific content by overcoming socio-cognitive conflicts that may arise in a group of learners (Andriessen et al. 2003).
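Sentence openers of the kind such scripts offer can be sketched as a simple mapping from discourse moves to prompts (a hypothetical illustration; the moves and openers are invented examples, not taken from Clark et al. 2009):

```python
# Hypothetical sentence openers for each scripted discourse move.
SENTENCE_OPENERS = {
    "argument": "I claim that ... because ...",
    "counterargument": "I disagree, because ...",
    "synthesis": "Taking both views together, we could say that ...",
}

# The scripted discourse sequence cycles across learners' turns.
DISCOURSE_SEQUENCE = ["argument", "counterargument", "synthesis"]

def opener_for_turn(turn: int) -> str:
    """Return the sentence opener offered for a given turn."""
    move = DISCOURSE_SEQUENCE[turn % len(DISCOURSE_SEQUENCE)]
    return SENTENCE_OPENERS[move]
```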
4.3.1 Collaboration Scripts Scaffolding Transactivity and Learning Regulation
New trends show that CSCL scripts may also enhance the effectiveness of collaborative learning by scaffolding generic collaboration skills, for instance, transactivity (Noroozi et al. 2013; Vogel et al. 2016). Another future direction is concerned with scaffolding regulation in collaborative learning (Järvelä and Hadwin 2015). This approach could provide the support for learning regulation that learners need in order to benefit from adaptable CSCL scripts. One open issue here is whether learners have sufficient self-regulation skills to assess their learning and whether they can adapt the CSCL scripts accordingly. CSCL script research may need to address how to scaffold not only the regulation of one's own learning (self-regulation) but also the regulation of the learning of the collaboration partner (co-regulation) and the joint regulation of each other's learning (shared regulation) (Miller and Hadwin 2015). Studies about scaffolding learning regulation with CSCL scripts are an emerging field, and first studies raise expectations for its effectiveness for learning (Splichal et al. 2018). Adaptable scripts for generic skills raise not only issues of how knowledge is constructed in interaction with dynamic support but also computational questions of developing such advanced CSCL scripts. There is a substantial body of research in computer science on collaboration scripts within the respective technology-enhanced learning environments, which we cannot address here in detail (e.g.,
Villasclaras-Fernández et al. 2013). It might be a question for future interdisciplinary studies to explore how CSCL scripts can be designed to best enhance regulation in collaborative learning and how they might be combined with adaptable CSCL scripts to help learners assess their learning and adjust the CSCL script offered to them.
4.3.2 Collaboration Scripts as a Framework to Analyze and Facilitate Interdisciplinary Collaboration
From their early beginnings, CSCL scripts have been used to design support for the online collaboration of persons with highly diverse backgrounds, such as psychologists and physicians jointly discussing the medical and psychological conditions of a patient and devising a treatment plan (Rummel et al. 2009). We believe that there is much more potential in collaboration script approaches for this increasingly important field of computer-supported collaborative learning. For example, the SToG can be used to analyze barriers to effective collaboration between different professions or disciplines (Radkowitsch et al. 2020a). More generally, collaboration between persons with different expertise can be studied by conceptualizing such barriers as mismatches between the internal collaboration scripts enacted in an interdisciplinary encounter (Noroozi et al. 2013). It seems promising that CSCL scripts could help interdisciplinary collaborators to activate sets of more compatible internal script components.
5 Conclusion

CSCL scripts are learning scaffolds for groups, implemented in computer-supported learning environments in various contexts. Existing research evidence suggests that learners supported with CSCL scripts achieve higher learning outcomes than learners in unguided collaborative learning. CSCL scripts support collaboration skills, such as argumentation, by helping learners to develop internal schemata for successful engagement in social learning scenarios. They further support domain learning by prompting engagement in activities beneficial for learning, such as social-discursive exchange with learning partners. There is a cognitively oriented instructional theory, the SToG (Fischer et al. 2013), that explains most of the existing evidence (and there is currently no alternative theory). According to this theory, external collaboration scripts work when they activate appropriate internal script components. The theory also specifies scaffolding levels and predicts under which conditions a certain level of external scaffolding is likely to be effective. Empirical evidence emphasizes the effectiveness of CSCL scripts that induce so-called transactive activities, in which learners build upon each other's contributions (Vogel et al. 2017). Research on CSCL scripts does not support the criticism claiming that highly structured CSCL scripts would decrease motivation, impede natural learning processes, and hence result in
worse learning than less structured or unguided collaborative learning (Wise and Schwarz 2017). Given the very high flexibility of internal collaboration scripts, there is currently a mismatch with the (much lower) flexibility of external scripting. To overcome this mismatch, the recent line of research connecting collaboration scripts and conversational agents is very promising: conversational agents could automatically prompt learners based on the content and dynamics of their online talk (Tegos et al. 2016). Flexibility could further be realized by offering teachers and learners agency in adjusting CSCL scripts, or by breaking new ground regarding the activities that scripts should scaffold. With the massive spread of social media, there are also several new phenomena of online collaboration to which collaboration scripts may be effectively applied. Among the most interesting of these phenomena are situations in which people collaborate with partners who have a very different knowledge background, such as the various forms of interdisciplinary and interprofessional collaboration. Learning how to effectively engage in problem-solving and decision-making with collaborators with different expertise is an important educational goal, and collaboration scripts offer an approach for designing CSCL environments to reach this goal.
References

Andriessen, J., Baker, M., & Suthers, D. (2003). Argumentation, computer support, and the educational context of confronting cognitions. In J. Andriessen, M. Baker, & D. Suthers (Eds.), Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments (pp. 1–25). Dordrecht: Kluwer.
Asterhan, C. S. C., & Schwarz, B. B. (2009). Argumentation and explanation in conceptual change: Indications from protocol analyses of peer-to-peer dialog. Cognitive Science, 33(3), 374–400. https://doi.org/10.1111/j.1551-6709.2009.01017.x.
Bielaczyc, K. (2013). Informing design research: Learning from teachers' design of social infrastructure. The Journal of the Learning Sciences, 22(2), 258–311. https://doi.org/10.1080/10508406.2012.691925.
Bouyias, Y., & Demetriadis, S. (2012). Peer-monitoring vs. micro-script fading for enhancing knowledge acquisition when learning in computer-supported argumentation environments. Computers & Education, 59(2), 236–249. https://doi.org/10.1016/j.compedu.2012.01.001.
Buder, J., Bodemer, D., & Ogata, H. (this volume). Group awareness. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823.
Clark, D. B., D'Angelo, C. M., & Menekse, M. (2009). Initial structuring of online discussions to improve learning and argumentation: Incorporating students' own explanations as seed comments versus an augmented-preset approach to seeding discussions. Journal of Science Education and Technology, 18, 321–333. https://doi.org/10.1007/s10956-009-9159-1.
Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL. Can we support CSCL (pp. 61–91). Heerlen: Open Universiteit Nederland.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005.
Gijlers, H., & de Jong, T. (2009). Sharing and confronting propositions in collaborative inquiry learning. Cognition and Instruction, 27(3), 239–268. https://doi.org/10.1080/07370000903014352.
Hanisch, F., & Straßer, W. (2003). Adaptability and interoperability in the field of highly interactive web-based courseware. Computers & Graphics, 27(4), 647–655. https://doi.org/10.1016/S0097-8493(03)00107-9.
Hoadley, C. (2018). A short history of the learning sciences. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 11–23). Taylor and Francis. https://doi.org/10.4324/9781315617572.
Järvelä, S., & Hadwin, A. (2015). Promoting and researching adaptive regulation: New frontiers for CSCL research. Computers in Human Behavior, 52, 559–561. https://doi.org/10.1016/j.chb.2015.05.006.
Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38(5), 365–379. https://doi.org/10.3102/0013189X09339057.
Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708–721. https://doi.org/10.1016/j.learninstruc.2007.09.021.
Laru, J., Järvelä, S., & Clariana, R. B. (2012). Supporting collaborative inquiry during a biology field trip with mobile peer-to-peer tools for learning: A case study with K-12 learners. Interactive Learning Environments, 20(2), 103–117. https://doi.org/10.1080/10494821003771350.
Miller, M., & Hadwin, A. (2015). Scripting and awareness tools for regulating collaborative learning: Changing the landscape of support in CSCL. Computers in Human Behavior, 52, 573–588. https://doi.org/10.1016/j.chb.2015.01.050.
Noroozi, O., Teasley, S. D., Biemans, H. J. A., Weinberger, A., & Mulder, M. (2013). Facilitating learning in multidisciplinary groups with transactive CSCL scripts. International Journal of Computer-Supported Collaborative Learning, 8(2), 189–223. https://doi.org/10.1007/s11412-012-9162-z.
Noroozi, O., Weinberger, A., Biemans, H. J. A., Mulder, M., & Chizari, M. (2012). Argumentation-based computer supported collaborative learning (ABCSCL). A systematic review and synthesis of fifteen years of research. Educational Research Review, 7(2), 79–106. https://doi.org/10.1016/j.edurev.2011.11.006.
O'Donnell, A. M., & Dansereau, D. F. (1992). Scripted cooperation in student dyads: A method for analyzing and enhancing academic learning and performance. In R. Hertz-Lazarovits & N. Miller (Eds.), Interaction in cooperative groups: The theoretical anatomy of group learning (pp. 120–141). Cambridge: Cambridge University Press.
Pea, R. D. (2004). The social and technological dimensions of scaffolding and related theoretical concepts for learning, education, and human activity. The Journal of the Learning Sciences, 13(3), 423–451. https://doi.org/10.1207/s15327809jls1303_6.
Radkowitsch, A., Fischer, M. R., Schmidmaier, R., & Fischer, F. (2020a). Learning to diagnose collaboratively: Validating a simulation for medical students. GMS Journal for Medical Education, 37(5), Doc51. https://doi.org/10.3205/zma001344.
Radkowitsch, A., Vogel, F., & Fischer, F. (2020b). Good for learning, bad for motivation? A meta-analysis on the effects of computer-supported collaboration scripts on learning and motivation. International Journal of Computer-Supported Collaborative Learning, 15(1). https://doi.org/10.1007/s11412-020-09316-4.
Rau, M. A., Bowman, H. E., & Moore, J. W. (2017). An adaptive collaboration script for learning with multiple visual representations in chemistry. Computers and Education, 109, 38–55. https://doi.org/10.1016/j.compedu.2017.02.006.
Rummel, N., Spada, H., & Hauser, S. (2009). Learning to collaborate while being scripted or by observing a model. International Journal of Computer-Supported Collaborative Learning, 4(1), 69–92. https://doi.org/10.1007/s11412-008-9054-4.
Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education, 26(2), 784–795. https://doi.org/10.1007/s40593-016-0102-3.
Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, 54–67. https://doi.org/10.1006/ceps.1999.1020.
Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge-building communities. The Journal of the Learning Sciences, 3(3), 265–283. https://doi.org/10.1207/s15327809jls0303_3.
Schank, R. C. (1999). Dynamic memory revisited. Cambridge, NY: Cambridge University Press.
Slavin, R. E. (1995). Cooperative learning: Theory, research, and practice. Boston: Allyn & Bacon.
Splichal, J. M., Oshima, J., & Oshima, R. (2018). Regulation of collaboration in project-based learning mediated by CSCL scripting reflection. Computers and Education, 125, 132–145. https://doi.org/10.1016/j.compedu.2018.06.003.
Stegmann, K., Kollar, I., Weinberger, A., & Fischer, F. (2016). Appropriation from a script theory of guidance perspective: A response to Pierre Tchounikine. International Journal of Computer-Supported Collaborative Learning, 11(3), 371–379. https://doi.org/10.1007/s11412-016-9241-7.
Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297–323. https://doi.org/10.1007/s11251-011-9174-5.
Tabak, I. (2004). Synergy: A complement to emerging patterns of distributed scaffolding. The Journal of the Learning Sciences, 13(3), 305–335. https://doi.org/10.1207/s15327809jls1303_3.
Tchounikine, P. (2016). Contribution to a theory of CSCL scripts: Taking into account the appropriation of scripts by learners. International Journal of Computer-Supported Collaborative Learning, 11(3), 349–369. https://doi.org/10.1007/s11412-016-9240-8.
Teasley, S. D. (1997). Talking about reasoning: How important is the peer in peer collaborations? In L. B. Resnick, R. Säljö, C. Pontecorvo, & B. Burge (Eds.), Discourse, tools, and reasoning: Essays on situated cognition (pp. 361–384). Berlin: Springer.
Tegos, S., Demetriadis, S., Papadopoulos, P. M., & Weinberger, A. (2016). Conversational agents for academically productive talk: A comparison of directed and undirected agent interventions. International Journal of Computer-Supported Collaborative Learning, 11(4), 417–440. https://doi.org/10.1007/s11412-016-9246-2.
Tsovaltzi, D., Judele, R., Puhl, T., & Weinberger, A. (2015). Scripts, individual preparation and group awareness support in the service of learning in Facebook: How does CSCL compare to social networking sites? Computers in Human Behavior, 53, 577–592. https://doi.org/10.1016/j.chb.2015.04.067.
Tsovaltzi, D., Puhl, T., Judele, R., & Weinberger, A. (2014). Group awareness support and argumentation scripts for individual preparation of arguments in Facebook. Computers & Education, 76, 108–118. https://doi.org/10.1016/j.compedu.2014.03.012.
Villasclaras-Fernández, E., Hernández-Leo, D., Asensio-Pérez, J. I., & Dimitriadis, Y. (2013). Web collage: An implementation of support for assessment design in CSCL macro-scripts. Computers & Education, 67, 79–97. https://doi.org/10.1016/j.compedu.2013.03.002.
Vogel, F., Kollar, I., Ufer, S., Reichersdorfer, E., Reiss, K., & Fischer, F. (2016). Developing argumentation skills in mathematics through computer-supported collaborative learning: The role of transactivity. Instructional Science, 44(5), 477–500. https://doi.org/10.1007/s11251-016-9380-2.
Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2017). Socio-cognitive scaffolding with collaboration scripts: A meta-analysis. Educational Psychology Review, 29(3), 477–511. https://doi.org/10.1007/s10648-016-9361-7.
Vygotsky, L. S. (1980). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wang, X., Kollar, I., & Stegmann, K. (2017). Adaptable scripting to foster regulation processes and skills in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 12(2), 153–172. https://doi.org/10.1007/s11412-017-9254-x.
Wecker, C., & Fischer, F. (2011). From guided to self-regulated performance of domain-general skills: The role of peer monitoring during the fading of instructional scripts. Learning and Instruction, 21(6), 746–756. https://doi.org/10.1016/j.learninstruc.2011.05.001.
Weinberger, A. (2011). Principles of transactive computer-supported collaboration scripts. Nordic Journal of Digital Literacy, 6(3), 189–202.
Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46, 71–95. https://doi.org/10.1016/j.compedu.2005.04.003.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York, NY: Cambridge University Press. https://doi.org/10.1017/CBO9780511803932.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12, 423–467. https://doi.org/10.1007/s11412-017-9267-5.
Further Readings

A more thorough introduction to the Script Theory of Guidance can be found in the following article:
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005
Detailed insights about research on collaboration scripts and empirical evidence for the mechanisms of learning with CSCL scripts are published in the following meta-analysis:
Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2017). Socio-cognitive scaffolding with collaboration scripts: A meta-analysis. Educational Psychology Review, 29(3), 477–511. https://doi.org/10.1007/s10648-016-9361-7
The following two exemplary empirical studies take up the new approaches to designing CSCL scripts. The first targets the regulation of learning during collaboration (Splichal, Oshima, & Oshima, 2018). The second combines CSCL scripts with awareness tools as additional scaffolding and implements both in the widely used social network site Facebook (Tsovaltzi, Judele, Puhl, & Weinberger, 2015).
Splichal, J. M., Oshima, J., & Oshima, R. (2018). Regulation of collaboration in project-based learning mediated by CSCL scripting reflection. Computers and Education, 125, 132–145. https://doi.org/10.1016/j.compedu.2018.06.003
Tsovaltzi, D., Judele, R., Puhl, T., & Weinberger, A. (2015). Scripts, individual preparation and group awareness support in the service of learning in Facebook: How does CSCL compare to social networking sites? Computers in Human Behavior, 53, 577–592. https://doi.org/10.1016/j.chb.2015.04.067
To get a basic overview of the theoretical foundation and empirical findings, we would like to direct to the recording of the NAPLeS webinar about Collaboration Scripts for CSCL, held by Frank Fischer, Christof Wecker, and Ingo Kollar: https://www.isls.org/research-topics/ collaboration-scripts-scaffolding-collaborative-learning
The Roles of Representation in Computer-Supported Collaborative Learning
Shaaron E. Ainsworth and Irene-Angelica Chounta
Abstract Representational learning is fundamental to CSCL. In this chapter, we consider four distinct roles that representations play as collaborators can: (1) interpret existing representations to create shared knowledge; (2) construct new joint representations based upon negotiation and shared understanding; (3) make representational choices concerning how they or other agents in the collaboration are portrayed; and (4) use representations to express and analyze these activities and their outcomes. We show how this research draws upon multiple theoretical perspectives and attempt to look forward to consider where representational paradigms for CSCL may be going. Keywords Representational learning · Multiple representations · Collaboration · Argumentative diagrams · Constructing representations · Co-construction · Joint construction · Self-representation · Learning analytics · Agents and avatars · Simulation
1 Definitions and Scope

CSCL offers many diverse ways that learners can collaborate to learn, but there is one feature common to all of them—their learning will be representational. Learners may be collaborating face to face as they share photos and videos on a multi-touch table or interact with a physics simulation running on a computer in their classroom. They
S. E. Ainsworth (*) Learning Sciences Research Institute, School of Education, University of Nottingham, Nottingham, UK e-mail: [email protected] I.-A. Chounta Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Duisburg, Germany e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_19
may be collaborating at a distance as they meet in virtual worlds, or share their views (in text, pictures, or by voice) in response to a prompt in a MOOC or as they jointly construct an argument map. They may each have a phone in their hand as they play an educational game or explore a historic location, or could be jointly drawing on a shared whiteboard. They may be collaborating with people who are well known to them, unfamiliar or anonymous behind a chosen avatar, or even with an artificial agent or robot. And as teachers or researchers, our understanding of what the collaborators know, how they are interacting with one another, and what they come to understand, will be shaped by representations of this activity. Thus, whether you define representation using a cognitive dyadic (a representation is something that stands for something else) approach or a semiotic triadic (a representation is something which stands to somebody for something else) approach, it is clear that representation is a ubiquitous feature of CSCL. This chapter explores the ways that different theoretical perspectives have shaped the history and development of these diverse aspects of representational learning in CSCL and articulates four key roles we see representations playing, both in terms of the current state of the art and how the field may develop in the future.
2 History and Development

We have argued (e.g., Ainsworth 2018) that understanding representational learning within the learning sciences tradition integrates multiple theoretical perspectives. Firstly, there are cognitive perspectives of representational learning that have emerged from traditional information processing accounts of human learning. Theories such as the Cognitive Theory of Multimedia Learning (Mayer 2014) or Cognitive Load Theory (Sweller et al. 1998) shaped an understanding of how the design of representational systems can help make learning more successful by overcoming limited-capacity modality-specific memory systems, permitting learners to effectively select, organize, and integrate the studied material with long-term memory. van Bruggen et al. (2002) argue that collaborative learning with shared representations inherently increases the cognitive demands upon learners as they must negotiate a shared understanding of a representation's form and meaning. As a consequence, CSCL tools have been designed that attempt to reduce the cognitive demands of collaborative multimedia learning, such as those of Bodemer and Dehler (2011), who argue for the importance of providing awareness of other learners' representational integration practices. However, a similar argument has been proffered in exactly the opposite direction: that as certain representations such as animations are notorious for encouraging passive and superficial processing, the increased cognitive demands of collaboration around representations are to be welcomed (e.g., Rebetez et al. 2010). Although such information-processing approaches have had a considerable impact on approaches to individualized representational learning, they have not gained as much traction in CSCL (van Bruggen et al. 2002). Instead, the predominant cognitive approaches used to explain representational learning in CSCL are
those which have emerged from situated (e.g., Hall 1996) and distributed cognition (e.g., Zhang and Norman 1994; Hutchins 1995). For example, Hall (1996) argues that people do not simply "have" representations, but they are instead constructed through interaction with one another in time and space. Consequently, learning should be about becoming a competent representer within a domain of practice. Hutchins's account of the role of representation, within a distributed cognitive approach, sees both internal and external representations as part of the wider cognitive system, which includes individuals and the artifacts through which and with which they interact. These cognitive paradigms, which place the emphasis on the situation and the social, are clearly much more closely aligned to the sociocultural accounts of representational learning that have played such an important role in our understanding of CSCL. In such approaches, representation is key: learning is seen as a process of taking up the physical or representational tools and the ongoing practices of a culture (e.g., Vygotsky 1978; Säljö 1999). The learning of representational forms depends upon the use to which they are put; representations are not passively acquired in some context-free way but instead through action (in the world) to perform some culturally valued activity. Moreover, this learning is inherently social as (external) representations are shared, negotiated, developed, and redeveloped across a community, often across extended periods of time. Thus, for those researchers who follow this tradition, CSCL does not just involve joint meaning-making mediated through digital representation; this defines CSCL (e.g., see Koschmann 2002). Given the central importance of representational learning within CSCL, and the breadth of theoretical perspectives that underpin it, it is not surprising that there are so many examples of innovation within the field.
To provide a structure and synthesize the state of the art in this complex endeavor, we suggest a simple taxonomy of four key functional roles that representation can play in CSCL. In doing so, we are indebted to previous researchers who have articulated similar approaches, particularly Daniel Suthers (e.g., Suthers 2006; Suthers and Hundhausen 2003), whose specific arguments are included in the sections below. We will draw from cognitive and sociocultural perspectives, as each of the roles we describe has been explored through both these lenses. Consequently, we suggest that CSCL can involve:

1. Interpreting existing shared representations to guide collaboration and create shared knowledge
2. Jointly constructing representations to negotiate and express new understandings
3. Making representational choices to portray oneself or other human and artificial agents in the collaboration
4. Using representations to express and analyze collaborative activities and their outcomes
3 State of the Art

3.1 Interpreting Existing Shared Representations to Guide Collaboration and Create Shared Knowledge
In many instances of CSCL, learners are provided with computational representations to interpret, which are intended to support their learning. These may take the form of noninteractive representations such as text, pictures, video, or (classical) animations that are used to display an expert view of a domain but equally common are the interactive representations such as tables, graphs, and equations found in simulations and microworlds. As Suthers (2014) argues, inherent to this activity is an epistemology of learning that focuses on knowledge communication, whereby learners are expected to develop their understanding in a specific direction to increasingly resemble that of the more expert other. When used in this way, representations are intended to constrain the interpretations that can be made of phenomena, while making salient specific aspects of a situation for cognitive, affective, and social processing. However, as we are concerned with collaborative learning, below we focus on the roles that representations play in specific reference to social processes. Firstly, learners can refer to the same (set of) representations as they collaborate. Much analysis of collaboration has focused on the way that collaborators develop shared knowledge via the construction of common ground (Clark and Wilkes-Gibbs 1986). In collocated CSCL, this process of grounding may be assisted by the learners’ shared environment, history, and interactions; however, in distant and asynchronous CSCL, there may be no such commonalities to draw upon (e.g., Baker et al. 1999). Grounding is neither effortless nor automatic and collaborators need frequent feedback from one another to know when they have been understood and to engage in repairs when not. Existing research suggests that representations provided in CSCL play a fundamental role in the process. 
Roschelle and Teasley's (1995) classic analysis of collaborators learning about velocity and acceleration with the Envisioning Machine shows how the interactive representations supported learners to develop their joint understanding of the problems through communication, action, and gesture. For example, the representations supported communication even when the verbalizations between the pairs were less than coherent or fragmented. Collaborators could point or gesture to shared external representations (such as dots left by a moving object to represent speed) as part of their explanations. Accordingly, even if collaborators lacked a technical or shared terminology, the representation helped disambiguate their references. Moreover, in this case, as the representations were interactive and runnable, different interpretations and ideas need not be resolved only through argumentation and discussion; actions could be run and a new shared interpretation of the changes in the representations created. Such advantages are not limited to traditional computer-based learning but may perhaps be even more apparent when learners collaborate around horizontal screens such as multi-touch tables (Rogers and Lindley 2004). For example, Higgins
et al. (2012) analyzed small groups of children undertaking an investigation into a historical accident using representational resources presented on a multi-touch table. Compared to pen and paper, they found that the multi-touch implementation supported joint attention and shared viewing. Representations must be "left" on the table and not picked up and studied by a single individual. Moreover, by resizing, moving, and reorganizing these representations, not only was joint attention facilitated, but the history of joint decision-making and ultimate consensus on an interpretation could be displayed and shared. Secondly, learners can refer to different (sets of) representations as they collaborate. Much existing research has detailed the challenges that learners can have in coming to understand and then reason through new representations (e.g., see Ainsworth (2018) for a review and other readings). Therefore, CSCL environments can be designed to manage these difficulties by providing specific representations to different collaborators, and so limit the number of representations that an individual must focus on (and thus come to understand) at any one time. The advantages of this can be seen in the analysis that White and Pea (2011) conducted of students learning about algebra using CodeBreaker, a system that provides dyna-linked representations (equations, graphs, two types of tables) to collaborators in a group. Each participant had their own tablet, which permitted the display of only one or two of the representations at once, but the group as a whole could see actions from the equation representation reflected in the complete set. Over an extended period of time, students became successful problem-solvers and coordinators of these representations as their understanding developed.
This was often achieved in ways that had not been fully intended by the designers but that emerged as a consequence of relating, communicating, and coordinating representations to solve local tasks. In this case, the representations were "flexibly assigned," i.e., each member was given responsibility for a specific representation (although they could move to others if they chose) and this assignment was changed daily. However, in Ploetzner et al. (1999), concept maps of either quantitative representations (in equation form) or qualitative representations (in diagrams and words) were provided to each member of a dyad who worked first individually before coming together with both representations to jointly solve the same problems. Thus, the collaborative phase could benefit from the understandings that each individual in the dyad had built up first. Their evidence suggests that this did happen, especially when in the joint phase the initial construction of qualitative representations was privileged before subsequent integration of quantitative reasoning. Of course, simply providing representations to collaborators will not magically result in enhanced learning. All of the research reviewed in this section discussed instances when learners struggled with the system. They revealed learners failing to understand the representations, not actively engaging with the cognitive, affective, or social processing that was needed, or perhaps the representations themselves had not been well chosen by designers. However, a commonly noted feature of success was when collaborators, either collocated, as in the examples above, or distant (e.g., Suthers et al. 2003), used the representations to negotiate shared meaning. Moreover, this was not a swift process, with collaborators needing extended participation for this
to be achieved. Teachers (if CSCL activities are occurring in classrooms) can be crucial, stepping in to help collaborators make sense of the representations and their role in the activities (e.g., Ingulfsen et al. 2018). A final point worth remembering is that interpretation of presented representations is still a constructive process and that, as we now turn our attention to collaborative construction, learners in CSCL systems often cycle through activities which depend more upon the interpretation of representations and then on construction, or vice versa.
3.2 Collaborating by Jointly Constructing Representations
The joint construction of representations is our second example of the use of representations in collaborative learning. In contrast to sharing existing, preconstructed representations for further interpretation, the learners in this context are asked to construct the representations themselves, together with their peers. These may be any kind of diagrammatic representation (e.g., algorithmic flowcharts or concept maps), textual representation (such as written essays), or even tangible and digital artifacts (such as collages and videos). The joint construction (or co-construction) of shared representations entails all the benefits of working over shared representations: developing shared knowledge and constructing common ground, developing joint understanding, and so on. Furthermore, the actual task of constructing the representation requires the learners to externalize their knowledge or to elicit knowledge from their peers (Fischer and Mandl 2005). While constructing a shared representation, learners' knowledge transitions from tacit (i.e., internalized knowledge, difficult to communicate to peers verbally) to explicit (a formal, concrete piece of knowledge that the learner can communicate to peers). At the same time, co-constructing a shared representation can support reflection through internalization (Nonaka and Takeuchi 1995). Whilst working together with others toward a common goal, the learner is exposed to new ideas or new knowledge coming from peers. This new knowledge is further crystallized and becomes one's own when the learner reflects on collaborative practice. Additionally, the collaborators need to evaluate each other's contribution to the shared construct, to use arguments in order to convince their peers, and to critically reflect on their practices.
Most importantly, peers have to coordinate, to create a common plan of action and to manage their common resources in order to work together constructively and efficiently (Meier et al. 2007). There are many key ingredients for creating a meaningful representation. These include successful communication between the members of the group (e.g., maintaining communication flow and coordinating verbal discussion with workspace activity), effective exchange of information (providing explanations for their actions or intentions to their peers), and the creation of a coherent, efficient, and agreed plan for the development of the final outcome (defining roles, subtasks, and assigning resources to each group member according to the group’s needs and the members’ abilities).
Fig. 1 An example of a collaboratively constructed argument diagram from the online argumentation environment LASAD (Chounta et al. 2017). Students insert premises and conclusions in the collaborative workspace and link them in order to form the argument
An example of collaboration between learners over the joint construction of representations is the case of argumentation diagramming. Figure 1 presents a collaboratively constructed argument diagram that was developed using the online argumentation environment LASAD (Chounta et al. 2017). The joint construction of visual representations may have a positive effect on important cognitive skills, such as critical thinking and argumentation (Harrell and Wetzel 2013), and can scaffold the co-construction of knowledge (Weinberger et al. 2010). The use of diagrams for practicing argumentation supports reflection and deep learning because arguments are explicit, inspectable, and accessible at every moment. Thus, learners are given the chance to interact over the diagrams, to discuss arguments' explanations, and to exchange information rather than simply reading a text. At the same time, learners can visually inspect the various structures of arguments and also envision structural criteria for assessing their quality (Scheuer et al. 2014). For example, van Amelsvoort and Schilperoord (2018) explored how the visual and perceptual cues of diagrams, such as the number of arguments, the size of diagrams, or even the layout, may affect learners' perceptions and judgments. Their research suggested that learners use both content and perceptual cues to interpret argument diagrams and that the number of arguments had a positive effect on perceived quality. That is, learners tend to assess a diagram that consists of more arguments as being of better quality. The co-construction of representations as collaborative activities can be used to bridge potential gaps between students in heterogeneous groups and to support their practice. Manske et al. (2015) asked small groups of students to work together in order to create collaborative concept maps and jointly written reports during an inquiry-based STEM course for K–12 education.
The results showed that heterogeneous groups outperformed homogeneous groups with respect to learning gains as well as the quality of the co-constructed representations. Chen et al. (2018)
360
S. E. Ainsworth and I.-A. Chounta
conducted a meta-analysis to synthesize research findings of the effects of CSCL and studied the effects of visual representation tools on task performance and learning outcomes. The meta-analysis indicates that students who construct visual representations perform better (i.e., they are more likely to complete a task successfully) than those who do not have access to such tools. Additionally, the use of virtual environments for constructing visual representations leads to improved learning outcomes. Overall, these findings suggest that the co-construction of representations can be used to successfully scaffold the collaborative practice of student groups with respect to academic achievement; although, of course, success will be conditional upon similar factors to those articulated earlier. At the same time, the products of such collaborative work (i.e., the final representations) can be used to assess learning outcomes as evidence of the learning process.
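The structural quality criteria mentioned above (Scheuer et al. 2014) can be made computable, because an argument diagram is ultimately just typed nodes and links. The sketch below checks one simple criterion: every conclusion should be supported by at least one linked premise. The node types, names, and check are invented for illustration and are not LASAD's actual data model.

```python
# A hypothetical argument diagram: nodes typed as premise or conclusion,
# links running from a supporting node to the node it supports.
nodes = {
    "n1": "premise",
    "n2": "premise",
    "n3": "conclusion",
    "n4": "conclusion",  # deliberately left unsupported
}
links = [("n1", "n3"), ("n2", "n3")]  # (supporting node, supported node)

def unsupported_conclusions(nodes, links):
    """Return conclusions that no premise links to — one crude
    structural indicator of argument-diagram quality."""
    supported = {dst for src, dst in links if nodes.get(src) == "premise"}
    return sorted(n for n, kind in nodes.items()
                  if kind == "conclusion" and n not in supported)

print(unsupported_conclusions(nodes, links))  # ['n4']
```

A real system would combine such structural checks with content analysis; this only shows that the diagram-as-graph representation makes the criteria inspectable by software as well as by learners.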
3.3 Using Representations to Portray Oneself or Other Agents in the Collaboration
A key role for representation in CSCL is to represent oneself or one's collaborator(s) digitally. When learners are engaged in face-to-face collaboration, their choices about how they appear to others are more limited by physical copresence. They may change their clothing, accent, or manner, but digital collaboration affords even greater possibilities. On the one hand, collaborators may be anonymized online when they deliberately remove clues to their identity. This is somewhat notoriously associated with negative uses of digital technology whereby people may become disinhibited, do not fear reprisal for negative behavior, and so can engage in "flaming," "trolling," and off-task behavior with impunity (e.g., Christopherson 2007). However, anonymity can also have positive aspects. It can encourage people to participate without fear of judgment and reduce the noted tendency for groupthink (where poor choices are made as group members prize conformity) as ideas are evaluated more critically, and consequently can lead to better decisions being considered (Nunamaker Jr et al. 1996). Current theoretical explanations, therefore, suggest that the effects of anonymity are nuanced, involving both depersonalization and reduced accountability, and that people can take strategic advantage of this (e.g., see the Social Identity model of Deindividuation Effects; Lea and Spears 1991). Ainsworth et al. (2011) attempted to gain the possible benefits of anonymity in educational situations by asking students anonymous to one another (but known to a teacher) to participate in classroom debates about controversial issues. They found that when afforded anonymity in this way, teenage students were more likely to engage in classroom debate and change their position about these topics. Initially, this did come with an associated degree of off-task behavior; however, positively, this behavior rapidly reduced and was no greater than in known situations after less than 2 h of argumentation.
They speculate that this type of anonymity encouraged
students to debate with one another without worrying about the consequences to their social relationships when expressing unpopular views but still were aware of the need to engage productively in the activity. Technologies such as immersive virtual worlds (e.g., Second Life) can afford people the opportunity to create rich identities online and choose avatars that represent themselves in ways that they would like to be perceived. In so doing, it is argued that collaborators can more easily replace (or even amplify) the nonverbal cues that support developing understanding in face-to-face collaboration, such as facial expressions, gestures, and even posture (e.g., Bailenson et al. 2006). Collaborators can represent themselves online in ways that differ from their offline identity and in so doing explore different aspects of their possible selves (Turkle 1995). They could, for instance, choose to vary their gender, ethnicity, physical appearance, or age. For example, Lee and Hoadley (2007) explored how students' experience of the virtual world can foster motivation and engagement in learning. Students can adopt different points of view associated with the avatar they are inhabiting, opening them up to new perspectives and challenging them to think in new ways. Alternatively, if students are assigned these new identities to inhabit, they can experience the digital world and their body within it in new ways, as reactions to avatars seem to mirror those found outside virtual worlds. For example, female avatars receive different treatment to male avatars, and black avatars invoke more aggressive responses (Lee and Hoadley 2007). This may be one of the reasons why Peck et al. (2013) found that white female participants who were assigned an avatar with black skin color showed reduced implicit racial bias after this experience. Additionally, inhabiting avatars may not just change how others react to us, but change how we react to others.
This can be seen in the work of Yee and Bailenson (2007), who found that participants assigned taller avatars negotiated more confidently than those assigned shorter avatars. Of course, when collaborating online, the "person" one is collaborating with need not be human. Consequently, there has been extensive research where artificial pedagogical agents act as expert teachers, motivational coaches, and peers (e.g., Johnson et al. 2000). The basic premise is that encouraging anthropomorphization by representing the computer as human-like enhances students' motivation to learn (e.g., Baylor and Kim 2005). Moreover, designers of such environments must make a number of representational choices as they create avatars to represent artificial collaborators, including their degree of realism, ethnicity, and gender. Unsurprisingly, given the research reviewed on avatars above, such representational choices can impact upon learners' motivation, engagement, and learning outcomes (e.g., Baylor and Ryu 2003; Domagk 2010).
3.4 Using Representations to Analyze Collaborative Activities and Their Outcomes
A critical aspect that differentiates collaborative learning through the joint construction of representations from learning together with peers by interpreting or discussing shared representations is that the representation itself is the tangible outcome of the collaborative learning activity. As such, the representation is the end product of the learning process and it can be used to analyze the way peers worked together and to assess learning outcomes (Hoppe 2009). Thus, representations, in addition to enabling and facilitating learning, play a key role in the analysis of collaborative activities. On the one hand, analytical representations can be used to inform research with respect to the mechanics of collaboration and aspects of the collaborative process, such as communication flow and user activity and interaction. On the other hand, they can support learners to self-regulate and reflect. The use of representations as analytical tools for both researchers (in order to shed light on aspects of collaboration and learning) and students (for supporting self-reflection, self-regulation, or other aspects of collaborative learning) is discussed extensively in Wise et al. (this volume) and Rosé and Dimitriadis (this volume). As analytical tools, representations can take many forms. They may be graphs (networks) that represent the interactions between students or of students with teachers, state diagrams that represent sequences of student activity while carrying out a task, or other graphical representations (such as bar charts, time series, or contingency graphs) that depict students' activity and interaction. When used this way, the representations aim both to uncover the underlying mechanisms of collaboration and group dynamics and then to display these, potentially along with their impact on students' learning outcomes.
The use of visual representations as analytical tools has two advantages: (a) they can be easily constructed from data captured in log files and (b) they are typically straightforward to understand, especially for users who are not familiar with data or statistical analysis. An example of this can be seen in the work of Bannert et al. (2014), who used Petri nets—that is, directed graphs that represent transitions or states and the sequential or conditional relation between them—to represent learning processes and so analyze learners' self-regulation mechanisms. To that end, the authors coded learning events and used them to compose event sequences that represented regulatory processes. Then they explored how process patterns would differentiate between activities of successful and less successful students. Contingency graphs are another type of graphical representation that is used to represent and analyze interactions between students, even across multiple media (Suthers et al. 2010). Contingency graphs are different from social networks because the relations between events represent uptakes, that is, ongoing events have taken into account or are based upon aspects of prior activity. Soller et al. (2005) utilized representations to graphically reproduce the collaborative process by using metrics of student activity and interaction. These representations were then used to scaffold students' self-reflection, to facilitate monitoring, and to support real-time guidance.
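Advantage (a) above — that such representations are easily constructed from log files — can be illustrated with a minimal sketch. The code below builds a weighted directed interaction graph from a toy event log and derives a simple per-participant activity metric; the log format, event names, and participants are invented for the example, and real CSCL systems log far richer events.

```python
from collections import defaultdict

# Hypothetical event log: (timestamp, actor, action, addressee).
# In practice these tuples would be parsed from a system's log files.
log = [
    (1, "ana", "post", None),
    (2, "ben", "reply", "ana"),
    (3, "cho", "reply", "ana"),
    (4, "ana", "reply", "ben"),
    (5, "ben", "reply", "cho"),
]

def interaction_graph(events):
    """Weighted directed graph: edge (a, b) counts a's replies to b."""
    edges = defaultdict(int)
    for _, actor, action, target in events:
        if action == "reply" and target is not None:
            edges[(actor, target)] += 1
    return dict(edges)

def out_degree(edges):
    """Replies sent per participant — a simple activity metric."""
    deg = defaultdict(int)
    for (src, _), weight in edges.items():
        deg[src] += weight
    return dict(deg)

g = interaction_graph(log)
print(out_degree(g))  # {'ben': 2, 'cho': 1, 'ana': 1}
```

The resulting edge weights and degrees can then be rendered as a network diagram or bar chart of the kind described above, for researchers or for learners themselves.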
Another recent example of using representations for the purpose of analysis and assessment is the use of directed graphs to detect and depict communication patterns and knowledge flow in discussion forums, especially in the case of MOOCs (Gillani et al. 2014). The social relations between participants are represented by social networks, in combination with content analysis, and provide the means for detecting subcommunities and potentially triggering supportive interventions (Yang et al. 2014). Networks are also used as representations of information exchange in discussion forums in order to provide insights about the ways MOOC participants adapt their communication strategies in a social context and adopt specific semantic roles driven by these strategies (Hecking et al. 2017). However, social networks have been used in collaborative learning as analytical tools before the advent of MOOCs, for example, to explore community formation based on semantic interest (Malzahn et al. 2007).
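A minimal sketch of the subcommunity idea: if who-replied-to-whom pairs are treated as an undirected network, taking its connected components gives a crude first pass at detecting subcommunities. This is far simpler than the community-detection and content-analysis methods cited above, and the data here are invented.

```python
from collections import defaultdict

# Hypothetical who-replied-to-whom pairs from a discussion forum.
pairs = [("ana", "ben"), ("ben", "cho"), ("dia", "eli")]

def subcommunities(pairs):
    """Group participants into connected components of the reply
    network — a crude stand-in for real community detection."""
    adj = defaultdict(set)
    for a, b in pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            group.add(node)
            stack.extend(adj[node] - seen)
        groups.append(group)
    return groups

print([sorted(g) for g in subcommunities(pairs)])  # [['ana', 'ben', 'cho'], ['dia', 'eli']]
```

Components that never exchange messages are exactly the disconnected groups such an analysis would surface, and could then trigger the supportive interventions mentioned above.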
4 The Future

Extensive research over the past years has established the potential of shared representations in scaffolding the co-construction of knowledge and supporting peers to establish common ground. Peers interact with each other over the shared representation, which enables argumentation, supports decision-making, and scaffolds the externalization and internalization of the knowledge that peers communicate. However, sometimes it seems that the transition from theory to practice—that is, the application of collaborative representational paradigms in actual classrooms—has faltered. Several methodological and context-related factors have contributed to this, from the way we implement research studies to the practices of teachers and the school environments. Methodologically, much collaborative learning research involves only small-scale and short-term studies. It is argued that most studies involve small numbers of participants who work together over only short periods of time (Chen et al. 2018). Thus, small-scale studies may impact upon the accuracy of our conclusions or simply lead to inconclusive results. Furthermore, the small number of longitudinal studies limits our understanding regarding the long-lasting impact and potential benefits of shared representations. Such studies may have accidentally focused on learning a representational system rather than learning with a representational system. This can hinder the implementation and integration into real-world situations of collaborative paradigms that employ representations with large numbers of collaborators and over longer periods of time. Future work that addresses this gap is now needed. Another criticism that has been leveled at existing research is the way we design learning tasks. For example, Kirschner et al.
(2011) argued that to take full advantage of the benefits of shared representations, it is important that we design learning tasks that are inherently collaborative—that is, peers need to work together in order to carry them out successfully—and that can be facilitated by active engagement with representations. It is critical for the successful
S. E. Ainsworth and I.-A. Chounta
outcome of the activity, including but not limited to learning gains, that the task engages peers to interact with representations in an intuitive and meaningful way. These points affect the way we integrate shared representations into formal learning settings. Teachers, who are primarily responsible for designing and implementing learning activities in the classroom, usually follow guidelines that focus on content and frequently use textbooks as the primary source of learning materials. However, the representations commonly used in textbooks are aimed at individual learning, not collaborative practices. Asking teachers to design engaging and cognitively challenging collaborative tasks requires them to acquire new skills and to establish new work practices. This can either discourage teachers from adopting shared representations in their classrooms because of the additional workload, or fail to produce the desired outcomes because of ineffective design and adaptation. We agree with Eilam (2012) that we need more research concerning how to help teachers to support collaborative learning with representations, and we should acknowledge in our research that such support is often provided and required in practice. Additionally, it is crucial to conduct fine-grained studies of how learners understand, interpret, and construct representations in order to decide how to use representations (how many, what kinds, to what extent) to support collaborative learning. For example, using multiple representations does not necessarily mean that students will achieve higher learning gains. Instead, this depends on the properties of the representations, their interdependence or interconnection, and the ways that learners understand them (Ainsworth 2014). This is acknowledged as complicated even in individual learning, and collaborative learning does not make these questions any easier. Thus, one prominent question is clear: how do we overcome this impasse, and where do we go from here?
Stahl (2018) argues that now is the time to advance the CSCL vision and extend its content to cover the needs of twenty-first-century learning. Motivated by Stahl’s view, we envision the use of representations as a proxy for “twenty-first-century skills.” Current technological advances, easy access to information and knowledge resources, and the digitalization of education and everyday life come with the risk of personal overload and uncertainty regarding information reliability and privacy protection. Thus, it is important to foster the cognitive, metacognitive, and social skills that are necessary for acting autonomously and responsibly in the new information society (Taylor 2016). We believe that important competencies—such as critical thinking, creativity, and problem-solving—can be promoted through the use of representations as learning objects, much as collaboration and the construction of shared knowledge are. To make this transition, we think of representations as something more than shared digital artifacts (usually graphical or diagrammatic objects). For example, representations can be tangible artifacts that serve as outcomes of a creative process (such as the products of makerspaces) or digital models that document the learning practice of digital competencies (such as digital e-portfolios). In this sense, representations are the products of a project-based learning experience in which twenty-first-century learners come together to practice in a social arena using modern tools.
The Roles of Representation in Computer-Supported Collaborative Learning
In the same vein, the integration of cutting-edge digital technologies in education—ranging from commercial products like Kinect or Oculus Rift to experimental prototypes such as data gloves or embodied virtual agents—allows the implementation of virtual or augmented reality-enhanced activities for training motor skills, fostering storytelling, or supporting learning in science education with realistic experiences. It will be interesting to see how these new technologies affect representations in terms of format, form, and role, but also how they affect the way collaborators (some of whom may be artificial) interact and achieve common ground through and with representations. We envision that digital technologies will more routinely transform representations into tangible objects or even whole-body interactive experiences (Price et al. 2016) that, on the one hand, will be customizable, personalized, and dynamically adaptable to serve the learner’s needs and, on the other hand, will evolve toward embodied forms that interact with one another. What is clear from this short review is that the future offers an increasing wealth of representational possibilities for CSCL and that the centrality of representation to the field of CSCL is not changing anytime soon.
References

Ainsworth, S. (2014). The multiple representation principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 464–486). New York, NY: Cambridge University Press. Ainsworth, S. (2018). Multiple representations and multimedia learning. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 96–105). New York: Routledge. Ainsworth, S., Gelmini-Hornsby, G., Threapleton, K., Crook, C., O’Malley, C., & Buda, M. (2011). Anonymity in classroom voting and debating. Learning and Instruction, 21(3), 365–378. https://doi.org/10.1016/J.Learninstruc.2010.05.001. Bailenson, J. N., Yee, N., Merget, D., & Schroeder, R. (2006). The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence: Teleoperators and Virtual Environments, 15(4), 359–372. Baker, M., Hansen, T., Joiner, R., & Traum, D. (1999). The role of grounding in collaborative learning tasks. Collaborative Learning: Cognitive and Computational Approaches, 31, 63. Bannert, M., Reimann, P., & Sonnenberg, C. (2014). Process mining techniques for analysing patterns and strategies in students’ self-regulated learning. Metacognition and Learning, 9(2), 161–185. Baylor, A. L., & Kim, Y. (2005). Simulating instructional roles through pedagogical agents. International Journal of Artificial Intelligence in Education, 15(2), 95–115. Baylor, A. L., & Ryu, J. (2003). The effects of image and animation in enhancing pedagogical agent persona. Journal of Educational Computing Research, 28(4), 373–394. Bodemer, D., & Dehler, J. (2011). Group awareness in CSCL environments. Computers in Human Behavior, 27(3), 1043–1045. Chen, J., Wang, M., Kirschner, P. A., & Tsai, C.-C. (2018).
The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584.
Chounta, I.-A., McLaren, B. M., & Harrell, M. (2017). Building arguments together or alone? Using learning analytics to study the collaborative construction of argument diagrams. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th international conference on computer supported collaborative learning (CSCL) 2017 (Vol. 2, pp. 589–592). Philadelphia, PA: International Society of the Learning Sciences. Christopherson, K. M. (2007). The positive and negative implications of anonymity in Internet social interactions: “On the Internet, Nobody Knows You’re a Dog”. Computers in Human Behavior, 23(6), 3038–3056. Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22(1), 1–39. Domagk, S. (2010). Do pedagogical agents facilitate learner motivation and learning outcomes? The role of the appeal of agent’s appearance and voice. Journal of Media Psychology, 22(2), 84–97. Eilam, B. (2012). Teaching, learning, and visual literacy: The dual role of visual representation. Cambridge University Press. Fischer, F., & Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: The role of external representation tools. The Journal of the Learning Sciences, 14(3), 405–441. Gillani, N., Yasseri, T., Eynon, R., & Hjorth, I. (2014). Structural limitations of learning in a crowd: Communication vulnerability and information diffusion in MOOCs. Scientific Reports, 4, 6447. Hall, R. (1996). Representation as shared activity: Situated cognition and Dewey’s cartography of experience. Journal of the Learning Sciences, 5(3), 209–238. https://doi.org/10.1207/s15327809jls0503_3. Harrell, M., & Wetzel, D. (2013). Improving first-year writing using argument diagramming. In Proceedings of the annual meeting of the cognitive science society (Vol. 35). Hecking, T., Chounta, I. A., & Hoppe, H. U. (2017). Role modelling in MOOC discussion forums.
Journal of Learning Analytics, 4(1), 85–116. Higgins, S., Mercier, E., Burd, L., & Joyce-Gibbons, A. (2012). Multi-touch tables and collaborative learning. British Journal of Educational Technology, 43(6), 1041–1054. Hoppe, H. U. (2009). The disappearing computer: Consequences for educational technology? In P. Dillenbourg, J. Huang, & M. Cherubini (Eds.), Interactive artifacts and furniture supporting collaborative work and learning (pp. 1–17). Springer. Hutchins, E. (1995). Cognition in the wild. Cambridge, Massachusetts: MIT Press. Ingulfsen, L., Furberg, A., & Strømme, T. A. (2018). Students’ engagement with real-time graphs in CSCL settings: Scrutinizing the role of teacher support. International Journal of Computer-Supported Collaborative Learning, 13, 365–390. Johnson, W. L., Rickel, J., & Lester, J. C. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11, 47–78. Kirschner, F., Paas, F., & Kirschner, P. A. (2011). Task complexity as a driver for collaborative learning efficiency: The collective working-memory effect. Applied Cognitive Psychology, 25(4), 615–624. Koschmann, T. (2002). Dewey’s contribution to the foundations of CSCL research. In Proceedings of the conference on computer supported collaborative learning (pp. 17–22). International Society of the Learning Sciences. Lea, M., & Spears, R. (1991). Computer-mediated communication, deindividuation and group decision-making. International Journal of Man-Machine Studies, 34(2), 283–301. Lee, J. J., & Hoadley, C. (2007). Leveraging identity to make learning fun: Possible selves and experiential learning in massively multiplayer online games (MMOGs). Innovate, 3(6). Retrieved December, 2018, from http://www.innovateonline.info/. Malzahn, N., Harrer, A., & Zeini, S. (2007). The fourth man: Supporting self-organizing group formation in learning communities.
In Proceedings of the 8th international conference on
computer supported collaborative learning (pp. 551–554). International Society of the Learning Sciences. Manske, S., Hecking, T., Hoppe, U., Chounta, I.-A., & Werneburg, S. (2015). Using differences to make a difference: A study in heterogeneity of learning groups. In 11th International Conference on Computer Supported Collaborative Learning (CSCL 2015). Mayer, R. E. (2014). Cognitive theory of multimedia learning. In R. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 43–71). Cambridge University Press. https://doi.org/10.1017/CBO9780511816819.004. Meier, A., Spada, H., & Rummel, N. (2007). A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning, 2(1), 63–86. https://doi.org/10.1007/s11412-006-9005-x. Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford: Oxford University Press. Nunamaker, J. F., Jr., Briggs, R. O., Mittleman, D. D., Vogel, D. R., & Pierre, B. A. (1996). Lessons from a dozen years of group support systems research: A discussion of lab and field findings. Journal of Management Information Systems, 13(3), 163–207. Peck, T. C., Seinfeld, S., Aglioti, S. M., & Slater, M. (2013). Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition, 22, 779–787. https://doi.org/10.1016/j.concog.2013.04.016. Ploetzner, R., Fehse, E., Kneser, C., & Spada, H. (1999). Learning to relate qualitative and quantitative problem representations in a model-based setting for collaborative problem solving. The Journal of the Learning Sciences, 8(2), 177–214. Price, S., Sakr, M., & Jewitt, C. (2016). Exploring whole-body interaction and design for museums. Interacting with Computers, 28(5), 569–583. Rebetez, C., Bétrancourt, M., Sangin, M., & Dillenbourg, P. (2010). Learning from animation enabled by collaboration.
Instructional Science, 38(5), 471–485. Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133–1152. Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In C. O’Malley (Ed.), Computer supported collaborative learning (pp. 69–97). Springer. Rosé, C. P., & Dimitriadis, Y. (this volume). Tools and resources for setting up collaborative spaces. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Säljö, R. (1999). Learning as the use of tools. In K. Littleton & P. Light (Eds.), Learning with computers: Analysing productive interaction (pp. 144–161). London: Routledge. Scheuer, O., McLaren, B. M., Weinberger, A., & Niebuhr, S. (2014). Promoting critical, elaborative discussions through a collaboration script and argument diagrams. Instructional Science, 42(2), 127–157. Soller, A., Martínez, A., Jermann, P., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290. Stahl, G. (2018). Advancing a CSCL vision. In G. Stahl (Ed.), Theoretical investigations: Philosophical foundations of group cognition. Springer. Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1(3), 315–337. https://doi.org/10.1007/s11412-006-9660-y. Suthers, D. D. (2014). Empirical studies of the value of conceptually explicit notations in collaborative learning. In A. Okada, S. J. Buckingham Shum, & T. Sherborne (Eds.), Knowledge cartography (pp. 1–22). London: Springer. Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). 
A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42. https://doi.org/10.1007/s11412-009-9081-9.
Suthers, D. D., Girardeau, L., & Hundhausen, C. (2003). Deictic roles of external representations in face-to-face and online collaboration. In B. Wasson, S. Ludvigsen, & U. Hoppe (Eds.), Designing for change in networked learning environments (pp. 173–182). London: Springer. Suthers, D. D., & Hundhausen, C. D. (2003). An experimental study of the effects of representational guidance on collaborative learning processes. The Journal of the Learning Sciences, 12(2), 183–218. Sweller, J., Van Merrienboer, J. J., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. Taylor, B. (2016). Evaluating the benefit of the maker movement in K-12 STEM education. Electronic International Journal of Education, Arts, and Science (EIJEAS), 2. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon and Schuster. van Amelsvoort, M., & Schilperoord, J. (2018). How number and size of text boxes in argument diagrams affect opinions. Learning and Instruction, 57, 57–70. van Bruggen, J. M., Kirschner, P. A., & Jochems, W. (2002). External representation of argumentation in CSCL and the management of cognitive load. Learning and Instruction, 12(1), 121–138. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press. Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26(4), 506–515. White, T., & Pea, R. (2011). Distributed by design: On the promises and pitfalls of collaborative learning with multiple representations. Journal of the Learning Sciences, 20(3), 489–547. Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Yang, D., Wen, M., Kumar, A., Xing, E. P., & Rosé, C. P. (2014). Towards an integration of text and graph clustering methods as a lens for studying social interaction in MOOCs. The International Review of Research in Open and Distributed Learning, 15(5). https://doi.org/10.19173/irrodl.v15i5.1853. Yee, N., & Bailenson, J. (2007). The Proteus effect: The effect of transformed self-representation on behavior. Human Communication Research, 33(3), 271–290. Zhang, J., & Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18(1), 87–122.
Further Readings

Christopherson, K. M. (2007). The positive and negative implications of anonymity in Internet social interactions: “On the Internet, Nobody Knows You’re a Dog”. Computers in Human Behavior, 23(6), 3038–3056. This article discusses anonymity and the expression of social behaviors in computer-mediated communication. Roschelle, J., & Teasley, S. D. (1995). The construction of shared knowledge in collaborative problem solving. In C. O’Malley (Ed.), Computer supported collaborative learning (pp. 69–97). Berlin, Heidelberg: Springer. This paper explores the mechanics behind the construction of shared knowledge in collaborative problem-solving through the observation and analysis of the social and learning processes that occurred while a dyad worked together to solve a physics task using a computer-based environment. The objective was to understand how common ground was established, how shared knowledge was constructed, and how technology—that is, computers—can be used to further support collaborative learning.
Soller, A., Martínez, A., Jermann, P., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290. This paper reviews related research and discusses how representations can be used to provide feedback, to guide, and to enable reflection for students who are engaged in collaborative learning activities. Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42. This work studies the use of graph-based representations of joint activities to depict contingency and uptake relations. Such graphs can inform about user interaction and sequential patterns in user behavior and provide insight into the quality of collaboration. White, T., & Pea, R. (2011). Distributed by design: On the promises and pitfalls of collaborative learning with multiple representations. Journal of the Learning Sciences, 20(3), 489–547. This article studies the use of multiple representations to engage students in mathematical reasoning and communication, the challenges that may arise when students are asked to understand and interpret multiple representations, and how to address these challenges.
Perspectives on Scales, Contexts, and Directionality of Collaborations in and Around Virtual Worlds and Video Games Deborah Fields, Yasmin Kafai, Earl Aguilera, Stefan Slater, and Justice Walker
Abstract Interpersonal collaborations play a key role in video games and virtual worlds. Yet historically, research regarding games and virtual worlds in CSCL has focused on the smaller, more localized aspects of collaboration within games and virtual worlds. In this chapter, our goal is to broaden the ways in which we conceptualize the possibilities of CSCL in video games and virtual worlds. We argue for broadly and imaginatively expanding the scales, contexts, and directionality of collaborative learning, considering each area in turn and providing vignettes of research that break traditional bounds in considering what collaboration is, where it takes place, and how to study it in video games and virtual worlds. In the discussion, we turn to next steps for CSCL in addressing issues of scale, access, and methods that capture the richness and diversity of computer-supported collaborative learning in video games and virtual worlds.

Keywords Collaborative learning · Video games · Virtual worlds · Games and learning · Massive collaboration · Affinity spaces
D. Fields (*) Instructional Technologies & Learning Sciences, Utah State University, Logan, UT, USA e-mail: deborah.fi[email protected] Y. Kafai · S. Slater Teaching & Leadership Division, University of Pennsylvania, Philadelphia, PA, USA e-mail: [email protected]; [email protected] E. Aguilera California State University-Fresno, Curriculum & Instruction, Fresno, CA, USA e-mail: [email protected] J. Walker Teacher Education, University of Texas, El Paso, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_20
1 Definitions and Scope: Collaborative Learning in Video Games and Virtual Worlds

Video games and virtual worlds have provided rich areas to design for and study collaborative learning. From early on in the study of games and learning, designers have considered the role of interpersonal motivations such as cooperation, competition, and recognition in gameplay (Malone and Lepper 1987). Even simple designs for collaboration can have powerful effects, for instance using a leaderboard or “high score” display to encourage competition between players. Further, even games that are not explicitly designed to be collaborative, such as single-player video games, can support emergent local collaborations such as a group of friends hanging around a video game console giving advice, sharing tips, and cheering each other on (Stevens et al. 2008). If collaboration can be productive even in these designs and environments, we can imagine the wide range of ways that collaborative learning can be supported in, around, and across contemporary games and virtual worlds, both small and massive in scale, across physical and digital realms with networked computational possibilities—all of which are currently available. This technological landscape presents an expansive scope for designing and studying collaborative learning at the cutting edge of our theories and computational capabilities in CSCL. Thus far, much existing research regarding games and virtual worlds in CSCL has focused on smaller, more localized aspects of collaboration within games and virtual worlds—for instance, on small groups playing an educational game (Kafai and Peppler 2011). Even when the play takes place in larger virtual worlds or massive multiplayer online games with hundreds or thousands of people collaborating at scale (see Chen et al.
this volume), studies have tended to focus on small groups within the larger social gaming world (e.g., Clarke-Midura and Dede 2010) or a single player in relation to a broader social context (e.g., Steinkuehler 2006). Added to this, analytical lenses in CSCL mostly center on a single social space of play such as a classroom or home, even though many collaborations around games extend beyond the game/world itself to a variety of spaces where players share knowledge and socialize. Finally, while many studies focus on the intentionally structured elements of collaboration created by researchers or game designers, fewer studies consider the types of collaborations that emerge unexpectedly and synergistically among players in and around these game systems, in both sanctioned and unsanctioned ways. The technological, theoretical, and methodological capabilities to design, study, and theorize about collaboration in games and virtual worlds have grown substantially in recent years (see Wise et al. this volume). These developments challenge CSCL research to expand our understanding of what is possible and leverageable for learning and collaboration. Before moving forward, we pause for a brief discussion of how we define learning in this chapter, as well as how we relate it to the practice of collaboration. As Sfard (1998) has argued, many academic and everyday discourses tend to represent learning through the metaphor of acquisition, be it the development of knowledge, skills, dispositions, or other outcomes deemed desirable by educators
and institutions. This chapter, however, focuses on a metaphor of learning as participation and emphasizes processes of socialization, mentorship, participation, and social practice (e.g., Lave and Wenger 1991; Rogoff 2003; Sandell 2007). While Sfard (1998) argues that these metaphors for learning are not meant to be mutually exclusive of one another, this chapter will frame learning through the lens of participation. Thus, we will focus on the learning-as-doing that occurs through processes of collaboration—working together to accomplish shared or mutually beneficial goals. In this chapter, our goal is to broaden the ways in which we conceptualize the possibilities of computer-supported collaborative learning in video games and virtual worlds. We argue for broadly and imaginatively expanding the scales, contexts, and directionality of collaborative learning, considering each of these three areas in turn and providing vignettes of research that break the traditional analytical bounds we set on ourselves in considering what collaboration is, where it takes place, and how to study it in video games and virtual worlds. In the discussion, we turn to next steps for CSCL in addressing issues of scale, access, and methods that capture the richness and diversity of computer-supported collaborative learning in video games and virtual worlds.
2 History and Development: From Small to Massive Scales of Collaboration

Collaboration has always played a key role in understanding gaming and learning. In one of the first studies to examine social motivation in gaming, Malone and Lepper (1987) identified critical features that to this day impact game play and performance, including cooperation and competition, task structures that foster individual or interdependent play, and the importance of recognition by others. Early personal computers and consoles encouraged mostly individual game play, providing only a keyboard or a controller as input devices, but even then players engaged in turn taking, thus inviting others to join in the game. Furthermore, we should not forget that much early game playing took place in public social spaces (Kiesler and Sproull 1985), when game consoles were located in pool halls and arcades, allowing players to compete with each other by having their high scores listed on scoreboards. These social game-play settings have continued to the present day in cybercafes, student dormitories, classrooms, and even households where people play video games together. Whether engaging in single-player, multiplayer, or massive multiplayer games, players sit together sharing in-game experiences, providing support and tips for advancement, and generally furnishing resources for collaborative learning across physical and digital spaces (Lindtner et al. 2008; Stevens et al. 2008). This draws us to consider the multiple scales, and later contexts, of collaborative learning in video game play.
Across physical and virtual contexts, collaboration can be understood at varying levels of scale, from more intimate shared experiences to massively organized movements. For this reason, we first consider collaborative learning at many scales of size, from shared single-player experiences to massively multiplayer coordinated efforts, even pushing the bounds of what constitutes the “game” or “virtual world” itself. As prior research in CSCL has demonstrated, small scales are still highly relevant for collaborative learning in games. For instance, research examining the at-home gaming practices of youth demonstrated meaningful experiences of “shared” play and interactions between participation communities that include family members and friends, even when only one player was actively involved with hands on a controller (Stevens et al. 2008). In another instance, Barab et al. (2007) demonstrated how pairing groups of students who played through an educational science game called Quest Atlantis yielded rich opportunities for both knowledge transfer and collaborative play. Similarly, playing team-based games can generate a positive affect that mediates learning (Brom et al. 2016). Working in teams with designed, interdependent roles can support learning and a sense of belonging, as in an augmented reality game where students took on interdependent roles while scientifically investigating environmental disasters (Squire and Klopfer 2007). Yet small, predetermined groups and their group practices (Medina and Stahl this volume) are only one aspect of collaborative learning and knowledge building in games and virtual worlds. Midsize groupings occur in many areas of games and virtual worlds, defined in-game or out-of-game depending on the situation. For instance, interdependent roles can become much more complex in commercially developed virtual worlds like massively multiplayer online games—with distinct forms of group awareness and practice (see Buder et al.
this volume)—where players with different capabilities work together interdependently to overcome challenges. Chen’s (2012) research, for example, illustrated the intricate dynamics and distributed cognition in a group of 40 dedicated, expert players in World of Warcraft (Blizzard) who coordinated their activities, adapted their play to changing technological situations and game-based rules, and negotiated roles and responsibilities for sophisticated challenges. Groups may also be more dispersed than a questing/raiding group or a small group within a virtual fantasy adventure setting. Whole classrooms or after-school clubs of students may participate together in productive ways. Fields and Kafai’s (2009) research on tweens in the virtual world of Whyville demonstrated that kids collaborated diffusively in sharing “secret” (unposted) knowledge about gameplay in ways that involved observing, overhearing, and overseeing, in the club and online, with club members and with players they knew only in the virtual world itself. Further, the affordances of shared play may differ depending on the technology structure of the game space and the organizational design of the in-person space. As Steinkuehler et al. (2012) demonstrated, the interaction between the type of game and virtual social spaces and in-person educational supports (formal and informal) shapes the overall affordances for collaborative learning with games. Finally, beyond their obvious quantitative differences, games and virtual worlds with massive populations allow for qualitatively different types of collaboration and
Perspectives on Scales, Contexts, and Directionality of Collaborations. . .
collaborative tasks. Some games crowdsource problem-solving, as with the online puzzle game Foldit, which challenges players to fold virtual models of proteins into their natural structures. The game was initially developed through a collaboration between the University of Washington’s Center for Game Science and Department of Biochemistry as a way to explore a more distributed and collective model of problem-solving in the sciences. Players collaboratively learned with and from each other as they made their strategies visible in posting solutions and discussing them on chat and game forums. Designers took the most productive player strategies and integrated them into the system to make those available to all players. Thus, the overall findings about protein-folding solutions were collectively developed across players, designers, and the iteratively designed gaming system in ways that outperformed computational folding algorithms (Cooper et al. 2010). Other collective intelligence efforts have stemmed from augmented reality games designed to take place across virtual and physical settings around the world, as with McGonigal’s I Love Bees (McGonigal 2008), where players pooled knowledge to interpret extremely distributed clues (e.g., a phone call to one place on Earth at a particular time). Massive populations of players can facilitate the exploration and simulation of real-life problems such as discovering new scientific knowledge, facilitating the collective intelligence of a crowd, or considering real-world social problems that are collective in nature, such as infectious disease, environmental challenges, or political issues. These studies begin to raise questions about the scales of collaborative learning and play in video games and virtual worlds. In CSCL research, we must consider a wide range of scales, from small to massive, as well as multiple scales of collaborative learning in games. 
Individuals or small groups often work within a massive game and/or a midsize social context out-of-game (e.g., classroom, friendship group). Each scale of collaboration is important to understanding and designing for collaborative learning, and looking at multiple scales together may provide deeper understandings. Below we provide one example of collaboration designed for a massive scale of participation as players experienced and fought against an epidemic in the virtual world of Whyville.
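The Foldit loop described above—players post solutions, designers identify the most productive strategies and fold them back into the system for all players—can be sketched at its simplest as score aggregation over submitted strategies. This is an illustrative sketch only; the data, strategy names, and scoring below are invented, not Foldit’s actual pipeline.

```python
from collections import defaultdict

def top_strategies(submissions, k=3):
    """Rank player strategies by average solution score, mimicking how
    designers might surface the most productive crowdsourced strategies
    to make available to all players."""
    totals = defaultdict(lambda: [0.0, 0])  # strategy -> [score sum, count]
    for _player, strategy, score in submissions:
        totals[strategy][0] += score
        totals[strategy][1] += 1
    averages = {s: total / n for s, (total, n) in totals.items()}
    return sorted(averages, key=averages.get, reverse=True)[:k]

# invented submissions: (player, strategy, folding score)
subs = [("p1", "rebuild-loop", 9100), ("p2", "rebuild-loop", 9300),
        ("p3", "shake-sidechains", 8700), ("p4", "global-wiggle", 9500)]
print(top_strategies(subs, k=2))  # → ['global-wiggle', 'rebuild-loop']
```

The design point is simply that aggregation over many independent submissions is what lets the collective outperform any single contributor.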
2.1 Vignette: The Great Dragon Swooping Cough—Massive In-World Collaboration
In Winter 2015–2016, a new epidemic swept through the virtual world of Whyville. Boils and scales erupted on skin, coughs and “rawrs” interrupted conversations, and citizens spouting fire from their mouths rolled haphazardly around the screen (see Fig. 1). Sick citizens could not work and received no salary during the three worst days of the infection, known as the Dragon Swooping Cough. Whyvillians throughout the world began changing their behaviors in order to prevent catching it themselves. What resulted was a massive collaborative effort to stop the spread of the epidemic. Just when citizens thought they had thwarted the epidemic, a new,
D. Fields et al.
Fig. 1 Left: Pop-up information about the Dragon Swooping Cough available on the homepage of Whyville. Right: The Dragon Swooping Cough design: Scales, cough, washing hands (“wash”), and knowledge sharing about the virus and preventative measures
mutated version of the virus emerged, though this time research allowed for vaccines and more sophisticated preventative measures against the disease. The Dragon Swooping Cough is an example of a “serious” event within a virtual world, where the designers sought to engage a youth population with a virtual epidemic that required a massive population to simulate the spread of an infectious disease, and mass collaboration in order to resolve it (Fields et al. 2017). In this case, the “game” took place within a virtual world, Whyville (Numedeon 1999), where millions of children and youth log in each month. As a virtual epidemic, it leveraged the availability of a massive number of participants so that the outbreak could realistically impact a population where infection levels were dependent on various vectors in conjunction with social dynamics (Kafai and Fefferman 2010). It also mirrored real-life efforts to combat widespread infectious diseases in that it took a massive effort to contain the disease: a large proportion of the population needed vaccines in order to create “herd immunity” protection. As an event in a virtual world, the Dragon Swooping Cough promoted a diffuse type of massive collaboration while allowing players to safely engage with the virtual epidemic. Most players strongly disliked the disease and either avoided it (leaving the virtual world for the duration of the outbreak) or sought to prevent and cure it. To that end, players purchased and shared a variety of more or less successful preventative treatments and vaccines. For instance, washing hands or using skin lotion were highly visible preventative measures that players could observe and quickly imitate themselves to prevent contracting the disease. Further, during the first rampage of the disease, players manifested a wide range of naive conceptions about which preventative measures were useful and in what ways. 
For instance, masks did not prevent a healthy person from contracting the disease but lowered the likelihood that a sick person wearing a mask would infect others. Over time many players corrected each
other about such appropriate uses of preventative measures, and during the second manifestation of the disease, players demonstrated much more accurate use of them. Finally, a highly successful fundraising drive collected millions of clams (Whyville’s currency) toward research on a vaccine for the disease, with hundreds of players subsequently purchasing, using, and sharing information about the vaccines (see Fig. 1). In this sense, one could argue that the population of Whyville collaboratively discussed and learned new in-world behaviors that helped to prevent the spread of infectious disease. The benefit of the virtual game was twofold: youth actively experienced, learned about, and fought the disease in a “safe” way while researchers studied emotional engagement and activity in ways that could help disease prevention in real life (Fields et al. 2017). The Dragon Swooping Cough provides one example of how games can harness massive populations in diffusive collaborative learning efforts.
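The herd-immunity dynamic at the heart of this vignette can be illustrated with a toy SIR (susceptible–infected–recovered) model. This is our sketch, not the actual Whyville infection engine; the population size, R0, and recovery rate below are assumed values chosen for illustration.

```python
def attack_rate(population=10_000, vaccinated_fraction=0.0,
                r0=3.0, days=365):
    """Toy deterministic SIR model of a virtual epidemic.

    Returns the fraction of the total population ever infected,
    starting from a single infected avatar.
    """
    gamma = 0.2                # recovery rate: mean of 5 sick days
    beta = r0 * gamma          # transmission rate implied by R0
    s = population * (1 - vaccinated_fraction) - 1   # susceptible
    i, infected_ever = 1.0, 1.0
    for _ in range(days):      # one Euler step per simulated day
        new_infections = beta * i * s / population
        s -= new_infections
        i += new_infections - gamma * i
        infected_ever += new_infections
    return infected_ever / population

# Below the herd-immunity threshold (1 - 1/R0 ≈ 67% for R0 = 3)
# the outbreak sweeps the population; above it, the outbreak fizzles.
print(attack_rate(vaccinated_fraction=0.0))   # large outbreak
print(attack_rate(vaccinated_fraction=0.7))   # contained
```

Running the two cases shows why a large proportion of Whyvillians had to be vaccinated before the second wave could be contained.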
3 State of the Art

Building on our discussion of scales of collaboration, in the following section we explore the roles of multiple contexts of collaboration in games and virtual worlds.
3.1 Contexts of Collaboration: From In-Game to Around, Across, and Between Games
Collaborative learning rarely happens exclusively within a game. Indeed, a key to understanding collaboration in video games and virtual worlds lies in understanding the distinction between what Gee (Gee 2003; Gee and Hayes 2010) refers to as small “g” games and big “G” infrastructures around those games (Steinkuehler 2006). To elaborate, small “g” games refer to the bounded experiences of players within game worlds; they are self-contained and finite, pre-optimized to introduce, cover, or reinforce learning, and well-suited for learning in a safe, simulated, and structured environment. In contrast, big “G” interactions take place outside of the game, across contexts and communities, acting as extensions of the original design, and providing the game a larger life and world impact. Big “G” game infrastructures are open-ended and seamlessly integrate the small “g” games within broader, flexible “metagame” experiences that foster user-driven extensions and adaptations in support of real-world goals and outcomes. Metagaming activities are the social practices that take place around a game, like the cybercafes, gaming arcades, or home spaces discussed in the introduction of this chapter. Consider the context of playing a videogame at home while others critique or encourage (see Stevens et al. 2008); or having schoolchildren play a simulation game at home and school while sharing tips, challenges, and mods (revised player-generated versions or challenges) through
phone networks and in-person conversations at school (see Squire 2011). These collaborative practices are integral to playing and experiencing a game but take place around, rather than “in,” games and virtual worlds. Such social practices are not new; they were previously supported by print magazines published by game developers to share information and shortcuts about the game. Increasingly, these social activities are initiated by players themselves, hosted both in person (as described above) and in virtual spaces, providing a compelling example of affinity groups (Gee 2004) that form around like-minded interests. Affinity spaces around games can include site-sponsored or player-created game forums, fan-based writing communities, modding groups, and cheat sites, among others (Hayes and Duncan 2012). In these spaces, players collaborate to participate in knowledge creation, share encouragement, post artifacts they have created, and even contribute mods (edits) of games. One of the first studies to examine game forums for collective knowledge analyzed hundreds of postings by players on the official World of Warcraft forums, including spreadsheets with information about outcomes, theorycrafting and discussion between participants, and mathematical formulas for the damage and healing of skills (Steinkuehler and Duncan 2008). Aside from documenting the content of these forums, Steinkuehler and Duncan also categorized the ways that postings reflected scientific inquiry practice and found that players on the forums engaged in constructing arguments using evaluative epistemologies in much the same way that scientists would in their own fields. 
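The theorycrafting described above amounts to collective reverse-engineering: players pool trial logs and fit models to recover a game’s hidden formulas. A minimal sketch of such a fit, for a hypothetical game in which damage is assumed proportional to attack power (the formula, numbers, and function names are all invented for illustration, not drawn from World of Warcraft):

```python
def fit_damage_coefficient(trials):
    """Least-squares fit (through the origin) of c in the hypothesized
    formula damage = c * attack_power, from pooled player trial logs."""
    numerator = sum(ap * dmg for ap, dmg in trials)
    denominator = sum(ap * ap for ap, _ in trials)
    return numerator / denominator

# invented community-reported (attack_power, observed_damage) pairs
trials = [(100, 251), (150, 374), (200, 498), (250, 627)]
print(round(fit_damage_coefficient(trials), 2))  # → 2.5
```

The epistemic move Steinkuehler and Duncan document is exactly this: noisy individual observations become a shared, testable model once the community aggregates them.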
Beyond official forums, another genre of collaboration outside of games is cheat sites (Consalvo 2007; Jørgensen and Karlsen 2019; Salen and Zimmerman 2006) where players share cheats or shortcuts, passwords, easter eggs (secret surprises planted by game designers), and other information that can help players advance through the game. Even younger players have these sites for their games, as documented in analysis of the Whyville community in and around that virtual world (Fields and Kafai 2010b). In this instance, players developed internet sites and, more recently, YouTube videos where they not only shared information about how to play in Whyville but also debated the pros and cons of cheating both within the game and in the wider world (Kafai et al. 2019). Contributing to affinity spaces can go beyond sharing knowledge to posting user-generated content such as game modifications (mods for short) or new games themselves, a growing phenomenon that validates players not just as users but as creators (Fields and Grimes 2018). Some commercial games make their platforms editable by users, facilitating entire cultural scenes where users create, share, remix, critique, and generally celebrate the game. For instance, the creators of Little Big Planet (Media Molecule 2011) host player–creators who have contributed over 8 million user-generated game levels, not including additional player-made objects, stories, and related content (Grimes 2015). This phenomenon also includes the creation of independent games on platforms designed for learning, such as Scratch, GameStar Mechanic, or even Minecraft (Mojang 2009). As we describe in the vignette below, communities around these user-created mods and games provide eager audiences who playtest games, share constructive feedback, and encourage
user designers. All of this points to the need to include multiple contexts—playing in, interacting around, and making games—as part of a game ecology to promote and interpret collaborative learning more fully.
3.2 Vignette: Collaboration Through Videos and Comments in Minecraft YouTube Communities
This case examines the forms of learning and engagement that take place within the comment sections of Minecraft tutorial videos posted on YouTube (Walker et al. 2019). YouTube is a popular website for game players and hosts tutorials, “Let’s Play”¹ videos, game reviews, and content remixes such as fan art and songs. In this study, the research team conducted a scaled search of YouTube tutorial videos that involved popular in-game activities. Results yielded a data set of more than 546,000 unique comments, which were systematically sampled and qualitatively analyzed to assess forms of user participation and collaboration. Various types of collaboration took place as content viewers publicly engaged content producers to provide feedback about content usefulness, elicit technical assistance, or describe in-game reflections about experiences during play. This points to the video tutorials as not just static content but as part of an interactive and dialogic effort between players posting, watching, and interacting around the videos in order to learn. Based on these analyses, researchers showed that Minecraft players leveraged the big “G” infrastructure that YouTube provides to participate in meta-game cultures that emerged from play including, but not limited to, learning and relationship building through dialogue. Specifically, researchers found that comments were primarily used to participate in conversations around game content, reference in-game activities reflected in video content, and/or engage content authors directly. These findings reveal that big “G” infrastructures in themselves can foster unique cultures and practices that are distinct from game environments and which cultivate distinctive forms of engagement toward learning. These phenomena are not exclusive to YouTube, and interactions like these are a common component of the big “G” architecture that Gee (2003) describes. 
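The sampling step in a study of this size can be illustrated with a short sketch. The original paper does not publish its pipeline; the code below shows generic systematic sampling, with the corpus contents, function name, and sample size all assumed for illustration.

```python
def systematic_sample(items, sample_size):
    """Systematic sampling: walk the ordered population with a fixed
    step so the sample spans the whole dataset, a common way to thin
    very large comment corpora before qualitative coding."""
    if sample_size <= 0 or not items:
        return []
    step = max(1, len(items) // sample_size)
    return items[::step][:sample_size]

# e.g., thin a corpus the size of the study's (546,000 comments)
# down to 1,000 comments for hand-coding
corpus = [f"comment_{i}" for i in range(546_000)]
sample = systematic_sample(corpus, 1_000)
print(len(sample))  # → 1000
```

Unlike simple random sampling, the fixed step guarantees coverage across the whole collection period, which matters when comment practices drift over time.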
As interaction data from big “G” environments become easier to obtain and analyze, researchers will be able to better understand the nuance and detail of the collaborative and participatory exchanges that players engage in. Yet challenges to analyzing massive game-play or game-creation communities remain, such as identifying and assessing unique forms of engagement at scale when forms of dialect and behaviors are not readily elucidated through typical approaches to language processing. This includes context- and content-specific uses of images (e.g., emojis, memes, and GIFs), behaviors (e.g., trolling), and language (e.g., slang, intentional misspellings, and technical vernacular). Methodologies for understanding collaborative learning practices in and between gaming and metagaming infrastructures remain an area for growth and development.

¹ “Let’s Play” videos are multipart videos where the video creator plays through a game while offering commentary and thoughts on the game experience.
3.3 Directionality: Designed and Emergent Forms of Collaboration
Beyond issues of scale and context, another important dimension of collaborations in video games and virtual worlds is the issue of directionality—that is, understanding the degree to which collaborations reflect the intended designs of the developers of games and virtual worlds, as compared to collaborations arising from players or communities themselves. This definition of directionality draws from Woodcock and Johnson’s (2018) contrasting of “gamification-from-above” as an authoritative and managerial strategy, and “gamification-from-below” as a kind of player-driven resistance to structures of labor control. In related discussions of interactivity within games, Salen and Zimmerman (2004) discuss several dimensions through which we might understand interactivity in games—including explicit interactivity with the designed features of games, and meta-interactivity, or cultural participation with practices that can extend beyond designed game features themselves. Building on this framework, this chapter offers an additional dimension of collaboration across a spectrum of (developer) designed and (participant-driven) emergent experiences. Representing one end of this spectrum of directionality is a predominant form of collaboration in virtual worlds in general, and digital games in particular: explicitly designed by developers through player roles or tasks that require collaboration within the game toward a goal (Salen and Zimmerman 2004). Designing for collaboration can include requiring a team to solve or fight through a task, developing character roles with interdependent abilities that lead players to form groups in-game, and creating events and tasks too big for individual players to solve. 
Praised by game enthusiasts and educational researchers alike, these in-game mechanics have often been identified as a key affordance of virtual worlds for supporting valuable social practices applicable to many everyday life experiences (Nardi 2010; Steinkuehler 2007). On the other side of the spectrum are more emergent, participant-driven experiences of collaboration between players or participants in virtual worlds. This might include in-world collaborations unplanned by designers. For instance, Whyville developers created “projectiles” that players could throw at each other in the world, such as footballs (for playing catch) or maggots (for creating a “ewww” factor). Whyvillians took these designed projectiles and found new uses for them, including flirtations, elaborate mudball fights, and even social policing for inappropriate talk (Fields and Kafai 2010a; Kafai and Searle 2010). Emergent participation may also go outside of the virtual worlds or games themselves, as in the case of affinity spaces and metagaming infrastructures discussed earlier. Scholars of game
studies have theorized beyond-the-game experiences of cultural participation that sometimes contrast with developers’ designed experiences of in-game collaboration (Gee 2003; Salen and Zimmerman 2004). These may include engaging in metagaming practices, in which players use their knowledge of the world beyond the game or virtual world to influence their engagement within the game (Boluk and LeMieux 2017). Consider, for example, the phenomenon of “theorycrafting” (Paul 2011), the mathematical analysis of game mechanics by players and player communities, typically for the purpose of optimizing play experiences or maximizing chances of success. While individual game players have made their theorycrafting efforts public via platforms such as YouTube (similar to the Minecraft vignette above), other player communities have demonstrated collaborative practices of theorycrafting by crowdsourcing efforts to test various game mechanics and report the results on a publicly accessible wiki page (Tran 2018). And while such experiences can initially feel removed from more formal educational contexts, the work of the Twitter-based community of math teachers unified under the label #iteachmath has offered examples of mathematics pedagogy that take on similarly emergent characteristics (see Willet and Reimer 2018). Finally, recent research also suggests a third category of experiences that fall somewhere between the two poles of this spectrum—building on game features that are designed-for-emergence (Holmes 2015). These include tools, mechanics, or other designed features meant to support collaborations both within and beyond games and virtual worlds. The multiplayer online battle arena game DOTA2 (Valve 2013) exemplifies this in the form of a “Coach Mode” built into the game’s online platform. Using this feature, novice players could connect with more veteran players via screen-sharing, audio and text chat, and even digital annotations. 
In this way, pedagogical relationships were mediated by the platform in a personalized way while leveraging the affordances of a virtual world where players could connect beyond the bounds of physical spaces. Interestingly, the genre of Multiplayer Online Battle Arena (MOBA) games, under which DOTA2 is categorized, was arguably created and popularized by player–creator collaborations using level-modification tools provided in the game Warcraft III (Blizzard 2002), providing another example of the outcomes of designed-for-emergence collaborations. While the tools provided in these examples were certainly designed and sanctioned by developers, the nature of the collaborations that emerged from them was highly dependent on, and driven by, the intentions of players and player communities themselves. The vignette that follows centers on player–creator communities within the game The Sims (Electronic Arts 2000). While The Sims itself is designed as a single-player experience, this vignette illustrates the more emergent nature of some of the collaborations of player communities around and beyond the game.
3.4 Vignette: Player–Creator Communities in The Sims
This case focuses on game creation and modification (“modding”) practices around games and virtual worlds as an example of emergent collaboration in these spaces. In their exploration of women and twenty-first-century gaming, Gee and Hayes (2010) describe two large communities of player–creators who develop in-game content for The Sims: The Sims Resource (TSR) and Mod The Sims 2 (MTS2). Beyond simple repositories for the uploading and downloading of virtual resources, these collaborative online spaces offer interested participants a myriad of online tutorials (also designed and maintained by other players), as well as opportunities for mentorship from more experienced creators. Gee and Hayes go on to describe the experiences of particular individuals who have participated in and benefited from these collaborations. These include the story of Jade, a teenage girl from a rural and working-class family in the Midwest, who learned to use complex graphic design and 3D modeling software through her participation in these communities of creative practice (Lave and Wenger 1991)—as well as her own passionate, affinity-driven interest in the game itself. By engaging with these communities of player–creators online and in person through an after-school club, Jade moved from being a more peripheral participant in these communities to a community-respected expert in her own right. Tabby Lou, a 61-year-old retiree introduced to The Sims by her adult daughter, was another participant in these online communities. Motivated initially by her interest in having a particular item to place in the virtual “house” inhabited by her virtual Sims characters, Tabby Lou connected with Sims-focused online communities such as TSR and MTS2. 
Through mentorship, practice, and the exchange of feedback with others, she eventually developed a reputation as an internationally renowned designer of Sims-based content and was often approached with requests from other players interested in commissioning her expertise. The idea of “crediting” others in uploaded designs is a common social practice within these communities; Tabby Lou’s credits have included community members who have requested her help, experienced creators who have helped support her learning, and even the designers of the software she uses to create her own Sims content (Gee and Hayes 2010). While not all participants in the affinity spaces surrounding The Sims attain these levels of expertise and renown, Tabby Lou’s crediting practices exemplify how the contributions of all members in the community, no matter how peripheral, can be interpreted as important collaborations. As these examples suggest, collaborations can arise from more participant-driven and emergent experiences as well as from developer-designed collaborative features or game mechanics. Indeed, the intersections of emergent experiences, along with features designed-for-emergence, can provide additional avenues for further research to expand our understanding of computer-supported collaborative learning, within and beyond video games and virtual worlds.
4 The Future

In this chapter, we have argued for an expanded consideration of collaboration in and around video games and virtual worlds. In particular, we have examined how exploring collaboration in these environments across the dimensions of scale, contexts, and directionality helps to reveal insights and opportunities for deepening our understandings of CSCL in the twenty-first century. We now consider the diversity of collaborations observed in gaming and virtual play and what this means for future research in CSCL. We organize this discussion around two key considerations CSCL researchers may face in studying collaboration across these dimensions: what counts as collaboration and how to study it across multiple scales and contexts.
4.1 Pushing the Boundaries of Design for Collaboration
The educational design of video games and virtual worlds requires an expansion of what we consider collaboration. Designing for pairs, teams, and specific roles is a strong start, but does not reflect the rich, “in the wild” collaborations occurring in and around many games and virtual worlds, especially in commercially created settings. As learning designers, we need to facilitate both collaborations that are tight and cohesive (e.g., team-based tasks, prescribed roles) and those that are loose and diffusive (e.g., emergent peer pedagogy, knowledge diffusion, mid-scale and massive-scale collaborations). Each scale of collaboration affords opportunities for learning in games and virtual worlds that should be carefully considered for design or investigation. Small-scale collaborations might provide more intimate and familiar settings in which individual players can coordinate learning with each other, while large-scale collaborations might require players to develop more agency in reaching out to others. We also need to consider designing for multiple environments of play. This means attending not just to collaboration in-game but also designing for learner participation around the game, whether through in-person designs (in classrooms, clubs, libraries, or at home) or in digital spaces like wikis and social networking forums, game-sharing sites, and even fanfiction, storytelling, and other fan-based sites. Intentionally providing (or using existing) developer tools for players could offer a means for engaging youth in designing their own mods. Even where the funds for providing digital tools for development are limited, encouraging non-digital mods, like playing a game within non-digital constraints on play (Gee and Hayes 2010; Kafai and Fields 2013), can stimulate creative forms of collaboration and knowledge sharing around games. 
One issue this diversity of collaborations within and around games brings up is that digitally based collaborations in the wild frequently require a game with a healthy and active big “G” ecosystem. Without an active user-base thinking about
and discussing the game being studied, there will be little collaboration outside of the game to observe. Unfortunately, many educational games struggle to develop these types of robust big “G” ecosystems. Goldiez and Angelopoulou (2016) identify three impediments to successful serious games: culture, validity, and business models. While educational games frequently excel at developing validity—ensuring that the game environment does what it is intended to do—this development often comes at the expense of a coherent business model and a culture of exchange between decision makers for the game and the users playing it. For those reasons, much current research has focused on studying collaboration and learning through existing commercial games, such as Whyville and World of Warcraft, and websites, such as Twitter and YouTube. These environments afford researchers the ability to study these phenomena in contexts with existing big “G” environments, using data collected from thousands of users or more. While designed serious games are useful for studying particular behaviors or cognitions around collaboration and learning, commercial spaces often allow for the collection of robust observational data. That said, the ability of learning scientists and system designers to create serious games for the study of learning is vital. How, then, can learning game developers support and curate long-lasting, active, participatory collaborative ecosystems within their games that mirror those of commercial products?
4.2 Analyzing Across Multiple Scales and Contexts
One step toward a more sophisticated analysis of collaborative learning in games and virtual worlds is to expand analytical frames across multiple scales and/or contexts of play. These might include studying small-group collaborations within massive-scale trends, researching the relationship between an individual’s in-game experiences alongside their in-person collaborative learning, or considering small, intimate interactions within a massive game or affinity space. This also means considering the role of more peripheral participation in collaborative spaces, for instance, lurkers or viewers who use information shared in a space but do not contribute either information or commentary. This may require methods that do not rely solely on click data collected at the logfile level, as well as the development of methodologies that combine logged behavior with information collected from observations or self-reports. While the ubiquity of computers in educational and leisure contexts has given rise to multiple new modalities of data for analysis of emergent and structured collaborative processes, these data are not always easy to obtain, and collaborative practices are not always readily observable. For instance, while data mining can make collaboration and learning in massive online game playing and metagaming visible, it is not always feasible for researchers to gain access to data that capture these behaviors. In commercial spaces, data are an expensive commodity and, for that reason, are rarely made freely accessible for use by third parties. The existence of application programming interfaces (APIs) can make some of this data available and
actionable (Slater and González Canché 2018). Designing for data exhaust capture (Steinkuehler and Squire 2016) in the early stages of game development can also support these analytical techniques. This leaves many questions for researchers of CSCL to grapple with: How can we employ data collection and data-sharing practices that give us rich contextual information on player behavior while preserving anonymity and privacy? How can we effectively combine the types of methods required to analyze collaboration at multiple scales, in multiple contexts?
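One partial answer to the anonymity question raised above is pseudonymization of player identifiers before data sharing. The sketch below is illustrative only: it uses keyed hashing with a project-held secret so that behavior remains linkable within a dataset without exposing account names; the identifiers and log records are invented, and real deployments would need further safeguards (e.g., against re-identification from behavior itself).

```python
import hashlib
import hmac
import secrets

# Project-held secret kept out of the shared dataset; destroying it
# prevents pseudonyms from being reversed or linked across studies.
SALT = secrets.token_bytes(16)

def pseudonymize(player_id: str) -> str:
    """Replace a player identifier with a stable pseudonym so that a
    player's actions can still be linked within one dataset without
    exposing the account name."""
    return hmac.new(SALT, player_id.encode(), hashlib.sha256).hexdigest()[:12]

# invented event log: (player, event type, content)
log = [("xXDragonXx", "chat", "anyone selling scales?"),
       ("xXDragonXx", "trade", "bought vaccine"),
       ("whyville_fan", "chat", "wash your hands!")]
shared = [(pseudonymize(pid), event, text) for pid, event, text in log]

# the same player maps to the same pseudonym, so within-dataset
# linkage (and thus collaboration analysis) is preserved
assert shared[0][0] == shared[1][0] and shared[0][0] != shared[2][0]
```

Keyed hashing (rather than a plain hash) matters here: without the secret salt, short account names could be recovered by brute force.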
4.3 Conclusions
Computer-supported collaborative learning is part of the very core of what makes video games and virtual worlds engaging and stimulating. In this chapter, we have broadened the concepts of what counts as collaboration in games and virtual worlds, particularly in ways that take advantage of the populations, trends, and tools available in game development and that attend to emergent collaborations occurring in and around games. While these trends may be uniquely exhibited in video games and virtual worlds, they push CSCL research in new directions and should influence how we conceptualize, design for, and analyze collaborative learning in and across contexts at multiple scales.
References

Barab, S., Dodge, T., Thomas, M. K., Jackson, C., & Tuzun, H. (2007). Our designs and the social agendas they carry. Journal of the Learning Sciences, 16(2), 263–305.
Boluk, S., & LeMieux, P. (Eds.). (2017). Metagaming: Playing, competing, spectating, cheating, trading, making, and breaking videogames. Minneapolis, MN: University of Minnesota Press.
Brom, C., Šisler, V., Slussareff, M., Selmbacherová, T., & Hlávka, Z. (2016). You like it, you learn it: Affectivity and learning in competitive social role play gaming. International Journal of Computer-Supported Collaborative Learning, 11(3), 313–348.
Buder, J., Bodemer, D., & Ogata, H. (this volume). Group awareness. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Chen, B., Håklev, S., & Rosé, C. P. (this volume). Collaborative learning at scale. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Chen, M. (2012). Leet noobs: The life and death of an expert player group in World of Warcraft. New York, NY: Peter Lang.
Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research on Technology in Education, 42(3), 309–328.
Consalvo, M. (2007). Cheating. Cambridge, MA: The MIT Press.
Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., et al. (2010). Predicting protein structures with a multiplayer online game. Nature, 466(7307), 756–760.
386
D. Fields et al.
Fields, D. A., & Grimes, S. M. (2018). Pockets of freedom, but mostly constraints: Emerging trends in children's DIY media platforms. In Children, media and creativity: The international clearinghouse on children (pp. 159–172). Sweden: Nordicom.
Fields, D. A., & Kafai, Y. B. (2009). A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world. International Journal of Computer-Supported Collaborative Learning, 4(1), 47–68.
Fields, D. A., & Kafai, Y. B. (2010a). Knowing and throwing mudballs, hearts, pies, and flowers: A connective ethnography of gaming practices. Games and Culture, 5(1), 88–115.
Fields, D. A., & Kafai, Y. B. (2010b). Stealing from grandma or generating knowledge? Contestations and effects of cheating in Whyville. Games and Culture, 5(1), 64–87.
Fields, D. A., Kafai, Y. B., Giang, M. T., Fefferman, N., & Wong, J. (2017). Plagues and people: Engineering player participation and prevention in a virtual epidemic. In FDG'17, International conference on the foundations of digital games, Hyannis, MA, USA, August 14–17, 2017. New York, NY: ACM, Article 29.
Gee, J., & Hayes, E. R. (2010). Women and gaming: The Sims and 21st century learning. New York, NY: Palgrave Macmillan.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. New York, NY: Palgrave Macmillan.
Gee, J. P. (2004). Situated language and learning: A critique of traditional schooling. New York, NY: Routledge.
Goldiez, B. F., & Angelopoulou, A. (2016). Serious games: Creating an ecosystem for success. In 2016 8th International conference on games and virtual worlds for serious applications (VS-GAMES) (pp. 1–7). IEEE.
Grimes, S. M. (2015). Little big scene: Making and playing culture in Media Molecule's LittleBigPlanet. Cultural Studies, 29(3), 379–400.
Hayes, E. R., & Duncan, S. C. (2012). Learning in video game affinity spaces. New literacies and digital epistemologies (Vol. 51). New York, NY: Peter Lang.
Holmes, J. (2015). Distributed teaching and learning systems in Dota 2. Well Played, 4(2), 92–111.
Jørgensen, K., & Karlsen, F. (Eds.). (2019). Transgression in games and play. Cambridge, MA: The MIT Press.
Kafai, Y., & Fefferman, N. (2010). Virtual epidemics as learning laboratories in virtual worlds. Journal of Virtual Worlds Research, 3(2).
Kafai, Y. B., & Fields, D. A. (2013). Connected play: Tweens in a virtual world. Cambridge, MA: The MIT Press.
Kafai, Y. B., Fields, D. A., & Ellis, E. (2019). The ethics of play and participation in a tween virtual world: Cheating practices and perspectives in the Whyville community. Cognitive Development, 49(1), 33–42.
Kafai, Y. B., & Peppler, K. A. (2011). Beyond small groups: New opportunities for research in computer-supported collective learning. In H. Spada, G. Stahl, N. Miyake, & N. Law (Eds.), Connecting computer-supported collaborative learning to policy and practices: CSCL2011 community events proceedings, Vol. 1. Long papers (pp. 17–24). Hong Kong, China: International Society of the Learning Sciences.
Kafai, Y. B., & Searle, K. A. (2010). Safeguarding play in virtual worlds: Designs and perspectives on tween player participation in community management. International Journal of Learning and Media, 2(4), 31–42.
Kiesler, S., & Sproull, L. (1985). Pool halls, chips and war games: Women in the culture of computing. Psychology of Women Quarterly, 9, 451–462.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.
Lindtner, S., Nardi, B., Wang, Y., Mainwaring, S., Jing, H., & Liang, W. (2008). A hybrid cultural ecology: World of Warcraft in China. In Proceedings of the 2008 ACM conference on computer supported cooperative work (pp. 371–382). ACM.
Malone, T. W., & Lepper, M. R. (1987). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. Snow & M. J. Farr (Eds.), Aptitude, learning, and instruction volume 3: Conative and affective process analyses (pp. 223–253). Hillsdale, NJ: Lawrence Erlbaum.
McGonigal, J. (2008). Why I love bees: A case study in collective intelligence gaming. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 199–228). Cambridge, MA: MIT Press.
Medina, R., & Stahl, G. (this volume). Analysis of group practices. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Mojang. (2009). Cross-platform. Stockholm, Sweden: Mojang Synergies AB.
Nardi, B. (2010). My life as a night elf priest: An anthropological account of World of Warcraft. Ann Arbor, MI: University of Michigan Press.
Numedeon. (1999). Browser-enabled. Pasadena, CA: Numedeon, Inc.
Paul, C. A. (2011). Optimizing play: How theorycraft changes gameplay and design. Game Studies, 11(2). Retrieved from http://gamestudies.org/1102/articles/paul.
Rogoff, B. (2003). The cultural nature of human development. Oxford: Oxford University Press.
Salen, K., & Zimmerman, E. (2006). The game design reader. Cambridge, MA: The MIT Press.
Salen, K. Z., & Zimmerman, E. (2004). Rules of play: Game design fundamentals. Cambridge, MA: The MIT Press.
Sandell, O. (2007). Learning as acquisition or learning as participation? Thinking Classroom, 8(1), 19.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Slater, S., & González Canché, M. S. (2018). Using social network analysis to examine player interactions in EvE Online. In J. Kalir (Ed.), Proceedings of the 2018 connected learning summit (pp. 265–274).
Squire, K. (2011). Video games and learning: Teaching and participatory culture in the digital age. New York, NY: Teachers College Press.
Squire, K., & Klopfer, E. (2007). Augmented reality simulations on handheld computers. Journal of the Learning Sciences, 16(3), 371–413.
Steinkuehler, C. (2007). Massively multiplayer online games & education: An outline of research. In C. Chinn, G. Erkins, & S. Puntambekar (Eds.), Proceedings of the seventh conference of computer-supported collaborative learning (pp. 675–685). New Brunswick, NJ: Rutgers University.
Steinkuehler, C., Alagoz, E., King, E., & Martin, C. (2012). A cross case analysis of two out-of-school programs based on virtual worlds. International Journal of Gaming and Computer-Mediated Simulations (IJGCMS), 4(1), 25–54.
Steinkuehler, C., & Duncan, S. (2008). Scientific habits of mind in virtual worlds. Journal of Science Education Technology, 17(6), 530–543.
Steinkuehler, C., & Squire, K. (2016). Videogames and learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 377–396). New York, NY: Cambridge University Press.
Steinkuehler, C. A. (2006). Massively multiplayer online video gaming as participation in a discourse. Mind, Culture, and Activity, 13(1), 38–52.
Stevens, R., Satwicz, T., & McCarthy, L. (2008). In-game, in-room, in-world: Reconnecting video game play to the rest of kids' lives. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 41–66). Cambridge, MA: MIT Press.
Tran, K. M. (2018). Distributed teaching and learning in Pokémon Go. Unpublished doctoral dissertation. Tempe, AZ: Arizona State University.
Walker, J. T., Slater, S., & Kafai, Y. B. (2019). A scaled analysis of how Minecraft gamers leverage an online video platform to participate and collaborate. In Proceedings of the 13th international conference on computer-supported collaborative learning (Vol. 1, pp. 440–447). Lyon, France: International Society of the Learning Sciences.
Willet, B. S., & Reimer, P. N. (2018). The career you save may be your own: Exploring the mathtwitterblogosphere as a community of practice. Paper presented at Society for Information Technology in Education Conference 2018, Washington, D.C., United States.
Wise, A., Knight, S., & Buckingham-Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Woodcock, J., & Johnson, M. R. (2018). Gamification: What it is, and how to fight it. The Sociological Review, 66(3), 542–558.
Further Readings

Chen, M. (2012). Leet noobs: The life and death of an expert player group in World of Warcraft. New York, NY: Peter Lang. Chen explores in great detail a group of 40 game players as they join, play, adjust, adapt, and eventually separate in the massive multiplayer online role-playing game, World of Warcraft. In his ethnographic account, he demonstrates how the group's expertise was distributed across roles, tools, responsibilities, and a network of actors. This book has implications for analyzing and supporting long-term collaboration in both formal and informal spaces.

Fields, D. A., & Kafai, Y. B. (2009). A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world. International Journal of Computer-Supported Collaborative Learning, 4(1), 47–68. This paper describes how an insider gaming practice spread across a group of tween players ages 9–12 years in an after-school gaming club that simultaneously participated in a virtual world called Whyville.net. Analysis showed that club members took advantage of the different spaces, people, and times available to them across Whyville, the club, and even home and classroom spaces, showing that knowledge diffusion as a type of collaboration can take place across spaces.

Gee, J. P., & Hayes, E. R. (2011). Language and learning in the digital age. London: Routledge. In this accessible and engaging volume, Gee and Hayes address the ways that digital media are transforming language and learning in the twenty-first century. Highlighting examples from games such as The Sims, virtual worlds like Second Life, and passionate affinity spaces all over the internet, their work serves as an inspiration to all those seeking to design for and research collaboration in computer-supported contexts.

Nardi, B. (2010). My life as a night elf priest: An anthropological account of World of Warcraft. Ann Arbor, MI: University of Michigan Press. In this book, freely available for online reading, ethnographer Bonnie Nardi compiles more than 3 years of participatory research in World of Warcraft play and culture in the United States and China into a study of player behavior and activity. Her work both illustrates and complicates the nature of collaboration across virtual and social worlds, and across cultural and national boundaries.

Thomas, D., & Brown, J. S. (2009). The play of imagination: Extending the literary mind. In After cognitivism (pp. 99–120). Dordrecht: Springer. In this article, Thomas and Brown argue that the experiences behind games, and virtual worlds in particular, are built on a fundamentally different model of learning that can inform the development of future pedagogical practice. Highlighting key aspects of massive-multiplayer online games—including avatar-based gameplay, peer-to-peer networking, and emergent collective action—they draw out important lessons that games like World of Warcraft can teach us about collaboration in the twenty-first century.
Immersive Environments: Learning in Augmented + Virtual Reality

Noel Enyedy and Susan Yoon
“We lose ourselves in what we read, only to return to ourselves, transformed and part of a more expansive world.” Judith Butler
Abstract We articulate a framework for CSCL researchers to consider when designing activities for learning and participation in immersive environments. We begin with a description of the framework, which includes five types of immersive qualities, four of which have been written about in previous frameworks: Sensory, Actional, Narrative, and Social. We believe the fifth quality, Emancipatory immersion, advances the field by providing a sense of purpose for learning and participation that is larger than oneself and that can transform identities and conceptions of empowerment and action. We then use the SANSE framework to investigate synergies among these qualities across four genres of immersive environments that CSCL researchers have investigated: (1) Headset VR; (2) Desktop Virtual Worlds; (3) Space-Based AR; and (4) Place-Based AR. We intend to contribute to dialogue about the kinds of designs for immersive tools the field can and should be thinking about to improve educational conditions.

Keywords Collaborative learning · Emancipatory learning · Virtual reality · Augmented reality · Mixed reality
N. Enyedy (*)
Department of Teaching and Learning, Peabody College, Vanderbilt University, Nashville, USA
e-mail: [email protected]

S. Yoon
Graduate School of Education, University of Pennsylvania, Philadelphia, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_21
390
N. Enyedy and S. Yoon
1 Definitions and Scope

Immersive environments are those that invoke a subjective experience that metaphorically submerges us in an alternate reality. The metaphor of immersion references our experience of feeling present in a context different from the one we are physically in, and/or our deep engagement with that context. This subjective experience does not require digital technologies: a good book, song, or game of chess can lead to immersion. Relatively recent advances in digital technologies, however, allow designers to try to evoke immersion primarily through immediate sensory experience. This has led the field to begin to conceptualize immersive environments in terms of the technologies and less so in terms of the subjective experience. We believe this is a mistake. In doing so, we background the more cerebral aspects of the experience that can work hand-in-hand with or against one's sensory experience to evoke immersion. In this chapter, we offer a framework that both summarizes the field's efforts in developing immersive technologies and argues for design categories that incorporate sensory and subjective experiences. The goal of this framework is to help designers of computer-supported collaborative learning (CSCL) environments think about how to construct a synergistic set of immersive qualities that work together to create the type of immersive experience best suited to one's learning and participation goals. While we do not offer an exhaustive overview of the field, we do illustrate this framework with a number of cases that highlight the tensions and the opportunities for synergy in the intellectual space we have mapped out. The most obvious form of immersion in computer environments is sensory immersion. Sensory immersion refers to attempts to use one's senses (primarily visual and auditory stimuli, but haptic feedback is being explored as well) to create a suspension of disbelief (Bowman and McMahan 2007; Nam et al. 2008).
The goal is to make the subject feel as if they were there. It is often fidelity with the real world that is assumed to be the most important factor—low-resolution graphics, a low frame rate, or the lack of tactile feedback are often assumed to be the cause of a failure to suspend disbelief. Dede (2009) points out, however, that one does not passively sense the world but acts on it. Actional immersion is facilitated by the types of actions that the subject is allowed to do or is prevented from doing, for example, picking up an axe in the virtual world and chopping wood. Game designers often take actional immersion a step further by relating it to forms of immersion that are tied to the larger task one is engaged in. To continue the example, there is most likely a reason you want to chop wood—perhaps to create shelter. Therefore, game designers suggest that actional immersion can be differentiated into two forms that address in-the-moment tactical and longer-term strategic activities. Tactical activities, what we will refer to in this chapter as immediate actional immersion, are related to the moment-by-moment physical acts and operations en route to reaching an immediate goal. Strategic activities, or what we will refer to as long-term actional immersion, involve a less visceral and more cognitive engagement. They are strategic in the sense that they involve meta-cognition, planning, observing, inference, and deduction. Both have implications for education.
While immediate immersion is tied closely to sensory immersion, long-term/strategic immersion is tied more to what is called narrative immersion. This is a type of immersion that depends exclusively on the semiotic content of the environment and how it engages one's imagination rather than the senses directly (Dede 2009). Narrative immersion is the type of immersion that happens when you read a good book. We can further break narrative immersion down into the dimensions of a good narrative, some of which are directly related to sensory immersion, but others of which are unique. First, there are the spatial aspects of a narrative: the sense of space and place that engages the subject. While aspects of spatial immersion overlap with sensory immersion, for example, the experience of being on a frozen tundra, there is an added level of immersion from the symbolism and cultural significance of the place (rather than the space). For example, that this tundra is in Siberia may create additional meaning for the audience. In addition to space and place, characters also encourage one's immersion in the narrative and invoke a strong emotional relationship to the choices and events in that character's arc. In many educational virtual reality (VR) and augmented reality (AR) environments, you take on the role of a character and interact with others. Finally, there is the temporal unfolding of the narrative—the plotline—that can engage the subject to make predictions and become emotionally invested in what happens next. While narrative immersion is an obvious aspect of books and film, it is also an important aspect of the immersion of many CSCL environments. Existing frameworks for immersion also touch on the importance of social immersion. Usually, this is conceived of in terms of social presence or copresence (Oh et al. 2018)—the degree to which other agents are available for interaction within the environment and the degree to which the interaction feels natural.
Most often this is thought of in terms of sensory feedback, and its value is measured in terms of the way that remote participants can interact with each other and with computer agents. However, we wish to extend social immersion beyond issues of copresence and its ties to sensory immersion. In terms of ties to actional and narrative immersion, it is not just what we can do as individuals but what we can do collaboratively and collectively that matters to our sense of being immersed in a social world and belonging to a group. This is where most of the existing frameworks for immersion stop. To these often discussed categories of immersion, we wish to add emancipatory immersion. A strong consideration that influences how we think about the research on immersive environments has to do with the kinds of learning goals for which we aim. Sensory, actional, narrative, and social immersion can provide multiple representational frames for making information and knowledge accessible (Ainsworth 2018). However, immersive activities can also ultimately serve to inspire and engage learners with a sense of something larger than themselves. One of the frontiers of immersive environments is the conjecture that immersion in another's identity, perspective, and/or narrative context can lead to greater empathy, attitudinal shifts, and new aspirations to take action and become part of a new community. That is, we believe that one of the greatest potentials for immersive environments is to promote learning and participation for social justice purposes. This could include enabling users to take
action to improve their local communities, developing community-oriented dispositions, or providing access to innovative instruction for underserved and underrepresented populations. By immersing ourselves in the experiences of others, and by seeing learning as becoming part of and immersing oneself in a community, we can develop a critical consciousness that promotes epistemological curiosity and taking ownership of one's own constructed world (Freire 1998). Emancipatory immersion is closely tied to narrative immersion and social immersion, but extends them both. We see emancipatory immersion as a tool for identity development, and as such it necessarily involves the stories we tell ourselves and our engagement with others in social practices (Hand and Gresalfi 2015). Hence emancipatory immersion's close ties to narrative and social immersion. However, emancipatory immersion extends these ideas by focusing on immersion in a joint enterprise with others that is larger than oneself—a joint enterprise that provides relevance to our experience, actions, and aspirations. For example, an important aspiration might be transforming the lived experiences of those in society who may not have access to the social capital that is afforded through normative participation in other kinds of immersion. Emancipatory immersion, we suggest, should be an equal consideration for all CSCL designers of immersive environments. It is by binding the senses to a sense of purpose (e.g., toward equity and access for all) that this innovative technical genre will achieve its potential as a transformative educational tool.
1.1 Applications of the Sensory, Actional, Narrative, Social, Emancipatory (SANSE) Framework in CSCL Immersive Environments
In the next sections, we use the five characteristics of immersion to illustrate and evaluate how researchers in the CSCL field have investigated and designed learning and instructional activities with particular immersive technologies. We have selected four cases that encompass genres of socio-technological platforms that have gained some traction in the educational and learning sciences literature: (a) Headset Virtual Reality; (b) Desktop Virtual Worlds; (c) Space-Based/Tabletop Augmented Reality; and (d) Place-Based/GPS Augmented Reality. For each case, we discuss how the research instantiates the SANSE framework in more or less synergistic ways (Fig. 1). As space is limited, we feature one main program of research as a test case to operationalize the framework, but acknowledge that there are many CSCL researchers who have contributed to our current understanding of immersive learning environments. Each case provides fodder to discuss both our strengths and potential areas in need of improvement as a research field to realize the full range of SANSE affordances. It is not our intent to argue that one type of immersive environment is better than another or to critique the cases presented as having neglected an aspect of the SANSE framework. We recognize that any design faces a series of tradeoffs when attempting
Fig. 1 Overview of types of immersion and cases
to simultaneously create a productive learning environment and investigate how the body and senses are and can be engaged during learning. Indeed, we recognize the value in deeply pursuing some aspects of immersion now to better understand how to integrate them in the future, and further recognize that even when focusing on one aspect of immersion, designers in fact consider and design for all of them. Instead, our goal is to offer a framework that designers can use to look for synergies and contradictions across forms of immersion that will aid their explicit discussions of how to reach their design goals and manage tradeoffs.

We also hope that by looking across and between types of immersion, this framework will prompt new conjectures about how learning and cognition span multiple levels of analysis and can engage people both as individuals and as social, purposeful human beings. To encourage this discussion, we focus our analysis of cases on pairings of types of immersion we see productively existing already in these cases. What follows is an organization of ideas that privileges space-based and place-based environments. This is intentional for two reasons. First, we see these as emerging areas of the CSCL community, and thus we wanted to explore their potential synergy with forms of immersion. Second, space-based and place-based environments have particular affordances for emancipatory immersion and therefore allowed us to clarify and illustrate this part of our framework.
394
N. Enyedy and S. Yoon
2 History and Development

2.1 Headset VR
The first VR headset, referred to as the "Telesphere Mask," was patented in 1960. In the 1970s, VR made headway into education via aviation and military training. Although still out of reach for most classrooms, headset VR has become more accessible in K-12 education through commercial ventures that provide low-fidelity immersion, such as Google Cardboard and Google Expeditions. The gold standard, however, remains high-fidelity sensory immersion. Given this focus on promoting a sense of "being there," it is not surprising that educational VR has primarily focused on history, where students can explore firsthand a distant place and/or a distant time, such as the Titanic, ancient Rome, or the Seven Wonders of the World. For the purposes of this chapter, we will use a VR headset environment on ocean acidification developed by the Virtual Human Interaction Lab at Stanford University (Markowitz et al. 2018). In this VR environment, you begin by observing cars emitting CO2, which you watch travel to and dissolve into the ocean. You are then asked to dive at two reefs—one healthy reef and one reef damaged by acidification from too much CO2. While at the reefs, you are asked to measure the health of the reef by counting snails, and you are told that the absence of snails (and other shellfish) will lead to the collapse of the entire food web for that ecosystem. At the end of the session, there is a call to action by the narrator of the VR world and a number of suggestions of things you could do to help the environment.
2.1.1 Synergies with Sensory and Immediate Actional Immersion
There are clearly strong synergies at work with multiple dimensions of sensory immersion and with the tactical actions one can take on the reef. There is the high-fidelity telepresence of the visual experience combined with the unmediated nature of navigating/swimming through the spatial landscape of the underwater reef. Self-presence is accentuated by the visibility of one's virtual hands, which can take actions such as counting snails and inspecting the health of snails.
2.1.2 Synergies with Long-Term Actional and Narrative Immersion
The degree to which there are synergies with long-term actional immersion and narrative immersion is less clear. There is an effort to create a narrative—from the car's emissions to the ocean, to the future of the ocean. However, this narrative unfolds in a linear way and is not contingent on strategic decisions made by the participant. One is prompted to move between spaces at fairly fixed points. One interesting aspect of the narrative is the way in which the designers chose to break with the visual fidelity of the virtual world in order to represent CO2 as molecules
one can visually see and manipulate. This allows participants to directly interact with an invisible part of the narrative—specifically, the causal mechanism of ocean acidification—and may have contributed to the significant knowledge gains found in one study (Markowitz et al. 2018).
2.1.3 Synergies with Social and Emancipatory Immersion
Social immersion is notably absent in the ocean acidification environment. While it is common in headset VR to focus on an individual's immersion with little attention to collaboration, recent developments are beginning to allow multiple individuals to enter the same virtual world. A notable exception in the current research is related to haptic feedback, which often requires multiple participants to collaboratively interact with objects in the virtual world (e.g., play virtual air hockey; Nam et al. 2008). Still, most headset VR in education conceptualizes learning as an individual cognitive act. The lack of social immersion may be consequential in undermining VR's effectiveness at emancipatory immersion. Arguably, the ocean acidification environment's ultimate goal is to change students' attitudes toward environmental issues and encourage people to change their behavior to curb greenhouse emissions and global warming. However, the empirical evidence that the empathy garnered from walking a mile in another's shoes leads to attitude change or behavioral change is mixed and modest (Markowitz et al. 2018). Sociocultural theories of identity (Hand and Gresalfi 2015; Nasir 2002) suggest that empathy produced from taking a new perspective or identity needs to be matched with a change in the goals, values, and practices that accompany membership in a community, and that one's changing participation in a community is linked to social immersion.
2.2 Desktop Virtual Worlds
Learning scientists and CSCL researchers have also been active in investigating and constructing virtual worlds to support learning (e.g., Barab et al. 2007; Dede 2009; Metcalf et al. 2011). The field has notably investigated virtual worlds through MUVEs, or multiuser virtual environments: curated computational environments that allow users to interact with each other and with artifacts through a desktop computer in imaginary 3D worlds. Virtual worlds have sometimes been characterized as mixed-reality activities (e.g., Feldon and Kafai 2008) because learners simultaneously reside in both the online and real worlds. However, for our analysis, we consider only the interactions that happen in the online environment. One line of research in this genre that has been active for over a decade is the work by Kafai and colleagues, who have studied the Whyville virtual world platform (Kafai 2008; Fields et al. this volume). Whyville is a virtual learning environment designed to engage children in activities that teach them a range of domain-specific knowledge and skills, such as marine biology and data collection,
N. Enyedy and S. Yoon
through games and daily community activities. But it is even more than a site for learning. There are activities designed to address the daily vicissitudes of life, such as visiting the “Wellness Center” for information on how to get one’s emotions back on track. Whyville was founded in 1999 by scientists at Caltech and has reached over 7 million users between the ages of 8 and 15 worldwide (see whyville.net for more information).
2.2.1 Synergies with Sensory and Immediate Actional Immersion
In the category of sensory immersion, there are several activities in Whyville that enable the feeling of being in the world. Kafai and her colleagues (Kafai et al. 2007) found that activities supporting the construction of one’s avatar’s face were the most popular among Whyvillians. While the interpretation of sensory immersion here is slightly different from what can be experienced in head-mounted or haptic technologies (Dede 2009), it is clear that the visual display of one’s self, or one’s imaginary representation, is an important motivator. Immediate actional immersion is tied to sensory immersion in that one navigates the virtual world by moving an avatar through many locations, each with a unique set of activities.
2.2.2 Synergies with Long-Term Actional and Narrative Immersion
Participants make strategic choices about which locations to visit based on personal goals (e.g., to earn “clams,” the currency of the world; to engage in purely social interactions with virtual friends; or to solve social problems). The presence of this long-term immersion creates synergy with the narrative immersion of the virtual world. Narrative immersion, in the form of plots or story lines in virtual worlds, is an important feature that can influence behavior and learning. The Whypox outbreak documented in Kafai et al. (2007) showed that prior to the onset of the disease in the community, no one had visited the Whyville CDC (Center for Disease Control and Prevention). Once participants started showing symptoms of the disease (e.g., red dots on their avatars), visits to the CDC increased exponentially, as did their use, while there, of disease epidemic simulators that supported hypotheses about how the disease was being spread. From surveys after the outbreak, Kafai et al. (2007) found that a majority of participants (61.5%) said that they felt “bad” and also noted the similarity they saw between the Whypox experience and a real outbreak of infectious disease.
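Epidemic simulators of this kind are typically built on a compartmental model such as SIR (susceptible–infected–recovered). The sketch below is a minimal discrete-time version with hypothetical parameters; it illustrates the genre, not the actual Whyville simulator:

```python
# Discrete-time SIR model: each day, susceptible users catch the disease in
# proportion to their contact with infected users, and infected users recover.
def simulate_sir(n=1000, i0=5, beta=0.4, gamma=0.1, days=60):
    """Return daily (S, I, R) counts for a well-mixed population of size n."""
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / n   # contacts between S and I
        new_recoveries = gamma * i          # fraction of I recovering per day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"Infections peak around day {peak_day}")
```

Plotting the infected count over time produces the characteristic rise-and-fall curve that lets users test hypotheses about how changing contact or recovery rates would alter an outbreak.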
2.2.3 Synergies with Social and Emancipatory Immersion
Social immersion for many desktop virtual worlds is among the most important motivators of participation. Second only to the focus on avatar appearance,
Immersive Environments: Learning in Augmented + Virtual Reality
Whyvillians spent the most time chatting with others in social spaces (Kafai et al. 2007). This kind of affinity group interaction in online environments has been written about by other CSCL scholars (e.g., Gee 2003; Ito et al. 2013) and may have the greatest influence on feelings of inclusion. Additionally, the Whypox narrative directly leveraged social immersion to motivate emancipatory immersion. Emancipatory immersion includes influencing one’s ability to take ownership and action to improve one’s lived experiences, and we see this type of change in attitudes and behaviors (at least within the virtual world itself). For example, during the Whypox outbreak, the authors noted that participants went to different virtual spaces in order to avoid getting the disease. They also reported feelings of ostracism among those who had contracted the disease, and thus a large number (54.8%) said that if Whypox came back they would isolate themselves. Kafai (2008) also discusses implications for helping students understand the real and implied dangers to others of irresponsible behavior when viruses spread. Emphasizing the value of moral and ethical lessons that can be learned in virtual worlds and then transferred to real-life scenarios can be time well spent in emancipatory activities.
3 State of the Art
3.1 Space-Based AR (Tabletop and Room-Based Immersion)
The third type of immersive environment we wish to explore is space-based AR. As opposed to place-based AR (see below), where participants use mobile, GPS-enabled AR technologies to explore a larger geographical place in the real world, space-based versions of AR enable users to interact with digital overlays on smaller-scale spaces. These can take the form of a large room as well as stationary physical tabletop devices. This kind of AR typically has a thin layer of sensory immersion and capitalizes on the hybridization of digital information and graphics superimposed on real-world phenomena. At the larger scale of space-based AR are systems that track students’ movement within a room, presenting them with digital information based on their location or providing the ability to interact with digital agents located in specific parts of the physical space. Room-sized space-based AR systems have been successful at enabling students to learn about complex biological systems such as pollination (Danish et al. 2018) and animal foraging (Moher et al. 2015); physical science concepts such as force and motion (Enyedy et al. 2012), states of matter (Enyedy et al. 2017), and earthquakes (Jaeger et al. 2016); and vocational education (Cuendet et al. 2013). Other CSCL scholars have designed embodied AR technological systems that simulate interaction with objects, such as celestial entities found in outer space (Lindgren and Johnson-Glenberg 2013). For our case study, we have chosen a series of space-based tabletop AR studies designed by Yoon and her colleagues that highlight opportunities for informal learning through AR tools. The Augmented Reality for Interpretive and Experiential Learning (ARIEL) project aimed to provide science museum visitors with
Fig. 2 Suite of ARIEL AR devices
experiences in learning scientific content that would: (1) help them understand the underlying scientific phenomena; (2) have them work in collaborative groups to develop theories about the scientific phenomena; and (3) support the informal museum experience while engaging in deeper learning (Yoon et al. 2012; Yoon et al. 2013). The research focused on the development of three digitally augmented devices that engaged visitors in learning about electrical circuits in a device called Be the Path; magnetic fields in a device called Magnetic Maps; and the Bernoulli principle in a device called Bernoulli Blower. Figure 2 shows configurations of the ARIEL AR devices on the museum floor. In Be the Path, visitors are asked to grasp two metal balls affixed to a table, where one ball is connected by a wire to a battery and the other to a light bulb. When the circuit is completed, the lit bulb triggers the projection of an animated flow of electricity on the visitor’s hands, arms, and shoulders, showing the complete loop and visualizing the flow of electricity through the completed circuit. If the visitor releases their hold on the spheres, the circuit is broken and the visualization instantly disappears. In Magnetic Maps, visitors can manipulate real bar magnets; the interaction is captured by a camera above, digitized, and simultaneously fed back in real time, with AR magnetic force field lines appearing around the magnets on the computer screen. As the magnets move, the magnetic field lines also move, showing the different patterns that emerge. In the Bernoulli Blower, a physical plastic ball floats in midair when it is caught between fast-moving air coming from a blower attached to the exhibit and slow-moving air in the room. The digital augmentation is produced on a screen that depicts the fast-moving (low-pressure) air and the slow-moving (high-pressure) air as arrows in relation to a real-time image of the physical plastic ball.
The fast-moving air is represented by arrows that point diagonally up and curve around the ball, while the slow-moving air from the room is depicted by shorter arrows that point toward the ball. Although the normal room air moves at a lower speed than the blown air, the room air exerts greater pressure on the ball and is therefore able to keep the ball floating in the stream of fast-moving air instead of blowing it away.
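The pressure difference the screen visualizes follows from Bernoulli’s equation, which states that p + ½ρv² is constant along a streamline, so faster air carries lower static pressure. A minimal sketch with illustrative numbers (the air density and speeds below are assumptions, not measurements from the exhibit):

```python
# Bernoulli's principle along a streamline: p + 0.5 * rho * v**2 is constant,
# so faster-moving air exerts lower static pressure than slower-moving air.
RHO_AIR = 1.2  # kg/m^3, approximate density of room-temperature air

def pressure_surplus(v_fast, v_slow, rho=RHO_AIR):
    """Extra static pressure (Pa) the slow stream exerts relative to the fast one."""
    return 0.5 * rho * (v_fast**2 - v_slow**2)

# Illustrative speeds (assumed): a ~15 m/s blower jet vs. nearly still room air.
dp = pressure_surplus(v_fast=15.0, v_slow=0.5)
print(f"Room air pushes on the ball with ~{dp:.0f} Pa more pressure")
```

With these assumed values the surplus is roughly 135 Pa, which is the inward push from the surrounding room air that keeps the ball trapped in the jet.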
3.1.1 Synergies with Sensory and Immediate Actional Immersion
All ARIEL devices encompassed a physical object (e.g., bar magnets), digital video projections that responded to the user’s movements (e.g., dynamically moving patterns of magnetic force fields), and haptic feedback connecting the user’s actions with the digital projections (e.g., feeling the magnetic repulsion while seeing the digital force fields repel). Yoon and Wang (2014) investigated how these sensory affordances supported participant learning and found responses in the following categories: visible (allowing users to see things that are normally invisible); dynamic (displaying the phenomenon in motion, showing changes over time); and details (providing scientific details of the phenomenon). Affordances were also found in two immediate actional categories: interactive (enabling the user to interact with the device) and scaffolding (providing structure that focuses the user’s attention on relevant information as they interact with the device).
3.1.2 Synergies with Long-Term Actional and Narrative Immersion
In the various ARIEL studies, long-term actional immersion can be understood as linked to narrative immersion in two ways. First, the labels on the devices posed brief questions to visitors, such as, for Be the Path, “What happens when you touched both metal spheres?” “When you touched only one?” “What happened to make the bulb light up?” and “How can you be the path?” (Yoon et al. 2012). The impact of labels in museums has been researched to some extent due to their importance in informal learning environments as scripts or scaffolds that can motivate actions (e.g., Gutwill 2006) and promote observational, inferential, and deductive cognitive processing. Indeed, the ARIEL project focused a great deal of attention on labels when designing for this immersive quality (Wang and Yoon 2013). These labels not only suggested actions but also created frameworks (or narratives) for how to interpret the results. Second, at a larger narrative time scale, museums are curated to offer cumulative learning activities as visitors move from one display or device to the next (Grinell 2006). Ideally, visitors will create a narrative of their visit and of the relations between exhibits to connect their experience to a metalevel understanding of a scientific phenomenon. For example, the Bernoulli Blower device currently sits in the section of the museum devoted to flight and aviation.
3.1.3 Synergies with Social and Emancipatory Immersion
A hallmark of the ARIEL sociotechnical design was the incorporation of knowledge-building scaffolds (Scardamalia and Bereiter 2014) to support collaboration in physically copresent small groups. Understanding how knowledge building, or deeper discursive interactions, could be incorporated into what are meant to be short informal activities was a major focus of research (Yoon et al. 2012, 2013; Yoon et al.
2018). Design features intended to encourage social immersion included access to a bank of theories generated by peers and instructions that required groups to come to consensus on prompts about their scientific understanding (e.g., “our theory is . . .”). Designed social scaffolds contributed to emancipatory immersion by amplifying the broader social orientation toward participation in informal environments. With their fluid, social, and participant-driven characteristics, informal learning environments stand in contrast to the often highly structured conditions of formal classrooms, a contrast that leads to notable differences in interest and motivation (Allen 2002). For example, instead of the motivation for learning being driven by an individual’s experience with a surprising phenomenon, in ARIEL one experienced the phenomenon as a group, and the social scaffolds encouraged and amplified the desire to figure “it” out together. The process, products, and relevance of the inquiry were collective and voluntary. One study within the ARIEL series revealed that when the sociotechnical environment is so scripted that participants’ experiences look too much like school, their interest and motivation decline (Yoon et al. 2013). Thus, we agree with Dede’s (2009) emphasis on attending to the situated nature of the immersive experience. But we suggest here that the reasons for doing so should serve emancipatory goals, that is, access to resources that some may not normally have, which can positively transform their learning experiences.
3.2 Place-Based AR (GPS-Enabled Immersion)
Finally, the last genre of immersive environments that we briefly examine through the SANSE framework is place-based immersion, also often referred to as location-based AR. Place-based immersion refers to a genre of GPS-enabled AR, audiovisual tours, and mapping tools available on mobile devices. In many ways, place-based immersion is the complement to headset VR. Whereas headset VR uses sensory immersion to transport you to a different place or time, place-based immersive platforms have only a thin layer of sensory immersion and instead use digital augmentation through a mobile device to layer a meaningful narrative on top of your presence in a place that is already geospatially significant. Much of the work in this area is in the form of curated tours or simulations where participants walk around a physical space (e.g., a park, a campus, or a city) but interact with virtual agents, objects, activities, and storylines. For example, Squire and Klopfer’s (2007) Environmental Detectives game had students investigate a virtual chemical spill by walking around the physical university, taking virtual measurements, and talking to virtual agents who provided important conceptual information that helped the students determine the severity of the spill and the steps needed to protect the watershed. These types of games have been used to explore history (Schrier 2007) and climate science (Squire and Jan 2007), often integrating multiple subjects into one game. For our case study, we have chosen to focus on a case of community mapping called Mobile City Science (MCS) (Taylor et al. 2019). We have chosen this case because it represents the extreme of narrative immersion (with very little sensory
immersion) and the most direct attempt to strive for emancipatory immersion. In Mobile City Science, students walk through their own neighborhood discovering neighborhood assets (e.g., libraries, tutoring centers) and neighborhood challenges (e.g., the lack of safe biking routes, banks, or fresh produce). Students describe and tag what and where these assets and challenges are, adding them to a computerized map and using their discoveries to generate arguments for civic change.
3.2.1 Synergies with Sensory and Immediate Actional Immersion
As noted above, there is very little in the way of sensory immersion in MCS. Information, images, and audio are triggered by one’s location, but since they are experienced through a phone or other handheld device, they have limited capacity to immerse one in a different context. More to the point, place-based immersion’s goal is to capitalize on the meaningful nature of where one is in physical reality and to layer that physical presence with a narrative of what once was, what could be, or what is there but not visible. Likewise, immediate actional immersion is limited to choosing which of the media assets available at one’s physical location to engage with, and to using this information to choose where to go next in physical space.
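A minimal sketch of how such location triggering might work: the device compares its GPS fix against geofenced points of interest and unlocks the media attached to any point within range. All names, coordinates, and radii below are hypothetical, not taken from the MCS platform:

```python
import math

# Hypothetical location-triggered media, in the spirit of place-based AR tools:
# each point of interest (POI) carries media that unlocks when the user's GPS
# position falls within its radius.
POIS = [
    {"name": "Public library", "lat": 47.6062, "lon": -122.3321,
     "radius_m": 50, "media": "oral_history_01.mp3"},
    {"name": "Former streetcar stop", "lat": 47.6070, "lon": -122.3300,
     "radius_m": 30, "media": "photo_1925.jpg"},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered_media(lat, lon):
    """Return the media assets unlocked at the user's current position."""
    return [p["media"] for p in POIS
            if haversine_m(lat, lon, p["lat"], p["lon"]) <= p["radius_m"]]

print(triggered_media(47.6062, -122.3321))  # standing at the library
```

Standing at the first point of interest unlocks only its audio file; the second site, roughly 180 m away, stays locked until the student walks there.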
3.2.2 Synergies with Long-Term Actional and Narrative Immersion
Place-based immersion has more synergy with long-term actional and narrative immersion. A sense of place is intimately tied to narratives grounded in that location. The MCS project engages with three distinct forms of place-based narrative. First, it allows students to engage with the history of the place they are occupying through historical photos and oral history audio files. Second, it allows students to map the assets that exist in the community today and to identify the issues currently threatening the community’s future, creating a narrative of the tensions that exist now. Finally, it asks youth to envision the future they want and create a narrative for tomorrow. Long-term, strategic actional immersion plays a role in all three narratives in that students in MCS are simultaneously consumers of a place-based tour and authors adding new assets to a digital map of their community. It is in the latter role, that of author of a new narrative bridging the past to today and to a vision for the future, that actional immersion and narrative immersion combine to create a powerful environment for learning.
3.2.3 Synergies with Social and Emancipatory Immersion
In MCS, social immersion is encouraged on multiple levels that together lead to emancipatory immersion. First, investigations of the neighborhood are conducted in
groups, with multiple participants (students, teachers, and other adult stakeholders/community members) physically copresent, accessing and discussing the import of the media presented on their mobile devices. Thus, like the tabletop case, the design begins with the gold standard of social immersion, face-to-face collaboration, and concentrates on scaffolding these interactions. Second, and perhaps more importantly, collective inquiry is directed not just at understanding but at becoming part of the community and organizing to effect change. Toward this end, students “counter-map” the neighborhood to produce evidence-based recommendations for action. Investigations of and in the neighborhood are followed up by presentations to educators, urban planners, politicians, and other youth. While the project is ongoing and empirical data are on the way, we believe that MCS’s ultimate value lies in the potential long-term changes in attitudes, action, and identity as participants come to see themselves as part of something larger than themselves.
4 The Future
Immersive environments help us lose ourselves in alternate realities. The educational value of this is that we can see new things, do new things, understand things in a new way (i.e., as part of a different narrative), and aspire to do new things or become someone new. New technologies are making immersion easier to invoke than ever before, but if we think in technocentric terms, we run the risk of solidifying boundaries between technologies and forms of immersion in unproductive ways. We have presented the SANSE framework and these case studies to open up the conversation and to help the field and designers think through the multiple ways and combinations of immersion that can be embodied in a single design.
References
Ainsworth, S. (2018). Multiple representations and multimedia learning. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), The international handbook of the learning sciences (pp. 96–105). New York, NY: Routledge.
Allen, S. (2002). Looking for learning in visitor talk: A methodological exploration. In G. Leinhardt, K. Crowley, & K. Knutson (Eds.), Learning conversations in museums (pp. 259–303). Mahwah, NJ: Lawrence Erlbaum Associates.
Barab, S., Sadler, T., Heiselt, C., Hickey, D., & Zuiker, S. (2007). Relating narrative, inquiry, and inscriptions: Supporting consequential play. Journal of Science Education and Technology, 16(1), 59–82.
Bowman, D. A., & McMahan, R. P. (2007). Virtual reality: How much immersion is enough? Computer, 40(7), 36–43. https://doi.org/10.1109/MC.2007.257.
Cuendet, S., Bonnard, Q., Do-Lenh, S., & Dillenbourg, P. (2013). Designing augmented reality for the classroom. Computers & Education, 68, 557–569.
Danish, J., Enyedy, N., Humburg, M., Saleh, A., Dahn, M., Lee, C., & Georgen, C. (2018). STEPBees and the role of collective embodiment in supporting learning within a system. In J. Kay & R. Luckin (Eds.), International conference of the learning sciences. Retrieved from https://repository.isls.org/bitstream/1/600/1/272.pdf
Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(5910), 66–69.
Enyedy, N., Danish, J., DeLiema, D., Saleh, A., Lee, C., Morris, N., & Illum, R. (2017). Social affordances of mixed reality learning environments: A case from the science through technology enhanced play project. In Proceedings of the 50th Hawaii international conference on system sciences. Honolulu, HI: University of Hawaii Press.
Enyedy, N., Danish, J. A., Delacruz, G., & Kumar, M. (2012). Learning physics through play in an augmented reality environment. International Journal of Computer-Supported Collaborative Learning, 7(3), 347–378.
Feldon, D., & Kafai, Y. (2008). Mixed methods for mixed reality: Understanding users’ avatar activities in virtual worlds. Educational Technology Research and Development, 56, 575–593.
Fields, D., Kafai, Y., Aguilera, E., Slater, S., & Walker, J. (this volume). Perspectives on scales, contexts and directionality of collaborations in and around virtual worlds and video games. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Freire, P. (1998). Pedagogy of freedom: Ethics, democracy and civic courage. Lanham, MD: Rowman & Littlefield.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. New York, NY: Palgrave Macmillan.
Grinell, S. (2006). In search of new audiences: Blockbusters and beyond. ASTC Dimensions, May/June, 1–4.
Gutwill, J. (2006). Labels for open-ended exhibits: Using questions and suggestions to motivate physical activity. Visitor Studies Today, 9, 1–9.
Hand, V., & Gresalfi, M. (2015). The joint accomplishment of identity. Educational Psychologist, 50(3), 190–203.
Ito, M., Gutiérrez, K., Livingstone, S., Penuel, W., Rhodes, J., Salen, K., et al. (2013). Connected learning: An agenda for research and design. Irvine, CA: Digital Media and Learning Research Hub.
Jaeger, A. J., Wiley, J., & Moher, T. (2016). Leveling the playing field: Grounding learning with embedded simulations in geoscience. Cognitive Research: Principles and Implications, 1(1), 23.
Kafai, Y. B. (2008). Understanding virtual epidemics: Children’s folk conceptions of a computer virus. Journal of Science Education and Technology, 17, 523–529.
Kafai, Y. B., Feldon, D., Fields, D., Giang, M., & Quintero, M. (2007). Life in the times of Whypox: A virtual epidemic as a community event. In C. Steinfeld, B. Pentland, M. Ackermann, & N. Contractor (Eds.), Proceedings of the third international conference on communities and technology (pp. 171–190). New York: Springer.
Lindgren, R., & Johnson-Glenberg, M. (2013). Emboldened by embodiment: Six precepts for research on embodied learning and mixed reality. Educational Researcher, 42(8), 445–452.
Markowitz, D. M., Laha, R., Perone, B. P., Pea, R. D., & Bailenson, J. N. (2018). Immersive virtual reality field trips facilitate learning about climate change. Frontiers in Psychology, 9.
Metcalf, S. J., Kamarainen, A., Tutwiler, M. S., Grotzer, T. A., & Dede, C. J. (2011). Ecosystem science learning via multi-user virtual environments. International Journal of Gaming and Computer-Mediated Simulations, 3(1), 86–90.
Moher, T., Slotta, J. D., Acosta, A., Cober, R., Dasgupta, C., Fong, C., et al. (2015). Knowledge construction in the instrumented classroom: Supporting student investigations of their physical learning environment. International Society of the Learning Sciences, Inc. [ISLS].
Nam, C. S., Shu, J., & Chung, D. (2008). The roles of sensory modalities in collaborative virtual environments (CVEs). Computers in Human Behavior, 24(4), 1404–1417.
Nasir, N. S. (2002). Identity, goals, and learning: Mathematics in cultural practice. Mathematical Thinking and Learning, 4, 213–248.
Oh, C. S., Bailenson, J. N., & Welch, G. F. (2018). A systematic review of social presence: Definition, antecedents, and implications. Frontiers in Robotics and AI, 5, 114. https://doi.org/10.3389/frobt.
Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 397–417). New York, NY: Cambridge University Press.
Schrier, K. (2007). Reliving the revolution: Designing augmented reality games to teach critical thinking. In D. Gibson, C. Aldrich, & M. Prensky (Eds.), Games and simulations in online learning: Research and development frameworks (pp. 250–270). IGI Global.
Squire, K., & Klopfer, E. (2007). Augmented reality simulations on handheld computers. The Journal of the Learning Sciences, 16(3), 371–413.
Squire, K. D., & Jan, M. (2007). Mad City mystery: Developing scientific argumentation skills with a place-based augmented reality game on handheld computers. Journal of Science Education and Technology, 16(1), 5–29.
Taylor, K. H., Silvis, D., Kalir, R., Negron, A., Cramer, C., Bell, A., & Riesland, E. (2019). Supporting public-facing education for youth: Spreading (not scaling) ways to learn data science with mobile and geospatial technologies. Contemporary Issues in Technology and Teacher Education, 19(3). Retrieved from https://www.citejournal.org/volume-19/issue-3-19/current-practice/supporting-public-facing-education-for-youth-spreading-not-scaling-ways-to-learn-data-science-with-mobile-and-geospatial-technologies
Wang, J., & Yoon, S. (2013). Scaffolding visitors’ learning through labels. Journal of Museum Education, 38(3), 320–332.
Yoon, S., Anderson, E., Elinich, K., Park, M., & Lin, J. (2018). How augmented reality, text-based, and collaborative scaffolds work synergistically to improve learning in a science museum. Research in Science and Technology Education, 36(3), 261–281.
Yoon, S., Elinich, K., Wang, J., Steinmeier, C., & Tucker, S. (2012). Using augmented reality and knowledge-building scaffolds to improve learning in a science museum. International Journal of Computer-Supported Collaborative Learning, 7(4), 519–541.
Yoon, S., Elinich, K., Wang, J., Van Schooneveld, J., & Anderson, E. (2013). Scaffolding informal learning in science museums: How much is too much? Science Education, 97(6), 848–877.
Yoon, S., & Wang, J. (2014). Making the invisible visible in science museums through augmented reality devices. TechTrends, 58(1), 49–55.
Further Readings
Dalgarno, B., & Lee, M. J. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32. This article explores the affordances for learning of different immersive environments based on their technological features.
Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(5910), 66–69. This article provides the basis for the SANSE framework, laying out the ways in which senses, action, and symbolism can work to create immersive environments.
Horn, M. (2018). Tangible interaction and cultural forms: Supporting learning in informal environments. Journal of the Learning Sciences, 27, 632–665. This article advances a form of immersion through tangible interactions that support investigations of cultural literacies by cueing cognitive, physical, and emotional resources in the individual.
Lindgren, R., & Johnson-Glenberg, M. (2013). Emboldened by embodiment: Six precepts for research on embodied learning and mixed reality. Educational Researcher, 42(8), 445–452. Although we did not discuss the notion of mixed reality in this chapter, a number of researchers have investigated immersion under this topic rather than under AR or VR. This article is an example of framing in mixed-reality research.
Oh, C. S., Bailenson, J. N., & Welch, G. F. (2018). A systematic review of social presence: Definition, antecedents, and implications. Frontiers in Robotics and AI, 5, 114. https://doi.org/10.3389/frobt. This article provides an extensive and systematic review of the research in the area of social presence.
Robots and Agents to Support Collaborative Learning
Sandra Y. Okita and Sherice N. Clarke
Abstract This chapter examines how social components of robot and agent technology, combined with learning theories and methodologies, can develop powerful learning partnerships. Exploring ways to leverage the affordances of technology as peers and learning tools can provide teachers with useful information to identify features and conditions for learning. This in turn can help design activities using pedagogical robots/agents to assist collaboration with and between students.
Keywords Robots · Pedagogical agents · Adaptive support · Tutoring systems
S. Y. Okita (*)
Mathematics, Science and Technology, Teachers College, Columbia University, New York, NY, USA
e-mail: [email protected]
S. N. Clarke
Education Studies, University of California San Diego, La Jolla, CA, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_22
1 Definitions and Scope
Because many broad definitions of collaborative learning exist, this chapter adopts the view of collaborative learning as a “. . . situation in which two or more people learn or attempt to learn something together” (Dillenbourg 1999, p. 1). This act of learning something together can relate to course content knowledge (e.g., mathematics, biology) or to acquiring skills to perform learning activities (e.g., problem-solving, reasoning skills, self-reflection). “Two or more people” can imply a pair, a small group, a class, a community, or any intermediate level, but much of the work introduced in this chapter involves collaborative learning between pairs (dyads) and small groups (triads). The chapter looks beyond content knowledge and examines the learning mechanisms that emerge from such collaborative processes and the role of robots and agents in supporting these processes. Collaborative
processes are often complex, with social and cognitive processes circulating and feeding into one another (Perret-Clermont et al. 1991). Studying and identifying the practices that research indicates are successful can inform teachers about the affordances of robots and agents for supporting collaborative learning, as well as about potential pedagogical risks when using these technologies. We argue that it is essential to understand both the possibilities and the limits of technology-rich collaborative learning environments in order to design instruction that can support learning. Practice needs to be well grounded in theory to facilitate the exploration of how or why things work (Gomez and Hentschke 2009) and to ensure that learning trajectories are clear and robust for the classroom. This chapter does not address broader social and institutional factors, such as leadership and social norms, which may appear in large collaborative groups.
2 History and Development: A Brief Background of Robots and Agents in Education
Applying technology to educational content and pedagogy is not new. People have had high hopes for technology restoring personalized instruction as far back as Pressey’s Testing Machine (Pressey 1932) and Skinner’s Teaching Machine (Skinner 1986). Expert systems have been successful in their intended domains, but were often evaluated unfairly because of the high expectations of the Turing Test (Hayes and Ford 1995). Interestingly, much of the “learning” focused on programmed instruction and machine learning, where robot intelligence and expert systems guided or modeled human problem-solving. Human behavioral models implemented into these systems improved the quality of social interactions between humans and machines (e.g., conversational agents, intelligent tutoring systems). As a result, technology was used to implement well-known teaching and tutoring strategies, but detecting the thought processes of children turned out to be quite difficult, as it involved human learning rather than machine learning. Early computerized automated instruction included teacher–student dialogues that asked questions to elicit responses. Frameworks such as Bloom’s taxonomy were used to pose questions that required higher-order thinking processes to answer (Stevens and Collins 1977). However, little evidence was found to support a correlation between higher-order questions and student achievement (Winne 1979). Technological tools still seem to fall short when dealing with unfamiliar content. The transition from technologically simple, single-minded artifacts to sociable, adaptable, and intelligent ones poses controversies and open questions. Some of these controversies are decades old but are still very much alive and well. Back when Skinner’s Teaching Machine was first introduced, there was a fear that teachers would be replaced by machines.
The role of the machines and the principles on which these teaching machines were based were misunderstood, which led to anxiety and high expectations that students would learn twice as fast. Today, we still
Robots and Agents to Support Collaborative Learning
409
see headlines in the media asking “Are robots going to replace teachers?” or “Are robots going to be smarter than humans?” Then as now, it appears that the role of machines and the principles on which these tools are based may continue to be misunderstood. Recent advancements in digital manipulatives and technological artifacts (e.g., programmable building bricks) have helped expand the range of concepts children can explore with different robotic systems and machine platforms (Resnick et al. 2000). Robotics is an integrative discipline that brings together basic math, science, applied engineering, and computational thinking. Preparing preservice teachers to teach STEM using robotics has been suggested as a promising way to improve students’ experience of, and attainment in, science and mathematics. Modern robotic construction kits provide learning environments in which children can use their hands to touch and build concrete objects using familiar materials such as gears, motors, sensors, and computer-generated interfaces to program their creations (Bers et al. 2002; Resnick and Kafai 1996). Cubetto, by Primo Toys, is one example, in which a wooden robot is designed to teach children basic principles of coding using a tangible programming language (Bers and Horn 2010). Influenced by the earlier work of Seymour Papert’s LOGO (Papert 1980) and Turtle Graphics, Cubetto uses a hands-on programming language to control a wooden robot that roams a checkerboard and completes clearly defined tasks. Students are rarely prompted to think about the social aspects of human–computer interaction (HCI), which is somewhat ironic, given the growing interest in the social effects of technology on human learning and behavior.
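To make the idea of tangible programming concrete, the following is an illustrative sketch (not Primo Toys’ actual software) of a Cubetto-style robot executing a fixed queue of instruction blocks on a checkerboard, in the spirit of LOGO’s turtle. The block names and grid representation are assumptions for illustration.

```python
# Illustrative sketch of Cubetto-style tangible programming: a robot
# executes a fixed sequence of instruction blocks (FORWARD, LEFT, RIGHT)
# on a checkerboard grid, much like LOGO's turtle.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W as (dx, dy)

def run_program(blocks, start=(0, 0), heading=0):
    """Execute instruction blocks and return the robot's final cell and heading."""
    x, y = start
    for block in blocks:
        if block == "FORWARD":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        elif block == "RIGHT":
            heading = (heading + 1) % 4
        elif block == "LEFT":
            heading = (heading - 1) % 4
        else:
            raise ValueError(f"unknown block: {block}")
    return (x, y), heading

# A child "programs" by laying blocks in order, then pressing go:
final_cell, final_heading = run_program(
    ["FORWARD", "FORWARD", "RIGHT", "FORWARD"])
print(final_cell)  # the robot ends two cells north, one cell east: (1, 2)
```

The point of the sketch is that the entire “program” is a visible, physical sequence the child can rearrange, with no social elements involved.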
Advancements in technology have enabled sensory technology to detect human behavior (e.g., physiological sensors, facial and voice recognition) and improve the interaction between humans and machines for collaborative tasks (e.g., building and moving objects). Another question that comes to mind is how agents and robots are similar and/or different from one another. One common feature between “Agents,” “Robots,” and “Collaborative learning” is the wide variety of usage of these terms in different fields (e.g., cognitive psychology, artificial intelligence, social sciences), and variation in the degree of function and capability. While some agents and robots may be equipped with elaborate skills (e.g., have goals and knowledge, make decisions), others may have elementary skills (e.g., grammatical parser agent) and work in large numbers. Dillenbourg (1999) describes agents as a functional unit inside a system. Agents can have different skills and vary in number, representation, goals, and knowledge. Agents do not have to be autonomous or intelligent to impact learning; interactions with low functional units can also bring about interesting phenomena. On the other hand, robots can be seen as a physical platform with a system full of functional units that direct its behavior. These functional units can consist of multiple agents programmed to control humanoid robots to interact with, and respond to, a precise sequence of stimuli or systematic patterns. Robots may have more functional units within their system than agents, as they are not confined to a computer screen and have a physical presence, can take action, and manipulate objects in the real world.
410
S. Y. Okita and S. N. Clarke
3 State of the Art

3.1 Robots and Agents as Collaborative Learning Partners
The collaborative partner in learning has expanded from a human peer to include a virtual representation (agent) or a robotic peer. Advances in robotics technology have shifted the focus from supporting humans in industrial productivity (e.g., industrial robots, mobile agents) to supporting humans at a more personal level (e.g., companion robots, personal agents). One reason for this expansion may be that many technological artifacts (e.g., humanoid robots, computer agents) now display biologically inspired human-like features and physical human behaviors that elicit social responses. Strong social metaphors enable students to share knowledge and build peer-like relations. Pedagogical agent programs can sustain contingent social dialogue over extended periods. The effects of socialness on learning are readily attributed to the timing and quality of information delivery, which computers can largely mimic and control in targeted ways (Kanda et al. 2004; Breazeal et al. 2016). Robots and agents are highly directable and can create ideal circumstances that enable new ways for students to reflect, reason, and learn. Collaborative learning with robots and agents is complex, as social metaphors are used to elicit engagement, and learning conditions are structured to induce cognitive processes in individuals and groups (Perret-Clermont et al. 1991). This area of research may best be described along two continuums: the level of social metaphor, and who the targeted learner is (i.e., self, self-other, or other). The amount of social interaction and verbal dialogue differs depending on where a system falls along each continuum. The next few sections take a closer look at the collaborative learning processes and learning mechanisms that emerge from these interactions. Fig. 1 attempts to position some of the work introduced in this chapter along these two continuums (i.e., social metaphor and target learner).
Although some of the research introduced may not directly involve physical robots, the arrangements can be generalized to human–agent interactions, as much of the research and development with robots begins with human-modeled computer systems and virtual agent simulations.
3.2 Social Metaphors
Pedagogical agents with human-like appearances can be categorized according to the extent to which they include representations of social metaphors. We use the term “representations” because they are not mental models; they simply use technology to mimic schemas presumed to be present in the cognition of the humans with whom they interact. We call those representations social metaphors. However, as mentioned in an earlier section, not all technologies put the emphasis on direct social
Fig. 1 Pedagogical robots and agents along social metaphor and target learner continuum
exchange with humans (e.g., industrial robots). Most fall somewhere in between, combining machine-like and human-like features. It is important to note that direct interactions between an artifact and a user can occur without any real social elements. Socially indifferent systems usually have no social functions, social interests, or social abilities as part of human–machine interactions (e.g., statistical applications, word processors, industrial robots). Early examples of socially indifferent machines that assisted learner performance were the Testing Machine (Pressey 1932) and the Teaching Machine (Skinner 1986) mentioned earlier. Skinner’s Teaching Machine taught arithmetic: a sequence of math problems appeared on top of a machine box and guided the learner through programmed instruction. Even though both the testing and teaching machines had no social elements, they were somewhat successful in teaching mathematics to elementary, secondary, and college students. Modern robotic construction kits may also fit into this category, as children build and interact with concrete objects with no social elements, but still learn how gears, motors, and sensors work.
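The logic of Skinner-style linear programmed instruction can be sketched in a few lines: frames are presented in a fixed sequence, the learner responds, and immediate feedback gates advancement. The arithmetic frames below are invented for illustration, not drawn from Skinner’s actual machine.

```python
# Minimal sketch of linear programmed instruction: fixed frame sequence,
# immediate feedback, and advancement only on a correct response.

FRAMES = [("3 + 4 = ?", "7"), ("7 + 5 = ?", "12"), ("12 - 9 = ?", "3")]

def run_session(answers):
    """Step through frames; re-present a frame until answered correctly.
    `answers` is an iterator standing in for the learner's keystrokes."""
    log = []
    for prompt, correct in FRAMES:
        while True:
            response = next(answers)
            ok = response == correct
            log.append((prompt, response, ok))
            if ok:          # immediate confirmation -> advance
                break       # otherwise the same frame is re-presented

    return log

log = run_session(iter(["7", "11", "12", "3"]))
print(sum(ok for _, _, ok in log))  # 3 correct responses (one frame retried)
```

Note that nothing in the loop models the learner socially; the machine is indifferent to everything except the correctness of the response, which is exactly what places it in the socially indifferent category.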
Socially implicit systems draw on social patterns of interaction without trying to lead the learner to presume that they (i.e., the system) think like humans. Computer tutors, for example, incorporate interaction patterns known to be effective for tutoring, but they usually have a command-line interface and very little, if any, visual representation of an animated tutor character (Pane et al. 2014). Anderson et al. (1995) developed the Cognitive Tutor, an intelligent tutoring system whose computational model represented student thinking and cognition; the tutor itself, however, appeared as disembodied text with no visual character (Anderson et al. 1985). Other systems build on explicit social metaphors of interaction and appearance that invite social interaction. Socially explicit systems consist of features that cue learners to think of social interaction, such as having an animated character interact with the student (Mayer and DaPra 2012) or embodied conversational agents engaging in literacy learning with children (Cassell et al. 2007). Socially explicit systems usually maximize social metaphors and perceptions of social presence to enable an affective social interaction to take place. For example, Honda’s humanoid robot exhibits human-like movements and appearance, but also includes implicit features from cognitive models that invite social interactions (Ng-Thow-Hing et al. 2009). Robots in the socially indistinguishable category utilize extensive human mimicry. The social metaphor at this level usually involves high-fidelity appearance and behavior. A good example is the work of Hiroshi Ishiguro (2007), who has been developing androids (i.e., realistic humanlike robots) he calls “geminoids” and “actroids” that look and behave (almost) human.
Mori’s work on the uncanny valley indicates that the challenge in future development is to achieve total human mimicry (Mori 1970). While androids are quite common in science fiction, material and design challenges still need to be resolved before they can become widely used in the real world. However, such ideas are starting to be tested in classrooms (Hashimoto et al. 2011).
3.3 Learner-Centered Interactions with and Between Pedagogical Agents
Social interaction has been found to be quite effective in peer tutoring (Roscoe and Chi 2007; Chi et al. 2001; Graesser et al. 1995), reciprocal teaching (Palincsar and Brown 1984), and behavior modeling (Anderson et al. 1995). Underlying these approaches is the view that individual cognition is shaped through social interactions and that verbal dialogue plays a special role in learning and cognition (Wertsch 1979). Such forms of collaborative learning are often seen as an optimal way to help people learn (Chi et al. 2001). While direct “in-person” human facilitation can indeed be effective, finding a peer learner who is a good match for learners with specific needs can be a challenge (Mandl and Ballstaedt 1982). Human tutors who do not have the
appropriate metacognitive abilities (e.g., unable to accurately monitor their pupils’ understanding), skills, and patience can negatively impact the tutee’s learning outcomes (Chi et al. 2004). One way to overcome the limitations of human peers is to involve computerized people (e.g., pedagogical agents and avatars) and/or computerized instruction (e.g., intelligent tutoring systems). Implementing teaching and tutoring strategies in technology has led to the development of pedagogical computer agents (Baylor 2007). Like a human peer, a computerized peer can have limitations. The human learner may often be constrained by what the computerized peer agent or environment can do in response. However, as with a human peer, observing a computer-controlled agent under peer-tutoring circumstances may trigger similar learning and reflection. The use of social metaphors and schemas makes interactions with humans more natural and motivating (Bailenson 2012; Baylor 2007). When social metaphors in technology are combined with empirically supported learning methodologies and dialogic instructional repertoires, strong collaborative learning with and between students can occur with pedagogical agents.
3.3.1 Individual Self-Learning
There are several processes at work when individuals engage in self-learning. Some of the mechanisms involve students monitoring their own understanding through self-reflection, monologic reasoning, and self-regulation. Technological artifacts (i.e., pedagogical agents and robots) can provide a safe environment for students to externalize their own thought processes onto an artifact to make their thoughts more accessible for personal reflection (Shneiderman 2007; Okita 2014; Schwartz et al. 2009). For example, the Teachable Agent system (Biswas, Leelawong, Schwartz, Vye, and TAG-V 2005; Schwartz et al. 2009) takes students’ vague mental conceptualizations of a topic area and produces more concrete representations using visualization tools (i.e., an electronic concept map called Betty’s Brain), which the pedagogical agent then interprets and explores. This allows learners to reflect on and structure their thoughts through social interactions with the agent, which in turn influences the development of metacognitive skills (Schwartz et al. 2009). While “collaborative dialogue with oneself” may sound puzzling, “conflict with oneself” may sound more familiar (Dillenbourg 1999). As explored by Mead (1934) and Vygotsky (1978), thinking can be viewed as an internalized dialogue with oneself (e.g., self-regulation, self-explanation, cognitive conflict). Self-explanation is a process whereby students explain to themselves or externalize their knowledge or understanding in the form of verbal utterances (Chi et al. 1989). Early automated instruction involved pedagogical agents and chatbots that helped scaffold students’ verbal reasoning through questions that elicited explanations from students (Stevens and Collins 1977). Through self-explanations, students “fill in” missing or not yet understood parts of phenomena (i.e., knowledge integration) in order to provide a complete explanation. King (1999) found that when students were trained to use
such self-regulation techniques to monitor their own understanding, they were more effective at problem-solving than students who were not trained. Learning by explaining to oneself has received great attention in machine learning (Mitchell et al. 1986) and cognitive modeling (VanLehn et al. 1992). Self-learning can involve both indirect and direct interactions with others, where the interaction can shift back and forth between monologic and dialogic interactions but still focus primarily on individual self-learning. Interactions that involve the “thought of others” or the “anticipation of a social interaction” have led to learning with pedagogical agents and avatars (Okita et al. 2007). Studies have found that asking students to “prepare to teach” can lead to more learning than asking students to study for themselves (Bargh and Schul 1980). The mere presence of others or studying among peers can also be useful in learning. Learning can occur by comparing ourselves to peers or observing others to develop a better understanding of the self. Even if a student cannot solve a math problem, observing someone else may help, because the person being observed can provide a model of competent performance. Self-reflection while problem-solving is challenging because of the cognitive demand of solving the problem and simultaneously reflecting on one’s own performance (Gelman and Meck 1983). A projective pedagogical agent, “ProJo,” was designed to openly display its reasoning when solving math problems. This relieved the cognitive load and allowed learners to monitor ProJo and “look for mistakes” (Okita 2014). The additional benefit of monitoring the work of others for mistakes lies in the act of wrestling with potentially inferior solutions (Kruger 1993).
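The ProJo-style monitoring arrangement just described can be sketched in code. This is an illustrative toy, not the actual ProJo system: an agent shows its solution steps with a deliberately seeded mistake, and the learner’s task is to flag the erroneous step. The problem format and helper names are assumptions.

```python
# Illustrative sketch (not the actual ProJo system): a "projective" agent
# solves a problem step by step, with a deliberately seeded mistake for
# the learner to catch. Catching the agent's error exercises the
# monitoring skill the learner can later turn inward.

def agent_solves(a, b, seeded_error=None):
    """Show the steps of computing a*10 + b, optionally corrupting one
    step so the learner can hunt for the mistake."""
    steps = [("multiply by ten", a * 10),
             ("add the ones", a * 10 + b)]
    if seeded_error is not None:
        name, value = steps[seeded_error]
        steps[seeded_error] = (name, value + 1)   # off-by-one slip
    return steps

def learner_checks(a, b, steps):
    """Return the index of the first wrong step, or None if all correct."""
    expected = [a * 10, a * 10 + b]
    for i, ((_, value), truth) in enumerate(zip(steps, expected)):
        if value != truth:
            return i
    return None

steps = agent_solves(4, 7, seeded_error=1)
print(learner_checks(4, 7, steps))  # the learner flags step 1
```

The design choice worth noting is that the agent, not the learner, carries the burden of producing the solution, freeing the learner’s cognitive resources for monitoring.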
ProJo is based on the premise that externally monitoring the reasoning of a pedagogical agent’s problem-solving can help students turn their monitoring skills inward and eventually self-correct when solving math problems (Karmiloff-Smith 1979). Learning by Teaching (LBT) through Teachable Agents has created an ideal situation for self-learning, where the student takes on the role of a peer tutor and teaches a computerized pupil agent (Bargh and Schul 1980; Leelawong and Biswas 2008; Biswas et al. 2005). LBT includes a phase with further desirable effects called Recursive Feedback (Okita and Schwartz 2013), which refers to information that flows back to tutors when observing their pupils’ subsequent performance while interacting with others (e.g., a football coach seeing the team compete out on the field). Tutors can map their understanding by observing how their pupils apply their teachings through interaction with others. Any discrepancies they notice lead to the realization that potential deficiencies in pupil understanding are not due exclusively to how the material was taught, but rather to a lack of precision in the tutor’s own content knowledge. When studies compared human pupils to computerized pupil agents, similar results were found in a virtual reality environment (Okita et al. 2013). Self-learning can benefit from intelligent tutoring systems that use behavioral data of other students and instructors, even if learners do not have direct contact with them. Research on intelligent model-based agents has focused on personalized instruction involving multifaceted systems that leverage rich models of students and pedagogies to create complex learning interactions. Systems such as Cognitive Tutors
(Anderson et al. 1995; Pane et al. 2014) have tens or hundreds of thousands of users and have gathered performance and behavioral data (e.g., how to teach, study what worked, when it worked), which has contributed to the design of current intelligent tutoring systems. Cognitive Tutors can provide support to and anticipate the student’s thinking processes (VanLehn et al. 2005) based on experiential data from other students, and model complex teacher and student pedagogical strategies (Heffernan and Koedinger 2002). Research on intelligent tutoring systems has been successful at producing impressive technologies based on knowledge modeling with Bayesian Knowledge Tracing and production-rule models to represent skills (Pane et al. 2014; Corbett et al. 2001). Self-learning can also have limitations. While elaboration of one’s own thinking is good for individual performance, ignoring the views of peers and their ideas can limit opportunities to reflect on and develop reasoning skills. Barron (2003) has found that this is especially true in group problem-solving, where the absence of engagement with others’ reasoning, or the excessive use of one’s own thinking, can lead to poor overall group performance.
3.3.2 Self-Other Learning
This section covers pedagogical agents and robots that focus on self and other’s learning, which is important in dyads, triads, and small-group activities that involve more ideas and perspectives from the participants (i.e., learners and their peers). Self-Other learning with pedagogical agents often involves different discussion methods (e.g., “talk moves,” script-based, and dynamic dialogic instruction) that help set up systematic differences among learners and elicit rich interactions that improve students’ reasoning skills. Reasoning skills are developed through self-other interactions that trigger cognitive and socio-cognitive conflict, develop group knowledge integration, and build consensus from discussions (Kuhn et al. 2013). Early automated instruction with static interactions was limited in monitoring and responding to learners. These interactions eventually evolved into more elaborate speech acts that have social and intellectual functions (Greeno 2015) and trigger cognitive processes through elaboration and self and other reasoning (Resnick et al. 2010). Speech acts like Accountable Talk moves scaffold collaborative knowledge building and reasoning and have been shown to support learning, long-term retention, and development in reasoning. Pedagogical agents that elicit talk moves in a collaborative learning situation have been successful at producing similar performance effects (Adamson et al. 2014; Dyke et al. 2013). The pedagogical agent takes on a “facilitator” role rather than a “tutor” role, because the agent only minimally intervenes to scaffold group discussions with human peers (Dyke et al. 2013; Kumar et al. 2007). However, even minimal intervention of this kind (e.g., prompting a student to reason about their peers’ reasoning) carries both a conversational implicature, whereby the student shares their thinking verbally with their peers, and a cognitive implicature: simply prompting the target student to reason by virtue of that request.
Script-based methods (Dillenbourg 1999; Kollar et al. 2006; Kobbe et al. 2007) comprise different scaffolding techniques that involve structuring tasks into phases, introducing interaction rules, or employing role-playing during collaborative interactions. A script can be used to define a wide range of features in collaborative activities (e.g., methods, tasks, roles, timing, patterns of interactions). Static scripts provide the same support for all participants, regardless of participant behavior during a collaborative interaction. Dynamic scripts provide a different response tailored to a participant’s or group’s performance or context of discussion as it unfolds. Scripts with a strict model can easily be encompassed in the design of the agent system, ranging from dialogues intended to be adopted verbatim by the participants (e.g., “follow me” style prompts) to more subtle suggestions of behavior (e.g., “Each come up with three ideas, then discuss the ideas as a group”). Strict scripts can minimize the gap in group learning experience and performance and establish uniformity in discussions between groups. Such semi-structured interfaces that include predefined scaffolds have helped groups of students focus more on the task and produce fewer off-task comments (Baker and Lund 1996). Over-scripting, however, can have negative implications by limiting the creative thought process and the contributions students can make (Dillenbourg and Hong 2008). Over the years, there has been growing interest in dialogue-rich instruction that involves discussants who engage students in inter-mental reasoning processes, in which students explain, reflect upon, and elaborate on their own and their peers’ understanding of domain concepts and collaboratively engage in a sensemaking process (Clarke et al. 2015). Engaging in dialogue with another creates conditions for challenges, disagreements, and contradictions of opinions and ideas.
This process can lead to cognitive restructuring, where students begin to integrate new perspectives into their own understanding (Kruger 1993). Early pedagogical agents that use dialogic instruction have been programmed to elicit conceptual depth by using generic prompts that encourage learners to articulate and elaborate their own lines of reasoning and to challenge and extend the reasoning of their peers. More recently, dynamic dialogic instruction supports learners by adapting its strategy to emergent characteristics of a discussion. Pedagogical agent systems, also referred to as “Tutorial Dialogue Agents,” lead students through directed lines of reasoning to support their conceptual development, building on the Knowledge Construction Dialogues developed by Rosé and VanLehn (2005). Dynamic dialogic instruction engages the group in a dynamic interchange of input by producing and receiving ideas and negotiating for meaning. Tutorial dialogue agents are interactive and can conduct multi-turn directed lines of reasoning with students who respond to their prompts (Kumar and Rose 2010). A notable framework, Academically Productive Talk (APT), is used to elicit rich interactions (Kumar et al. 2007) and can be triggered through real-time analysis of collaborative discussions (Dyke et al. 2013; Kumar et al. 2007). Students using tutorial dialogue agents with APT have engaged in directed lines of reasoning that have led to significantly more learning than no support. Studies have found a number of important mechanisms in dialogue-rich discussions or dialogic instruction, where individuals articulate their thinking, listen to their peers, and try to negotiate meaning while integrating their input.
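A deliberately simplified sketch can illustrate the facilitator role described above. The actual APT-supporting systems (e.g., those of Kumar, Dyke, and colleagues) use far richer real-time dialogue analysis; here, a toy agent merely tracks participation and injects a talk-move prompt when one group member has gone quiet. The function names and threshold are assumptions.

```python
# Toy facilitation agent in the spirit of Academically Productive Talk
# support: it watches per-speaker turn counts and, when a member falls
# well below the group's average participation, emits a talk-move prompt
# inviting that member to reason about their peers' reasoning.

from collections import Counter

def facilitate(turns, group, quiet_ratio=0.5):
    """Given a list of (speaker, utterance) turns, return a talk-move
    prompt if some group member spoke less than `quiet_ratio` times the
    per-member average; otherwise return None (minimal intervention)."""
    counts = Counter(speaker for speaker, _ in turns)
    average = len(turns) / len(group)
    for member in group:
        if counts[member] < quiet_ratio * average:
            return (f"{member}, what do you think about what "
                    f"the others just said? Do you agree?")
    return None

turns = [("Ana", "I think it's the gears."),
         ("Ben", "Right, the big gear turns slower."),
         ("Ana", "So the ratio matters.")]
print(facilitate(turns, ["Ana", "Ben", "Cam"]))  # prompts the silent member, Cam
```

Note that the agent stays silent (returns None) whenever participation is balanced, mirroring the minimal-intervention facilitator role rather than a tutor role.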
Where this kind of agent might be particularly useful is in augmenting a teacher’s facilitation of face-to-face whole-class discussions. Several studies have documented how rare it is to find classrooms where teachers lead students in rich discussions of this kind (Kane and Staiger 2012; Pimentel and McNeil 2013). In addition, few professional learning interventions have been successful in supporting teachers in learning how to facilitate discussions that engage students in deep reasoning and argumentation (Clarke et al. 2013). Studies have documented that teachers rarely use probing questions or help students think with peers during classroom discussions (Pimentel and McNeil 2013). Thus, effective tools and methods aimed at improving teachers’ classroom talk skills are in much demand and can support teachers in facilitating discussions that support learning (McLaren et al. 2010, p. 387). Tutorial dialogue agents can be useful for preservice and in-service teachers in managing group collaborative interactions in the classroom and monitoring interactions occurring in different places and/or at different times. Not only is initiating small groups of students into dialogic discussion practices using computer support beneficial, but findings also show that dynamic support by tutorial dialogue agents has positive effects on teacher uptake of dialogic facilitation practices in classroom discussions (Clarke et al. 2013).
3.3.3 Others’ Learning
Pedagogical agents can also provide information on “others’ learning” to help people such as teachers and parents make instructional decisions, select appropriate course content, and monitor academic performance. Applying algorithms such as Bayesian Knowledge Tracing (BKT) to intelligent tutoring agents can model the learner’s mastery of knowledge, and predictive analytics can identify potential struggles students may have (Corbett and Anderson 1995). While such interventions are sophisticated and prescriptive (i.e., an automated system taking a specific action in a given situation), there are challenges in accommodating the effects of human intervention (e.g., teachers taking action after seeing student performance) in the system’s automated instruction process. Studies have also found that teachers and parents tend to favor technology that depends on simple and straightforward heuristics to assess student mastery (e.g., get three in a row right, move to the next level) (Heffernan and Heffernan 2014). Another approach is to use information on others’ learning to provide useful descriptive information to a third party (i.e., teachers and parents). Recent open-learner models and reporting systems use educational data mining and learning analytics methods to extract important information and present data using visualization techniques to indicate student progress and behavior. Instead of having the system make the decision, the system uses information from others’ learning to present options from which teachers and parents can make intelligent, informed decisions (Baker 2016).
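The contrast between model-based mastery estimation and the simple heuristics teachers reportedly favor can be made concrete. The sketch below implements the standard BKT update (Corbett and Anderson 1995) next to a “three in a row” heuristic; the parameter values (guess, slip, learn probabilities) are invented for illustration.

```python
# Standard Bayesian Knowledge Tracing update alongside the simple
# "three correct in a row" mastery heuristic. Parameters are illustrative.

def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.3):
    """One BKT step: Bayes-update the mastery estimate from the observed
    response, then apply the probability of learning on this opportunity."""
    if correct:
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

def three_in_a_row(responses):
    """The heuristic many teachers favor: mastery = 3 straight correct."""
    return any(responses[i:i + 3] == [True] * 3
               for i in range(len(responses) - 2))

p = 0.3  # prior probability the skill is already known
for correct in [True, True, True]:
    p = bkt_update(p, correct)
print(round(p, 3), three_in_a_row([True, True, True]))
```

Both approaches agree after three correct answers, but BKT degrades gracefully on mixed evidence (a lucky guess or careless slip shifts the estimate rather than resetting a streak), which is precisely the sophistication the heuristic trades away for transparency.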
The Purdue Course Signals system (Arnold and Pistilli 2012) offers predictive analytics on student success and early warnings for instructors when a student is at risk. Course Signals attempts to scaffold effective practice by suggesting actions to instructors based on student performance and behavior. In ASSISTments, teachers examine students’ interaction data and performance reports on assignments to design next-day lectures and better predict exam outcomes (Feng et al. 2009; Heffernan and Heffernan 2014). ASSISTments also provides extensive professional development for teachers to share and disseminate effective practices. The S3 project (Tissenbaum and Slotta 2019) allows teachers to monitor ongoing student activities through software agents that process student interactions in real time. This allows teachers to receive notifications that help orchestrate student groups, dynamically control classroom flow, and allocate necessary resources to students in a timely manner. Other systems have also made real-time information on student participation available to instructors through chat messages, so instructors can take immediate action to improve the collaborative discussions among students (Van Leeuwen et al. 2014).
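A toy early-warning flag can illustrate the kind of signal such dashboards surface. This is only in the spirit of Course Signals; the actual Purdue algorithm and its weights are not public, so the features, weights, and thresholds below are invented. Crucially, the system does not act on its own: it produces a signal for the instructor to interpret.

```python
# Toy early-warning signal combining performance and effort indicators
# into a traffic-light flag an instructor dashboard could display per
# student. All weights and thresholds are illustrative assumptions.

def risk_signal(grade_pct, pct_assignments_submitted, logins_last_week):
    """Blend current grade, submission rate, and recent LMS activity
    into a green/yellow/red signal for the instructor."""
    score = (0.5 * (grade_pct / 100)
             + 0.3 * (pct_assignments_submitted / 100)
             + 0.2 * min(logins_last_week / 5, 1.0))
    if score >= 0.7:
        return "green"
    elif score >= 0.4:
        return "yellow"
    return "red"     # prompts early outreach, not automatic intervention

print(risk_signal(grade_pct=85, pct_assignments_submitted=90, logins_last_week=4))
print(risk_signal(grade_pct=40, pct_assignments_submitted=30, logins_last_week=0))
```

The design choice here reflects the descriptive approach discussed above: the system presents an interpretable option (a color) rather than taking a prescriptive action, leaving the informed decision to the teacher.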
3.4 Linking Theory to Practice Using Robots and Agents in Learning
Some have argued that by making machines smarter, good teaching and tutoring strategies can be implemented, and thus more learning will occur. Intelligent agent systems and robots alone do not guarantee learning, and we would argue that they should not be considered a panacea for supporting collaborative learning either. They cannot replace the intelligence of a teacher, but when deployed strategically, the affordances of these technologies can foster processes of collaboration and thinking practices that support individual and collaborative learning. Other factors can influence learning (e.g., motivation, engagement, trust) and application in classrooms. Learners may not understand sophisticated artifacts; thus, they may not trust them, or they may overtrust them by attributing too much intelligence to them. Also, some worry that making learners too dependent on sophisticated features will cause them to cease acting, thinking, and learning independently and to depend on machines to make decisions for them (e.g., advice givers, expert systems, information system management). According to Salomon et al. (1991), student performance with technology can be assessed in two ways. One is the way students perform while equipped with or interacting with technology; here, technology plays a significant part in a cognitive process that students would otherwise have to manage manually on their own. Just handling a computer-based tool with no guidance can make the user (teacher or student) lapse into meaningless activities. A positive impact of interaction with these computer-based tools would be lasting cognitive changes that equip students with thinking skills, depth of understanding, and strategies to continue solving math problems (e.g., similar to internalizing the abacus) even when away from technology.
Dialogic instruction has been shown to support learning, retention, and reasoning development, making it a promising practice; however, scaling this practice is not easy. Training teachers to use dialogic instruction effectively is a challenge, especially in low-performing schools (Clarke et al. 2013). Despite decades of work on knowledge modeling in intelligent tutoring systems, the approaches favored in practice (or at scale) are fairly simple (Heffernan and Heffernan 2014). It is difficult for a pedagogical agent to recognize that an intervention is not working, as students adapt faster than automated systems. It is not that such changes and updates are impossible, but they take time and require constant attention. Humans are flexible and intelligent, but going through large amounts of information takes time. Baker (2016) suggests that rather than building sophisticated intelligent tutors, tools need to be designed more intelligently using Educational Data Mining (EDM) and Learning Analytics to augment human cognitive abilities and performance.
4 The Future: Prospects of Robots and Agents in Education

In this chapter, we examined how social metaphors in robot and agent technology, combined with learning theories and methodologies, reveal powerful learning partnerships and new insights into the role of social relationships in learning. Winograd and Flores (1986) remind us that people develop tools, but tools need to be refined and used by intelligent individuals based on practice. There is much educational research that looks into developing guidelines for practice and design (Bransford et al. 2010), and educational data mining methods (e.g., learning decomposition) can show which strategies work (Beck and Mostow 2008). By identifying the kinds of practices that research suggests are successful, we can work to maximize benefits from technology (pedagogical agents, robots, and robotic systems). Salomon et al. (1991) remind us that cognitive effects gained through technology depend greatly on the meaningful engagement of learners in the tasks afforded by these technological artifacts. It is essential to design collaborative learning relationships with pedagogical robots and agents that do not curtail independent thinking but instead promote lifelong learning.
References Adamson, D., Dyke, G., Jang, H. J., & Rosé, C. P. (2014). Towards an agile approach to adapting dynamic collaboration support to student needs. International Journal of Artificial Intelligence in Education, 24(1), 91–121. Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228 (4698), 456–462. Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167–207.
420
S. Y. Okita and S. N. Clarke
Arnold, K. E., & Pistilli, M. D. (2012). Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd international conference on learning analytics and knowledge (pp. 267–270). New York, NY: ACM. Bailenson, J. N. (2012). Doppelgangers-a new form of self? Psychologist, 25(1), 36–38. Baker, M. J., & Lund, K. (1996). Flexibly structuring the interaction in a CSCL environment. In P. Brna, A. Paiva & J. Self (Eds.), Proceedings of the European conference on artificial intelligence in education (pp. 401–407). Lisbon, Portugal, Sep 20–Oct 2. Baker, R. S. (2016). Stupid tutoring systems, intelligent humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. Bargh, J. A., & Schul, Y. (1980). On the cognitive benefits of teaching. Journal of Educational Psychology, 72(5), 593–604. Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307–359. Baylor, A. L. (2007). Pedagogical agents as a social interface. Educational Technology, 47(1), 11–14. Beck, J. E., & Mostow, J. (2008). How who should practice: Using learning decomposition to evaluate the efficacy of different types of practice for different types of students. Proceedings of the 9th International Conference on Intelligent Tutoring Systems, 5091, 353–362. Bers, M. U., & Horn, M. S. (2010). Tangible programming in early childhood. In I. R. Berson & M. J. Berson (Eds.), High tech tots: Childhood in a digital world (Vol. 49, pp. 49–70). Greenwich, CT: IAP. Bers, M. U., Ponte, I., Juelich, C., Viera, A., & Schenker, J. (2002). Teachers as designers: Integrating robotics in early childhood education. Information Technology in Childhood Education Annual, 2002(1), 123–145. Biswas, G., Leelawong, K., Schwartz, D., Vye, N., & TAG-V. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19, 363–392. Bransford, J., Mosborg, S., Copland, M. A., Honig, M. A., Nelson, H.
G., Gawel, D., Phillips, R. S., & Vye, N. (2010). Adaptive people and adaptive systems: Issues of learning and design. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), Second international handbook of educational change (pp. 825–856). Dordrecht: Springer. Breazeal, C., Dautenhahn, K., & Kanda, T. (2016). Social robotics. In B. Siciliano & O. Khatib (Eds.), Springer handbook of robotics (pp. 1935–1972). Cham: Springer. Cassell, J., Tartaro, A., Rankin, Y., Oza, V., & Tse, C. (2007). Virtual peers for literacy learning. Educational Technology, 47(1), 39–43. Chi, M. T., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13(2), 145–182. Chi, M. T., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471–533. Chi, M. T. H., Siler, S., & Jeong, H. (2004). Can tutors monitor students’ understanding accurately? Cognition and Instruction, 22, 363–387. Clarke, S. N., Chen, G., Stainton, C., Katz, S., Greeno, J. G., Resnick, L. B., et al. (2013). The impact of CSCL beyond the online environment. In N. Rummel, M. Kapur, M. Nathan, & S. Puntambekar (Eds.), To see the world and a grain of sand: Learning across levels of space, time, and scale: CSCL 2013 (Vol. 1, pp. 105–112). Madison, WI: International Society of the Learning Sciences. Clarke, S. N., Resnick, L. B., & Rosé, C. P. (2015). Dialogic instruction: A new frontier. In L. Corno & E. M. Anderman (Eds.), Handbook of educational psychology (pp. 392–403). New York: Routledge. Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253–278. Corbett, A. T., Koedinger, K. R., & Hadley, W. (2001). Cognitive tutors: From the research classroom to all classrooms. In P. S. 
Goodman (Ed.), Technology enhanced learning: Opportunities for change (pp. 235–263). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Dillenbourg, P. (1999). What do you mean by collaborative learning? Collaborative-learning: Cognitive and computational approaches (pp. 1–19). Oxford: Elsevier. Dillenbourg, P., & Hong, F. (2008). The mechanics of CSCL macro scripts. International Journal of Computer-Supported Collaborative Learning, 3(1), 5–23. Dyke, G., Adamson, D., Howley, I., & Rosé, C. P. (2013). Enhancing scientific reasoning and discussion with conversational agents. IEEE Transactions on Learning Technologies, 6(3), 240–247. Feng, M., Heffernan, N., & Koedinger, K. (2009). Addressing the assessment challenge with an online system that tutors as it assesses. User Modeling and User-Adapted Interaction, 19(3), 243–266. Gelman, R., & Meck, E. (1983). Preschoolers’ counting: Principles before skill. Cognition, 13, 343–359. Gomez, L. M., & Hentschke, G. C. (2009). K–12 education and the role of for-profit providers. In J. D. Bransford, D. J. Stipek, N. J. Vye, L. M. Gomez, & D. Lam (Eds.), The role of research in educational improvement (pp. 137–159). Cambridge, MA: Harvard Education Press. Graesser, A. C., Person, N. K., & Magliano, J. P. (1995). Collaborative dialogue patterns in naturalistic one-to-one tutoring. Applied Cognitive Psychology, 9(6), 495–522. Greeno, J. G. (2015). Classroom talk sequences and learning. In L. B. Resnick, C. S. C. Asterhan, & S. N. Clarke (Eds.), Socializing intelligence through academic talk and dialogue. Washington, DC: American Educational Research Association. Hashimoto, T., Kato, N., & Kobayashi, H. (2011). Development of educational system with the android robot SAYA and evaluation. International Journal of Advanced Robotic Systems, 8(3), 51–61. Hayes, P., & Ford, K. (1995). Turing test considered harmful. Proceedings of the fourteenth international joint conference on Artificial Intelligence, 1, 972–977. Heffernan, N. T., & Heffernan, C. L. (2014).
The ASSISTments ecosystem: Building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. International Journal of Artificial Intelligence in Education, 24(4), 470–497. Heffernan, N. T., & Koedinger, K. R. (2002). An intelligent tutoring system incorporating a model of an experienced human tutor. In Proceedings of the 6th international conference on intelligent tutoring systems (pp. 596–608). Ishiguro, H. (2007). Scientific issues concerning androids. The International Journal of Robotics Research, 26(1), 105–117. Kanda, T., Hirano, T., Eaton, D., & Ishiguro, H. (2004). Interactive robots as social partners and peer tutors for children: A field trial. Human–Computer Interaction, 19(1–2), 61–84. Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. Research Paper. MET Project. Bill & Melinda Gates Foundation. Karmiloff-Smith, A. (1979). Micro- and macro- developmental changes in language acquisition and other representational systems. Cognitive Science, 3, 91–118. King, A. (1999). Discourse patterns for mediating peer learning. In A. M. O’Donnell & A. King (Eds.), Cognitive perspectives on peer learning. Mahwah, NJ: Lawrence Erlbaum. Kobbe, L., Weinberger, A., Dillenbourg, P., Harrer, A., Hämäläinen, R., Häkkinen, P., & Fischer, F. (2007). Specifying computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(2–3), 211–224. Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts—A conceptual analysis. Educational Psychology Review, 18(2), 159–185. Kruger, A. C. (1993). Peer collaboration: Conflict, cooperation, or both? Social Development, 2(3), 165–182. Kuhn, D., Zillmer, N., Crowell, A., & Zavala, J. (2013). Developing norms of argumentation: Metacognitive, epistemological, and social dimensions of developing argumentive competence. 
Cognition and Instruction, 31(4), 456–496.
Kumar, R., & Rose, C. P. (2010). Architecture for building conversational agents that support collaborative learning. IEEE Transactions on Learning Technologies, 4(1), 21–34. Kumar, R., Rosé, C. P., Wang, Y. C., Joshi, M., & Robinson, A. (2007). Tutorial dialogue as adaptive collaborative learning support. Frontiers in Artificial Intelligence and Applications, 158, 383–390. Leelawong, K., & Biswas, G. (2008). Designing learning by teaching agents: The Betty’s brain system. International Journal of Artificial Intelligence in Education, 18(3), 181–208. Mandl, H., & Ballstaedt, S. (1982). Effects of elaboration on recall of texts. Advances in Psychology, 8, 482–494. Mayer, R. E., & DaPra, C. S. (2012). An embodiment effect in computer-based learning with animated pedagogical agents. Journal of Experimental Psychology: Applied, 18(3), 239–252. McLaren, B. M., Scheuer, O., & Miksáko, J. (2010). Supporting collaborative learning and e-discussions using artificial intelligence techniques. International Journal of Artificial Intelligence in Education, 20(1), 1–46. Mead, G. H. (1934). Mind, self, and society. Chicago, IL: University of Chicago Press. Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1(1), 47–80. Mori, M. (1970). Bukimi no tani [The Uncanny Valley]. Energy, 7(4), 33–35. Ng-Thow-Hing, V., Thórisson, K. R., Sarvadevabhatla, R. K., & Wormer, J. (2009). Cognitive map architecture: Facilitation of human-robot interaction in humanoid robots. IEEE Robotics & Automation Magazine, 16(1), 55–66. Okita, S. Y. (2014). Learning from the folly of others: Learning to self-correct by monitoring the reasoning of virtual characters in a computer-supported mathematics learning environment. Computers and Education, 71, 257–278. Okita, S. Y., Bailenson, J., & Schwartz, D. L. (2007). The mere belief of social interaction improves learning. 
In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 29, No. 29). Okita, S. Y., & Schwartz, D. L. (2013). Learning by teaching human pupils and teachable agents: The importance of recursive feedback. Journal of the Learning Sciences, 22(3), 375–412. Okita, S. Y., Turkay, S., Kim, M., & Murai, Y. (2013). Learning by teaching with virtual peers and the effects of technological design choices on learning. Computers & Education, 63, 176–196. Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension fostering and comprehension-monitoring activities. Cognition and Instruction, 2(2), 117–175. Pane, J. F., Griffin, B. A., McCaffrey, D. F., & Karam, R. (2014). Effectiveness of cognitive tutor algebra I at scale. Educational Evaluation and Policy Analysis, 36(2), 127–144. Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books. Perret-Clermont, A. N., Perret, F.-F., & Bell, N. (1991). The social construction of meaning and cognitive activity in elementary school children. In L. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on socially shared cognition (pp. 41–62). Hyattsville, MD: American Psychological Association. Pimentel, D. S., & McNeil, K. L. (2013). Conducting talk in secondary science classrooms: Investigating instructional moves and teachers’ beliefs. Science Education, 97(3), 367–394. Pressey, S. L. (1932). A third and fourth contribution toward the coming industrial revolution in education. School and Society, 36, 934. Resnick, L. B., Michaels, S., & O’Connor, C. (2010). How (well-structured) talk builds the mind. In D. Preiss & R. Sternberg (Eds.), Innovations in educational psychology: Perspectives on learning, teaching and human development (pp. 163–194). New York: Springer. Resnick, M., Berg, R., & Eisenberg, M. (2000). Beyond black boxes: Bringing transparency and aesthetics back to scientific investigation. Journal of the Learning Sciences, 9(1), 7–30. 
Resnick, M., & Kafai, Y. (1996). Constructionism in practice: Designing, thinking and learning in a digital world. Mahwah, NJ: LEA.
Roscoe, R. D., & Chi, M. T. (2007). Understanding tutor learning: Knowledge-building and knowledge-telling in peer tutors’ explanations and questions. Review of Educational Research, 77(4), 534–574. Rosé, C., & VanLehn, K. (2005). An evaluation of a hybrid language understanding approach for robust selection of tutoring goals. International Journal of Artificial Intelligence in Education, 15(4), 325–355. Salomon, G., Perkins, D. N., & Globerson, T. (1991). Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3), 2–9. Schwartz, D. L., Chase, C., Chin, D. B., Oppezzo, M., Kwong, H., Okita, S., Roscoe, R., Jeong, H., Wagster, J., & Biswas, G. (2009). Interactive metacognition: Monitoring and regulating a teachable agent. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 340–358). New York, NY: Routledge. Shneiderman, B. (2007). Creativity support tools: Accelerating discovery and innovation. Communications of the ACM, 50(12), 20–32. Skinner, B. F. (1986). Programmed instruction revisited. Phi Delta Kappan, 68(2), 103–110. Stevens, A., & Collins, A. (1977). The goal structure of a Socratic tutor. In Proceedings of the national ACM conference. New York: Association for Computing Machinery. Tissenbaum, M., & Slotta, J. (2019). Supporting classroom orchestration with real-time feedback: A role for teacher dashboards and real-time agents. International Journal of Computer-Supported Collaborative Learning, 14(3), 325–351. Van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2014). Supporting teachers in guiding collaborating students: Effects of learning analytics in CSCL. Computers & Education, 79, 28–39. VanLehn, K., Jones, R. M., & Chi, M. T. (1992). A model of the self-explanation effect. Journal of the Learning Sciences, 2(1), 1–59. VanLehn, K., Lynch, C., Schulze, K., Shapiro, J. A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., & Wintersgill, M. 
(2005). The Andes physics tutoring system: Lessons learned. International Journal of Artificial Intelligence in Education, 15(3), 1–47. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press. Wertsch, J. V. (1979). From social interaction to higher psychological processes: A clarification and application of Vygotsky’s theory. Human Development, 22(1), 1–22. Winne, P. H. (1979). Experiments relating teachers’ use of higher cognitive questions to student achievement. Review of Educational Research, 49(1), 13–49. Winograd, T., & Flores, F. (1986). Understanding computers and cognition. Reading, MA: Addison-Wesley.
Further Readings Johnson, W. L., & Lester, J. C. (2018). Pedagogical agents: Back to the future. AI Magazine, 39(2), 33–44. Johnson and Lester revisit their 2000 survey on researching pedagogical agents—agents that engage with humans to support their learning (Johnson, Rickel & Lester, 2000). In this 2018 update, they revisit their 2000 predictions for developments in the area of pedagogical agents and consider the current state of the art. This article provides a survey of pedagogical agents developed over the last 20 years, classified in terms of the ways in which they engage with humans to support learning. In addition to examples of pedagogical agents, this article discusses the underlying technological architecture that has been driving developments of pedagogical agents. Le, N. T., & Wartschinski, L. (2018). A cognitive assistant for improving human reasoning skills. International Journal of Human-Computer Studies, 117, 45–54. This article reports on an
evaluation of LIZA, an adaptive conversational agent that interacts with humans through text-based natural language processing. Le and Wartschinski examine whether engaging with LIZA increases humans’ reasoning skills through discussions about reasoning, heuristics, and biases. This article provides an example of a system as a thought partner for solving problems. In addition, this article illustrates how a system’s mimicry of human-like interaction through natural conversation can elicit cognitive processes of humans that are productive for learning. Shiomi, M., Kanda, T., Howley, I., Hayashi, K., & Hagita, N. (2015). Can a social robot stimulate science curiosity in classrooms? International Journal of Social Robotics, 7(5), 641–652. This article reports on a field study of Robovie in an elementary school. Robovie is a social robot designed to socially engage with children. In this study, the researchers explored how social interaction with Robovie about science might stimulate children’s interest and curiosity about science. This article provides a concrete application of the use of robots as a collaborative learning partner in schools and an evaluation of their affordances for developing interest in science. Timms, M. J. (2016). Letting artificial intelligence in education out of the box: Educational cobots and smart classrooms. International Journal of Artificial Intelligence in Education, 26(2), 701–712. This article invites the field of artificial intelligence in education to imagine systems that are purposefully designed for teaching and learning, rather than adapting systems from business industries to support teaching and learning. In proposing the former, the author invites the field to imagine the kind of technological support for teaching a teacher might desire, and the kind of technological support for learning a student might desire.
This article identifies some of the constraints of off-the-shelf systems for pushing the field forward in terms of designs for supporting teaching and learning. Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467. This article presents a series of questions and tensions for the field of CSCL to wrestle with, or perhaps reconcile, in future research and development. Among these questions and tensions are several issues at the edge of current research and development on robots and agents. This article is useful for identifying potential new directions for designing robots and agents for collaborative support of learning.
Collaborative Learning Analytics Alyssa Friend Wise, Simon Knight, and Simon Buckingham Shum
Abstract The use of data from computer-based learning environments has been a long-standing feature of CSCL. Learning Analytics (LA) can enrich this established work in CSCL. This chapter outlines synergies and tensions between the two fields. Drawing on examples, we discuss established work to use learning analytics as a research tool (analytics of collaborative learning—ACL). Beyond this potential though, we discuss the use of analytics as a mediational tool in CSCL—collaborative learning analytics (CLA). This shift raises important challenges regarding the role of the computer—and analytics—in supporting and developing human agency and learning. LA offers a new tool for CSCL research. CSCL offers important contemporary perspectives on learning for a knowledge society, and as such is an important site of action for LA research that both builds our understanding of collaborative learning and supports that learning. Keywords Learning analytics · Educational data mining · Collaboration · Collaborative learning · Computer-supported collaborative learning · Intelligent support for groups · Adaptive support for groups
A. F. Wise (*) Learning Analytics Research Network (NYU-LEARN), New York University, New York, NY, USA e-mail: [email protected] S. Knight Transdisciplinary School, University of Technology Sydney, Sydney, Australia e-mail: [email protected] S. B. Shum Connected Intelligence Centre, University of Technology Sydney, Sydney, Australia e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_23
1 Definitions and Scope There are two different histories that can be told about the relationship between Learning Analytics (LA) and Computer-Supported Collaborative Learning (CSCL). One is a story of continuity, in which the two fields have approached each other through a natural convergence of aligned interests. In this story, the promise of automated analyses to create dynamic support for collaboration has always been a part of the vision for CSCL, developing from early ideas about the relevance of machine learning for supporting collaboration made by Dillenbourg in 2005 (Rosé 2018), to the creation of group awareness tools (Bodemer and Dehler 2011), adaptive scripts (Vogel et al. this volume), and other intelligent support for groups (Kumar and Kim 2014). In parallel, the use of analytics to support groups of learners in their interactions has been a growing theme of work in LA since the inception of the field (Gašević et al. 2015), with attention paid both to carefully designed small group collaboration in formal learning environments (as commonly studied in CSCL) and large networks of people in informal settings (subject to less control by researchers/educators). This work has often been considered under the umbrella of social learning analytics (Buckingham Shum and Ferguson 2012), including attention to participant relationships (Bakharia and Dawson 2011), discourse (De Liddo et al. 2011), sequential patterns of interaction (Suthers and Rosen 2011), and communities (Haythornthwaite 2011). While this story of productive convergence is compelling, there is also a second, somewhat less harmonious, history that can be told. In this story, one of the critical characteristics of CSCL as a field is the study of collaborative processes as unique constructions achieved by (primarily small) groups, where units of discourse come to have meaning in an ever-evolving mediated context. 
The importance of developing theories to make sense of the local meaning-making that emerges through participation in specific situations has always stood in tension with more quantitative approaches to the study of CSCL (in which learning analytics is now included). Thus the rise of analytic approaches that attend only to quantitative representations of collaboration can be met with skepticism as a productive route to understanding (Wise and Schwarz 2017). Similarly, there are concerns that quantification from the “bottom-up” (e.g., through structure discovery methods such as topic modeling) without attention to existing theory (or the generation of new theory) will not help to advance the collective knowledge base about collaborative learning.1 How, then, can the CSCL and LA communities work together to develop collaborative learning analytics in ways that hold true to the collective values of
1 Rosé (2018) discusses many of the same tensions existing between learning analytics and the learning sciences more broadly: for example, the need to consider the relative value of model accuracy versus interpretability, and top-down (theory-driven) versus bottom-up (data-driven) approaches. A key differentiator for CSCL in addressing these tensions is a long-standing history of considering the role of computers and computation in learning, which has been a central part of the fiber of the CSCL community from the beginning.
the field? In the following sections, we address this question by unpacking two distinct but complementary appeals that LA holds for the field of CSCL: first, as a set of methods useful to better understand collaborative learning; and second as a set of tools useful to better facilitate it. In doing so, the chapter aims to provide both an overview of the history of analytics of collaborative learning (to generate understanding of CSCL) and give signposts to recent and emerging work to create collaborative learning analytics (to offer support for CSCL). Finally, we ask critical questions for both the CSCL and LA communities to consider about how the area of collaborative learning analytics (CLA) should develop.
2 History and Development: Analytics of Collaborative Learning (ACL) The core challenge in using learning analytics to better understand collaborative learning is to conceive (conceptually) and implement (technically) connections between (1) fine-grained trace data of the sort captured in software logs and (2) learning constructs (see Fig. 1). How can this bridge from “clicks to constructs” (Knight et al. 2014) be built? The shaded triangles at the foot of Fig. 1 reflect what we might think of as the traditional strengths of CSCL (anchored to the left) and LA (anchored to the right). CSCL is theoretically robust in its definition of learning constructs and investigation of them through careful manual qualitative and quantitative analysis of data. LA has arisen from data science and analytics, working with large amounts of machine-generated information. It is methodologically strong in
[Fig. 1 diagram, "From Clicks to Constructs": a layered mapping from digital traces and behaviours (human- and/or computationally observable) up through sub-constructs to a learning construct (not directly observable), with shaded triangles anchoring CSCL on the construct side and Learning Analytics on the digital-trace side.]
Fig. 1 A core challenge for analytics of collaborative learning is to map digital traces to learning constructs
using computational tools and techniques to mine insights from low-level trace data captured from online platforms, mobile computers, and environmental/physiological sensors used “in the wild” at scale. Bringing these strengths together offers the possibility to engineer complex higher-order data features that richly represent meaningful learning constructs. Examination of such features can both offer direct insight into collaborative learning processes from a quantitative perspective and provide indicators of “where to look” for in-depth qualitative examinations in large data sets. The potential for mutual enrichment across CSCL and LA is significant, therefore, if rigorous mappings can be designed and implemented. In the case of studying argumentative knowledge construction (see e.g., Weinberger and Fischer 2006) or in a collaborative tool designed for argumentation such as Argunaut (McLaren et al. 2010), theory might direct us to consider (1) distribution of the discourse (i.e., it is not argumentation if only one person engages); (2) responsiveness (i.e., that contributions made connect to one another); (3) the use of formal reasoning (e.g., evidence, warrants, and qualifications); or (4) employing particular argumentation strategies (e.g., “argument by analogy”). Each of these constructs can then be mapped onto human observable behaviors, such as turn taking (as a window to discourse distribution), or supporting claims with evidence (as evidence of formal reasoning). For example, adding a node in an argument graph might be one indicator of taking a turn, while the presence of sources, reasons, or epistemic modals in the node’s text (detected using natural language processing) might be an indicator of the use of evidence in communication. Similarly, detecting when a student moves an argument node in a way that substantively affects the arguments could offer another indicator of both taking a turn and engaging in formal reasoning. 
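To make this mapping concrete, the indicator logic described above can be sketched computationally. The snippet below is an illustrative toy, not the detection pipeline of Argunaut or any other cited system; the cue lexicons are our own invented stand-ins for the trained natural language processing models such systems would use:

```python
import re

# Hypothetical lexicons (not from any cited system): surface cues that might
# indicate use of evidence or epistemic hedging in a discussion contribution.
EVIDENCE_CUES = [r"\baccording to\b", r"\bstudies show\b", r"\bfor example\b",
                 r"\bdata\b", r"\bevidence\b", r"\bsource\b"]
EPISTEMIC_MODALS = [r"\bmight\b", r"\bmay\b", r"\bcould\b", r"\bprobably\b",
                    r"\bpossibly\b", r"\bperhaps\b"]

def tag_contribution(text: str) -> dict:
    """Tag an argument-node text with coarse argumentation indicators."""
    lower = text.lower()
    return {
        "uses_evidence": any(re.search(p, lower) for p in EVIDENCE_CUES),
        "hedged": any(re.search(p, lower) for p in EPISTEMIC_MODALS),
    }

node = "According to the survey data, reaction times might improve with practice."
print(tag_contribution(node))  # both indicators fire for this node
```

In practice, keyword matching of this kind would be replaced by trained classifiers or parsers; the point here is only the shape of the mapping from raw text to construct-aligned indicators.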
These kinds of analyses sit well in established CSCL work. For example, as Fig. 2 indicates, in research to create analytics of relations among collaborative contributions, Suthers et al. (2010) operationalized the construct of uptake between learners using evidence mapping and threaded discussion tools, based on the notion of contingency constructed from a combination of semantic relatedness between peers’ contributions (involving many derived linguistic features), media dependency (taking action on a peer’s object), temporal proximity (happening close together in time) as well as spatial organization and inscriptional similarity. LA amplifies such possibilities for analytically constructing complex features from digital traces. In the following sections, we unpack in more detail further examples of work in this space, as summarized in Fig. 2.
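As a hypothetical, much-simplified illustration of such a contingency measure (not Suthers et al.'s actual model), the sketch below combines only two of the signals they used: temporal proximity, here modeled as exponential decay with an assumed half-life, and semantic relatedness, here crudely approximated by word overlap. All names, parameters, and example posts are our own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    time: float  # seconds since session start
    text: str

def jaccard(a: str, b: str) -> float:
    """Crude semantic-relatedness proxy: word-overlap (Jaccard) between posts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def contingency(earlier: Post, later: Post, half_life: float = 120.0) -> float:
    """Toy uptake score: semantic relatedness weighted by temporal proximity.
    Posts by the same author, or in the wrong temporal order, score zero."""
    if earlier.author == later.author or later.time <= earlier.time:
        return 0.0
    decay = 0.5 ** ((later.time - earlier.time) / half_life)
    return jaccard(earlier.text, later.text) * decay

p1 = Post("ana", 10.0, "friction slows the cart on the ramp")
p2 = Post("ben", 40.0, "maybe friction on the ramp explains the slow cart")
print(round(contingency(p1, p2), 3))
```

A real implementation would use derived linguistic features rather than raw word overlap, and would add the media-dependency, spatial, and inscriptional signals described above.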
2.1 Examples of How Analytics of Collaborative Learning Further Understanding of CSCL

2.1.1 Analytics of Collaborative Knowledge Building
In the second row of Fig. 2, we introduced the example of operationalizing the construct of promising ideas in knowledge-building discourse. This is a good example of how research, and the student/educator experience, has evolved in an
Construct | Behaviour | Analytic | Digital trace
Uptake of ideas between learners in online discourse (Suthers et al., 2010) | Referring to someone's idea in your message | Contingency as indicated by temporal proximity, semantic relatedness, inscriptional similarity + spatial organization | Discussion post contents + metadata
Promising ideas in knowledge-building communities (Lee & Tan, 2017) | An idea being built on and spread through people's posts | Peak point of interest in an idea as indicated by temporal analyses of changes in the relatedness (betweenness centrality) of keywords in contributions | Knowledge Forum post contents + metadata
Joint attention (Schneider & Pea, 2015) | Looking at the same thing at the same time | Gaze similarity as indicated by fixation in the same spatial area within +/- 2 s | Eye-tracking logs
Fig. 2 Research on analytics of collaborative learning seeks to bridge "from clicks to constructs" by creating construct-aligned analytics, derived from digital traces
established CSCL software tool and research platform (Knowledge Forum). Knowledge building as a strand of CSCL research has focused on the use of graphical networks of nodes and links (enabled with hypertext software) as mediating representations for student discourse. These notations structure discourse by providing a vocabulary of contributions and relationships that focus learners’ attention on the conceptual structure of the conversation: explicitly created links signal discourse relationships and node types signal the intended nature of contribution. The advantage over conventional contributions to a flat or threaded discussion forum is that such representations both provide distinct visual and classification affordances for reasoning by humans and are more tractable computationally. Other well-known CSCL tools for structuring thinking and argument include Belvedere (Suthers et al. 1995) and Argunaut (McLaren et al. 2010); for an overview of these and diverse other forms of tools to support structured discourse and deliberation, see Kirschner et al. (2003) and Okada et al. (2014). Early research analyses of Knowledge Forum data used conventional techniques such as statistical summaries of how networks grew, and qualitative (human) coding of contributions (Scardamalia 2003). More recent analyses have seen the introduction of text analytics approaches to provide new proxies for constructs such as similarity of contributions (using Latent Semantic Analysis: Teplovs and Fujita 2013), and productive threads and improvable threads (Chen et al. 2017). Threads coded as productive or improvable by human analysts were interrogated using novel (to CSCL) analytic techniques, such as lag-sequential analysis, “to determine whether there is cross-dependence between a specified behavior and another behavior that occurs earlier in time” (p. 228). This revealed which metrics were powerful enough to differentiate the two thread types (productive vs. improvable), yielding
430
A. F. Wise et al.
new insights into the temporal dimensions of each kind of discourse; for example, fine-grained claims such as: "...productive threads of inquiry involved significantly more transitions among questioning, theorizing, obtaining information, and working with information, while improvable inquiry threads showed more transitions involving giving opinions" (p. 229). In recent work, Lee and Tan (2017) demonstrated how the construct of a promising idea (one that helps to move a knowledge-building community forward: Chen 2017) can be operationalized in the Knowledge Forum software tool through the combination of text mining (to reveal relationships between keywords in students' contributions) with network analysis (using the metrics of betweenness centrality and degree centrality on the text mining results to identify strongly connected ideas). These metrics were designed to "reveal the associations between keywords, discourse units, and participants within a discourse" (p. 81). Coupled with visualizations, Lee and Tan argue that these constitute a form of visual analytic for the quality of "promisingness" that is a hallmark of a strong discourse contribution.
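The keyword-network approach just described can be sketched concretely. The toy notes, the bare word-splitting, and the centrality-only ranking below are illustrative assumptions; Lee and Tan (2017) use richer text mining and also compute betweenness centrality, which is omitted here to keep the sketch self-contained.

```python
# Sketch: surface "promising" ideas by building a keyword co-occurrence
# network from knowledge-building notes and ranking keywords by degree
# centrality (highly connected keywords as a proxy for community-bridging
# ideas). All data and parameters are illustrative assumptions.
from itertools import combinations
from collections import defaultdict

notes = [  # hypothetical student contributions, already stop-word filtered
    "light energy plants photosynthesis",
    "plants convert light energy",
    "energy transfer food chains",
    "photosynthesis produces oxygen",
]

# Nodes are keywords; an edge links two keywords that co-occur in a note.
neighbors = defaultdict(set)
for note in notes:
    for a, b in combinations(sorted(set(note.split())), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

# Degree centrality: fraction of the other keywords each keyword connects to.
n = len(neighbors)
centrality = {w: len(nbrs) / (n - 1) for w, nbrs in neighbors.items()}

# Keywords that tie many contributions together are candidate promising ideas.
ranked = sorted(centrality, key=centrality.get, reverse=True)
print(ranked[:3])
```

In this toy corpus, "energy" links the most contributions and so ranks first; a real analysis would weight edges by co-occurrence frequency and filter by thread structure.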
2.1.2 Analytics of Joint Attention
Analytics of collaborative knowledge building evolved through innovating the analyses applied to existing data sources from an established CSCL software tool. Opportunities are also emerging to generate analytics based on new sources of data. To illustrate, this example details how eye-tracking has facilitated the examination of joint attention, a central concept in CSCL (Tomasello 1995). Joint attention is theorized as an important mechanism by which shared focus on a common reference helps collaborators coordinate with one another to ground communication (Clark and Brennan 1991) but has traditionally been difficult to empirically assess. Considering (computationally detectable) gaze as an indicator of visual attention (which can be taken as a reasonable proxy for attention more generally), researchers have used dual eye-tracking systems to construct measures of joint visual attention, operationalized as the composite features of gaze similarity or cross-recurrence (Jermann et al. 2011; Schneider and Pea 2013; Sharma et al. 2015). These measures have been useful in investigating the role of joint attention in CSCL in ways not previously possible. For example, Schneider and Pea (2013) found that an intervention designed to support joint visual attention in dyads collaborating at a distance (by making participants’ gaze visible to each other) achieved its goal and that greater joint visual attention correlated with higher quality collaborative processes and improved learning outcomes. Going beyond the simple notion that more joint visual attention is better for collaboration, Schneider and Pea (2015) sought to examine its relationship with the related concept of transactive discourse (Stahl 2013). 
They examined the association of moments of relatively high and low joint visual attention in the previous study, with patterns in the coherence of talk (a derived feature constructed from the computationally detectable words used via a sliding time-window analytic of cosine similarity). This allowed them to identify different potential modes of
collaboration. For example, an exchange in which both joint visual attention and verbal coherence were high was spatially focused, with referents used to anchor the dialogue (e.g., "this one right there"), while an exchange with high coherence but low joint visual attention illustrated an attempt to integrate information across the task. In a separate study of collocated collaboration using a tangible tabletop, Schneider et al. (2015) found that using three-dimensional representations of shelves in a warehouse optimization task led to greater joint visual attention than two-dimensional representations, and that the greater overall levels of joint visual attention were associated with higher task performance and, in some cases, learning outcomes and quality of collaborative processes. Deeper examination of the eye-tracking data revealed a critical aspect of group dynamics: groups in which learners equally shared responsibility for initiating joint visual attention showed higher learning gains than those in which joint visual attention was primarily initiated by only one of the learning partners (Schneider et al. 2016). This second example illustrates how the technical capabilities of analytics (the ability to digitally capture gaze to the microsecond) can also support the generation of new conceptual categories (construction of the derived feature of joint attention initiator opened the possibility for the construct of a leading learner). Additionally, in both this set of studies and the one described above, there was evidence that the more successful dyads moved fluidly between different (spatial) parts of the problem to be solved, rather than focusing on them serially (Schneider and Pea 2013; Schneider et al. 2016). Together, these studies advance our understanding of joint attention by identifying different patterns of partner gaze that can occur, examining how they do (or do not) contribute to learning, and testing designs through which they can be supported.
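A cross-recurrence style measure of joint visual attention, of the kind used in the dual eye-tracking studies above, can be sketched in a few lines. The gaze samples, the 50-pixel radius, and the ±2-sample lag window below are illustrative assumptions, not parameter values from the cited work.

```python
# Sketch: joint visual attention as gaze cross-recurrence between two
# partners' gaze streams (lists of (x, y) screen coordinates sampled at a
# fixed rate). A sample "recurs" if the partner fixates nearby within a
# small temporal lag, allowing for gaze following rather than exact synchrony.
import math

def joint_visual_attention(gaze_a, gaze_b, radius=50.0, max_lag=2):
    """Fraction of A's samples for which B's gaze falls within `radius`
    pixels at some lag of up to `max_lag` samples in either direction."""
    hits = 0
    for t, (xa, ya) in enumerate(gaze_a):
        for lag in range(-max_lag, max_lag + 1):
            u = t + lag
            if 0 <= u < len(gaze_b):
                xb, yb = gaze_b[u]
                if math.hypot(xa - xb, ya - yb) <= radius:
                    hits += 1
                    break
    return hits / len(gaze_a)

# Partners fixate the same region early on, then partner A looks away.
a = [(100, 100), (110, 105), (300, 400), (900, 50)]
b = [(105, 102), (500, 90), (310, 405), (600, 600)]
print(joint_visual_attention(a, b))  # 3 of 4 samples recur -> 0.75
```

The lag window is the design choice that distinguishes cross-recurrence from strict moment-by-moment gaze similarity: it credits a partner who looks at the same referent a fraction of a second later.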
2.2 From Understanding to Action
The above examples showed how researchers have worked to connect clicks to constructs in order to develop meaningful insight into collaborative learning processes. Once valid features and metrics have been built to serve as proxies for pedagogically significant constructs, it may then be possible for an analytics infrastructure to partially or fully automate that analytical workflow. This makes it possible to go beyond tools that support researchers to create tools that support students and educators by providing meaningful representations of collaborative activity patterns in a timely manner. Such Collaborative Learning Analytics create a feedback loop by generating information that can trigger computer-initiated adaptations to the conditions of collaboration or be provided to students and educators to provoke reflection, and potentially, changes that improve collaborative learning.
3 State of the Art: Collaborative Learning Analytics (CLA)

Recent years have seen a shift in LA, from a tool for researchers to understand the processes and learning impacts of CSCL (analytics of collaborative learning: ACL), to a way to support CSCL (collaborative learning analytics: CLA). This new emphasis is marked by a focus on designing learner- and instructor-facing analytics that provide timely feedback on collaborative learning processes. In this way, CLA involves treating the outputs of analyzing collaborative learning (ACL) as inputs to improve collaboration quality. Consequently, a key challenge in translating "ACL" to "CLA" is navigating human–computer interaction and human factors considerations; for instance, how to usefully present analytic information as feedback on collaborative learning processes, and how to support people in interacting with, comprehending, and taking action on that feedback. The developing area of CLA thus has significant potential to become a core mediating tool to support collaborative learning. This broad idea is not a new one in CSCL. For example, in 2007, the notion of group mirrors was introduced (Dillenbourg and Fischer 2007), while a 2011 special issue of Computers in Human Behavior (Bodemer and Dehler 2011) built on earlier efforts in the related field of computer-supported cooperative work (CSCW) to address the question of group awareness in CSCL environments. In this research, devices such as interactive tabletops and other large displays were used to show groups information about their interactions (in the simplest example, the number of times each participant spoke) with the goal of encouraging reflection in real time (Jermann and Dillenbourg 2008). LA extends this tradition of showing collaborating learners information about their processes by capturing and analyzing data in richer, more complex ways.
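The "simplest example" of a group mirror mentioned above can be made concrete. The event log format and the evenness index below are illustrative assumptions, not the design of the cited systems.

```python
# Sketch: a minimal "group mirror" in the spirit of Jermann and Dillenbourg
# (2008): count how often each participant has spoken and derive a simple
# evenness index the group can reflect on during collaboration.
from collections import Counter

events = [  # (timestamp_seconds, speaker) from a hypothetical session log
    (0, "ana"), (4, "ben"), (9, "ana"), (12, "ana"),
    (18, "ana"), (25, "ben"), (31, "ana"),
]

turns = Counter(speaker for _, speaker in events)
total = sum(turns.values())
k = len(turns)

# Evenness in [0, 1]: 1.0 means perfectly balanced turn-taking (assumes k >= 2).
evenness = 1 - sum(abs(c / total - 1 / k) for c in turns.values()) / (2 * (1 - 1 / k))

# A textual "mirror" a shared display could render for the group.
for speaker, count in turns.most_common():
    print(f"{speaker:>4} {'#' * count} ({count})")
print(f"balance: {evenness:.2f}")
```

The point of a group mirror is deliberately modest: it does not diagnose or intervene, it simply makes the interaction pattern visible so the group can regulate itself.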
CLA offers the potential for the CSCL community to design new kinds of support for collaborative learning that impose less structure a priori, instead providing ongoing assistance through real-time, theory-grounded, scalable feedback. This addresses a concern within the CSCL community that some of the original emancipatory spirit of the field is lost when the implementation of effective CSCL requires strong scripts that limit the range of interaction possibilities (Wise and Schwarz 2017). The emergence of LA offers the opportunity to address this issue by developing learning environments in which the processes of interaction with computer support are less tightly predefined, with the system instead acting responsively to the learners and their interactions. Such responsiveness also has the potential to extend the sophistication of CSCL scripts from relatively fixed templates for learning interactions to dynamic models in which conditions and patterns of collaboration are adjusted and calibrated responsively before and during a learning episode. For example, this might include the customization of scripts for particular groups of learners or group-interaction dynamics. It might also involve the use of scripts that, to use Fischer et al.'s (2013) term, fade, or adapt over time, adjusting the nature of the structures put in place to support learning. In doing so, there is also the potential to advance our knowledge of
collaboration and the systems we develop to support it from being relatively domain general (e.g., “assigning roles to learners is helpful for collaboration”) to one in which the support (and related knowledge claims) is more tightly specified to the people and learning tasks involved (e.g., “assigning roles that take distinct perspectives may be helpful for this task since the goal is broad idea generation and your group tends to arrive at quick consensus. . .”). In summary, CLA offers the potential for a new chapter of CSCL research; however, it must navigate several key tensions to do so.
3.1 Changing the Shape of Support for Collaborative Learning Through CLA
Collaborative learning analytics (CLA) is an emerging area of research and practice that builds on prior CSCL work to make a distinctive contribution to the nature of computer support for collaborative learning. The final piece of the technical puzzle is the growing development of systems and infrastructure to "close the loop" in real time; that is, to automate the capture, analysis, and feeding back of information about collaborative learning processes to the collaboration while it is still in progress. Marking the shift to CLA, we thus see a growing focus on the questions that arise in this final phase: how learning analytics information can be integrated and implemented in practical learning contexts to inform learning. In doing so, CLA must navigate three core tensions, which we discuss in detail below:

1. What will CLA do? The relative balance of technology and human agency.
2. Who will CLA attend to? Support for activity at different levels (group, individual, collective).
3. How will CLA operate? Iterations of refining collaborative learning efforts.
3.1.1 What Will CLA Do? The Relative Balance of Technology and Human Agency
A key consideration for CLA is the mode of action for the computer and the agency of the actors in a particular learning context. CSCL research has a long history of attention to the agency of human (particularly student) actors; as stated by Scardamalia et al. (1989), “the computer environment should not be providing the knowledge and intelligence to guide learning, it should be providing the facilitating structure and tools that enable students to make maximum use of their own intelligence and knowledge” (p. 54). Attention to human agency has also been an area of focus in LA. Drawing on precedents in a range of fields that seek to keep the human “in the loop” when working with intelligent machines, Kitto et al. (2018) have argued that knowing when to disagree with analytics (and being empowered to do
so) is both an important competence to build and an effective pedagogic strategy. Deriving from this, in deploying CLA, attention must be paid to the ways in which, through flexible implementation, LA can be used to support rather than supplant the agency of learners. Consideration must also be given to whether analytics are seen as a temporary scaffold for collaborative learning, whose role will eventually be taken over and internalized by learners, or as a performance support system that will continually provide data to inform collaboration on an ongoing basis. To consider these questions, we explore two different routes to building CLA: (1) Adaptive CSCL systems, in which changes to collaboration based on analytics are algorithmically initiated, and (2) Adaptable CSCL systems, in which changes to collaboration based on analytics are initiated by the users (Wise 2019). Adaptive CSCL assigns a great degree of agency to the computer to adjust learning tasks and their content, potentially with input from (or a veto by) learners or educators. In contrast, Adaptable CSCL is designed to promote reflection on the part of learners and instructors and to support action based on that reflection.
Adaptive CSCL

In adaptive CSCL systems, tools are designed to algorithmically alter learning environments in response to data during a learning episode. More specifically, the aim of adaptive CSCL is to use "intelligent technologies to improve student collaboration and learning by assessing the current state of the interaction and providing a tailored pedagogical intervention" (Soller et al. 2005). Rummel, Walker, and Aleven (2016) point to the potential of ongoing work at the intersection of learning analytics and artificial intelligence in education to provide such support. The support may target the ways in which groups are configured and formed (e.g., the characteristics of members and their roles), the nature of group interaction, or the nature of the group's understanding (Magnisalis et al. 2011). For example, Howley et al. (2012) investigated support for group composition and interaction by looking at how an intelligent dialogue system influenced interaction in groups with varying self-efficacy compositions. Similarly, Walker et al. (2011) explored the impact of adaptive support on helping behaviors in peer tutoring. Indeed, there is an emerging body of work on Intelligent Support for Learning in Groups, exemplified by a special issue of the International Journal of Artificial Intelligence in Education (Kumar and Kim 2014), a series of workshops of the same name at the AIED and ITS conferences, and a recent CSCL symposium on group formation with a similar theme (Tsovaltzi et al. 2019). While our default assumption might be that the primary agent in such systems is the computer itself, alternative adaptive models have been developed in which humans retain more control. Rummel et al.
(2016) directly address concerns about a "dystopian" future in which artificial intelligence support for collaboration is reactive, rigid, and robs learners (and teachers) of agency, by describing a more "utopian" alternative in which AI support is provided in a responsive, nuanced, and flexible way. In this vision, the adaptive agent supports development
grounded in the learning sciences, affords agency to both educators and students, and works through explainable models. Thus, we can imagine a continuum of systems, from those in which the adaptive system drives adaptation in the learning context to those in which students and teachers retain the ability to adjust, "speak back to," or ignore adaptive features, blurring into the kinds of adaptable systems we now turn to.
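The adaptive loop named by Soller et al. (2005), assessing the current state of the interaction and selecting a tailored intervention, can be sketched as a simple rule-based system. The indicators, thresholds, and prompt texts below are illustrative assumptions; real systems use far richer models, and surfacing the rule that fired is one simple way to keep the intervention explainable so that teachers and learners can override it.

```python
# Sketch: a rule-based adaptive-CSCL loop. assess() diagnoses the current
# collaboration state from raw interaction indicators; INTERVENTIONS maps
# each diagnosis to a tailored pedagogical action (or to None).

def assess(state):
    """Diagnose the collaboration from raw interaction indicators."""
    turns = state["turns_by_member"]
    if turns and max(turns.values()) > 3 * min(turns.values()):
        return "participation_imbalance"
    if state["seconds_since_last_turn"] > 120:
        return "stalled_discussion"
    return "ok"

INTERVENTIONS = {
    "participation_imbalance":
        "Prompt quieter members: 'What is your view on the current proposal?'",
    "stalled_discussion":
        "Inject a question: 'Where does the group agree so far?'",
    "ok": None,  # no intervention; keep monitoring
}

state = {"turns_by_member": {"ana": 9, "ben": 2}, "seconds_since_last_turn": 30}
diagnosis = assess(state)
print(diagnosis, "->", INTERVENTIONS[diagnosis])
```

Moving along the continuum toward adaptable systems would mean showing the diagnosis and proposed prompt to a teacher for approval rather than delivering it automatically.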
Adaptable CSCL

In adaptable CSCL, collaborative learning processes are made visible for reflection by educators and learners so that they can adjust their learning interactions or the learning environment itself. Adaptable CSCL systems commonly deploy analytics (previously used in research contexts) to display information to students and educators about their collaborative interactions for reflection (see Liu and Nesbit 2020 for a recent review of CSCL dashboards). In the future, adaptable CSCL may expand to coaching systems that also offer support in interpreting and acting on these representations of interaction processes (see, e.g., Soller et al. 2005). Adaptable CSCL systems can include analytics embedded in the interface used for collaboration, to support reflection-in-action, or extracted from it, to support reflection-on-action (Wise et al. 2014). For example, the Starburst discussion forum tool embeds analytics by visualizing the online conversation as a hyperbolic tree in which the size and color of the nodes communicate information to students about the structure of the discussion, the location of their comments within it, whether contributions are receiving replies, and whether threads are being abandoned or ignored (Wise et al. 2014). Separately, log-file trace data about students' speaking and listening activity were extracted from the system and provided to students in a separate (digital) space, where they were asked to "step back" from the action and reflect on the group's collaborative process and their role in it. Other examples of embedded CLA include various visual representations of interaction designed to maintain group awareness during collaboration (e.g., Bodemer and Dehler 2011), while the classic example of extracted CLA is an analytics dashboard whose use is separated from the collaboration itself in time and/or space (e.g., van Leeuwen 2015; Tan et al. 2017).
Recent work in multimodal analytics has also sought to combine real-time, temporally and spatially embedded analytics with extracted analytics in the form of post hoc visualizations for reflection. For example, as described in Gesture and Gaze: Multimodal Data in CSCL (Schneider et al. this volume), visualizations can support educator agency in real time by indicating how educators move through collaborative learning spaces and by suggesting groups that might require more attention. After sessions, the same technology can support both individual students and collaborative groups through exploration of extracted analytics that show key events for individuals in a teamwork context against an idealized process (see Echeverria et al. 2019). Indeed, tools are similarly emerging that shift from analytics of CSCL in the context
of the knowledge-building construct described earlier to CLA for that construct. For example, Tan et al. (2017, 2018) discuss the use of learning analytics dashboards to support pedagogic adaptation by a teacher with their students in the context of complex literacy practices. In summary, Adaptable CSCL is an active area of research and innovation in which a variety of systems are being built and tested.
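Extracting "speaking" and "listening" metrics from forum log traces, as in the Starburst work described above, can be sketched in a few lines. The log schema (user, action, post id) is an illustrative assumption, not the actual system's format.

```python
# Sketch: derive per-student speaking (posts authored) and listening
# (distinct posts opened) metrics from raw forum log events, for use in an
# extracted, reflection-on-action display.
from collections import defaultdict

log = [  # (user, action, post_id) events from a hypothetical forum
    ("ana", "create", "p1"), ("ben", "read", "p1"),
    ("ben", "create", "p2"), ("ana", "read", "p2"),
    ("ana", "read", "p1"), ("cal", "read", "p1"),
]

speaking = defaultdict(int)    # posts authored per user
listening = defaultdict(set)   # distinct posts each user has opened

for user, action, post in log:
    if action == "create":
        speaking[user] += 1
    elif action == "read":
        listening[user].add(post)

# A summary students could "step back" and reflect on after the discussion.
for user in sorted(set(speaking) | set(listening)):
    print(f"{user}: authored {speaking[user]}, read {len(listening[user])}")
```

Counting distinct posts read, rather than raw click events, is the design choice that makes the "listening" metric a plausible proxy for attention to others' ideas rather than for interface activity.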
3.1.2 Who Will CLA Attend to? Support for Activity at Different Levels
The level(s) of analysis for studying collaborative learning is a long-standing consideration in CSCL, with different work taking individual learners, small groups, and large collectives as the unit of analysis (as well as tackling the challenging question of how to make claims that bridge across these different levels; Stahl et al. 2013). The introduction of CLA brings an additional layer of complexity, as the goal is no longer simply understanding collaborative learning from a group, individual, or collective perspective, but also acting on it. This is an important issue because learning analytics more broadly have largely stayed focused on the individual as the "target" for analytic insight and resultant action, whereas CLA may require action at multiple levels. For example, we can imagine a CSCL system in which feedback prompts reflections ranging from "how is my contribution to the collaboration going and what might I do to improve it?" to "how is our collaboration going and what can we do to improve it?" Drawing on work from the field of self-regulation, these can helpfully be distinguished by considering whether the goal is to support self- or socially shared regulation (Wise et al. 2015). Where computer action supports collaborative learning, the question arises of how the intervention or changes to the conditions of collaboration are intended to impact learners as individuals and/or at the level of the unified group.
3.1.3 How Will CLA Operate? Iterations of Refining Collaborative Learning Efforts
The final core question for CLA relates to its theory of action: How is CLA expected to impact collaboration, and how can this process be designed for? CLA emphasizes improvement to collaborative processes and learning as the outcome measure; thus, our attention shifts from the accuracy of representations of theoretical constructs to the audience(s) for action and the means by which such action will occur (Wise et al. 2018). From this perspective, while analytics may be imperfect, with appropriate design they can still provide useful insights to learners and instructors (Kitto et al. 2018). To support learning, CLA must be embedded in practical learning contexts in ways that support thoughtful interpretation of, and action on, the collaborative learning interactions that occur within them. This embedding relates to guidance surrounding the implementation of learning analytics generally, such as for their use to be incorporated as an integral part of the learning experience and for reference
points to be provided such that users have a ready way to evaluate the meaning of the information and representations offered (Wise and Vytasek 2017). In addition, a key concern in implementing CLA is that, for LA to be effective, learners need both to recognize when they are experiencing a learning problem (perhaps prompted by the analytics) and to know how to address it, a translation that requires a conceptual leap from the analytic information (Wise et al. 2014). There are also concerns specific to CLA. For example, van Leeuwen and colleagues conducted a program of research examining specific ways in which CLA is useful to instructors in monitoring and supporting collaborating groups. They found evidence suggesting that while CLA may or may not improve teachers' ability to notice problems in collaboration, it did increase the specificity of their diagnoses and their confidence in interpretation, leading them to offer more support to the groups (van Leeuwen et al. 2014; van Leeuwen 2015; van Leeuwen et al. 2015). While this work examines how instructors work with CLA to improve collaborative learning, other models of action address the ways in which students work with CLA (Wise et al. 2016) or how students and teachers can come together to make sense of and act on CLA (Tan et al. 2017, 2018). Further documentation of the mechanisms by which CLA can support and impact collaborative learning is an active area of research, which can also contribute to the effective design of such systems.
4 The Future: Conclusions and New Directions

This chapter has identified some of the key tensions at the intersection of CSCL and Learning Analytics and introduced examples that demonstrate how these have been, and could be, resolved in productive ways. Spanning a rich variety of learning contexts, the potential of log-file data mining, natural language processing, and multimodal analytics to support online and collocated CSCL is clear. In this chapter, we have foregrounded the challenge of grounding analytics in CSCL constructs in a principled way and identified the distribution of agency between learners, educators, and computers as a key design consideration. We have argued that Learning Analytics can provide a powerful new capability in the CSCL toolbox: first, by yielding new insights when deployed as Analytics of Collaborative Learning, and second, by directly supporting learners and educators engaged in CSCL when deployed as Collaborative Learning Analytics. The potential of digital trace data to inform our understanding of collaborative learning has long interested the CSCL community, and questions about how these new techniques can and should be applied have a similarly long history. Discussing the role of 'Computer' in Computer-Supported Collaborative Learning over 25 years ago, Bannon (1994) listed a number of ways we might understand the potential of computers for learning:
1. As a tool for researchers, to gather data for analysis: "the computer makes the task of the researcher easier but does not really affect the collaborative learning process per se" (p. 4).
2. As a platform or "rich microworld" in which students can interact (p. 4).
3. As an automated tutoring tool with which the student interacts or collaborates (pp. 4–5).
4. As a resource to support collaborative learning (the viewpoint he argues for): "The computer can help students to communicate and collaborate on joint activities, providing assistance in the coordination process. This mediational role of the technology emphasizes the possibilities of using the computer not simply as an individual tool but as a medium through which individuals and groups can collaborate with others. In such studies, the computer acts as a support and resource for the collaborating students" (p. 5).

Substituting "Analytics" for "Computer": at a basic level, learning analytics is a tool that can augment research through the collection and analysis of data, the first of these possibilities. There is well-established work that has deployed learning analytics techniques as a research tool to understand the processes of learning in CSCL contexts, often making use of technological affordances that support student interaction, on- and offline (the second of the possibilities). Analytics of collaborative learning (ACL) raise new kinds of challenges, including:

1. How to conduct research in this interdisciplinary space, which requires bringing experts in data mining, learning analytics, education, CSCL, and more into productive dialogue.
2. The relationship between theory and data (and its analysis) in CLA.
3. The specific object of the analytics: for example, the group or the individual; the episode or the idea.
4. How to deal with new kinds of data in this context (multimodal, textual, etc.), and particularly the practical challenges of interoperability of such data across CLA systems and contexts.
Moreover, in the second part of this chapter, we have pointed to the potential of learning analytics to support the fourth possibility raised by Bannon: an emerging approach we have called collaborative learning analytics. In this view, analytics can act as a computer support for collaborative learning. Such a potential allows us as researchers to use analytics as a tool to think with, instantiating and testing theoretical notions about what matters for collaboration by creating analytics that are a part of the collaboration rather than simply a reflection of the environment in which it occurs. This novel potential raises important challenges for CLA:

1. How can CLA support and develop human agency?
2. Who is the audience for CLA? (Groups, individuals, and/or collectives; students and/or teachers.)
3. What is the intended impact of CLA (and how do we evaluate that impact)?
4. How do we design for impact, respecting the need to integrate and implement CLA in practical learning contexts?
5. How can CSCL have an impact in learning contexts where vendors, often commercial, are rapidly developing products?

This chapter introduced a set of examples of research being pursued at the intersection of CSCL and learning analytics; there are, of course, many more that could not be discussed. The potential of log-file data mining, natural language processing, and sensors that support multimodal learning analytics for CSCL is clear across a range of learning contexts. We have suggested that learning analytics provide a new tool in the CSCL toolbox. Moreover, we have argued that CSCL, which offers a contemporary perspective on learning for a knowledge society, is a specific and important site of action for learning analytics research: creating CLA that both builds our understanding of collaborative learning and supports that learning.
References Bakharia, A., & Dawson, S. (2011). SNAPP: a bird’s-eye view of temporal participant interaction. In Proceedings of the 1st international conference on learning analytics and knowledge (pp. 168–173). ACM. Bannon, L. (1994). Issues in computer supported collaborative learning. In C. O’Malley (Ed.), Computer supported collaborative learning. Proceedings of NATO advanced research workshop on computer supported collaborative learning, Aquafredda di Maratea, Italy, Sept. 24–28, 1989. NATO ASI Series, SERS F Vol.128. Berlin: Springer. ISBN 3-40-57740-8. Bodemer, D., & Dehler, J. (2011). Group awareness in CSCL environments. Computers in Human Behavior, 27(3), 1043–1045. https://doi.org/10.1016/j.chb.2010.07.014. Buckingham Shum, S., & Ferguson, R. (2012). Social learning analytics. Journal of Educational Technology & Society, 15(3), 3–26. Retrieved from https://www.jstor.org/stable/ jeductechsoci.15.3.3. Chen, B. (2017). Fostering scientific understanding and epistemic beliefs through judgments of promisingness. Educational Technology Research and Development, 65(2), 255–277. https:// doi.org/10.1007/s11423-016-9467-0. Chen, B., Resendes, M., Chai, C. S., & Hong, H.-Y. (2017). Two tales of time: Uncovering the significance of sequential patterns among contribution types in knowledge-building discourse. Interactive Learning Environments, 25(2), 162–175. https://doi.org/10.1080/10494820.2016. 1276081. Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. Perspectives on Socially Shared Cognition, 13, 127–149. De Liddo, A., Shum, S. B., Quinto, I., Bachler, M., & Cannavacciuolo, L. (2011). Discourse-centric learning analytics. Proceedings of the 1st international conference on learning analytics and knowledge (pp. 23–33). ACM. Dillenbourg, P., & Fischer, F. (2007). Computer-supported collaborative learning: The basics. Zeitschrift für Berufs-und Wirtschaftspädagogik, 21, 111–130. Echeverria, V., Martinez-Maldonado, R., & Buckingham Shum, S. (2019). 
Towards collaboration translucence: Giving meaning to multimodal group data. In Proceedings of ACM CHI conference (CHI’19), Glasgow, UK. New York: ACM. doi: https://doi.org/10.1145/3290605.3300269 Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66.
440
A. F. Wise et al.
Gašević, D., Dawson, S., Mirriahi, N., & Long, P. D. (2015). Learning analytics–A growing field and community engagement (editorial). Journal of Learning Analytics, 2(1), 1–6.
Haythornthwaite, C. (2011). Learning networks, crowds and communities. In Proceedings of the 1st international conference on learning analytics and knowledge (pp. 18–22). ACM.
Howley, I., Adamson, D., Dyke, G., Mayfield, E., Beuth, J., & Rosé, C. P. (2012). Group composition and intelligent dialogue tutors for impacting students' academic self-efficacy. In S. A. Cerri, W. J. Clancey, G. Papadourakis, & K. Panourgia (Eds.), Intelligent tutoring systems: Lecture notes in computer science (Vol. 7315). Berlin, Heidelberg: Springer.
Jermann, P., & Dillenbourg, P. (2008). Group mirrors to support interaction regulation in collaborative problem solving. Computers & Education, 51(1), 279–296. https://doi.org/10.1016/j.compedu.2007.05.012.
Jermann, P., Mullins, D., Nüssli, M.-A., & Dillenbourg, P. (2011). Collaborative gaze footprints: Correlates of interaction quality. CSCL2011 Conference Proceedings (Vol. I – Long Papers, pp. 184–191).
Kirschner, P. A., Buckingham Shum, S. J., & Carr, C. S. (Eds.). (2003). Visualizing argumentation: Software tools for collaborative and educational sense-making. London, UK: Springer.
Kitto, K., Buckingham Shum, S., & Gibson, A. (2018). Embracing imperfection in learning analytics. In Proceedings of LAK18: International conference on learning analytics & knowledge, March 5–9, 2018, Sydney, AUS (pp. 451–460). New York, NY: ACM. https://doi.org/10.1145/3170358.3170413.
Knight, S., Buckingham Shum, S., & Littleton, K. (2014). Epistemology, pedagogy, assessment: Where learning meets analytics in the middle space. Journal of Learning Analytics, 1(2), 23–47. https://doi.org/10.18608/jla.2014.12.3.
Kumar, R., & Kim, J. (2014). Special issue on intelligent support for learning in groups. International Journal of Artificial Intelligence in Education, 24(1), 1–7. https://doi.org/10.1007/s40593-013-0013-5.
Lee, A. V. Y., & Tan, S. C. (2017). Promising ideas for collective advancement of communal knowledge using temporal analytics and cluster analysis. Journal of Learning Analytics, 4(3), 76–101. https://doi.org/10.18608/jla.2017.43.5.
Liu, A. L., & Nesbit, J. C. (2020). Dashboards for computer-supported collaborative learning. In M. Virvou, E. Alepis, G. A. Tsihrintzis, & L. C. Jain (Eds.), Machine learning paradigms: Advances in learning analytics (pp. 157–182). Cham: Springer. https://doi.org/10.1007/978-3-030-13743-4_9.
Magnisalis, I., Demetriadis, S., & Karakostas, A. (2011). Adaptive and intelligent systems for collaborative learning support: A review of the field. IEEE Transactions on Learning Technologies, 4(1), 5–20.
McLaren, B. M., Scheuer, O., & Mikšátko, J. (2010). Supporting collaborative learning and e-discussions using artificial intelligence techniques. International Journal of Artificial Intelligence in Education, 20(1), 1–46. https://doi.org/10.3233/JAI-2010-0001.
Okada, A., Buckingham Shum, S., & Sherborne, T. (Eds.). (2014). Knowledge cartography: Software tools and mapping techniques (2nd ed.). London, UK: Springer.
Rosé, C. P. (2018). Learning analytics in the learning sciences. In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 511–519). New York: Routledge.
Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education, 26(2), 784–795.
Scardamalia, M. (2003). Knowledge building environments: Extending the limits of the possible in education and knowledge work. In A. DiStefano, K. E. Rudestam, & R. Silverman (Eds.), Encyclopedia of distributed learning (pp. 269–272). Thousand Oaks, CA: Sage Publications.
Scardamalia, M., Bereiter, C., McLean, R. S., Swallow, J., & Woodruff, E. (1989). Computer-supported intentional learning environments. Journal of Educational Computing Research, 5(1), 51–68.
Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397.
Collaborative Learning Analytics
Schneider, B., & Pea, R. (2015). Does seeing one another's gaze affect group dialogue? A computational approach. Journal of Learning Analytics, 2(2), 107–133.
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2016). Detecting collaborative dynamics using mobile eye-trackers. Proceedings of ICLS 2016 (pp. 522–529). Singapore: International Society of the Learning Sciences.
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. D. (2015). 3D tangibles facilitate joint visual attention in dyads. Proceedings of CSCL 2015 (pp. 158–165). Gothenburg, Sweden: International Society of the Learning Sciences.
Schneider, B., Worsley, M., & Martinez-Maldonado, R. (this volume). Gesture and gaze: Multimodal data in dyadic interactions. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Sharma, K., Caballero, D., Verma, H., Jermann, P., & Dillenbourg, P. (2015). Looking AT versus looking THROUGH: A dual eye-tracking study in MOOC context. Proceedings of CSCL 2015 (pp. 260–267). Gothenburg, Sweden: International Society of the Learning Sciences.
Soller, A., Jermann, P., Mühlenbrock, M., & Martinez, A. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290.
Stahl, G. (2013). Transactive discourse in CSCL. International Journal of Computer-Supported Collaborative Learning, 8(2), 145–147.
Stahl, G., Jeong, H., Ludvigsen, S., Sawyer, R. K., & Suthers, D. D. (2013). Workshop: Across levels of learning: A workshop on resources connecting levels of analysis. Presented at the International conference of computer-supported collaborative learning (CSCL 2013), Madison, WI.
Suthers, D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42. https://doi.org/10.1007/s11412-009-9081-9.
Suthers, D., & Rosen, D. (2011). A unified framework for multi-level analysis of distributed learning. In Proceedings of the 1st international conference on learning analytics and knowledge (pp. 64–74). ACM.
Suthers, D., Weiner, A., Connelly, J., & Paolucci, M. (1995). Groupware for developing critical discussion skills. In J. L. Schnase & E. L. Cunnius (Eds.), Proceedings of CSCL95: First international conference on computer support for collaborative learning (pp. 341–348). Bloomington, IN: Lawrence Erlbaum Associates.
Tan, J. P.-L., Koh, E., Jonathan, C. R., & Tay, S. H. (2018). Visible teaching in action: Using the WiREAD learning analytics dashboard for pedagogical adaptivity. Paper presented at the 2018 Annual meeting of the American educational research association. Retrieved October 31, from the AERA Online Paper Repository.
Tan, J. P. L., Koh, E., Jonathan, C. R., & Yang, S. (2017). Learner dashboards a double-edged sword? Students' sense-making of a collaborative critical reading and learning analytics environment for fostering 21st century literacies. Journal of Learning Analytics, 4(1), 117–140.
Teplovs, C., & Fujita, N. (2013). Socio-dynamic latent semantic learner models. In D. D. Suthers, K. Lund, C. P. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 383–396). New York: Springer. https://doi.org/10.1007/978-1-4614-8960-3_21.
Tomasello, M. (1995). Joint attention as social cognition. In C. Moore & P. J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Hillsdale, NJ: Lawrence Erlbaum Associates.
Tsovaltzi, D., Weinberger, A., Schmitt, L., Bellhäuser, H., Müller, A., Konert, J., et al. (2019). Group formation in the digital age: Relevant characteristics, their diagnosis, and combination for productive collaboration. In K. Lund, G. P. Niccolai, É. Lavoué, C. Hmelo-Silver, G. Gweon, & M. Baker (Eds.), A wide lens: Combining embodied, enactive, extended, and embedded learning in collaborative settings, 13th international conference on computer supported collaborative learning (Vol. 2, pp. 719–726). Retrieved from https://ris.utwente.nl/ws/portalfiles/portal/129957345/CSCL_2019_Volume_2.pdf#page=205.
van Leeuwen, A. (2015). Learning analytics to support teachers during synchronous CSCL: Balancing between overview and overload. Journal of Learning Analytics, 2(2), 138–162.
van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2014). Supporting teachers in guiding collaborating students: Effects of learning analytics in CSCL. Computers & Education, 79, 28–39.
van Leeuwen, A., Janssen, J., Erkens, G., & Brekelmans, M. (2015). Teacher regulation of cognitive activities during student collaboration: Effects of learning analytics. Computers & Education, 90, 80–94.
Vogel, F., Weinberger, A., & Fischer, F. (this volume). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Walker, E., Rummel, N., & Koedinger, K. R. (2011). Designing automated adaptive support to improve student helping behaviors in a peer tutoring activity. International Journal of Computer-Supported Collaborative Learning, 6(2), 279–306.
Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46(1), 71–95.
Wise, A., Zhao, Y., & Hausknecht, S. (2014). Learning analytics for online discussions: Embedded and extracted approaches. Journal of Learning Analytics, 1(2), 48–71.
Wise, A. F. (2019). Learning analytics: Using data-informed decision-making to improve teaching and learning. In O. Adesope & A. G. Rudd (Eds.), Contemporary technologies in education: Maximizing student engagement, motivation, and learning (pp. 119–143). New York: Palgrave Macmillan.
Wise, A. F., Azevedo, R., Stegmann, K., Malmberg, J., Rosé, C. P., et al. (2015). CSCL and learning analytics: Opportunities to support social interaction, self-regulation and socially shared regulation. In Proceedings of the 11th International Conference on Computer Supported Collaborative Learning (pp. 607–614). Gothenburg, Sweden: ICLS.
Wise, A. F., Knight, S., & Ochoa, X. (2018). When are learning analytics ready and what are they ready for. Journal of Learning Analytics, 5(3), 1–4.
Wise, A. F., & Schwarz, B. S. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Wise, A. F., & Vytasek, J. M. (2017). Learning analytics implementation design. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (1st ed., pp. 151–160). Edmonton, AB: SoLAR.
Wise, A. F., Vytasek, J. M., Hausknecht, S., & Zhao, Y. (2016). Developing learning analytics design knowledge in the "Middle Space": The student tuning model and align design framework for learning analytics use. Online Learning, 20(2), 155–182.
Further Readings²

Erkens, G. (n.d.). Gijsbert Erkens: Automated argumentation analyses. Retrieved September 16, 2019, from ISLS NAPLES Network website: http://isls-naples.psy.lmu.de/intro/all-webinars/erkens/index.html. An example of how analyses of argumentation can be automated for insight.
² For an overview of learning analytics, readers may refer to the Journal of Learning Analytics (learning-analytics.info, including a special section in Spring 2021 on collaborative learning analytics), the International Conference on Learning Analytics & Knowledge (LAK, www.solaresearch.org/events/lak/), and the Handbook of Learning Analytics (1st edition available at http://solaresearch.org/publications/hla-17; second edition forthcoming). There are also examples of learning analytics work in CSCL, including via the following excellent NAPLES resources:
Janssen, J. (2013). Jeroen Janssen: Group awareness tools. Retrieved September 16, 2019, from ISLS NAPLES Network website: http://isls-naples.psy.lmu.de/intro/all-webinars/janssen_video/index.html. An overview of a classic CSCL area with parallels in emerging learning analytics dashboard work.
Jermann, P. (n.d.). Patrick Jermann: Physiological measures in learning sciences research. Retrieved September 16, 2019, from ISLS NAPLES Network website: http://isls-naples.psy.lmu.de/intro/all-webinars/jermann/index.html. A specific example of how physiological measures can give insight into constructs of interest to the CSCL community.
Rosé, C. P. (2014). Carolyn Rosé: Learning analytics and educational data mining in learning discourses. Retrieved September 16, 2019, from ISLS NAPLES Network website: http://isls-naples.psy.lmu.de/intro/all-webinars/rose_all/index.html. Potential for CLA in applying learning analytics and educational data mining to learning discourses.
Williamson Shaffer, D. (n.d.). David Williamson Shaffer: Tools of Quantitative Ethnography: Epistemic Network Analysis and nCoder. Retrieved September 16, 2019, from ISLS NAPLES Network website: http://isls-naples.psy.lmu.de/intro/all-webinars/shaffer_video/index.html. An example learning analytics approach grounded in the learning sciences, which demonstrates moving through analytic lenses.
Tools and Resources for Setting Up Collaborative Spaces

Carolyn Rosé and Yannis Dimitriadis
Abstract  Great strides have been made in the field of CSCL toward fostering diversity at all levels, including theory, methods, and technologies. This chapter provides a reflection on the field from the standpoint of the endeavor to provide tools to expedite the work we do as learning scientists. It points to some notable existing resources while also exploring the reasons why the development of high-profile, wide-distribution tools to support the work has not been a priority for the community. It then provides a vision for future work that respects these reasons but points to ways community resources might be better served with greater care and attention allocated to this vision.

Keywords  Tools · Technology · CSCL platforms
C. Rosé (*) Language Technologies Institute and Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. e-mail: [email protected]
Y. Dimitriadis GSIC/EMIC Research Group and Department of Signal Theory, Communications and Telematics Engineering, Universidad de Valladolid, Valladolid, Spain. e-mail: [email protected]
© Springer Nature Switzerland AG 2021. U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19. https://doi.org/10.1007/978-3-030-65291-3_24

1 Definitions and Scope: What Is the Difference Between a Tool and a Technology, and Why Is That Distinction Important?

The many chapters within this volume have illustrated the scientific work of the CSCL community using a multitude of lenses, which serve to separate a spectrum of theories, methods, and technologies that are the raw material from which our work as a scientific community is built. Theories are storehouses of knowledge that have grown from our research, while methods are practices we take up in order to interpret
and use data or observations, and then contribute back to theory. Collaboration and learning each exist as both personal and interpersonal processes. With that in mind, technologies are used to create affordances that foster processes of collaboration and learning, as well as to facilitate analysis of data traces captured during collaboration (Kirschner 2002). This does not require that the technologies were initially created specifically for this purpose. One way of conceptualizing our research as learning scientists is that we are working to map the space of properties of technologies that can be leveraged to create affordances that facilitate learning (Sandoval 2014). Technologies function within a social environment in which the affordances are not inherent within the artifact itself, but embedded within the cultural practices around the artifact, which support perception of those affordances (Norman 1999).

Although wielding technologies with their multifaceted properties requires some art in itself, and though some work in the design-based research area has begun to describe methodologies by which we as a field can systematically map out the space just mentioned, overall the published work in CSCL foregrounds issues related to collaboration and learning and relegates tools and technologies to the background. In fact, our experience writing this chapter is that, even while specifically searching for articles about tools developed in this community, we were about ten times more likely to find papers about empirically grounded principles that would guide the development of effective tools than papers about tools built using those principles. While this makes sense given the primary scientific focus of the community, it makes it challenging to learn from our literature what the available tools and technologies are, though making those resources more visible would lower barriers to entry to our field.
Running the risk of appearing "technocentric" in contrast to that norm, the goal of this chapter is to provide a conceptual frame in which to think about emerging technologies in our work as learning scientists and how tools are meant to make those technologies accessible within our work, thus highlighting rather than obscuring the role of tools in the learning sciences.

The fact is that we live in a progressively more computer-based and online world. This was far less true at the inception of the CSCL community than it is today. Even if we limit the time our kids spend on screens when they are small, more and more, human beings in the modern world live their lives in interaction with and within computers. Thus, if learning is going to be situated within the world, we must understand how to situate it in the spaces where people live and act, and increasingly that means we must learn to situate learning within online and computer-based spaces, which are created by technologists, for better or for worse.

That being the case, technologists continue to build that world as they develop new technologies that initially appear quite opaque and out of reach to those outside core fields of computer science, and that sometimes strain the skills of technical people in more applied areas at the frontiers of fields like education, such as educational technology. It takes work to make new technologies usable by those whose center of gravity falls within spheres other than technology. In an increasingly rapidly changing world, it becomes ever more important for our field to bring computer science, design, education, and psychology together in productive synergy in order to keep up. As new technologies are created, most probably beginning in core fields of CS quite apart from research in education, we as a community can choose to ignore or reject them out of hand, or we can test them and adapt them using our theories as guides, and then refine them through continued experimentation. While we can adopt a reactionary stance that clings to fear of the potential dangers of flocking to every passing technical fad (Rummel et al. 2016), that stance risks losing touch with the world students live in and becoming irrelevant.

The specific focus of this chapter is on tools. However, the term "tool" has been used in more than one way within the field and thus requires a working definition for this chapter. Most typically, the word tool conjures up images connected with technology in some way. However, the recent Wise and Schwarz (2017) article characterizing the field and presenting eight provocations adopts a different perspective. That article described tensions around a vision for providing a comprehensive set of tools to the field as the first of the provocations. Not only does that provocation acknowledge that the field is quite far from having such a set to offer and question whether that is a goal we should take up; its notion of "tool" is also distinctly different from the one used here. In particular, the focus of that provocation was on theoretical frameworks as tools, and very little of what is written in it is about technology at all. We will come back to this provocation in a moment.

The notion of "tool" adopted in this chapter is that a tool is a technological artifact, and in connection with that we examine how artifacts may function as tools as part of social processes that cast the technology in that tool role. However, we do not consider every technological artifact to be a tool. The confusion arises because the same artifact may function sometimes as a tool and sometimes as a technology.

Fig. 1 An illustration of the relationship between technology, tools, and interventions
However, a tool need not include technology per se, and a technology may not be a tool; thus there must be a distinction. We illustrate this idea in Fig. 1 and explain it in the remainder of this section. Note that there is nothing in this conceptualization that requires either the tools or the technologies to exist only for the purpose of use within our field of learning sciences. Nevertheless, to the extent that we as learning scientists use these to carry out our mission, they become our technologies and our tools, even if they were not originally created with that intent, and even if they are not limited to that use.

The English Oxford Living Dictionary defines a tool as "a device or implement, especially one held in the hand, used to carry out a particular function." Here we adopt a slightly broader framing, which allows a tool to be used within nonphysical spaces. In particular, we consider a tool to be an instrument designed to enhance the ability of an individual or group in their active work to produce an artifact. For example, a dictionary is a tool that might be used in support of the formulation and articulation of an idea. The dictionary need not be physical; it might be accessible through a speech interface. The act of formulation might occur within the speaker's mind, and the articulation may again be in speech. Thus, neither the tool nor the artifact is physical in this scenario. The key is the agency taken by the tool user.

This definition, which privileges agency, distinguishes tools from technologies. For example, social awareness visualizations create affordances that support collaborative processes, but when these are placed within an environment through design and development, the users who are then placed into the environment may use or benefit from those resources without taking agency to select those affordances to actively pursue a goal. In that case, the visualizations are technological affordances, but they are not functioning as tools, though tools may have enabled those affordances to be embedded within the environment.

As illustrated in Fig. 1, tools transform technology into interventions used by teachers and students. To the extent that technologies are what enable affordances that support learning processes, technologies cannot be considered apart from the affordances they support. Developers use tools in accordance with design principles and contribute back to design principles through their practice (Sandoval 2014). Theory sets expectations for what can be achieved through the use of tools in creating, orchestrating, and studying collaborative learning. Researchers use tools to develop interventions in accordance with theory and contribute back to theory through their experimentation with those interventions. Depending on role, the same artifact may serve as a technology (e.g., the software for a collaboration platform, which may be found in a code repository), may have associated tools that can be used to build interventions (e.g., configuration scripts that come with the software), or may be an intervention itself (e.g., an instance of the environment that houses an online activity to be compared experimentally with a face-to-face version).

Coming back to Wise and Schwarz (2017), we must acknowledge that technologies and tools as they are appropriated within our work do not exist within a theoretical and methodological vacuum, as hinted above. Instead, they frequently come with some theoretical "baggage." For example, the idea of a script used to scaffold collaborative processes reifies the belief that collaborative skills begin with externalized knowledge, which is then internalized over time as scaffolding enables practice (Vogel et al. this volume). Because of that, none of the
artifacts are universally usable or transferable across the field, because there is not one theoretical framework within which everyone's work can comfortably be situated. The Wise and Schwarz provocation focuses on the lack of theoretical uniformity across the field and thus casts doubt on whether tools can be created with universal usefulness across the field. This chapter presents the other side of the coin: if there is a lack of uniformity in theoretical underpinnings (or even theoretical tools) across the field, and the tools and other software artifacts we create and use are not divorced from these underpinnings, then it would not be reasonable to expect that there could be a common set of tools for new researchers to use without first considering their appropriateness. One goal of this chapter is to highlight this consideration in order to encourage reflective uptake of tools and technologies within the field, in the same spirit in which Sandoval (2014) has argued.
2 History and Development: Toward Formalization of the Work of Learning Sciences

As recounted in the first accounts of the history of the field (Stahl et al. 2014), the earliest foundational work in computer-supported collaborative learning produced platforms for hosting the earliest research in the field, such as ENFI, CSILE, and 5thD. These platforms were designed and developed in order to embody specific theoretical frameworks. Over time, more general-purpose platforms were constructed, such as Moodle (Dougiamas and Taylor 2003). To enable customization of general-purpose platforms to support learning, frameworks have been developed to motivate platform designs for creating affordances that encourage collaboration, such as the Script Theory of Guidance (Vogel et al. this volume), and more focused tools have been developed to enable the use of such frameworks in preparing to support and study collaborative learning.

The focus of this chapter is specifically on tools as we have defined the term, and even more specifically on tools as they are taken up and used to support the work of learning scientists. Thus, while works characterizing the spectrum of technologies relevant to the field have organized their presentation around different affordances for collaboration (Jeong and Hmelo-Silver 2016), we instead organize our review around the kinds of work researchers do to situate their work within technical spheres. Within that framing, a tool is a resource that aids a learning scientist in doing this important part of the work of our field.

We turn now to defining what we consider the work of learning scientists in the field of computer-supported collaborative learning. Many readers of this chapter will be researchers, and as such, we may be tempted to think the work we do in this community is research. The truth is that all of the stakeholders we see in Fig. 1 are doing work that is part of the field of CSCL.
Nevertheless, good writing must take on a point of view in order to remain focused and coherent. The remainder of this chapter will therefore adopt the lens of the researcher; we will use research as the frame of reference and refer to the other activities within that frame. In that case, the primary work is the design of studies that enable further development of theories and design principles. Theory drives the use of tools for the purpose of advancing theory. Within studies, if researchers use tools, it is largely in service of constructing interventions to be used in research studies with the goal of contributing back to theory. Interventions are embedded within contexts and are constructed using what we will define below as Foundation tools, Building tools, and Management tools. Interventions produce data. A final type of tool is then used to transform the data into findings that contribute answers to research questions; we will define these below as Analysis tools. In the remainder of this section, we introduce each of these types of tools and discuss the reflective process of a researcher taking up and using them. In the next section, we elaborate on these definitions and illustrate them with specific examples.

With the above preface in mind, one initial stage of the work is embedding it in a learning context. Sometimes the collaboration is housed completely within an online environment, such as a chat space, Wikipedia, GitHub, a MOOC platform, or any sort of virtual learning environment (VLE). In this case, the online space is the context. At the other end of the spectrum, the collaboration may be embedded in the physical world, and the technology may be mainly an artifact; in that case, the notion of a platform is less relevant. In some face-to-face collaborations, mobile devices are positioned between these two extremes, where participants may use a digital notebook that prompts interaction with it and potentially with other students. In that case, the software running on the mobile device could still be thought of as a platform.
A context must offer appropriate affordances to foster desired collaborative behaviors, or at least provide the opportunity to engage in them; thus one of the tasks required to set up a collaboration, for research, practice, or a combination of the two, is to construct the context. As we see in Fig. 1, the platform is a technology with affordances, which has been constructed by a developer using design principles. That developer might be instructed by a researcher, drawing upon theory to design a context that is appropriate for addressing a research question that contributes back to theory. These stakeholders may work together on this task. Instructors may be involved as well, alternately wearing their teacher hat to construct a learning opportunity that meets a curricular objective, then acting as a teacher-researcher to bridge between research questions and those curricular goals, and finally acting as a developer, working to set up the environment with desirable affordances.

Platforms are tools that enable the creation of contexts in which to house collaborative learning encounters; we can consider platforms to be Foundation tools. Multiple approaches have been employed in the frame of research or even wider projects. In most cases, ad hoc learning environments are crafted, typically guided by researchers and the corresponding research objectives. In other cases, specific tools may be embedded or plugged into an existing learning environment that allows for some collaborative learning affordances, such as a standard VLE/LMS. In this case, the context for collaborative learning is set through "bricolage" (Lévi-Strauss 1962), either by a researcher or a teacher, allowing for collaborative learning activities and lesson plans, e.g., a small-group activity or a peer review in Moodle. Finally, according to the learning design approach, a lesson plan, activity, or course is designed and configured using script authoring tools, customizing existing collaborative learning templates or patterns, such as the jigsaw. The lesson plan is then deployed in a target learning environment, thus setting up the context for the collaborative encounter.

Technologies create affordances. Building tools can be used to embed those affordances within platforms to customize them for specific work. An example might be a chat plugin or a group awareness tool. Typically, developers are the stakeholders who wield these tools. They do so in order to endow a context with the affordances needed to support desired processes. This work is informed by theory and design principles and may be directed by scientists. Teachers, as consumers of theory, are less likely to have time to engage in this form of development with tools, though it is not completely unheard of. Sometimes this embedding is specific to experimental conditions, and other times these affordances are provided to all participants.

Technologies that have been packaged for easy integration into platforms could be considered tools. This packaging is often an abstraction layer created in part to insulate the user from the details of how the technology works. The abstraction should enable more focus on embedding an affordance within a platform and less focus on the technical details of how the affordance is achieved. More abstraction achieves greater insulation and potentially greater ease, but also limits the degrees of freedom in the resulting affordances. Less abstraction leaves more degrees of freedom for those users who have the expertise to wield the technology, but requires more expertise and more work. These are enduring trade-offs within the realm of tools.

Tools may also exist to assist with the orchestration of an activity; we can refer to these as Management tools.
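The learning design approach described above — selecting a pattern such as the jigsaw, customizing it, and deploying it for a concrete roster — can be sketched in a few lines of code. The sketch below is purely illustrative: the `Phase` and `JigsawScript` structures and their names are our own invention, not the API of any existing script authoring tool, and real tools encode far richer constraints (roles, resources, timing, rotation).

```python
from dataclasses import dataclass

@dataclass
class Phase:
    """One phase of a collaboration script (names are illustrative)."""
    name: str
    grouping: str          # "expert" = grouped by topic; "jigsaw" = one expert per topic
    duration_minutes: int

@dataclass
class JigsawScript:
    """A minimal jigsaw pattern: topics define expert groups, and jigsaw
    groups mix one student drawn from each expert group."""
    topics: list
    phases: list

    def deploy(self, students):
        """Instantiate the pattern for a concrete roster. Returns a dict
        mapping phase name -> list of groups (each a list of students)."""
        n = len(self.topics)
        if len(students) % n != 0:
            raise ValueError("roster must divide evenly across topics")
        # Expert groups: every n-th student studies the same topic.
        expert = [students[i::n] for i in range(n)]
        # Jigsaw groups: one member drawn from each expert group.
        jigsaw = [list(members) for members in zip(*expert)]
        return {p.name: (expert if p.grouping == "expert" else jigsaw)
                for p in self.phases}

script = JigsawScript(
    topics=["energy", "forces", "waves"],
    phases=[Phase("expert study", "expert", 20),
            Phase("peer teaching", "jigsaw", 30)],
)
plan = script.deploy(["Ana", "Ben", "Caro", "Dee", "Eli", "Fay"])
# plan["expert study"]  -> [["Ana", "Dee"], ["Ben", "Eli"], ["Caro", "Fay"]]
# plan["peer teaching"] -> [["Ana", "Ben", "Caro"], ["Dee", "Eli", "Fay"]]
```

The key design move mirrors the authoring-tool idea in the text: the pattern is represented as data separate from any particular roster or platform, so the same template can be reused and redeployed across contexts.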
Rather than creating affordances, these tools are used to simplify and/or speed up the management work an instructor does in support of collaboration within a platform. Recently there has been an increasing amount of work on dashboards that aid teachers in managing the learning experiences of their students, usually in real time (Matuk et al. 2019). Management tools might include applications that assign students within a class to learning groups or that aid the grading and feedback process. From a student perspective, management tools might be used to aid in reflection or to offer direction or correction during or after the learning process (Soller et al. 2005). Tools of this sort might also remind students when a transition to a new task phase is about to begin.

Finally, Analysis tools are used by researchers to understand what has transpired during collaboration in order to answer research questions and thus contribute back to theory. At the most basic level, there is a plethora of general-purpose statistical analysis tools such as Excel, JMP, and SPSS, which are probably the most common. There are tools for building visualizations, like Tableau, as well as modeling technologies for specific kinds of data. Other types of analysis tools include tools for social network analysis (Gephi, etc.) and eye-tracking tools.
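As a toy illustration of the kind of computation that social network analysis tools such as Gephi automate at scale, one can derive a reply network from discussion-forum turns and count how often each student's posts draw replies. The data, names, and measure chosen here are invented for illustration only:

```python
from collections import Counter

# Hypothetical forum data: (author_of_reply, author_being_replied_to).
replies = [
    ("ana", "ben"), ("carl", "ben"), ("ben", "ana"),
    ("dana", "ben"), ("ana", "carl"), ("ben", "dana"),
]

# In-degree: how often each student's posts drew replies, a rough
# indicator of centrality in the discussion network.
in_degree = Counter(target for _, target in replies)
# Out-degree: how often each student replied to others.
out_degree = Counter(source for source, _ in replies)

print(in_degree.most_common(1))  # → [('ben', 3)]
```

Real analysis tools go far beyond such degree counts, of course, offering layouts, community detection, and richer centrality measures over much larger graphs.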
C. Rosé and Y. Dimitriadis
3 State of the Art Tools: In Support of Foundations, Building, Orchestrating, and Analyzing

Here we discuss examples of specific tools that have been used in the four types of work of learning scientists that employ tools, which we introduced in the last section, namely building the foundation or context for collaboration, building interventions within contexts, orchestrating student and teacher activities in learning contexts, and analyzing data. At every stage of the work of setting up to study collaborative learning, a researcher engages in reflection about the affordances needed to support the desired processes that are valued within a theoretical framework. Tools are used to fashion technologies to create these affordances within the specific context of learning. Engaging in this reflection is the conjecturing process advocated in prior work (Sandoval 2014).

As a particular illustrative running example, we will discuss Minerva (Kosslyn and Nelson 2017), which is convenient because almost all tool types of our taxonomy can be illustrated within it, though we are not arguing that it serves as a “gold standard” of any sort. Although the project’s proprietary nature does not allow learning scientists direct access to its tools, it is still interesting to examine some of its design decisions and features. Other initiatives, such as FROG (Håklev et al. 2019), are also sketched in this section as good examples of open-source projects. One would hope that Minerva and other similar projects will be readily available to learning scientists in the near future. Minerva’s tools are especially interesting for the CSCL community since they mainly aim to support active learning in small seminars, i.e., a flipped classroom model based on synchronous online collaborative activities at the small-group and whole-classroom level.
In particular, the Active Learning Forum (ALF) is the learning platform for active learning in small seminars, which allows for orchestration and analysis. The Course Builder, in turn, enables “designing, coordinating, and improving course syllabi and lesson plans created for ALF”; it serves as both a foundation and a building tool. Finally, the assessment tools can serve analysis, since they are based on a well-defined hierarchical tree that provides a longitudinal and coherent assessment for each student, while dashboards allow for progress tracking. In addition to Minerva, we will mention other tools along the way as additional illustrations.
3.1
Foundation Tools
Foundation tools such as Platforms enable the creation of contexts in which to house collaborative learning encounters. Within Minerva, Course Builder is an integrated environment that supports the whole process of course development and review by the different stakeholders.
Lesson plans are based on a library of existing templates—scripts that implement sound pedagogical and orchestration practices, e.g., breakout groups for peer instruction, group formation policies, or “flex time” to accommodate inevitable variance in the duration of steps and activities. Although the teacher is the main stakeholder in creating the syllabus and the lesson plans, best practices are embedded in order to provide consistency across courses, and constraints are implemented so that all designed steps and activities can be effectively deployed in the target ALF environment. Overall, Course Builder implements a Learning Design approach in which multiple stakeholders are involved, and it allows deployment in a single target learning platform, as opposed to a bricolage approach in which context is directly authored in the learning environment. Such an approach allows for reuse and consistency within an institution.

We should mention that Course Builder is not especially innovative in its main conception, since similar tools have been developed and employed for the creation of learning contexts, especially in online education. Nevertheless, it has integrated several strategies that are especially interesting for the support and study of design and orchestration of collaborative learning. The creation of hybrid learning spaces (Cook et al. 2015), in which learning occurs beyond the boundaries of dichotomies such as digital/physical or formal/informal, is much more challenging; appropriate tools with which learning scientists could experiment in these new contexts are still missing.
3.2
Building Tools
Building tools enable building interventions, for example, for collaboration support. Collaborative learning encounters occur specifically in ALF, a platform that embeds various technologies, e.g., videoconferencing, clicker-type polling and voting, and collaborative whiteboards. Teachers may dynamically adapt/change the lesson plans that were previously designed and deployed using Course Builder, and they can perform multiple orchestration/management actions. For example, on the fly, they can monitor and adjust the timing of activities, reorganize the timeline of the lesson plan, or manage dominant students through the talk-time tool. Similarly, they can visit and listen to breakout groups, display contents–artifacts of small groups for the whole classroom, initiate peer review and classroom-level discussions, or rotate groups. Notably, ALF tools aim at reducing the teacher’s cognitive/orchestration load in such a complex collaborative learning setting by providing easy access to critical data and actions, using overlays instead of dashboards, and allowing teachers to “take or leave any insight the tools provide.”

At least a decade of research shows that students can benefit from their interactions in learning groups when automated support is provided, especially interactive and context-sensitive support. The aforementioned Learning Design approach has been exemplified in multiple script authoring tools, e.g., WebCollage (Villasclaras-Fernández et al. 2013) or Edit2 (Sobreira and Tchounikine 2012, 2015). Well-established collaborative learning patterns/scripts (such as pyramid, jigsaw, or think–pair–share) can be customized and reused, or combined with other activities at multiple social planes. These scripts can be shared and reused in a community, and they can be deployed in a target learning environment (Hernández-Leo et al. 2018). Until recently, the state of the art in computer-supported collaborative learning has consisted of static forms of support, such as structured interfaces, prompts, and assignment of students to scripted roles. Now technology for dynamic support of collaborative learning is publicly available. Some results suggest that dynamic script-based support for collaborative learning leads to improvements in learning over otherwise equivalent static forms of support, and to 1.24 standard deviations more learning than learning with the same materials alone, without the support of a conversational agent (Kumar et al. 2007). Work in the past decade building on this foundational result has made use of the Bazaar architecture and tool kit (Adamson et al. 2014). This and other tools for building synchronous and asynchronous collaborative learning interventions are available through the DANCE website (http://dance.cs.cmu.edu).
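To make the jigsaw pattern concrete, the group-formation step that script authoring tools let teachers configure can be sketched as follows. This is a simplified illustration under our own assumptions; the function and parameter names are invented and do not correspond to any particular tool's API:

```python
import random

def jigsaw_groups(students, n_topics, seed=0):
    """Assign students to 'home' groups of size n_topics, then form
    'expert' groups by pulling the member holding each topic from
    every home group. A bare-bones sketch of the jigsaw pattern."""
    rng = random.Random(seed)
    pool = students[:]
    rng.shuffle(pool)
    # Home groups: consecutive chunks; member i of a home group
    # takes responsibility for topic i.
    homes = [pool[i:i + n_topics] for i in range(0, len(pool), n_topics)]
    # Expert groups: all students holding the same topic index.
    experts = [[home[t] for home in homes if t < len(home)]
               for t in range(n_topics)]
    return homes, experts

homes, experts = jigsaw_groups(["s%d" % i for i in range(12)], n_topics=3)
print(len(homes), len(experts))  # 4 home groups, 3 expert groups
```

A real authoring tool adds what this sketch omits: handling class sizes that do not divide evenly, constraints on group composition, and deployment of the resulting groups into the target platform.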
3.3
Management Tools
FROG (Håklev et al. 2019, 2017) is an open-source tool (https://github.com/chili-epfl/FROG) that allows authoring and running orchestration graphs (Dillenbourg 2015). Similar to the Minerva or WebCollage—ILDE projects, FROG follows the Learning Design approach in order to create the context for the collaborative learning encounter. As a platform, it covers all types of tools in our taxonomy, since it allows teachers and researchers to author and build the learning scenario (orchestration graph); teachers to run, monitor, and manage the graphs; and researchers to analyze the rich set of data that emerges. Orchestration graphs, as implemented through FROG, represent the interplay of activity and data flows along multiple social planes (individual, small group, and classroom). Thus, rich educational scenarios (scripts) can be defined, contextualized, run, monitored, and managed in both blended and purely online collaborative settings. Besides the authoring tool and the runtime engine, FROG provides a rich and extensible library of activity types and operators that may be reused in multiple scripts. It is noteworthy that multiple orchestration actions can be carried out by the teacher, informed through learning analytics dashboards. FROG is a good example of a tool suite that can be shared, used, and extended by learning scientists: it is an open-source project with a clear design and API; it takes advantage of the flexible representation of orchestration graphs; and FROG activities may be embedded in other systems, e.g., GRAASP (http://graasp.eu/) for inquiry-based learning, while H5P (https://h5p.org/), PhET (https://phet.colorado.edu/en/simulations), or WISE (https://wise.berkeley.edu/) activities may be embedded in FROG scripts.
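As an illustration of what an orchestration graph encodes, here is a minimal data structure in the same spirit: activities sit on a social plane, and edges carry operators that transform the products of one activity into the input of the next. The class and operator names below are our own invention, not FROG's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    plane: str  # "individual" | "group" | "class"

@dataclass
class Graph:
    activities: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (src, dst, operator)

    def add(self, activity):
        self.activities.append(activity)
        return activity

    def connect(self, src, dst, operator):
        # The operator labels how data products flow between activities,
        # e.g., grouping students by the answers they just gave.
        self.edges.append((src.name, dst.name, operator))

g = Graph()
quiz = g.add(Activity("concept quiz", "individual"))
debate = g.add(Activity("small-group debate", "group"))
g.connect(quiz, debate, operator="group-by-disagreement")
```

Even this toy encoding shows why the representation is powerful: a runtime engine can walk the edges, apply each operator to the accumulated data, and move the class across social planes as the script unfolds.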
3.4
Analysis Tools
Analysis tools allow for real-time sensing of collaborative processes as well as analysis of collaborative learning data after it has been collected. Automatic analysis of collaborative processes has value for real-time assessment during collaborative learning, for dynamically triggering supportive interventions in the midst of collaborative-learning sessions, and for facilitating efficient analysis of collaborative-learning processes. Context-sensitive or need-based support for collaborative learning necessitates online monitoring of collaborative learning processes. Work in that area has been active for at least a decade (Rosé et al. 2008).

Minerva addresses some unique concerns/issues regarding assessment, which are closely related to analysis tools. On the one hand, Minerva’s assessment/analysis tools provide formative feedback in context, using, e.g., the video analysis and transcription tool for each ALF-based seminar. On the other hand, the tools support aggregate and consistent assessment across different activities and courses for every single student. Such global assessment is based on a hierarchical tree which aims to provide aggregate assessment regarding “the four core competencies of thinking critically, thinking creatively, communicating effectively, and interacting effectively.” Such longitudinal assessment is made possible by the centrally controlled hierarchical assessment tree and the consistent use of course scripts (Course Builder) and the learning platform (ALF). Finally, student- and teacher-oriented assessment dashboards allow for tracking the progress of such assessment.

Other specific analysis tools focus on different parts of the problem. There are tools for data infrastructure, such as DiscourseDB (Rosé 2017). There has been a growing interest in analytics applied to discourse data (Buckingham-Shum 2013; Buckingham-Shum et al. 2014).
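The hierarchical rollup that such an assessment tree performs can be sketched in a few lines. The competency names, leaf skills, and scores below are invented for illustration; Minerva's actual tree and scoring rules are not public:

```python
# Each top-level competency aggregates scores recorded on its leaf
# skills across many activities; rolling the scores up the tree yields
# a longitudinal, per-student summary.
tree = {
    "thinking critically": ["evaluating claims", "weighing evidence"],
    "communicating effectively": ["writing", "presenting"],
}
scores = {
    "evaluating claims": [4, 5],
    "weighing evidence": [3],
    "writing": [5, 4],
    "presenting": [4],
}

def rollup(tree, scores):
    mean = lambda xs: sum(xs) / len(xs)
    # Average each leaf's scores, then average the leaves per competency.
    return {competency: mean([mean(scores[leaf]) for leaf in leaves])
            for competency, leaves in tree.items()}

print(rollup(tree, scores))
```

The point of the sketch is the design decision, not the arithmetic: because every course script records scores against the same centrally controlled leaves, the rollup stays comparable across activities, courses, and time.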
An extensive overview of issues and methods in computer-supported collaborative learning can be found in three earlier journal articles (Rosé et al. 2008; Mu et al. 2012; Gweon et al. 2013). The area of automatic collaborative process analysis has focused on discussion processes associated with knowledge integration. Frameworks for analysis of group knowledge building are plentiful and include examples such as transactivity (Berkowitz and Gibbs 1983; Teasley 1997; Weinberger and Fischer 2006) or intersubjective meaning making (Suthers 2006). So far, work in automated collaborative learning process analysis has focused on text-based interactions and key-click data (Soller and Lesgold 2000; Erkens and Janssen 2008; Rosé et al. 2008; McLaren et al. 2007; Mu et al. 2012; Fiacco and Rosé 2018). However, similar work applied to audio data has begun (Gweon et al. 2013), and in the future we anticipate that automated analysis of video will enable dynamic support for learning in groups, a practice that is starting to appear in MOOC contexts but remains unsupported (Kulkarni et al. 2015). LightSIDE is a widely used open-source tool bench for building automated models for monitoring collaborative discourse in real time (Mayfield and Rosé 2013).
4 The Future: From Research into Practice

In this chapter, we have focused on supporting the work of learning scientists. We have introduced our conceptualization of tools and have offered an overview of some tools that have been used in the CSCL community. The specific focus of this chapter has been on doing research rather than preparing to teach, doing teaching, or even doing learning. However, one can imagine there may also be tools designed specifically to aid instructors in doing their work as well. Some existing tools, like FROG, Minerva, or WebCollage, may function along multiple dimensions. Our goal has been to highlight the reflective process of using tools to create technologies with affordances that foster desired collaborative processes, perhaps within a reflective design-based research methodology (Sandoval 2014), or analogously within a quantitative, experimental approach.

We close the chapter by returning again to the Wise and Schwarz provocation related to tools. We see that there are four areas of work that learning scientists do where tools are most relevant. The technologies that are used in these four areas are quite distinct and draw from very different areas of expertise. Some are very generic in terms of theoretical framework, but specific to a methodology. For example, analysis tools like LightSIDE (Mayfield and Rosé 2013) are specific to quantitative approaches to the study of CSCL, but apart from that could be used in connection with virtually any theoretical framework. On the other hand, a context tool (like the Knowledge Forum infrastructure) is specific to a particular theoretical framework, namely Knowledge Building, which tends to be studied within a sociocultural research approach; nevertheless, quantitative methods can be and have been applied within this work.
With these large existing distinctions, which we as a community are neither willing nor interested in sacrificing, it seems unlikely that one uber-tool kit will ever be brought into existence in this community. Nevertheless, what we can do is allocate more time and space in publications to making the reflective process of tool use within our work more visible, so that the community can learn from our work not only the scientific findings we have gained but also the meta-knowledge related to practices for engaging effectively in the reflective tool use process we have highlighted in this chapter.

More broadly, we began this chapter with the observation that tool usage has not been a major focus of high-profile publications in the field of CSCL. Though we find tool papers in relevant journals, and while we find some tools used to create interventions reported in the same journals, what we do not see is a major thrust for researchers to use tools created by other researchers in the field. This situation represents a continual missed opportunity for readers of our work to learn how to do the work of our field through imitation of these important practices. As an example of how things could be different: resource sharing among researchers has played a bigger role in more technology-focused research communities such as the Language Technologies Community. Here, large government-funded research efforts have produced publicly available tool kits (McCallum 1996; Bird & Loper 2004; Manning et al. 2014) and benchmark data sets (Iida
et al. 2017; Malmasi et al. 2013), which have been instrumental in making the research applicable in a variety of application areas (Chapman et al. 2011). Access to large data resources is also available through the Linguistic Data Consortium (Liberman and Cieri 1998). This resource sharing has been fueled in part by the existence of yearly competitions. Shared tasks with shared resources lower barriers to entry, enhance reproducibility, and foster more intensive competition, which fuels technical advancement. Moreover, open-source and open-data initiatives greatly contribute to collaborative efforts that may lead to more efficient support for learning scientists. As a side note, a notable line of work in some related technology-enhanced learning communities, which has not been a major focus in the CSCL community, is work on standards and interoperability (e.g., Learning Tools Interoperability (LTI) or xAPI, https://xapi.com/overview/), or the proposals formulated by the EU-funded project LACE for learning analytics (http://www.laceproject.eu/lace/). These efforts serve to make existing tools more usable and useful across a wider range of contexts and encourage efficient use of community resources.

We end by challenging the field to take up the vision of active partnership between education researchers and core computer scientists, joining forces to address this problem going forward. Through this engagement, more technical expertise would be made readily available within the community, and open dialogue could be fostered in which the exchange of ideas shapes the agenda of research questions to be addressed through an integration of deep insight into rich theoretical frameworks and an appreciation for what advanced technologies enable. Some available wisdom may exist within work related to casting artificial intelligence as a material for designers (Yang et al. 2018a, 2018b).
References

Adamson, D., Dyke, G., Jang, H., & Rosé, C. P. (2014). Towards an agile approach to adapting dynamic collaboration support to student needs. International Journal of Artificial Intelligence in Education, 24(1), 92–124. Berkowitz, M. W., & Gibbs, J. C. (1983). Measuring the developmental features of moral discussion. Merrill-Palmer Quarterly, 399–410. Bird, S., & Loper, E. (2004). NLTK: The natural language toolkit. In Proceedings of the ACL 2004 on interactive poster and demonstration sessions (p. 31). Association for Computational Linguistics. Buckingham-Shum, S. (2013). Proceedings of the 1st international workshop on discourse-centric analytics, workshop held in conjunction with learning analytics and knowledge 2013, Leuven, Belgium. Buckingham-Shum, S., de Laat, M., de Liddo, A., Ferguson, R., & Whitelock, D. (2014). Proceedings of the 2nd international workshop on discourse-centric analytics, workshop held in conjunction with learning analytics and knowledge 2014, Indianapolis, IN. Chapman, W. W., Nadkarni, P. M., Hirschman, L., D'avolio, L. W., Savova, G. K., & Uzuner, O. (2011). Overcoming barriers to NLP for clinical text: The role of shared tasks and the need for additional creative solutions. Journal of the American Medical Informatics Association, 18(5), 540–543. https://doi.org/10.1136/amiajnl-2011-000465.
Cook, J., Ley, T., Maier, R., Mor, Y., Santos, P., Lex, E., Dennerlein, S., Trattner, C., & Holley, D. (2015). Using the hybrid social learning network to explore concepts, practices, designs and smart services for networked professional learning. In Y. Li, M. Chang, M. Kravcik, E. Popescu, R. Huang, & N.-S. C. Kinshuk (Eds.), State-of-the-art and future directions of smart learning, proceedings of international conference on smart learning environments (ICSLE 2015), 23–25 Sep’15, Sinaia, Romania. Heidelberg: Lecture Notes in Educational Technology, Springer-Verlag GmbH. Dillenbourg, P. (2015). Orchestration graphs. Lausanne: EPFL Press. Dougiamas, M., & Taylor, P. (2003). Moodle: Using learning communities to create an open source course management system. In D. Lassner & C. McNaught (Eds.), EdMedia + innovate learning (pp. 171–178). North Carolina, USA: Association for the Advancement of Computing in Education (AACE). Erkens, G., & Janssen, J. (2008). Automatic coding of dialogue acts in collaboration protocols. International Journal of Computer-Supported Collaborative Learning, 3(4), 447–470. Fiacco, J., & Rosé, C. P. (2018). Towards domain general detection of transactive knowledge building behavior. In Proceedings of the fifth ACM conference on Learning@Scale. Gweon, G., Jain, M., Mc Donough, J., Raj, B., & Rosé, C. P. (2013). Measuring prevalence of other-oriented transactive contributions using an automated measure of speech style accommodation. International Journal of Computer Supported Collaborative Learning, 8(2), 245–265. Håklev, S., Faucon, L., Olsen, J., & Dillenbourg, P. (2019). FROG, a tool to author and run orchestration graphs: Affordances and tensions. In Proceedings of the 13th conference on Computer-Supported Collaborative Learning, CSCL 2019 special interactive session. Håklev, S., Faucon, L., Hadzilacos, T., & Dillenbourg, P. (2017). Orchestration graphs: Enabling rich social pedagogical scenarios in MOOCs.
In Proceedings of the Fourth (2017) ACM conference on Learning@Scale (pp. 261–264). ACM. Hernández-Leo, D., Asensio-Pérez, J. I., Derntl, M., Pozzi, F., Chacón, J., Prieto, L. P., & Persico, D. (2018). An integrated environment for learning design. Frontiers in ICT, 5, 9. https://doi.org/10.3389/fict.2018.00009. Iida, R., Komachi, M., Inoue, N., Inui, K., & Matsumoto, Y. (2017). NAIST text corpus: Annotating predicate-argument and coreference relations in Japanese. In N. Ide & J. Pustejovsky (Eds.), Handbook of linguistic annotation (pp. 1177–1196). Dordrecht: Springer. Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educational Psychologist, 51(2), 247–265. Kirschner, P. (2002). Can we support CSCL? Educational, social and technological affordances for learning. In P. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL (pp. 7–47). Heerlen: Open University of the Netherlands. Kosslyn, S. M., & Nelson, B. (2017). Building the intentional university: Minerva and the future of higher education. Cambridge: MIT Press. Kulkarni, C., Cambre, J., Kotturi, Y., Bernstein, M. S., & Klemmer, S. R. (2015). Talkabout: Making distance matter with small groups in massive classes. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing (pp. 1116–1128). ACM. Kumar, R., Rosé, C. P., Wang, Y. C., Joshi, M., & Robinson, A. (2007). Tutorial dialogue as adaptive collaborative learning support. Frontiers in Artificial Intelligence and Applications, 158, 383. Lévi-Strauss, C. (1962). The savage mind. Letchworth: Garden City Press. Liberman, M., & Cieri, C. (1998). The creation, distribution and use of linguistic data: The case of the Linguistic Data Consortium. In Proceedings of the 1st international conference on language resources and evaluation (LREC) (pp. 159–164). Malmasi, S., Wong, S. M. J., & Dras, M. (2013).
NLI shared task 2013: MQ submission. In Proceedings of the Eighth Workshop on innovative use of NLP for building educational applications (pp. 124–133).
Manning, C., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S., & McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: System demonstrations (pp. 55–60). Matuk, C., Tissenbaum, M., & Schneider, B. (Eds.). (2019). Real-time orchestrational technologies in computer-supported collaborative learning. International Journal of Computer Supported Collaborative Learning, 14, 251–414. Mayfield, E., & Rosé, C. P. (2013). LightSIDE: Open source machine learning for text. In Handbook of automated essay evaluation (pp. 146–157). New York: Routledge. McCallum, A. K. (1996). Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. Retrieved from http://www.cs.cmu.edu/mccallum/bow/. McLaren, B., Scheuer, O., De Laat, M., Hever, R., de Groot, R., & Rosé, C. P. (2007). Using machine learning techniques to analyze and support mediation of student E-discussions. In Proceedings of the 2007 conference on artificial intelligence in education: Building technology rich learning contexts that work (pp. 331–338). Mu, J., Stegmann, K., Mayfield, E., Rosé, C. P., & Fischer, F. (2012). The ACODEA framework: Developing segmentation and classification schemes for fully automatic analysis of online discussions. International Journal of Computer Supported Collaborative Learning, 7(2), 285–305. Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38–43. Rosé, C., Wang, Y. C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., & Fischer, F. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 3(3), 237–271. Rosé, C. P. (2017). Expediting the cycle of data to intervention. Learning: Research and Practice, 3(1), 59–62.
Special issue on learning analytics. Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education, 26(2), 784–795. Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36. Sobreira, P., & Tchounikine, P. (2012). A model for flexibly editing CSCL scripts. International Journal of Computer-Supported Collaborative Learning, 7(4), 567–592. https://doi.org/10.1007/s11412-012-9157-9. Sobreira, P., & Tchounikine, P. (2015). Table-based representations can be used to offer easy-to-use, flexible, and adaptable learning scenario editors. Computers & Education, 80, 15–27. https://doi.org/10.1016/j.compedu.2014.08.002. Soller, A., & Lesgold, A. (2000). Knowledge acquisition for adaptive collaborative learning environments. In AAAI fall symposium: Learning how to do things (pp. 251–262). Soller, A., Martínez, A., Jermann, P., & Muehlenbrock, M. (2005). From mirroring to guiding: A review of state of the art technology for supporting collaborative learning. International Journal of Artificial Intelligence in Education, 15(4), 261–290. Stahl, G., Koschmann, T., & Suthers, D. (2014). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (2nd ed., pp. 479–500). Cambridge: Cambridge University Press. Suthers, D. D. (2006). Technology affordances for intersubjective meaning making: A research agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1(3), 315–337. Teasley, S. D. (1997). Talking about reasoning: How important is the peer in peer collaboration? In L. B. Resnick, C. Pontecorvo, & R. Säljö (Eds.), Discourse, tools, and reasoning: Essays on situated cognition (pp. 361–384). Berlin, Heidelberg: Springer. Villasclaras-Fernández, E., Hernández-Leo, D., Asensio-Pérez, J. I., & Dimitriadis, Y.
(2013). Web collage: An implementation of support for assessment design in CSCL macro-scripts. Computers and Education, 67, 79–97.
Vogel, F., Weinberger, A., & Fischer, F. (this volume). Collaboration scripts: Guiding, internalizing, and adapting. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer supported collaborative learning. Computers & Education, 46, 71–95. Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467. Yang, Q., Banovic, N., & Zimmerman, J. (2018a). Mapping machine learning advances from HCI research to reveal starting places for design innovation. In Proceedings of the 2018 CHI conference on human factors in computing systems (p. 130). ACM. Yang, Q., Scuito, A., Zimmerman, J., Forlizzi, J., & Steinfeld, A. (2018b). Investigating how experienced UX designers effectively work with machine learning. In Proceedings of the 2018 designing interactive systems conference (pp. 585–596). ACM.
Further Readings1 Jeong, H., & Hmelo-Silver, C. E. (2016). Seven affordances of computer-supported collaborative learning: How to support collaborative learning? How can technologies help? Educational Psychologist, 51(2), 247–265. In this chapter, we have made a distinction between tools and technologies, but this survey article by Jeong and Hmelo-Silver provides a synergistic conceptual framework for thinking about technologies. Kosslyn, S. M., & Nelson, B. (2017). Building the intentional university: Minerva and the future of higher education. Cambridge: MIT Press. Throughout this chapter, we have mentioned several tools developed within the Minerva project, which are further described in Kosslyn and Nelson’s 2017 book. We offer it as a highly relevant additional source of information on tools and technologies for learning.
1 Many tools and resources are available for free and open download at the following URLs:
Discussion Affordances for Natural Collaborative Exchange (DANCE): http://dance.cs.cmu.edu
FROG: https://github.com/chili-epfl/FROG
LearnSphere: http://learnsphere.org/
Part IV
Methods
Case Studies in Theory and Practice

Timothy Koschmann and Baruch B. Schwarz
Abstract What sets CSCL research apart is a principled commitment to learning in settings of collaboration. This commitment necessitates developing a foundational understanding of how participants build meaning together in practical situations. Case studies are a traditional means of investigating such matters. Researchers must be cognizant, however, of the assumptions underlying their approach. Historically, case studies have been undertaken within multiple disciplines and from a variety of theoretical perspectives. We provide here a set of examples in CSCL research. Questions that arise include: What is being construed as a “case?” How was it selected? What forms of contrast are built into the analysis and to what end? What is the role of time and sequence within the analysis? Does the study seek to alter the social phenomenon under investigation or merely document it faithfully? As case studies become a more prominent feature of CSCL research, we need to develop a keener appreciation of these issues.

Keywords Ethnography · Cultural-Historical Activity Theory (CHAT) · Critical Theory · Dialogic Theory · Actor-Network Theory (ANT) · Ethnomethodology · Conversation Analysis (CA)
T. Koschmann (*), Department of Medical Education, Southern Illinois University, Springfield, IL, USA. e-mail: [email protected]
B. B. Schwarz, School of Education, Hebrew University of Jerusalem, Jerusalem, Israel. e-mail: [email protected]
© Springer Nature Switzerland AG 2021. U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_25

1 Definitions and Scope

In this chapter, we take up the humble case study, by which we mean the careful examination and description of a single entity or event. In anthropology, where case studies abound, the term ethnography is commonly used to describe the method of
studying cases in the field and this usage has crept into the literatures of educational research (e.g., Erickson 1984). As a consequence, we will use the two terms interchangeably here. A classic example of a classroom ethnography can be found in McDermott’s (1976) unpublished doctoral dissertation. McDermott spent a year in a first-grade classroom making notes, occasionally interacting with students, and recording the interaction between students and the teacher. What went on in that classroom over the course of the year constituted his case. As an anthropologist, McDermott undertook to study the culture of this classroom, but he was particularly interested in how this social world was seen and understood by students, both those who were succeeding and those who were not. He sought to understand “the ways people create environments for each other” in “situations consequential to the participants and beyond” (McDermott and Raley 2011, p. 373, original authors’ emphasis). “Situation” here “is conceived as neither a variable, nor an environment, but as a playground for adaptation, rearrangement, and ingenuity” (McDermott and Raley 2011, p. 373). McDermott’s thesis was chiefly based on detailed analyses of two reading groups. One of his main findings, summarized in the dissertation’s title, “Kids Make Sense,” is that all the children, even those who struggle in the classroom, conduct themselves in ways that are orderly and meaningful. Ethnographic investigations are concerned, in one way or another, with meaning and how it is produced within specified contexts. This orientation to the individual case stands in marked contrast to the standing convention in educational research of evaluating instructional achievement in the aggregate. By this approach, which we might term the “Thorndikean tradition” (Koschmann 2011, p. 6), performance is transformed into a score. These scores can then be pooled to minimize individual differences and formulate statistical inferences. 
Each individual score might be seen as standing for a case, but with context and meaning-making shorn away. This cleaving away of context, however, comes at a cost for researchers seeking to understand how meaning is constructed in situ. To begin to understand such matters, a more naturalistic form of inquiry is required. Compared to a controlled, clinical trial, a case study might seem methodologically primitive—one selects a case (or set of cases), collects pertinent materials (e.g., video recordings, field notes, artifacts, exhibits, interview data), and then produces some analytic observations. Unlike in a traditional research report describing some sort of experimental intervention, a method section hardly seems necessary. This is not meant to suggest that a case study lacks method, however. A well-done study offers a new way of seeing and, in so doing, its findings are themselves a method—to the extent that readers of an ethnographic report are able to see for themselves what the analyst had sought to make plain, they will have replicated the study. The roots of ethnographic inquiry are unusually diverse. Historically it has been utilized in multiple disciplines and from a variety of different theoretical perspectives. Since the scientific rigor of a case study rests, not upon rote following of a fixed methodology, but rather upon its theoretical coherence, it is especially
important that researchers seeking to do ethnographic work be thoroughly conversant with the foundational assumptions of the research tradition that motivates their inquiries.
2 History and Development

Case-based methods have been utilized broadly across the social sciences drawing on diverse theoretical foundations. Though space does not allow a comprehensive survey, we will examine a selected set of frameworks that have been prominent in the CSCL literature. For each, we will endeavor to articulate the kind of question or questions to which the approach has been applied. Such questions are a critical determinant of the kinds of discoveries that are possible within a study.
2.1 Cultural-Historical Activity Theory (CHAT)
This theoretical framework is tied back to the writings of Lev Vygotski (1978), which help us to understand and analyze the relationship between the human mind and activity. Vygotski’s insight into the dynamics of consciousness was that it is shaped by the history of each individual’s social and cultural experience. Core ideas are: (1) humans act collectively, learn by doing, and communicate in and via their actions; (2) humans make, employ, and adapt tools of all kinds to learn and communicate; and (3) community is central to the process of making and interpreting meaning—and thus to all forms of learning, communicating, and acting. Vygotski’s theory of cultural mediation claims that the relationship between a human subject and an object is never direct but must be sought in society and culture; consciousness emerges from human activity mediated by artifacts (tools) and signs. The unit of analysis is principally the individual, who internalizes external (social, societal, and cultural) forms of mediation (Vygotski 1978). The second generation of CHAT has moved toward a collective model in which the unit of analysis has been expanded to include collective motivated activity toward an object, making room for understanding how collective action by social groups mediates activity (Engeström 1987). Hence the inclusion of community, rules, and division of labor, and the importance of analyzing their interactions with each other. Rules may be explicit or implicit. Engeström focused on the mediational effects of the systemic organization of human activity. His expansion of the unit of analysis to a collective activity system includes the social, psychological, cultural, and institutional perspectives.
In his Expansive Learning Theory, Engeström summarizes the current state of CHAT with five principles: (1) the activity system as the primary unit of analysis, (2) multi-voicedness, (3) historicity, (4) the central role of contradictions as sources of change and development, and (5) the possibility of expansive transformation of activity systems (cycles of qualitative transformation).
Another second-generation CHAT approach was that of Situated Learning (Lave 2011; Lave and Wenger 1991). From it came a number of important conceptualizations related to apprenticeship learning, such as “communities of practice” and “legitimate peripheral participation” (Lave and Wenger 1991, p. 29). In spite of the obvious relevance of Activity Theory to the CSCL community, CSCL researchers rarely use it (for exceptions, see the cases presented in the next section). This neglect is probably linked to the fact that CSCL research mainly focuses on emergent learning rather than on development, a focus that has been identified as a major shortcoming of current CSCL research (Wise and Schwarz 2017).
2.2 Critical Theory and Critical Pedagogy
The title “Critical Theory” covers a broad range of interests and ideas. It has roots in two quite different scholarly strands, one based in political philosophy and the other in literary analysis. The first, most prominently associated with the writings of Habermas (1988) and the Frankfurt School, has its foundations in classic Marxist social theory and adopts a critical stance toward positivism in science (Epperson 2010). The second has origins in Bakhtin’s notion of voices in dialogue and extends to Derrida’s method of deconstruction. A form of social activism is shared by both strands. Critical pedagogy (Freire 2000; Giroux 1988; McLaren 2015) seeks to explore these themes within institutions of formal instruction. An instructive example would be Luke’s (1988) case study examining the reading curriculum in primary schools in British Columbia. The province-mandated curriculum as manifested in teacher guides and students’ texts constitutes the case under study. Luke’s study is “critical” in the sense that it uses technical methods developed in narrative analysis to understand and articulate the ideologies embodied by the curriculum. Studies constructed from a critical perspective are based not exclusively on texts, but also on the political and economic contexts in which those texts were created and embedded (e.g., Gee 2008). Like CHAT, critical theory/pedagogy is a natural methodology for studying collaboration as a practice aimed not only at educational change but also at social change. It makes visible the power relations in interactions. The ideological direction adopted in CSCL is generally hidden. The opposition of educational institutions to collaboration is notorious and should have provided rich material for a critical stance. It would have been natural, for instance, to use critical tools to examine power relations when students are “invited” to interact through CSCL environments that afford, or even impose, collaborative practices.
2.3 Dialogic Theory
The adoption of a critical methodology in CSCL research is very rare. A notable exception is Dialogic Theory, which intuitively seems natural and unproblematic for educationalists committed to progressive pedagogies (Trausan-Matu et al. this volume). However, Dialogic Theory is a domain in which the two strands of Critical Theory clash. In their fierce attack on positivism, postmodernists have argued that there are no sustainable norms of rationality, that educational discourse should be political discourse, and that it should privilege certain interest groups over others. Differences cannot be described through practical elements of interpretation based on a formal analysis; they pertain to what Derrida (1973) described as différance—the active process of identification with one’s group and non-identification with other groups. The celebration of difference becomes a presumption of incommensurability, a presumption that becomes untenable, especially in education. Therefore, in education, teacher authority takes on significance against an insidious backdrop of relations of domination (e.g., Freire 2000). Some theorists have even questioned the desirability of dialogue, and others contrast it with what they call the dialectic, that is, an inexorable imposition of rationality (Matusov 2011; Wegerif 2011). However, following Habermas’ ideas of social rationality, some dialogists see Dialogic Theory serving as a bridge between CHAT and Critical Theory. In Voices of the Mind, Wertsch (1991) incorporates Bakhtin’s key ideas of voice and dialogue to expand Vygotski’s arguments about the mediation of human activity by signs. Dialogue emerges in the context of mediation, which triggers social and psychological insights (Wertsch and Kazak 2011). Wegerif (2007, 2011), who holds a radical postmodernist stance, is critical of Wertsch’s position, since (as anti-modernists would claim) dialogue cannot be imposed on learners.
His alternative account of “education into dialogue” aims at liberating learners beyond mediation (Wegerif 2011), through tensions between differences. Wegerif embraces a new form of dialogue, one that avoids any sameness, any consensus. However, in contrast to his antimodernist and seemingly defeatist predecessors, he proposes a practical vision of what he sees as dialogic. With Mercer, he contrasted exploratory talk (Barnes 1976) with disputational talk (Mercer et al. 1999), exploratory talk being a mode of discourse in which “partners engage critically but constructively with each other’s ideas” within a process of reasoning through “interthinking” (Mercer 2000). Many researchers committed to educational change do not follow Wegerif’s radical position and instead reconcile the dialectic and the dialogic in argumentative activities (e.g., Schwarz and Shahar 2017). The merger of ideas from CHAT and Critical Theory could then provide a path forward.
2.4 Actor–Network Theory
Actor–Network Theory (ANT) seeks to give an account of how the social world endlessly disassembles and reassembles itself through the complex interactions of its actors, which may be animate or inanimate. Latour (2005) writes, “ANT is simply an attempt to allow members of contemporary society to have as much leeway in defining themselves as that offered by ethnographers” (p. 41). The goal is to trace the chains of associations whereby actors construct and maintain networks to achieve various purposes. ANT spurns traditional social “forces” as determinants of action, seeking instead to account for social structure through an analysis of the processes and connections through which networks are assembled. Analyses proceed by tracing the “careers of mediators” (p. 126) “along chain[s] of action” (p. 216). An early example would be Latour and Woolgar’s (1979) ethnography of a biomedical laboratory. Drawing on citational data and research reports from the lab, supplemented with fieldwork and interviews, they sought to understand how something becomes established as scientific “fact.” The laboratory becomes a place where facts are “secreted” through processes in which researchers, as actors, assemble and interpret data (Latour and Woolgar 1979). Their findings and “social constructionist views of scientific activity” (Lynch 1993, p. 91) contributed to the development of a “‘new’ sociology of scientific knowledge” (p. 71).
2.5 Ethnomethodology and Conversation Analysis
To coordinate our actions with those of others, that is, to behave in an orderly and intelligible fashion, we must recognize what others are doing and they must do the same (Garfinkel 1967, 2002). Ethnomethodology (EM) is a program of study within sociology dedicated to providing an account of just how this is accomplished in the moment. It focuses on the analytic methods (hence the name) society’s members employ in making sense of the world around them. Like ANT, EM rejects the use of abstractions and exogenous factors to explain local order. Rather than treating culture and social forces as “resources” upon which an analysis can be constructed, EM approaches such matters as “topics” for inquiry in their own right (Zimmerman and Pollner 1970). The notion of Situated Action (not to be confused with Situated Learning mentioned earlier) is sometimes encountered in the CSCL literature and has direct ties to EM (Sharrock and Button 2003; Suchman 1987). Garfinkel’s (1967) study of “Agnes,” an “intersexed” person who sought to pass as a female in the early 1960s, is one example of an EM-informed case study. Conversation analysis (CA) is a related approach that focuses on sensemaking in talk and social interaction (Uttamchandani and Lester this volume). It seeks to elucidate the underlying machinery through which we construct turns at talk, rectify misunderstandings, build conversational sequences, and so forth (Koschmann 2013a). The proper role of ethnography in CA has been controversial (Mandelbaum
1991). Moerman (1988), for instance, argued that to offer a meaningful account of talk in interaction some amount of “ethnographic background” (p. 21), such as past history, the roles of the participants, cultural roles and expectations, and so forth is essential. Schegloff (1992), on the other hand, argued that the role of context in meaning-making, rather than being a given, should be the very matter under investigation. He stipulated, however, that regardless of whether we seek to do this by studying “single episodes” (Schegloff 1987) or assembled “collections” (Schegloff 1996, p. 502), “the locus of order is the single case” (Schegloff 1987, p. 102). CA studies, therefore, are examples of case-based inquiry, even if the cases are restricted to a single turn or two of talk. There is a substantial body of CA-inspired work looking at interaction in classrooms. For instance, McHoul (1990) explored the methods employed when teachers perform correction within classroom recitation (see also Macbeth 2004). Interaction analysis (Jordan and Henderson 1995) is an offshoot of CA which takes up questions of learning in interaction (Hall and Stevens 2016; Koschmann 2018). Practitioners have noted stark incompatibilities between some of the frameworks surveyed here. For instance, Latour (2005, pp. 248–251), writing from an ANT perspective, has been generally dismissive of approaches based on Critical Theory. Advocates of CHAT (Kaptelinin and Nardi 2006; Lave 1988, p. 193 FN 7; Nardi 1996) and Critical Theory (Habermas 1988, pp. 108–117) have vigorously opposed ethnomethodologically informed approaches. For their part, researchers from the EM camp have assumed a critical stance with respect to ANT (Lynch 1993, Chap. 3), Critical Theory (Button et al. 2015; Lynch 2000; Macbeth 2001, 2003), and interaction analysis (Randall et al. 2001). They have also endeavored to differentiate their work from classical ethnography (Button et al. 
2015; Pollner and Emerson 2001; Sharrock and Anderson 1982). Given the contentious nature of this landscape, researchers must be especially cautious when attempting to blend theories and methods across different traditions.
3 State of the Art

We will now take up some prominent examples of case studies within the CSCL literature. For each we will describe the research tradition or traditions within which the study was framed, the nature of the case studied, the research question or questions that motivated the study, and the central finding or findings reported.

1. Roschelle (1992) may be the quintessential example of a case study in CSCL. Roschelle described a series of five episodes in which two students, “Dana” and “Carol,” conducted experiments using a computer-based ballistics simulator, the Envisioning Machine. Dana and Carol’s full experience with the Envisioning Machine, including the assigned exercises and Roschelle’s debriefing interviews, constituted the case for study. Drawing on theories of conceptual change from cognitive science (diSessa 1983), Roschelle hypothesized that learning occurs as
learners’ understandings of what they are doing together come to converge both with each other and with the canonical understandings of science. In his study, all of the episodes were videotaped, capturing the contents of the computer screen as well as the students’ talk and gestures. Roschelle employed interaction analysis to transcribe and analyze the recordings focusing on evidence of conceptual change on the part of the participants. He argued that, rather than being a fundamentally cognitive achievement, conceptual change has a social component which is witnessable and which can be traced within the details of participants’ social interaction. 2. Hakkarainen (2003): The research question addressed in this case study was whether 10- and 11-year-old children, collaborating within a CSILE (Computer-Supported Intentional Learning Environments) classroom, could engage in progressive inquiry. The study was carried out by qualitatively analyzing written notes logged by the students to CSILE’s database. Results of the study indicated that with teacher guidance, students were able to produce meaningful intuitive explanations about biological phenomena, guide this process by pursuing their own research questions, and engage in constructive peer interaction that helped them go beyond their intuitive explanations and toward theoretical scientific explanations. In later writing, Paavola and Hakkarainen (2005) revisited this case and drew on second-generation CHAT to analyze the students’ work. Their approach went beyond a monological or dialogical metaphor to adopt a “trialogical” metaphor, i.e., learning as a process of knowledge creation which concentrates on mediated processes where common objects of activity are developed collaboratively. 3. 
Schwarz and Hershkowitz (2001): In another study based on CHAT, the authors focused on mathematics classrooms in which GeoGebra tools helped participants transform (stretch, shrink, clone, or merge) artifacts they had previously produced. A static representation of a concept is ambiguous in the sense that only some of the properties displayed in the drawing represent the figure; computer tools like GeoGebra, in contrast, afford a “variable” method for displaying mathematical entities. Schwarz and Hershkowitz used Activity Theory to show that representative ambiguity is beneficial for constructing states of intersubjectivity in collaborative settings for the function concept. They analyzed how the teacher collected various computer artifacts produced by her students in group activities, to disambiguate a situation in which all the artifacts were different but some represented the same mathematical function. Thus, construction of meaning proceeds as a collaborative process of ambiguity dissipation through the use of computer-based artifacts. The intervention was designed in collaboration with the teacher, consistent with the philosophy of design-based research (Cobb et al. 2003). 4. Fujita et al. (2019): The researchers in this case study took a semiotic/dialogic approach to investigate how a group of young adolescents hierarchically define and classify quadrilaterals. Through qualitatively analyzing the students’ decision-making processes, the authors interpreted those processes as transforming the students’ informal/personal semiotic representations of
parallelograms to more institutional ones. They also found that students’ decision-making was constrained by their inability to see their peers’ points of view dialogically, i.e., to achieve a genuine interanimation of different perspectives, a dialogic switch in which the learners come to see the problem as if through the eyes of another. To report a failure to achieve a projected outcome is highly unusual in education research and, yet, the analysis of actual outcomes is essential to advancing our understanding. 5. Barab et al. (2004): In this case study, the methodological approach was based upon “critical design ethnography.” “Criticality” in this case points to social structures. The study entails participatory design work aimed at transforming a local context while producing an instructional design that can be used in multiple contexts. The report describes the Quest Atlantis project, in which a multiuser virtual environment, a collection of other media resources, a series of associated centers, and a set of social commitments were designed to aid children in valuing their communities and to help them recognize that they can contribute to their communities and the world in important ways. The authors’ goal was to empower groups and individuals, thereby facilitating social change. In this way, they functioned as change agents who were collaboratively developing structures intended to critique and support the transformation of the communities being studied. They described their struggles in attempting to bring this change about. The authors reflect on the opportunities and challenges that emerged as they built local critiques of existing power relations and then reified those critiques into strategies for instructional reform. 6. Stevens and Hall (1998) bring together two case studies, both involving inscription and two-dimensional Cartesian representations.
The first documents a series of tutoring sessions involving an eighth-grade student, Adam, and Bluma, a graduate student in mathematics. The second case involves two civil engineers working through a design problem for a roadway. The tutoring activity extends over a six-week period and is reported in a series of eight scattered fragments, while the workplace study is reported from a single stretch of interaction in which the design problem came up and was resolved. Video recordings served as the principal materials for study and the recordings were transcribed and studied using interaction analysis. The authors argue that the cases serve “to document how technoscientific perception changes through interaction” (p. 139). In the way in which the mathematical notion of slope is brought to the fore in both cases, the study follows Latour’s (2005) recommendation to trace the agency of a mediator through different courses of action. At the same time, it is possible to register echoes of other approaches as well. The idea of seeing in a “disciplined” way owes much to Lave and Wenger’s (1991) notion of “communities of practice.” The authors also make a nod to EM in noting that they use the term “accountability” in both its everyday sense and in the technical sense in which it is employed by ethnomethodologists (p. 146, fn. 8). 7. Greiffenhagen and Watson (2009) deal with embodied repair and correction in a CSCL setting. In linguistics and conventional studies of communication, mis-hearings and misconstruals, speaker restarts, momentary lapses in speech,
and “ums” and “ahs,” and other disfluencies were treated as noise and were essentially ignored. One of CA’s most important contributions was to show that the ways in which repairs of understanding are accomplished are methodically organized (Schegloff et al. 1977). Greiffenhagen and Watson sought to investigate how students working with a shared screen manage to correct errors and misunderstandings, not verbally, but through manual actions. Their selected cases involve a novel teaching activity, which incorporated a computer-based storyboarding tool to aid students in understanding a Shakespearean play. The four fragments included in the report were extracted from a larger corpus of classroom recordings. We find an unmistakable orientation to EM’s foundational concern with how order and understanding are managed in the moment.
4 The Future

There are commonalities and differences to be seen across the seven studies presented. Though all involve the study of cases, what counts as a case varies substantially depending on how the study was framed and the logic of the ethnographic inquiry pursued. Looking across the selected examples, a number of issues stand out.

The scope of what counts as a case: The scope of a case can be long or quite brief, depending on the nature of the question being asked. In McDermott (1976), the author sought to document the social life of a single classroom. A class is an organizational thing that comes to life at the beginning of an academic year and then undoes itself at the conclusion. McDermott’s research interest, therefore, dictated the scope of his case. We see this as well in Roschelle (1992). Focusing on one particular pair of students, Carol and Dana, Roschelle sought to give an account of the totality of their experience with the simulation-based activity. This included not only their experiments using the Envisioning Machine but also the post hoc interviews with the author. The sequence of excerpts in the report constitutes a single case. Contrast this with the fragments presented by Greiffenhagen and Watson, each of which represents an instance of embodied repair work and each of which stands as a case in its own right. Determining what is to count as a case is a critical part of the logic of doing ethnographic work. Its duration can be measured in months, as in McDermott’s study, or in seconds, as in Greiffenhagen and Watson’s.

Single case or multiple cases in contrast? Hakkarainen (2003) documented the rollout of an instructional innovation within a particular classroom and this constituted a discrete case for study. Rather than extending to the full life history of a class as in McDermott (1976), the scope of Hakkarainen’s case was coeval with the instructional activity under study. This was also true for Roschelle (1992).
Multiple cases, however, can sometimes be included in a single report as well. Stevens and Hall (1998), for instance, sought to examine how the notion of slope was made relevant in two quite different settings and their analysis was based on constructing a contrast between the two. But the analysis in Stevens and Hall also played on another
kind of contrast. Their description of Bluma and Adam’s tutoring sessions was structured as a series of episodes. As we will explain, the order in which these episodes occurred held significance for their analysis as well.

Learning, development, and sequential emergence: Issues of sequentiality become central when we wish to take up the topic of learning in that most theories of learning entail a notion of change over time (Koschmann 2013b). Detecting learning, either technically or in a vernacular sense, involves engaging in a “‘same-but-different’ analysis” (p. 1039). That is, an event or activity must be similar enough to another to be recognizable as being ‘the same’ while, at the same time, being sufficiently different to support an attribution of formative change. Roschelle (1992), for instance, monitored how Carol and Dana’s understandings changed over the course of the activity as demonstrated by their actions and comments. Each episode was chosen to illustrate significant moments in Carol and Dana’s unfolding experience within the activity. A similar kind of analysis was employed in Stevens and Hall’s (1998) examination of the algebra tutoring sessions. Sometimes change occurs over periods much longer than those commonly studied within a case study. In its more radical version, the dialogic approach points to learning as an endless emerging process, and this characteristic does not fit the constraints of school learning (see Fujita et al.’s case study). Here the case may simply represent a point in a longer trajectory. Theoretical approaches like CHAT also examine how tools and artifacts evolve over time (Wise and Schwarz 2017). The case studies by Hakkarainen (2003) and Schwarz and Hershkowitz (2001) are of this kind.
For example, the production of computer artifacts by groups of students solving a mathematical problem, the collection and redistribution of those artifacts to the whole class by the teacher to create contradictions, and their further resolution in small groups led Schwarz and Hershkowitz to identify meaning making through the observation of coarse-grained transformations (computer artifacts), yet along a relatively long trajectory.

Sequentiality in the practical organization of interaction: Sequential development may enter into an analysis in yet another way. The excerpts presented in Greiffenhagen and Watson (2009) each represent a different case and the cases are not being employed to make a claim regarding change over time. Consequently, the temporal order in which the individual cases occurred held no particular significance for their overall argument. Instead, Greiffenhagen and Watson sought to document the diversity of practices available for accomplishing correction. Their analysis shows that the sequence in which the actions unfold within each fragment is critical to how sense develops. Attention to this kind of intra-fragment sequencing can also be seen in Roschelle (1992) and Stevens and Hall (1998).

Agency for change versus leaving things as they are: There is tension within ethnographic research between approaches that seek to document some social phenomenon without intervening in or otherwise disrupting it and those that are committed to some form of social change. The study by Barab and colleagues had a stated ambition of “designing for change.” Not only were the researchers seeking to change the thing they were studying, but, beyond that, to ultimately effect a change at a societal level. It was to be achieved through their struggles to deconstruct existing social structures.
T. Koschmann and B. B. Schwarz
In marked contrast, EM has adopted a principled stance of "indifference" with respect to any and all a priori theories or formulations (Koschmann et al. 2007, pp. 135–136). In his earliest writings, Garfinkel (1952) stipulated: "In respect of the theoretical content of any given theory, we shall abstain from passing any judgment at all, and our whole discussion shall respect the limits imposed by this abstention" (pp. 5–6). This policy of indifference results in a form of bracketing whereby one endeavors to set aside one's preconceptions about what is good and bad for the purposes of the analysis. The task of the analyst is simply to document how things are done without intervening in the matters under investigation or passing critical judgment. To concerned citizens, it might seem irresponsible to propose that we, as researchers, not at least attempt to speak to some of the world's inequities and deficiencies through our work. Indeed, EM's policy of "leaving things as they are" has been criticized by some as a form of "political quietism" (Sharrock and Anderson 1991, p. 62). Though not all educational research is as overtly activist as the Barab et al. report, there is inevitably a commitment to reforming instructional practice in some way. Is it possible to reconcile a desire to achieve social justice or to improve instructional practice with a commitment to honest discovery? We would argue that it is. The analysis of some matter in context is not a solipsistic exercise. Though the situation under study is not altered by the analyst, to the extent that an analytic report offers practitioners a new way of seeing their world, it has the potential for seeding change. In sum, we may be able to find the path forward by looking back and closely examining the history and traditions of case-based research.
Successfully conducting a case study necessitates understanding the tensions and assumptions built into one's perspective and making mindful decisions concerning the research questions one wishes to pursue. To build an ethnographic argument, one must carefully attend to the issues that have been enumerated here: What is being constructed as a "case"? How was it selected? What forms of contrast are built into the analysis, and to what end? What is the role of time and sequence within the analysis? Does the study seek to alter the social phenomenon under investigation or merely document it faithfully? But regardless of one's strategic goals, one thing remains clear: as CSCL researchers delve more deeply into the practical details of learning in settings of collaboration, the naturalistic study of cases will become an increasingly important part of our collective research.
References

Barab, S. A., Thomas, M. K., Squire, K., & Newell, M. (2004). Critical design ethnography: Designing for change. Anthropology and Education Quarterly, 35, 254–268.
Barnes, D. (1976). From communication to curriculum. Harmondsworth, UK: Penguin Education.
Button, G., Crabtree, A., Rouncefield, M., & Tolmie, P. (2015). Deconstructing ethnography: Towards a social methodology for ubiquitous computing and interactive systems design. New York: Springer.
Case Studies in Theory and Practice
Cobb, P., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9–13.
Derrida, J. (1973). Différance. In J. Derrida (Ed.), Speech and phenomena: And other essays on Husserl's theory of signs. Evanston, IL: Northwestern University Press.
diSessa, A. A. (1983). Phenomenology and the evolution of intuition. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 18–34). Hillsdale, NJ: Lawrence Erlbaum Assoc.
Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki: Orienta-Konsultit.
Epperson, T. W. (2010). Critical ethnography in the VMT project. In G. Stahl (Ed.), Studying virtual math teams (pp. 529–553). New York: Springer.
Erickson, F. (1984). What makes school ethnography 'ethnographic'? Anthropology & Education, 15, 51–66. https://doi.org/10.1525/aeq.1984.15.1.05x1472p.
Freire, P. (2000). Pedagogy of the oppressed. New York: Bloomsbury Publishing.
Fujita, T., Doney, J., & Wegerif, R. (2019). Students' collaborative decision-making processes in defining and classifying quadrilaterals: A semiotic/dialogic approach. Educational Studies in Mathematics, 1–16.
Garfinkel, H. (1952). The perception of the other: A study in social order [unpublished dissertation]. Harvard University, Cambridge, MA.
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Garfinkel, H. (2002). Ethnomethodology's program: Working out Durkheim's aphorism. Lanham, MD: Rowman & Littlefield.
Gee, J. P. (2008). Social linguistics and literacies: Ideology in discourses (3rd ed.). New York: Routledge.
Giroux, H. A. (1988). Teachers as intellectuals: Toward a critical pedagogy of learning. Granby, MA: Bergin & Garvey.
Greiffenhagen, C., & Watson, R. (2009). Visual repairables: Analyzing the work of repair in human-computer interaction. Visual Communication, 8, 65–90.
Habermas, J. (1988). On the logic of the social sciences (S. Weber Nicholsen & J. A. Stark, Trans.). Cambridge, MA: MIT Press.
Hakkarainen, K. (2003). Progressive inquiry in a computer-supported biology class. Journal of Research in Science Teaching, 40(10), 1072–1088.
Hall, R., & Stevens, R. (2016). Interaction analysis approaches to knowledge in use. In A. A. diSessa, M. Levin, & N. J. S. Brown (Eds.), Knowledge and interaction: A synthetic agenda for the learning sciences (pp. 72–108). New York: Routledge.
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. Journal of the Learning Sciences, 4(1), 39–103. https://doi.org/10.1207/s15327809jls0401_2.
Kaptelinin, V., & Nardi, B. A. (2006). Acting with technology: Activity theory and interaction design. Cambridge, MA: MIT Press.
Koschmann, T. (2011). Theorizing practice. In T. Koschmann (Ed.), Theories of learning and studies of instructional practice (pp. 3–17). New York: Springer.
Koschmann, T. (2013a). Conversation analysis and collaborative learning. In C. Hmelo-Silver, C. Chinn, C. Chan, & A. O'Donnell (Eds.), International handbook of collaborative learning (pp. 149–167). New York: Routledge.
Koschmann, T. (2013b). Conversation analysis and learning in interaction. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 1038–1043). Oxford, UK: Wiley-Blackwell.
Koschmann, T. (2018). Ethnomethodology: Studying the practical achievement of intersubjectivity. In F. Fischer, C. Hmelo-Silver, S. Goldman, & P. Reimann (Eds.), International handbook of the learning sciences (pp. 465–474). New York: Routledge.
Koschmann, T., Stahl, G., & Zemel, A. (2007). The video analyst's manifesto (or the implications of Garfinkel's policies for studying practice within design-based research). In R. Goldman, B. Barron, S. Derry, & R. Pea (Eds.), Video research in the learning sciences (pp. 133–143). Mahwah, NJ: Lawrence Erlbaum Assoc.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Latour, B., & Woolgar, S. (1979). Laboratory life: The construction of scientific facts. Princeton, NJ: Princeton University Press.
Lave, J. (1988). Cognition in practice. New York: Cambridge University Press.
Lave, J. (2011). Apprenticeship in critical ethnographic practice. Chicago: University of Chicago Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.
Luke, A. (1988). Literacy, textbooks, and ideology: Postwar literacy instruction and the mythology of Dick and Jane. Philadelphia: Falmer Press.
Lynch, M. (1993). Scientific practice and ordinary action: Ethnomethodology and social studies of science. New York: Cambridge University Press.
Lynch, M. (2000). Against reflexivity as an academic virtue and source of privileged knowledge. Theory, Culture & Society, 17(3), 26–54.
Macbeth, D. (2001). On 'reflexivity' in qualitative research: Two readings, and a third. Qualitative Inquiry, 7, 35–68.
Macbeth, D. (2003). Hugh Mehan's Learning Lessons reconsidered: On the differences between the naturalistic and critical analysis of classroom discourse. American Educational Research Journal, 40, 239–280.
Macbeth, D. (2004). The relevance of repair for classroom correction. Language in Society, 33(5), 703–736.
Mandelbaum, J. (1991). Beyond mundane reason: Conversation analysis and context. Research on Language and Social Interaction, 24, 333–350.
Matusov, E. (2011). Irreconcilable differences in Vygotsky's and Bakhtin's approaches to the social and the individual: An educational perspective. Culture and Psychology, 17, 99–119.
McDermott, R. (1976). Kids make sense: An ethnographic account of the interactional management of success and failure in one first-grade classroom (PhD). Palo Alto, CA: Stanford University.
McDermott, R. P., & Raley, J. (2011). Looking closely: Toward a natural history of human ingenuity. In E. Margolis & L. Pauwels (Eds.), The SAGE handbook of visual research methods (pp. 372–391). Thousand Oaks, CA: Sage Publications.
McHoul, A. (1990). The organization of repair in classroom talk. Language in Society, 19, 349–377.
McLaren, P. (2015). Pedagogy of insurrection. New York: Peter Lang.
Mercer, N. (2000). Words and minds: How we use language to think together. London: Routledge.
Mercer, N., Wegerif, R., & Dawes, L. (1999). Children's talk and the development of reasoning in the classroom. British Educational Research Journal, 25, 95–111.
Moerman, M. (1988). Talking culture: Ethnography and conversation analysis. Philadelphia, PA: University of Pennsylvania Press.
Nardi, B. (1996). Studying context: A comparison of activity theory, situated action models, and distributed cognition. In B. Nardi (Ed.), Context and consciousness. Cambridge, MA: MIT Press.
Paavola, S., & Hakkarainen, K. (2005). The knowledge creation metaphor: An emergent epistemological approach to learning. Science & Education, 14, 535–557.
Pollner, M., & Emerson, R. M. (2001). Ethnomethodology and ethnography. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, & L. Lofland (Eds.), Handbook of ethnography (pp. 118–135). Thousand Oaks, CA: Sage.
Randall, D. L., Marr, L., & Rouncefield, M. (2001). Ethnography, ethnomethodology, and interaction analysis. Ethnographic Studies, 6, 31–43.
Roschelle, J. (1992). Learning by collaboration: Convergent conceptual change. Journal of the Learning Sciences, 2(3), 235–276.
Schegloff, E. (1987). Analyzing single episodes of interaction: An exercise in conversation analysis. Social Psychology Quarterly, 50, 101–114.
Schegloff, E. (1992). In another context. In A. Duranti & C. Goodwin (Eds.), Rethinking context: Language as an interactive phenomenon (pp. 191–228). New York: Cambridge University Press.
Schegloff, E. (1996). Confirming allusions: Towards an empirical account of action. American Journal of Sociology, 104, 161–216.
Schegloff, E., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53, 361–382.
Schwarz, B. B., & Hershkowitz, R. (2001). Production and transformation of computer artifacts towards the construction of mathematical meaning. Mind, Culture and Activity, 8(3), 250–267.
Schwarz, B. B., & Shahar, N. (2017). Combining the dialogic and the dialectic: Putting argumentation into practice for classroom talk. Learning, Culture and Social Interaction, 12, 113–132.
Sharrock, W., & Anderson, B. (1991). Epistemology: Professional scepticism. In G. Button (Ed.), Ethnomethodology and human sciences (pp. 51–76). New York: Cambridge University Press.
Sharrock, W., & Anderson, R. (1982). On the demise of the native: Some observations on and a proposal for ethnography. Human Studies, 5, 119–135.
Sharrock, W., & Button, G. (2003). Plans and situated action ten years on. Journal of the Learning Sciences, 12(2), 259–264.
Stevens, R., & Hall, R. (1998). Disciplined perception: Learning to see in technoscience. In M. Lampert & M. L. Blunk (Eds.), Talking mathematics in school: Studies of teaching and learning (pp. 107–149). New York: Cambridge University Press.
Suchman, L. (1987). Plans and situated actions: The problem of human/machine communication. New York: Cambridge University Press.
Trausan-Matu, S., Wegerif, R., & Major, L. (this volume). Dialogism. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Uttamchandani, S., & Lester, J. N. (this volume). Qualitative approaches to language in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer International Publishing.
Vygotski, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wegerif, R. (2007). Dialogic education and technology: Expanding the space of learning (Vol. 7). Springer Science & Business Media.
Wegerif, R. (2011). From dialectic to dialogic: A response to Wertsch and Kazak. In T. Koschmann (Ed.), Theories of learning and studies of instructional practice (pp. 201–222). New York: Springer.
Wertsch, J. (1991). Voices of the mind: A sociocultural approach to mediated action. Cambridge, MA: Harvard University Press.
Wertsch, J., & Kazak, S. (2011). Saying more than you know in instructional settings. In T. Koschmann (Ed.), Theories of learning and studies of instructional practice (pp. 153–166). New York: Springer.
Wise, A., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12, 423–467.
Zimmerman, D. H., & Pollner, M. (1970). The everyday world as phenomenon. In J. Douglas (Ed.), Understanding everyday life: Toward the reconstruction of sociological knowledge (pp. 80–103). Chicago, IL: Aldine.
Further Readings

Button, Crabtree, Rouncefield, & Tolmie (2015) is a book-length examination of how to do ethnography in CSCW. Given that researchers and designers in CSCL grapple with many of the same issues as those who work in CSCW, this book is a good starting place for CSCL researchers who would like to adopt an ethnographic approach.
Erickson (1984) serves as a useful introduction to classical ethnography in anthropology.
McDermott (1976) is a beautifully constructed classroom ethnography. Its findings are provocative and have withstood the passage of time.
Roschelle (1992) continues to serve as a paradigm for CSCL research. Though the technology is primitive by current standards, its careful attention to interaction in joint activity continues to be an inspiration for researchers today.
Stevens & Hall (1998) is one of the most widely cited ethnographies in the CSCL canon. It is a rich example, one that demonstrates how case studies can be used to construct a contrastive analysis and one that examines practice at the worksite.
Design-Based Research Methods in CSCL: Calibrating our Epistemologies and Ontologies Yael Kali and Christopher Hoadley
Abstract Design-based research (DBR) methods are an important cornerstone in the methodological repertoire of the learning sciences, and they play a particularly important role in CSCL research and development. In this chapter, we first lay out some basic definitions of what DBR is and is not, and discuss some history of how this concept came to be part of the CSCL research landscape. We then attempt to describe the state-of-the-art by unpacking the contributions of DBR to both epistemology and ontology of CSCL. We describe a tension between two modes of inquiry—scientific and design—which we view as inherent to DBR, and explain why this has provoked ongoing critique of DBR as a methodology, and debates regarding the type of knowledge DBR should produce. Finally, we present a renewed approach for conducting a more methodologically coherent DBR, which calibrates between these two modes of inquiry in CSCL research.

Keywords Design-based research (DBR) · CSCL epistemology · CSCL ontology · Methodological alignment · Design researchers' transformative learning (DRTL)
1 Definitions and Scope

DBR is one of a cluster of terms used to describe various intersections between design and research, especially in the realm of academic research in either education or human–computer interaction. In this section, we attempt to define what we mean by design-based research and contrast it with other definitions.

Y. Kali (*), Faculty of Education, University of Haifa, Haifa, Israel. e-mail: [email protected]
C. Hoadley, Educational Communication and Technology Program, New York University, New York, NY, USA. e-mail: [email protected]

© Springer Nature Switzerland AG 2021. U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_26
Y. Kali and C. Hoadley
DBR methods were originally defined (Design-Based Research Collective [DBRC] 2003; Hoadley 2002), like the earlier concept of design experiments (Brown 1992; Collins 1990, 1992), as a research method or related methodology which used a blended form of design activities and research activities to produce design-relevant, empirically supported knowledge. Designed interventions in DBR are tested iteratively in a context of use, and the iterations become settings to collect data that support or refute inferences about underlying theoretical claims. At the same time, the iterations are used for increasing the fit between the theory, the design, and the enactment or implementation so as to best test the theoretical conjectures. Unlike earlier definitions associated with design experiments (notably Brown's 1992), DBR methods were claimed to be not merely related to hypothesis generation, but a scientific enterprise in their own right. This approach stemmed from a very practical problem described earlier by Simon (1969) in his seminal book, The Sciences of the Artificial, namely that:

. . . the genuine problem is to show how empirical propositions can be made at all about systems that, given different circumstances, might be quite other than they are. (p. XI)
In the case of DBR as a science of the artificial, this genuine problem concerns making empirical propositions regarding designs of learning environments that are studied while they are being created. The notion of DBR as a research methodology contrasts with other points of connection between design and research. Specifically, instructional design, user-centered design, and other similar terms from the fields that attempt to create educational interventions (materials or technologies) might be lumped under the terminology of research-based design (RBD) methods. In such methods, the tools of empirical research are subservient to the goal of ultimately creating a useful designed product or intervention. The main difference, thus, is that DBR uses design processes to produce research knowledge, whereas RBD uses research techniques to produce designs. Evaluation research of designs is similar to DBR in that at the end there is both a design and a research output, but the difference is that these activities in evaluation research are by necessity separated from each other. The intervention or tool is complete at the moment in which evaluation is taking place, and the data used to inform the design are typically distinct from the data used to evaluate that design. The terms design research or design studies are used variously in different communities, ranging from the journal Design Studies, which focuses on studies of designers and design processes, to a notion of design research which labels the learning process a designer must go through in order to connect a context to a designed solution (e.g., Laurel 2013). Another more recent term is Design-Based Implementation Research (Fishman et al. 2013; Kali et al.
2018), which can be characterized as a subset of DBR with three main distinctive characteristics: joint ownership of the research agenda by practitioners and designers/researchers; an inherent focus on designs and research questions related to the issues of scaling interventions systemically (e.g., across a large school system or a geographic region); and a linkage between micro-level design (design of a particular intervention, for instance at a classroom level) and macro-level systems change (e.g., design of an institution-wide framework for adoption) (Law et al. 2016).
We look more specifically at the issue of DBR methods in the particular sense of a research methodology which yokes the design process and research process to produce knowledge outcomes (and not just useful, validated designs). Although many have suggested more generalized definitions of DBR since its introduction (e.g., McKenney and Reeves 2012/2018), we rely on the earlier characterization from the Design-Based Research Collective (DBRC 2003) as it encapsulates more directly what critics find challenging about DBR. In this definition, DBR has five characteristics: (a) overlap between the design and research process (both temporally and intellectually); (b) iterative cycles of design, enactment in context, analysis, and redesign; (c) a goal of theory development that is relevant to practice; (d) a commitment to understanding the designs in authentic settings (as opposed to more reductionist approaches); and (e) a recognition that the design and the enactment are intertwined in producing the outcomes (i.e., that outcomes are the result of both the use of designed artifacts and the way they are used).
2 History and Development: DBR in CSCL

We believe the connection between DBR methods and the CSCL research community is not a coincidence, but rather a natural byproduct of the ways in which almost all CSCL research is contingent on shifting, culturally and technologically grounded social contexts for learning, and on theories that help encompass that social context. Various authors (e.g., Kaptelinin and Cole 2002; Koschmann 1999; Paavola et al. 2004; Stahl et al. 2006) have explored how socially contextualized theories intersect with a technology-enhanced action orientation of research. Such a design orientation for research is notable, for example, in Kaptelinin and Cole's (2002) classic use of activity theory for analyzing the design of a collaborative learning environment. The design is conceptualized as a perturbation of activity structures, placing the scope less on a particular tool and more on how the tool, together with the designed collaboration processes, supports learning. As Stahl et al. (2006) point out, the intersubjective nature of learning and the challenges of intersubjectivity among researchers and analysts of human behavior influence the relationship between design and research in CSCL:

CSCL research has both analytic and design components. . . . To design for improved meaning making, however, requires some means of rigorously studying praxis. In this way, the relationship between analysis and design is a symbiotic one—design must be informed by analysis, but analysis also depends on design in its orientation to the analytic object. (Stahl et al. 2006, p. 11)
Challenges such as these have led to discussions and debates about DBR within the context of CSCL research and development. In the early 2000s, a blossoming of scholarship on DBR methods yielded a number of special issues, including those published in Educational Researcher (2003), Journal of the Learning Sciences (2004), Educational Psychologist (2004), and Educational Technology (2005). The articles included in these special issues
helped legitimize the approach, but also proliferated alternative definitions of what constitutes DBR and how it would fit with other related concepts such as "design research," and engaged with critiques of the method and its underlying epistemologies. Prominent critiques included a failure to provide appropriate experimental control for causal inferences (Desforges 2000), difficulty conveying in adequate detail the relevant aspects of the design and the data (Reeves 2005), susceptibility to overinterpreting and/or cherry-picking interpretations given the breadth of data collected under evolving, rather than fixed, protocols (Dede 2004), and the lack of a clear argumentative grammar (Kelly 2004).
3 State of the Art: Argumentative Grammars and Tensions Within DBR Epistemology and Ontology

As described above, one way to understand DBR is through its dual goal of advancing both learning theory (explanatory, evidence-based arguments on how people learn in various instructional contexts, especially those involving CSCL) and learning design (the features and principles for environments that support such learning). When it comes to theory, we might start with a positivistic psychological or cognitive framing of what a theory is, but we can also extend the notion of learning theories much more broadly with interpretivistic sociocultural conceptions, situative understandings, humanistic theories, etc. On the other hand, design knowledge might encompass specific designed artifacts or interventions, ideas about how to instantiate particular goals through human agency, or ideas about what interventions might be possible. Unlike in traditional experimental research, design in DBR is not solely a means for conducting research; it is a goal in and of itself, juxtaposed to its twin goal of advancing theory. Yet, there are important differences in what makes good or useful outcomes in these two arenas, theory and design. These differences create an inherent tension within DBR, which affects how we judge the worth of the processes of knowing (DBR's epistemology), as well as the nature and types of knowledge produced (what we might term DBR's knowledge ontology). Following Chi's notion of ontological commitments (Chi 1992; Slotta 2011), it is worth saying that the types of knowledge produced in DBR fall into different sorts of categories which are determined in part by the ontological commitments we hold as designers and researchers. An ontology of DBR in CSCL should include different categories of knowledge, ranging from design patterns to presumed universal laws of psychology.
In other words, the tension between theory and design in DBR in CSCL affects how we know things, and what kinds of knowledge are produced. In this section, we describe debates within the learning sciences and CSCL communities concerning the value of DBR, how it can best be conducted and communicated, and what its outcomes should look like. We then illustrate how these debates are in fact a result of the inherent theory–design tension within DBR.
3.1 DBR's Dual Epistemic Game
People follow rules in deciding what claims are valid in different research contexts. One term for this is epistemic games. In introducing this term, Perkins (1997) referred to patterns of inquiry, such as goals, moves, and rules, which he described as:

. . . woven together in a course of inquiry... [and are often] played competitively, as in the adversarial system of justice of scientific debates. (p. 52)
Another term for describing the ways in which researchers progress toward knowledge and understanding in a field is argumentative grammar. In the world of methodologies, the argumentative grammar determines the rules for making an argument within the coherent world of an epistemology or method. Thus, epistemic games can be thought of as the language of claims and debates in a field, and argumentative grammar as the underlying structure of that language. Within DBR, a criticism has been that it is not clear on its argumentative grammar (Kelly 2004):

What, therefore, is the logos of design studies in education? What is the grammar that cuts across the series of studies as they occur in different fields? Where is the "separable" structure that justifies collecting certain data and not other data and under what conditions? What guides the reasoning with these data to make a plausible argument? Until we can be clear about their argumentative grammar, design study methods lack a basis for warrant for their claims. (p. 119)
Such criticism objected to the pluralism that DBR researchers such as Bell (2004), and later McKenney and Reeves (2012/2018) or Bakker (2018), ascribed to DBR. Bell, for instance, maintained already in 2004 that:

At a time when many efforts that are reviewing the status of educational research seem to be operating under the working assumption that our theoretical and methodological complexity should be reduced, I argue that rigor and utility can be actively pursued through pluralism—a coordination of different theoretical views on learning and education. (Bell 2004, p. 251)
We claim that this ambiguity within DBR methodologies (even if we refer to methodologies in the plural and not a single methodology) results not only from the broad range of theoretical views studied using DBR, but is also rooted in the epistemological tension inherently embedded in the dual goal of DBR. Consequently, the lack of a clear argumentative grammar in DBR is mainly related to the lack of a clear linkage between the two languages we speak (advancing theory and advancing design). That is, we (design researchers) typically play two epistemic games, and oftentimes are not clear enough about how we switch between them. To illustrate what we mean by a dual epistemic game, we turn to philosophical notions of design. In their seminal book, The Design Way, Nelson and Stolterman (2012) characterize the unique mode of inquiry that designers follow by contrasting it with the one followed by scientists. While scientists, in general, strive to reason from the concreteness and complexity of the actual world, to the abstractness and
[Fig. 1 Contrasting "science" and "design" modes of inquiry (adapted from Nelson and Stolterman 2012) and the dual intertwining epistemic game we play in DBR, iterating between abstraction and particularization. The figure depicts a curve running from the "Actual" (increasing concreteness and complexity) to the "True" (increasing abstraction and simplicity), with the "science" mode of inquiry ascending toward abstracted, generalized explanations, the "design" mode descending toward particularization, and DBR calibrating the dual epistemic game between the two.]
simplicity of principles and laws (yellow arrow, going up the curve in Fig. 1), designers, they say, strive to do the opposite. That is, designers use such abstractions to create specific designs in the actual world (e.g., a specific product or policy) by making design judgments (blue arrow, going down the curve in Fig. 1). Therefore, science and design constitute quite different traditions of inquiry that encompass contrasting rules within their epistemic games. We claim, though, that in DBR we play and intertwine both these traditions, iterating between abstraction and particularization (green arrow in Fig. 1). A DBR study typically begins by identifying a gap in educational theory that we (DBR researchers) aim to explore by designing and enacting an intervention within the so-called real world, what Bhaskar (1975) would call the actual world. To develop
[Footnote 1: Many thanks to the anonymous reviewer who brought our attention to Bhaskar's conceptions of philosophy of science, which make a distinction between (a) the "real" world, i.e., laws of nature independent of human interpretation, (b) the "actual" world, i.e., things that have come to exist through the action of those laws of nature, and (c) the "empirical" world, i.e., what we, as humans, come to observe, measure, describe, or experience of the actual world. Nelson and Stolterman use the term "real" for the x-axis but we have relabeled it as the "actual" to align with Bhaskar's terminology. We believe this is closer to what Nelson and Stolterman meant.]
Design-Based Research Methods in CSCL: Calibrating our Epistemologies. . .
485
an initial design, we take into account generalized abstractions (e.g., theories, design principles) and embody them into a specific design (going down the curve). Then, we collect (messy) data on how learners interact with our designs in the actual world, and analyze these data (using existing theoretical lenses, but remaining open to refining them) to come up with new generalized conjectures about learning (going up the curve). We then use these conjectures to refine the designs (down the curve), test the conjectures (up again), and so on, with as many iterations as needed to contribute to both theory and practice.

Within this abstraction–particularization tango, we constantly switch epistemic languages, which makes clear why DBR lacks one agreed-upon argumentative grammar. DBR is sometimes used within a positivistic framing to make strong, generalizable truth claims about a presumably objectively knowable world. But DBR is also sometimes used within an interpretivist framing to explore aspects of the human experience that are presumed to be knowable only through individual interpretation and that are inherently not generalizable. DBR researchers may violate some of the core tenets of either of these epistemologies, much to the consternation of researchers hoping to fit DBR in with their existing epistemological commitments. Such distress is expressed in the following excerpt from an anonymous reviewer’s comments on a manuscript describing a DBR project:

[the manuscript entails] an awkward combination of qualitative and quantitative research perspectives. Symptomatic of this is the fact that you use both the word “causal” and the word “holistic” in your title! Show us where you stand. (Anonymous reviewer)
Thus, DBR sits in tension with both positivism and interpretivism, with both “quantitative” and “qualitative” research, and better adheres to mixed methods (Bell 2004). Knowledge claims rely heavily on the designer’s stance and interpretation, not only of the data but also of the design context, circumstances, and goals (Tabak 2004). Therefore, such claims are presumed to be somewhat generalizable, but—using diSessa’s (1991) terminology—based on local (rather than global) sciences. Cobb and Gravemeijer (2008) refer to such generalizations as domain-specific instructional theories.
3.2 Why We Have Multiple Argumentative Grammars, and What Is Still Missing
Recently, Bakker (2018) suggested addressing Kelly’s criticism by noting that we do not necessarily need one argumentative grammar but, rather, multiple grammars. This view is in line with the pluralistic view of DBR methodology described earlier (e.g., Bell 2004; McKenney and Reeves 2012/2018). In the chapter “Argumentative grammars used in design research,” Bakker lays out various solutions that have been developed in the past two decades to serve as underlying “rules” for making DBR
486
Y. Kali and C. Hoadley
Fig. 2 A generalized conjecture map (adapted from Sandoval 2014)
arguments. He presents these “rules” using Toulmin’s (1958) general argumentation scheme, which clearly distinguishes claims, evidence, and reasoning, to illustrate the external structural logic of these grammars. Within these grammars he includes: (a) proof of principle that certain learning outcomes are possible (e.g., O’Neill 2012), which requires setting criteria for success and failure in advance; (b) small changes per iteration, which enable experimental approaches for comparing learning outcomes between iterations (e.g., Kali et al. 2009); (c) building on the experience of the DBR community, as in the design principles database (Kali 2006, 2008), in which DBR researchers can use, refine, and share their own design principles, making it possible to abstract generalized explanations based on refinement of insights across studies; (d) answering the “how” question, which illustrates the logic of experimental designs that aim to develop insights regarding how a particular educational approach can support learners in achieving certain educational goals (e.g., Smit et al. 2013); and (e) conjecture mapping (Sandoval 2014), which distinguishes between high-level conjectures that are derived from theory and embodied into the design of learning environments, design conjectures that define the relation between features in the environments (e.g., tools, activity structures) and the resulting mediating processes, and theoretical conjectures that focus on the learning outcomes that result from these processes (Fig. 2).

We focus specifically on Sandoval’s (2014) conjecture mapping due to its wide acceptance and use among DBR researchers, but also because we contend that it nicely illustrates the dual epistemological game and the intertwining between the abstracting-generalized-explanations and particularization modes of inquiry (Fig. 1).
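The structure of a conjecture map can be made concrete with a small data model. The sketch below is purely illustrative: the class and field names are our own choices for exposition, not terminology or software from Sandoval (2014); it simply shows how design conjectures (feature → mediating process) and theoretical conjectures (mediating process → outcome) chain through shared mediating processes.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of Sandoval's (2014) conjecture map as a data model.
# All names are our own expository choices, not Sandoval's.

@dataclass
class DesignConjecture:
    """Links an embodied design feature to a mediating process."""
    feature: str            # e.g., a tool or activity structure
    mediating_process: str  # the interaction the feature should produce

@dataclass
class TheoreticalConjecture:
    """Links a mediating process to an expected learning outcome."""
    mediating_process: str
    outcome: str

@dataclass
class ConjectureMap:
    high_level_conjecture: str
    design_conjectures: List[DesignConjecture] = field(default_factory=list)
    theoretical_conjectures: List[TheoreticalConjecture] = field(default_factory=list)

    def mediating_processes(self) -> List[str]:
        # Processes that appear on both sides of the map link
        # design features to learning outcomes.
        return sorted({d.mediating_process for d in self.design_conjectures}
                      & {t.mediating_process for t in self.theoretical_conjectures})

cmap = ConjectureMap(
    high_level_conjecture="Scaffolded peer discussion deepens conceptual understanding",
    design_conjectures=[DesignConjecture(
        "team website with process scaffolds",
        "negotiation of meaning with peers")],
    theoretical_conjectures=[TheoreticalConjecture(
        "negotiation of meaning with peers",
        "deeper understanding of content")],
)
print(cmap.mediating_processes())  # → ['negotiation of meaning with peers']
```

Revising the map from iteration to iteration, as discussed below, then amounts to updating these links as conjectures are refuted or refined.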
First, the embodiment of a high-level theoretical conjecture into design features within a learning environment clearly demonstrates a “down the curve” process of particularization. Then, characterizing the learning that occurs during enactment in terms of mediating processes represents the beginning stages (typically with interpretive methods) of an “up the curve” process in search of abstracted generalized explanations (e.g., patterns of use), which are then further substantiated in terms of theoretical conjectures (how the mediating processes support learning outcomes). But as noted by Sandoval (2014), such mapping represents only part of a trajectory of
studies (multiple iterations), which together enable the development of generalized explanations in DBR. That is, conjecture maps are revised from iteration to iteration, and additional back-and-forth movements along the abstraction–particularization curve are typically conducted. Therefore, we believe that although multiple argumentative grammars, as suggested by Bakker (2018), give DBR researchers flexibility in deciding what counts as DBR, they do not solve the dual-language issue inherent to DBR, which requires better calibration between the two epistemic games involved. Moreover, as we explain in the next section, we view the chasm between the two epistemic games as percolating from DBR epistemology into DBR knowledge ontology. Due to this chasm, researchers debate not only the rigor of DBR methods but also the value of DBR outcomes.
3.3 How the Dual Epistemic Game Percolates into Design Ontology
The debate regarding the value of DBR outcomes was most notably expressed in a series of three “reports and reflection” articles in the Journal of the Learning Sciences. Bereiter (2014) argued that DBR researchers fail to produce outcomes that embed “know-why” knowledge within “know-how” artifacts. He labeled such blended knowledge—having the potential to be useful for both researchers and practitioners in generating innovation—principled practical knowledge (PPK). Janssen et al. (2015), however, in their response article—Practicality studies: How to move from what works in principle to what works in practice—maintained that PPK, as specified by Bereiter, is too abstract to support teachers in implementing innovations developed in DBR research. They contended that the DBR community underestimates the magnitude of usability issues, and suggested an additional type of knowledge—fast and frugal heuristics—to complement PPK. The debate continued with Bereiter’s (2015) response cautioning DBR researchers against being too specific about how to implement the outcomes of their studies. Such specificity, he claims, may communicate a message of disrespect for teacher professionalism and hinder teachers from venturing successfully beyond conventional practices.

This ongoing debate relates back to the dual epistemic game exemplified in Fig. 1. Is DBR trying to make truth claims within a coherent (interpretive, positivist, or other) epistemology? Sometimes, DBR produces knowledge that is contingent on context but more actionable. In other words, sometimes DBR is more concerned with producing usable knowledge than with producing truth claims. This tension in DBR has been referred to using various terminologies, such as actionable knowledge versus knowledgeable action (Markauskaite and Goodyear 2017); generalization versus generativity (Bakker 2018); and analytical versus creative mindsets (McKenney and Reeves 2012/2018).
Interestingly, all of the researchers who pointed to this tension note a detrimental bias in which the research community
typically prefers the “scientific” over the “design” mode of inquiry, as indicated in standards of publication and the like. That is, in conducting DBR studies, actionable knowledge tends to be valued more than knowledgeable action, generalization more than generativity, and analytical mindsets more than creative ones.
4 The Future: Capitalizing on the Dual Epistemic Game in DBR to Spur Creativity and Innovation in Rigorous DBR Research

Up to this point we have characterized DBR as pluralistic and accommodating of a wide range of methodologies, and we have shown how this pluralism has drawn criticism and been interpreted as the lack of an argumentative grammar (e.g., Kelly 2004). We also illustrated how DBR researchers have addressed such criticism with various argumentative grammars, as well as with the notion that having multiple grammars is legitimate (Bakker 2018). However, we believe that DBR researchers need to acknowledge the duality in the epistemic game we play, and that this duality is not a fair target for the criticism of lacking an argumentative grammar. Rather, we suggest that DBR be examined on the basis of the coherence of arguments across the dominant argumentative grammars as researchers move along the abstraction–particularization curve (Fig. 1). The next step for DBR is not only to acknowledge but also to capitalize on this epistemological and ontological duality while considering the systemic validity of the activity. That is, it is less important that the epistemic games are narrowly played, and more important that the outcomes of the research matter and make sense both in the knowledge realm and to the people involved, leading to actions and decisions that support the consequential validity of the research. To do so, in this section we draw on two frameworks: (a) methodological alignment (Hoadley 2004) and (b) design researchers’ transformative learning (DRTL; Kali 2016).
4.1 Methodological Alignment as Means for Calibrating the Theoretical and Practical Aspects of DBR
The notion of methodological alignment is essential to our understanding of rigor and research validity. It involves the ways in which researchers connect theories to hypotheses, hypotheses to interventions, interventions to data gathering, and data gathering to interpretation and application. Fifteen years ago, Hoadley (2004) argued that we tend to overemphasize certain types of validity at the expense of others. Specifically, he argued that measurement validity is often regarded as the sole, or at least the main, indicator of rigor. That is, the effort of ensuring that the means of data collection accurately align with what is being measured dominates our view of well-designed research. DBR, he claimed—with its unique research design—affords
three other types of validity: (a) treatment validity—ensuring that the treatments we create accurately align with the theories we are examining; (b) systemic validity—ensuring that the inferences we make to prove our claims are aligned with these theories; and (c) consequential validity—ensuring that these theories are applicable to decisions based on the research. We view these three types of validity as principles for calibrating methodological moves in DBR, aiming at both theoretical and practical advancement. That is, the multiple iterations in DBR—each involving back-and-forth movements along the abstraction–particularization curve, between scientific and design modes of inquiry (Fig. 1)—afford DBR researchers multiple opportunities to reach higher degrees of treatment, systemic, and consequential validity. In this way, methodological alignment principles can help DBR achieve a unique type of rigor that traditional research methods in education may fail to afford.

At the same time, these calibration principles can address the ontological debate and assist in producing PPK. Traditional education research holds that knowledge (or what Nelson and Stolterman (2012) refer to as “the true”) lives in the abstracted generalized explanations that are typically expressed in journal articles. Traditional design holds that knowledge lives in the designed artifacts—curricula, technology-enhanced learning environments, etc. (what designers add to “the actual,” according to Nelson and Stolterman (2012)). In DBR, because knowledge has this different ontological status, it lives in neither and in both. If we follow the CSCL way of seeing knowledge as contextualized, distributed, culturally embedded, and constantly negotiated by real human beings using information communication technologies, we need to understand that PPK does not live in a research article or a designed learning environment alone.
It lives in humans who must negotiate the ontological tensions we have outlined, and this demands personal transformation.
4.2 Transforming Ourselves as a Prerequisite for Transforming Others
In the “Design Researchers’ Transformative Learning” (DRTL) framework, Kali (2016) claimed that DBR provides an especially fertile ground for transformative learning among those who conduct it. DRTL builds on Mezirow’s (1996) transformative learning theory, in which such learning is characterized as “the process of using a prior interpretation to construe a new or revised interpretation of the meaning of one’s experience in order to guide future action” (p. 162). That is, transformative learning results not so much in a learner’s recognition of new facts about the matters under study. Rather, it consists of personal “aha moments” that bring learners to reorganize their ways of looking at, thinking about, and acting on those matters. Kali (2016) claimed that in DBR, such personal “aha moments” often expose
Fig. 3 Model for calibrating epistemologies and ontologies in DBR
researchers to flaws in their earlier conceptualizations (which is one of the reasons DBR researchers tend to keep these parts of their research behind the scenes). More importantly, the transformative learning enables design researchers to develop new conceptualizations not only of how to continue their research but also of how they position themselves as actors within the situation they are exploring. In this personal-positioning aspect, DRTL differs from the three aspects of learning described by Edelson (2002) in his “What we learn when we engage in design” article—domain theories, design frameworks, and design methodologies—none of which includes the more personally experienced notion of design knowledge.

We claim that what makes DBR such a potentially fertile ground for DRTL is the methodological alignment it affords, and the careful, iterative calibration between the pursuit of advancing theory and the pursuit of advancing design, in terms of both DBR epistemology and ontology. Figure 3 illustrates DRTL as part of the model we suggest for calibrating DBR epistemologies and ontologies. We claim that the DRTL that results from following the principles of methodological alignment described above leads to what McKenney and Reeves (2012/2018) describe as a blending of analytical and creative mindsets, which is crucial in developing CSCL innovation. It is worth noting, as we exemplify in the case study below, that such iterative calibration within both our epistemologies and ontologies requires a somewhat adventurous attitude to research. It also often involves developing unconventional types of knowledge that may be difficult to judge and share through traditional forms of knowledge dissemination (e.g., academic publishing; see Kali 2016) and valuing (e.g., peer review, tenure processes, etc.).
4.3 Methodological Alignment and DRTL: A CSCL Case Study
This case focuses on a DBR study conducted in the context of a large-scale, undergraduate-level, semester-long course in biology. In addition to a quick summary of the story already told (described in detail in Sagy et al. 2019; Sagy et al. 2018; Tsaushu et al. 2012), the following sections present the story behind the scenes of this DBR study. Specifically, they illustrate how the back-and-forth movements along the abstraction–particularization curve enabled the DBR team to reach higher degrees of methodological alignment, calibrating between the two modes of inquiry, and how this eventually brought about the team’s transformative learning and the development of PPK (Fig. 3).
4.3.1 Story Already Told—Part 1: Redesigning an Undergraduate Biology Course
The motivation for this project came from the course instructors—two biology professors who had been teaching the course for many years in traditional ways. A DBR team was formed, which included the instructors, two science education researchers, and two CSCL researchers. The research was conducted by gradually intervening in the course. In each of the 3 years of the study, a more advanced stage of the intervention was enacted with a new cohort of about 300 students. All three stages involved the use of a website that the team designed to accompany the course, which was used differently at the three stages of the intervention (Sagy et al. 2018). At the first stage (and first year of the study), the course was taught as it had been taught for years, through lecturing in a large hall. The only difference was that students could use the course website to review the contents taught in lectures. At the second stage, the instructors still gave lectures, but students were required to use the website. At the third stage of the intervention, the course website replaced the lectures. In addition, the instructor served as facilitator in weekly “mini-conference” meetings, each time on a different topic of the course with a different group of about 30 students. To prepare for these, students used team websites designed for this purpose, which included content resources as well as process scaffolds for developing team knowledge artifacts to share and discuss in the “mini-conference.”
4.3.2 Untold Story: Dilemma in Research Highlighting the Need for Methodological Alignment
The DBR team’s initial assumption was that within each stage of the intervention they would be able to find relationships between students’ patterns of use of the course website and their understanding of the scientific content. They also assumed that they would find improvement in learning outcomes and attitudes toward biology
learning as the stage of the intervention became more advanced. (This represents a “scientific” mode of inquiry aspect of this DBR endeavor—going up the abstraction–particularization curve.) However, following design, enactment, and data analysis (representing a “design,” or particularization, mode of inquiry—going down the curve), both assumptions were refuted. That is, no meaningful or interesting findings emerged from what seemed like straightforward means of analysis (e.g., comparing students’ achievements on the course test between iterations, and seeking relationships between students’ use of the website and their learning outcomes within each iteration using learning analytics techniques). While interview data seemed to hint at deeper learning as the intervention advanced, the processes that supported student learning (mediating processes, in Sandoval’s 2014 terminology) were not clear, nor were the design features supporting them. Eventually, further back-and-forth movements along the abstraction–particularization curve enabled identification of a gap between the values that guided students in their learning process and the instructors’ perceptions of these values (Sagy et al. 2019).
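The kind of straightforward analysis the team first attempted can be sketched in a few lines. The sketch below is entirely illustrative: the per-student log counts, test scores, and the plain-Python pearson_r helper are our own inventions for exposition, not the study’s actual data or analysis pipeline; it merely shows what “seeking relationships between website use and learning outcomes” looks like at its simplest.

```python
import statistics

# Illustrative sketch only: does students' volume of website use relate
# to their test scores? Data and variable names are invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

page_views = [12, 45, 30, 8, 60, 25]    # hypothetical per-student log counts
test_scores = [71, 78, 90, 65, 74, 82]  # hypothetical course test scores

r = pearson_r(page_views, test_scores)
print(f"r = {r:.2f}")  # → r = 0.31 (a weak association, like the null result above)
```

In the study, this style of analysis yielded nothing meaningful, which is precisely what pushed the team back up the curve toward a new conceptual lens.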
4.3.3 Story Already Told—Part 2: The Culture of Learning Continuum as a Conceptual Lens
This new lens, which the DBR team called “the culture of learning continuum (CLC)” (Sagy et al. 2018), indicated that students who learned in more advanced versions of the course referred to course features with higher degrees of what was described in the CLC as internal values. Specifically, students were more likely to seek personal growth, appreciate the formative nature of assessment, make efforts to learn (and not only succeed in the test), negotiate meaning with peers (rather than seek the “right” answer for the test), and take ownership of their own learning process.
4.3.4 Retrospective Analysis of Relationships Between Methodological Alignment, DRTL, and PPK
Retrospectively, the difficulty of explaining the intervention outcomes in terms of mediating processes at preliminary stages of the project eventually improved the team’s methodological alignment. Changing what was measured (culture of learning instead of students’ patterns of use of the website) and how it was measured (measurement validity) transformed the DBR researchers’ conception of the intervention. That is, they developed a renewed understanding of what the intervention represented from a theoretical point of view (treatment validity). As a result, they developed a renewed view of their role as researchers and designers within the study. They took on a role that focused more on exploration within the
unknown—being open to “building the plane while flying it,” that is, discovering the means of analysis while conducting the research, which required the blending of analytical and creative mindsets (McKenney and Reeves 2012/2018). But there was also a shift in the ontological work being conducted: as designers, they understood that their role was to develop PPK in the form of not only the course website with its various digital resources but also the social activity structures that could support them and the cultural lens for explaining the rationale behind them (the principled aspect of the practical tools). These turned out to be crucial for continued implementation after the research was over (evidence exists that the instructors continued to implement the advanced versions of the course for many years). This long-lasting effect was made possible by the transformative learning of the instructors, who were part of the research team and adopted into their professional identity a role of cultivating a culture of learning (Tsaushu et al. 2012).
4.4 Concluding Remark
The literal meaning of DBR (design-based research) is that we nudge both the epistemology and the ontology to follow scientific as well as design modes of inquiry and knowledge outcomes. This unusual property of the ontology, as reflected in PPK, calls for an unusual epistemology. At the same time, the dual epistemic game of advancing theory while advancing design helps hold the knowledge accountable. The eclecticism of DBR relates to the many ways we can intertwine scientific and design modes of inquiry, moving back and forth along the abstraction–particularization curve (Fig. 1). What unifies these activities is moving toward increased coherence, and therefore systemic validity. Over 15 years ago, Hoadley (2004) noted that “the promise of having better alignment in research—certain and sure links from theories to hypotheses to interventions to data gathering activities to interpretation and application—should be a strong incentive to continue to pursue the design-based research approach” (p. 211). The model we suggest for calibrating DBR epistemologies and ontologies (Fig. 3) can assist in capitalizing on the dual epistemic and ontological game inherent to DBR, to spur creativity and innovation in rigorous research.

Thus, we claim that DBR, while accommodating multiple epistemic games, is not simply a laundry list of ways to make knowledge. Rather, our flexibility in DBR’s epistemic games should be driven by, and accountable to, calibration between these games. In particular, we believe that the inherently embedded and contextualized nature of CSCL, as well as its design orientation, demands a set of knowledge activities that use the treatment, systemic, and consequential validity of research as the principles for moving between different epistemic framings and, indeed, different knowledge ontologies.
By doing so, we transform not only the types of knowledge produced but also the knowers themselves, reshaping the role and perspective of students, teachers, and DBR researchers.
References

Bakker, A. (2018). Design research in education: A practical guide for early career researchers. Routledge.
Bell, P. (2004). On the theoretical breadth of design-based research in education. Educational Psychologist, 39(4), 243–253.
Bereiter, C. (2014). Principled practical knowledge: Not a bridge but a ladder. Journal of the Learning Sciences, 23(1), 4–17.
Bereiter, C. (2015). The practicality of principled practical knowledge: A response to Janssen, Westbroek, and Doyle. Journal of the Learning Sciences, 24(1), 187–192.
Bhaskar, R. (1975). A realist theory of science. Leeds Books.
Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141–178.
Chi, M. T. H. (1992). Conceptual change within and across ontological categories: Examples from learning and discovery in science. In R. Giere (Ed.), Cognitive models of science: Minnesota studies in the philosophy of science (pp. 129–186). University of Minnesota Press.
Cobb, P., & Gravemeijer, K. (2008). Experimenting to support and understand learning processes. In A. E. Kelly, R. A. Lesh, & J. Y. Baek (Eds.), Handbook of design research methods in education (pp. 68–95). Routledge.
Collins, A. (1990). Toward a design science of education. Center for Technology in Education.
Collins, A. (1992). Toward a design science of education. In New directions in educational technology (pp. 15–22). Springer.
Dede, C. (2004). If design-based research is the answer, what is the question? A commentary on Collins, Joseph, and Bielaczyc; diSessa and Cobb; and Fishman, Marx, Blumenthal, Krajcik, and Soloway in the JLS special issue on design-based research. The Journal of the Learning Sciences, 13(1), 105–114.
Desforges, C. W. (2000). Familiar challenges and new approaches: Necessary advances in theory and methods in research on teaching and learning. Retrieved February 1, 2019, from https://web.archive.org/web/20180624013426/http://www.leeds.ac.uk/educol/documents/00001535.htm
Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
diSessa, A. A. (1991). Local sciences: Viewing the design of human-computer systems as cognitive science. In J. M. Carroll (Ed.), Designing interaction (pp. 162–202). Cambridge University Press.
Edelson, D. C. (2002). Design research: What we learn when we engage in design. The Journal of the Learning Sciences, 11(1), 105–121.
Fishman, B. J., Penuel, W. R., Allen, A. R., Cheng, B. H., & Sabelli, N. (2013). Design-based implementation research: An emerging model for transforming the relationship of research and practice. National Society for the Study of Education, 112(2), 136–156.
Hoadley, C. (2002). Creating context: Design-based research in creating and understanding CSCL. In G. Stahl (Ed.), Computer support for collaborative learning 2002 (pp. 453–462). Erlbaum.
Hoadley, C. (2004). Methodological alignment in design-based research. Educational Psychologist, 39(4), 203–212.
Janssen, F., Westbroek, H., & Doyle, W. (2015). Practicality studies: How to move from what works in principle to what works in practice. Journal of the Learning Sciences, 24(1), 176–186.
Kali, Y. (2006). Collaborative knowledge-building using the design principles database. International Journal of Computer-Supported Collaborative Learning, 1(2), 187–201.
Kali, Y. (2008). The design principles database as means for promoting design-based research. In A. E. Kelly, R. A. Lesh, & J. Y. Baek (Eds.), Handbook of design research methods in education: Innovations in science, technology, engineering, and mathematics learning and teaching (pp. 423–438). Erlbaum.
Kali, Y. (2016). Transformative learning in design research: The story behind the scenes. Keynote presented at the International Conference of the Learning Sciences, Singapore.
Kali, Y., Eylon, B.-S., McKenney, S., & Kidron, A. (2018). Design-centric research-practice partnerships: Three key lenses for building productive bridges between theory and practice. In M. Spector, B. Lockee, & M. Childress (Eds.), Learning, design, and technology: An international compendium of theory, research, practice, and policy. Springer. https://doi.org/10.1007/978-3-319-17727-4_122-1
Kali, Y., Levin-Peled, R., & Dori, Y. J. (2009). The role of design-principles in designing courses that promote collaborative learning in higher-education. Computers in Human Behavior, 25(5), 1067–1078.
Kaptelinin, V., & Cole, M. (2002). Individual and collective activities in educational computer game playing. In T. Kosmann, R. Hall, & N. Miyake (Eds.), CSCL (Vol. 2, pp. 303–316). LEA.
Kelly, A. E. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences, 13(1), 115–128.
Koschmann, T. (1999). Computer support for collaboration and learning. Journal of the Learning Sciences, 8(3–4), 495–497. https://doi.org/10.1080/10508406.1999.9672077
Laurel, B. (2013). Computers as theatre. Addison-Wesley.
Law, N., Niederhauser, D. S., Christensen, R., & Shear, L. (2016). A multilevel system of quality technology-enhanced learning and teaching indicators. Educational Technology & Society, 19(3), 72–83.
Markauskaite, L., & Goodyear, P. (2017). Epistemic fluency and professional education: Innovation, knowledgeable action and actionable knowledge. Springer.
McKenney, S., & Reeves, T. C. (2012/2018). Conducting educational design research. Routledge.
Mezirow, J. (1996). Contemporary paradigms of learning. Adult Education Quarterly, 46(3), 158–172.
Nelson, H. G., & Stolterman, E. (2012). The design way: Intentional change in an unpredictable world (2nd ed.). MIT Press.
O’Neill, D. K. (2012). Designs that fly: What the history of aeronautics tells us about the future of design-based research in education. International Journal of Research and Method in Education, 35(2), 119–140. https://doi.org/10.1080/1743727x.2012.683573
Paavola, S., Lipponen, L., & Hakkarainen, K. (2004). Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 74(4), 557–576.
Perkins, D. N. (1997). Epistemic games. International Journal of Educational Research, 27(1), 49–61.
Reeves, T. C. (2005). Design-based research in educational technology: Progress made, challenges remain. Educational Technology, 45(1), 48–52.
Sagy, O., Hod, Y., & Kali, Y. (2019). Teaching and learning cultures in higher education: A mismatch in conceptions. Higher Education Research & Development, 38(4), 849–863.
Sagy, O., Kali, Y., Tsaushu, M., & Tal, T. (2018). The culture of learning continuum: Promoting internal values in higher education. Studies in Higher Education, 43(3), 416–436.
Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36.
Simon, H. A. (1969). The sciences of the artificial. MIT Press.
Slotta, J. D. (2011). In defense of Chi’s ontological incompatibility hypothesis. Journal of the Learning Sciences, 20(1), 151–162.
Smit, J., van Eerde, H. A. A., & Bakker, A. (2013). A conceptualisation of whole-class scaffolding. British Educational Research Journal, 39(5), 817–834.
Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning: An historical perspective. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (pp. 409–426). Cambridge University Press.
Tabak, I. (2004). Reconstructing context: Negotiating the tension between exogenous and endogenous educational design. Educational Psychologist, 39(4), 225–233.
Y. Kali and C. Hoadley
Toulmin, S. E. (1958). The uses of argument. Cambridge University Press.
Tsaushu, M., Tal, T., Sagy, O., Kali, Y., Gepstein, S., & Zilberstein, D. (2012). Peer learning and support of technology in an undergraduate biology course to enhance deep learning. CBE—Life Sciences Education, 11(4), 402–412.
Further Readings

Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5–8.
This paper, published in a special issue of Educational Researcher (the first special issue published on DBR), is used in the current chapter to characterize DBR, as it encapsulates what critics find challenging about DBR, which our model for calibrating epistemologies and ontologies addresses.

Hoadley, C. (2004). Methodological alignment in design-based research. Educational Psychologist, 39(4), 203–212.
This paper provides a detailed explanation of the notion of methodological alignment, which is one of the two components (the other being DRTL) in our model for calibrating DBR epistemologies and ontologies.

Kelly, A. E. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences, 13(1), 115–128.
The critique in this paper, concerning a missing argumentative grammar in DBR, has provoked an ongoing debate, as well as various approaches for enhancing rigor in DBR. It is a good starting point for researchers who are already conducting DBR and are required to convince reviewers of the rigor of their work: to show that yes, it can be methodological!

McKenney, S., & Reeves, T. C. (2012/2018). Conducting educational design research. Routledge.
This book provides a generic model for conducting DBR and explains in detail its main elements: analysis and exploration; design and construction; evaluation and reflection; and implementation and spread. The book also offers guidance for proposing, reporting, and advancing DBR, and is recommended especially for graduate students, as well as experienced researchers who are new to this approach.

Sagy, O., Kali, Y., Tsaushu, M., & Tal, T. (2018). The culture of learning continuum: Promoting internal values in higher education. Studies in Higher Education, 43(3), 416–436.
This DBR study is the case we use in our chapter to illustrate the “behind the scenes” DRTL processes. The study also illustrates the use of Sandoval’s (2014) conjecture mapping in DBR. We claim that such mapping highlights the tension within both epistemic and ontological games within the abstraction-particularization curve.
Experimental and Quasi-Experimental Research in CSCL

Jeroen Janssen and Ingo Kollar
Abstract

(Quasi-)experimental designs play an important role in CSCL research. By actively manipulating one or several independent variables while keeping other influencing factors constant, and through the use of randomization, they allow researchers to determine the causal effects of such independent variables on one or more dependent variables of interest to CSCL researchers. So far, (quasi-)experimental CSCL studies have mainly examined the effects of certain tools and scaffolds on the occurrence of desired learning process and outcome variables. While earlier CSCL research largely ignored the interdependence of data from learners who learned in the same group, more recent research uses more advanced statistical methods (such as multilevel modeling) to analyze the effects of different CSCL settings on learning processes and outcomes. In the wake of the replication crisis in psychology, preregistration and the open science movement are also becoming increasingly important for CSCL research that uses (quasi-)experimental designs.

Keywords Computer-supported collaborative learning · Experimental research · Quasi-experimental research · Reproducibility · Open science
J. Janssen (*)
Department of Education, Utrecht University, Utrecht, The Netherlands
e-mail: [email protected]

I. Kollar
Educational Psychology, University of Augsburg, Augsburg, Germany
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_27

1 Definitions and Scope

Due to its interdisciplinary nature, research on computer-supported collaborative learning (CSCL) has always used a variety of different research designs to investigate its questions. In their synthesis of CSCL research, Jeong et al. (2014) differentiated between (a) descriptive studies, (b) design-based studies, and (c) studies that
use experimental designs. This chapter aims at providing an overview of studies that fall into this last category. To this end, we first define (quasi-)experimental designs and describe their purpose in empirical research. We then give an overview of topics pursued by CSCL research using (quasi-)experimental designs, describing (a) the kinds of independent variables and (b) the kinds of dependent variables CSCL research has shown particular interest in. After that, we describe the state of the art of CSCL research that builds on (quasi-)experimental designs, explaining recent advances in how data from such studies are analyzed. Finally, looking into the future, we argue that in the wake of the so-called replication crisis in psychology, topics such as replication, open science, and preregistration of empirical studies are likely to become increasingly relevant for (quasi-)experimental research on CSCL as well.

To understand how (quasi-)experimental research works and what its merits are, several key concepts are essential: cause, effect, and causal relationship (Shadish et al. 2002). Furthermore, concepts such as independent variables, dependent variables, and random assignment are inherent to experimental research designs (Creswell 2008). Causes are conditions or events that produce an effect. In CSCL research, for example, support offered to students in the form of a collaboration script may lead to the effect that the group collaborates effectively and efficiently. Collaboration scripts are scaffolds offered to collaborating students (e.g., verbally, on paper, or through computer support) to guide and support them during their interaction (Fischer et al. 2013; Vogel et al. 2017). In experimental research, the researcher tries to control a hypothesized cause by manipulating an independent variable.
In our example of the effect of a collaboration script on the effectiveness and efficiency of collaboration, the researcher will manipulate the availability of the collaboration script: Some groups will work with the script and others will not. The availability of the collaboration script is then the independent variable. It should be noted that a study may also address more than one cause and thus investigate multiple independent variables. Effects are what happens as a result of a certain cause. In experiments, researchers try to establish the effects of a cause or intervention by measuring one or more dependent variables. In our example on the effects of a collaboration script, the researcher could measure the effectiveness of the collaboration by administering a knowledge posttest. Dependent variables can be measured using different types of instruments, such as knowledge tests, questionnaires, or observation schedules. The ultimate goal of experiments is to establish whether the hypothesized cause and effect(s) are related. That is, the researcher's aim is to establish whether a causal relationship exists between cause and effect. In our example study, the causal relationship is investigated by manipulating the availability of the collaboration script and observing the dependent or outcome variables afterward. First, this feature of experiments helps to establish whether the cause preceded the effect in time. Second, manipulating the independent variable helps to establish whether variation in the cause yields different effects. To determine the existence of a causal
relationship, a researcher needs to determine not only what happens when a cause is present but also what happens in its absence. To determine the effect of the collaboration script, the researcher needs to show both what happens when the collaboration script is available and what happens when it is not. Finally, establishing a causal relationship requires that researchers take measures to reduce the plausibility of other explanations for the obtained effect. The most common way to rule out possible alternative explanations is random assignment to conditions. In our example, the researcher will randomly assign each individual student either to the experimental condition, in which they have access to the collaboration script, or to the control condition, in which they do not. Random assignment helps to rule out alternative explanations for an obtained effect because it creates a situation that allows the researcher to be confident that differences between experimental groups with respect to irrelevant extraneous variables are negligible. In our example, students' prior knowledge is likely to also have an impact on students' performance on the knowledge posttest. By using random assignment, the researcher is able to equate the two groups with respect to the level of prior knowledge and other possibly important variables. Whether random assignment succeeds at creating groups that are equal with respect to extraneous variables will depend on the sample size of the study (cf. the law of large numbers). If—after using random assignment with a large enough sample—the researcher then finds that students who worked with the collaboration script outperformed students who did not, it is highly unlikely that this difference can be attributed to a difference in prior knowledge between the two conditions.
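As a minimal sketch of this step (our own illustration, not from the chapter; the function name and condition labels are hypothetical), random assignment can be implemented by shuffling the list of students and dealing it out round-robin over the conditions:

```python
import random

def randomly_assign(units, conditions, seed=None):
    """Shuffle the units and deal them round-robin over the conditions,
    so that condition sizes differ by at most one unit."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    return {unit: conditions[i % len(conditions)] for i, unit in enumerate(shuffled)}

# 60 students, randomly split between a script and a no-script condition
assignment = randomly_assign(range(60), ["script", "no_script"], seed=1)
```

The same function can be applied at the group level by passing group identifiers instead of student identifiers, which makes explicit that the number of independently assigned units then shrinks to the number of groups.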
The influence of extraneous variables can, however, never be ruled out completely; bias or error due to differences between groups will always occur despite random assignment. Nevertheless, random assignment—assuming a large enough sample size—enhances the probability that the effects of prior differences between experimental conditions are negligible. We should note that in CSCL studies, randomization requires additional decisions compared to research that focuses solely on individual students. In CSCL experiments, the researcher needs to decide who is randomly assigned to conditions: the group or the individual? In some cases, individual group members may be assigned to different conditions (e.g., when group members are randomly assigned different roles). In most cases, however, groups will be assigned to conditions instead of individuals (e.g., some groups will randomly be given the collaboration script, others will not). In the latter case, the group in essence becomes a bottleneck when one wants to rule out the possibility that experimental conditions were incomparable before the start of the experiment: There are fewer units to assign randomly to a condition when groups, rather than individuals, are assigned to conditions. As we will see later on, this same problem arises when researchers want to make informed decisions about the statistical power of their studies. The absence of random assignment is what distinguishes quasi-experiments from randomized experiments (Shadish et al. 2002). Like randomized experiments, quasi-experiments aim to establish cause-and-effect relationships, and thus they employ treatment and control groups, administer posttests, and so on. However, in quasi-experiments, the researcher is unable to randomly assign participants to experimental
conditions. In our example, the researcher might be unable to use random assignment because she has access to one classroom of students who already work with the collaboration script and another classroom of students who do not. Because quasi-experiments lack random assignment, it is more difficult to rule out alternative explanations for the findings. Quasi-experiments may thus compromise the internal validity of the study. Threats to internal validity make it difficult for researchers to be certain about a causal relationship: Other factors might exist that explain why an effect was found. A comprehensive discussion of all the possible threats to internal validity that quasi-experiments might suffer from is beyond the scope of this chapter. However, we point out that in our example, selection might be a threat to the internal validity of the study: Because the researcher had no control over the assignment to conditions, the treatment group may have had higher prior knowledge than the control group, introducing a selection bias that offers an alternative explanation for the obtained effects. To establish group equivalence prior to the start of the quasi-experiment, the researcher could administer a pretest to assess students' prior knowledge (see Cook and Campbell (1979) and Shadish et al. (2002) for a further description of measures researchers can take to minimize threats to internal validity when using quasi-experimental designs). Finally, we should note that for all their merits, experimental designs also have their problems. The most frequently noted problem concerns threats to external validity: The outcomes of experiments may not always generalize to participants or contexts outside the experimental design. For example, Henrich et al. (2010) noted that most samples in psychological research can be characterized by the acronym WEIRD: Western, Educated, Industrialized, Rich, and Democratic.
When researchers use such a sample for their experiment, this might threaten their ability to generalize their findings to, for example, educational contexts that one might encounter in Asian countries or to students who come from less affluent backgrounds. Another drawback of experimental designs is that they might be considered reductionist. For example, because researchers try to establish the effects of an intervention using quantitative measures, they might fail to capture the richness and complexity of human collaboration and learning. Related to this, Shadish et al. (2002) noted that experimental designs are helpful for describing the effects of a treatment or intervention; they are, however, less able to offer explanations for the mechanisms through which those effects occur. In our example, the researcher might be able to demonstrate the effect of offering collaboration scripts to students, but it remains difficult to explain why that effect occurs (unless the researcher decided to measure further process variables that might explain possible effects of the intervention on the learning outcome; see the section on dependent variables). However, in spite of their problems, experimental designs—when conducted well—provide the most compelling evidence for cause–effect relations.
2 History and Development

Studies using (quasi-)experimental designs have always represented an important part of the methodological portfolio of CSCL research (see Stahl et al. 2006). At the very first CSCL conference, for example, Inkpen et al. (1995) reported an empirical study in which 104 elementary school children played a computer game and were asked to solve a number of problem-solving tasks. Participants were randomly assigned to one of three conditions: In the first condition, students played the game individually (Solo Play). In the second condition (Parallel Play), students were randomly assigned to dyads and were seated next to each other at two computers while playing the game. Students in the third condition were asked to play the game in dyads together in front of one computer (Integrated Play). Furthermore, the authors looked at the extent to which gender had an effect on the number of problem-solving tasks that were solved in the computer game. The results showed that both males and females solved more tasks in the Integrated Play condition than in the Solo Play condition (at least descriptively). In the Parallel Play condition, however, success depended on gender: Girls solved significantly fewer, and boys significantly more, tasks in this condition than in the other two conditions. In Jeong et al.'s (2014) study, 37% of the empirical studies conducted in the field followed an experimental approach. Since 2014, this picture does not appear to have changed much: When examining the publications of the community's own journal, the International Journal of Computer-Supported Collaborative Learning, from 2015 to 2018, still about one-third of all published articles used (quasi-)experimental designs (e.g., Cesareni et al. 2016; Cuendet et al. 2015; Erkens et al. 2016; Harney et al. 2017; Tegos et al. 2016).
Studies following a (quasi-)experimental approach have also been published in other high-ranking journals such as Learning and Instruction (e.g., Kollar et al. 2014), Computers in Human Behavior (e.g., Wang et al. 2017), Computers & Education (e.g., Yilmaz and Yilmaz 2019), or Instructional Science (e.g., Mende et al. 2017). In the following, we first give an overview of the kinds of independent variables that CSCL research has shown a particular interest in. We then look at the kinds of dependent variables that are often covered in that research (see Table 1).

Table 1 Overview of independent and dependent variables in CSCL research

Independent variables:
- Collaborative learning setting vs. individual learning setting
- Computer use vs. no computer use
- Support by tools and scaffolds vs. no support by tools and scaffolds

Dependent variables:
- Process variables: individual or collaborative
- Outcome variables: individual or collaborative

Throughout the descriptions, we present the actual (quasi-)experimental designs that were used in exemplary empirical studies, along with their effects. Wherever possible, we also
report results from meta-analyses that summarize the effects from multiple (quasi-)experimental studies.
2.1 Independent Variables
According to Chen et al. (2018), much (quasi-)experimental CSCL research falls into one of three categories:
2.1.1 Studies on the Effects of Collaboration Versus Individual Learning in Computer-Supported Learning Settings
An example of studies in this category is the study by Kolloffel et al. (2011). They compared learners who worked either individually or in dyads with respect to the knowledge they gained through interacting with a simulation-based inquiry learning environment. They found that collaborative learners reached significantly higher knowledge gains than individual learners. This result is in line with the findings of the meta-analysis by Chen et al. (2018), which yielded a significant positive effect of collaborative versus individual learning in computer-based settings on knowledge achievement (g = 0.42), skill acquisition (g = 0.64), and perceptions (g = 0.38). Yet, across studies, there was quite a large variation in effect sizes, which points to the existence of possible moderator variables that explain this variability. For example, effects from quasi-experimental studies were larger than effects obtained in randomized experimental studies. Another moderator variable seems to be the kind of guidance that students receive while learning. In accordance with this, Weinberger et al. (2010) showed that groups outperformed individuals on a knowledge test only when the groups had previously been supported by a collaboration script.
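The g values reported here are Hedges' g, a standardized mean difference with a small-sample correction. As a rough sketch of how such an effect size is computed (our own illustration with invented numbers, not data from the studies cited):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups, with Hedges'
    small-sample correction applied to Cohen's d."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Approximate small-sample correction factor J
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Hypothetical posttest scores: collaborative vs. individual learners
g = hedges_g(mean1=7.4, sd1=2.0, n1=30, mean2=6.5, sd2=2.1, n2=30)
```

With these made-up values, g comes out a little above 0.4, i.e., in the range of the meta-analytic effects discussed above.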
2.1.2 Studies on the Effects of (Different Kinds of) Computers on Learning
Studies that fall into this category compare conditions in which learners worked with the aid of different kinds of computers (handhelds, desktops, etc.) to conditions in which learners learned the same content without computers. An example is the study by Brom et al. (2016). In their study, 325 students were randomly assigned to one of three conditions: (1) playing a social role-playing game with competitive elements on computers, (2) playing a similar game, but without computers, or (3) participating in a nongame workshop on the topic (control condition). Results showed that both gaming conditions were superior to the control condition with respect to evoking positive affect, flow, and knowledge acquisition. Yet, the two game conditions (with vs. without computers) did not differ with respect to any of
these variables. The meta-analysis by Chen et al. (2018), however, found positive effects of learning with computers versus learning without computers with respect to several dependent variables (knowledge achievement: g = 0.45; skill acquisition: g = 0.53; group task performance: g = 0.89; social interaction: g = 0.57). Again, some effect sizes showed very high variance. Moderator analyses showed that effect sizes were larger in quasi-experimental (compared to experimental) studies and in studies using smaller (N < 100, compared to larger) sample sizes. Several explanations can be offered for why methodological characteristics of CSCL studies may affect the magnitude of (quasi-)experimental effects. For example, small studies that yield nonsignificant effects are unlikely to be published in peer-reviewed journals, because reviewers will judge the nonsignificance of the results to be an artifact of the small sample size (Slavin and Smith 2009).
2.1.3 Studies on the Effects of Learning Environments and Scaffolds in CSCL
Studies of this kind compare the effects of different kinds of learning environments, tools, or scaffolds (or their presence vs. absence) on learning processes and outcomes. For example, Bause et al. (2018) randomly assigned 219 students to groups of three and had them solve a hidden profile task. Hidden profile tasks (Stasser and Titus 1985) are tasks in which a group has to come to a decision (e.g., finding the best-qualified candidate for an open position in a company) with the aid of information items that are partially shared (i.e., given to all group members) and partially unshared within the group (i.e., only single group members have access to them). To support groups in their decision-making process, Bause et al. (2018) gave all triads the opportunity to work at a multi-touch table. They compared two conditions: In the control condition, each of the three group members had a private workspace in front of him or her (represented on the multi-touch table) that contained all the information pieces given to him or her. In the experimental condition, groups additionally had a joint workspace into which they could move information items from their private workspaces and cluster and merge them. Results showed that groups in the experimental condition showed greater discussion intensity, more indicators of mutual understanding, and better decision performance than groups in the control condition. This result is in line with the findings of the meta-analysis by Chen et al. (2018), who found positive effects of purposefully designed learning environments, tools, and strategies (such as collaboration scripts, e.g., Schwaighofer et al. 2017, or group awareness tools, e.g., Janssen and Bodemer 2013), as compared to their absence, on knowledge achievement (g = 0.55), skill acquisition (g = 0.79), perceptions (g = 0.32), group task performance (g = 0.66), and social interaction (g = 0.40).
Moderator analyses showed that effects were dependent on the kinds of environments, tools, and scaffolds that were used. For example, regarding skill acquisition, graphs and multimedia environments seemed to be much more effective than online discussions.
Thus, all in all, it seems that CSCL research has been and still is interested in a wide variety of independent variables, ranging from the mere use of computer support (vs. no computer support) for collaborative learning to very specific tools and scaffolds.
2.2 Dependent Variables: Process Versus Outcome Variables
As described above, variables that are assumed to be affected by some kind of experimental variation (i.e., one or several independent variables) are called dependent variables. In CSCL research, such dependent variables may vary with respect to at least two dimensions: first, whether they refer to some kind of (learning) process (e.g., variables regarding the quality of argumentation during scripted collaboration) or to some kind of learning outcome (e.g., learners' performance in a subsequent knowledge test); and second, whether they refer to an individual-level variable or a group-level variable.
2.2.1 Dependent Variables at the Individual Process Level
Dependent variables of this kind may refer to very different kinds of processes that individual learners engage in or experience during CSCL. Very often, studies look at the kinds of contributions that individuals make to the discourse within the group. For example, Harney et al. (2015) investigated the effects of different kinds of prompts (task-level vs. task-plus-process-level prompts) on various indicators of individuals’ argumentation quality that were assessed through content analysis of verbal protocols. They found that students in the process-level prompts condition showed significantly higher values on 3 out of 12 measures related to argumentation quality. Even though most CSCL studies that use individual-level process variables seem to focus on different aspects or types of discourse contributions, there are also studies that look at other variables. An example is the aforementioned study by Brom et al. (2016) which investigated the effects of different kinds of games on motivational–affective process variables such as flow and positive affect learners experienced during learning. They found significantly higher levels of positive affect and flow in the two gaming conditions than in the regular classroom condition.
2.2.2 Dependent Variables at the Collaborative Process Level
There are often good theoretical reasons to look specifically at dependent variables at the collaborative rather than at the individual process level. For example, research on Knowledge Building (Scardamalia and Bereiter 2006) is essentially concerned with how to support learning communities (and not individuals, at least not in the first place) in their joint knowledge construction processes. For instance,
Resendes et al. (2015) investigated the effects of different kinds of scaffolds implemented in a Knowledge Forum environment (compared to a Knowledge Forum environment without these scaffolds) on the number of notes and the number of academic words used in posts by the communities that worked with these different versions. They found that the scaffolded version led to higher interpersonal connectedness of the discourse than was observed in the less scaffolded Knowledge Forum environment.
2.2.3 Dependent Variables at the Individual Outcome Level
Quite often, CSCL studies are interested in the effects of some independent variable on the amount or quality of knowledge or (collaboration) skills that individuals acquire through computer-supported collaboration. For example, Lin et al. (2015) were interested in the effects of two kinds of awareness tools (knowledge-context awareness vs. social-context awareness) on participants' individual performance on a range of knowledge tests. They found that social-context awareness information led to greater knowledge gains than knowledge-context awareness information, at least on one of the knowledge tests they used. While most CSCL studies clearly focus on cognitive (i.e., knowledge- and skill-related) variables at the individual outcome level, a few studies also look at motivational–affective outcome variables. An example is a study by Serrano-Cámara et al. (2014). They found intrinsic motivation (as measured with a Likert-type questionnaire) to be higher in a CSCL scenario supported by a specifically designed tool than in an unstructured collaborative learning condition.
2.2.4 Dependent Variables at the Collaborative Outcome Level
Some CSCL studies include dependent variables at the collaborative outcome level. A construct that falls into this category, for example, is knowledge convergence (see Jeong and Chi 2007). A study that referred to this concept and developed a measure for it in a CSCL context comes from Fischer and Mandl (2005): They investigated the effects of different kinds of external representations (content-specific vs. content-independent) and of different collaboration settings (video-conferencing vs. face-to-face collaboration) on the extent to which knowledge (as measured in an individual posttest) was shared or unshared among group members. Although there was convergence on various process variables, neither the kind of external representation nor the collaboration setting exerted significant effects on knowledge convergence. In sum, we can conclude that CSCL research uses a wide variety of dependent variables at different levels (individual and group) and of different kinds (process and outcome). While most research seems to be interested in (socio-)cognitive process and outcome variables, a few studies also investigate the effects of CSCL on affective–motivational processes and outcomes.
3 State of the Art

In this part of the chapter, we describe recent developments with respect to (quasi-)experimental research in the field of CSCL that we consider to be state of the art: multilevel analysis, adequately powered studies, and mixed methods. This list is by no means exhaustive, as the field of CSCL research is highly dynamic and continually adapts to new developments and practices.
3.1 Multilevel Analysis
When conducting a CSCL experiment, a researcher may encounter several problems (Janssen et al. 2013). We will illustrate these problems by referring to our example of the researcher investigating the effect of a collaboration script. First, as in any CSCL study, the students in her study work together on a problem in small groups. This creates a hierarchically nested data set: Individuals are nested within groups. CSCL studies therefore usually have two levels of analysis, the individual at the lowest level and the group at the highest level, although in some cases more levels may be present (cf. De Wever et al. 2007; Schellens et al. 2005). Second, because the students in her study work in groups, the observations of the dependent variable she uses, a knowledge posttest, are likely nonindependent (Cress 2008; Kenny et al. 2006): The mutual experience of collaboration likely affects group members' scores on the posttest (i.e., they will be correlated to some extent). Third, the researcher in our example comes across variables that have different units of analysis. For the dependent variable, the unit of analysis is the individual: Each individual group member receives a score on the knowledge posttest. For the independent variable, however, the group is the unit of analysis: Some groups are given the collaboration script, others are not. To address these issues, CSCL researchers have acknowledged that traditional ways of analyzing data (e.g., t-tests, ANOVAs) are inadequate and that multilevel analysis is better suited to deal with these problems. An extensive discussion of the application of multilevel analysis is beyond the scope of this chapter, but Cress (2008) and Janssen et al. (2013) provide an introduction to the theory and application of multilevel analysis for CSCL research. Specialized works explaining multilevel analysis in depth are also available (cf. Hox 2003; Snijders and Bosker 1999).
Specialized software programs exist that can be used for multilevel analysis (e.g., HLM7), but common software programs, such as SPSS or R, can also be used by CSCL researchers who want to apply multilevel analysis. Textbooks such as Field (2018) and Field et al. (2012) explain how data may be analyzed using SPSS and R.
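To make the nonindependence problem concrete, the sketch below simulates posttest scores for students nested in groups and estimates the intraclass correlation (ICC) with the classic one-way ANOVA estimator. All numbers (50 groups of 4 students, equal group- and individual-level variances) are illustrative assumptions, not values from the chapter.

```python
import random

# Illustrative simulation (assumed numbers, not data from the chapter):
# 50 groups of 4 students; a shared group effect makes group members'
# posttest scores correlated, i.e., nonindependent.
random.seed(1)
k, n = 50, 4                                   # number of groups, group size
groups = []
for _ in range(k):
    group_effect = random.gauss(0, 2.0)        # group-level variance = 4
    groups.append([10 + group_effect + random.gauss(0, 2.0)  # individual variance = 4
                   for _ in range(n)])

# One-way ANOVA estimator of the intraclass correlation, ICC(1).
grand = sum(s for g in groups for s in g) / (k * n)
means = [sum(g) / n for g in groups]
ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
ms_within = sum((s - m) ** 2 for g, m in zip(groups, means) for s in g) / (k * (n - 1))
icc = (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
print(round(icc, 2))   # an estimate of the true ICC of 4 / (4 + 4) = .5
```

An ICC clearly above zero signals that group membership matters: a t-test that treats the 200 students as independent observations would understate the standard errors, which is exactly the problem multilevel analysis addresses.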
Experimental and Quasi-Experimental Research in CSCL
3.2 Considering Statistical Power When Preparing CSCL Experiments
Let’s return to our researcher investigating the effects of a collaboration script on the effectiveness of collaboration. When planning her study, she will need to determine an appropriate sample size. One way to assess which sample size is adequate for a study is to perform a power analysis (Cohen 1988). Performing a power analysis prior to conducting a study informs the researcher about the probability with which a statistically significant effect can be obtained. The probability of committing a Type II error (i.e., concluding there is no effect when in fact there is one) is related to statistical power: when a study is adequately powered, the likelihood that a true effect goes undetected decreases. It is therefore important that researchers pay careful attention to the statistical power of their study when preparing their experiment. In recent years, several researchers have warned about the possible detrimental effects of performing underpowered studies (e.g., Ioannidis 2005; Lakens 2014). In order to perform a power analysis, the researcher needs to determine (a) the alpha level, (b) the desired statistical power, and (c) the expected effect size. By convention, the alpha level is usually set at 0.05. The researcher furthermore specifies that her desired statistical power needs to be at least 80%. In other words, she wants an 80% chance of detecting an effect when there truly is one. Finally, the researcher needs to make an informed decision about the expected effect size. To do so, she consults the meta-analysis by Chen et al. (2018) and learns that the mean effect of CSCL strategies that provide additional guidance on knowledge development is d = +0.41. She therefore assumes that she may expect an effect size of d = +0.41 for her collaboration script.
She then performs a power analysis and learns that to detect such an effect with a power of 80%, she needs at least 95 participants per condition and thus 190 participants in total. Thus, by performing the power analysis prior to her study, this researcher is able to make an informed decision about the sample size she requires. The question then arises: are CSCL studies typically adequately powered or not? The meta-analysis by Chen et al. (2018) provides some useful information. It shows that the average sample size in the included studies is 124 (SD = 195). The smallest sample size was 15 participants and the largest 2574; the mode was 40 participants and the median 74. This suggests that adequately powered studies are probably not often performed in the field of CSCL. Based on the information from Chen et al.’s meta-analysis, this means that many CSCL studies were probably underpowered, especially if one takes into account that the calculations provided above are for studies in which two conditions are compared; when three or more conditions are compared or when factorial designs are used, larger sample sizes are required. Thus, while in CSCL research—as in other fields of social science—awareness is growing that performing adequately powered studies is important, there is still room for improvement.
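The power analysis in this example can be reproduced in a few lines of code. The sketch below uses the standard normal approximation for a two-sided, two-sample comparison plus a common small-sample correction; dedicated tools (e.g., G*Power) use the exact noncentral t distribution, so their results can differ by a participant or two.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-condition n for a two-sided, two-sample t-test.

    Normal approximation plus the usual z_alpha**2 / 4 small-sample
    correction; exact noncentral-t software may differ slightly.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for power = .80
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2 + z_alpha ** 2 / 4)

# The scenario above: d = 0.41, alpha = .05, power = 80%
print(n_per_group(0.41))   # -> 95 per condition, i.e., 190 participants in total
```

Note how sensitive the result is to the expected effect size: for a large effect of d = 0.8 the same calculation requires only 26 participants per condition, which is why an honest, meta-analytically informed effect-size estimate matters so much.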
J. Janssen and I. Kollar
We should also caution the reader that power analysis in CSCL studies is often more complex than in the individual-centered designs one often encounters in psychological research. When considering statistical power in CSCL research, other factors also play a role (Kenny et al. 2006; Snijders 2005), such as the sample size at the lowest level of analysis (i.e., the number of individuals within each group; in CSCL research typically between 2 and 6), the number of units at the highest level of analysis (i.e., the number of groups or dyads), the correlation between individuals’ scores and those of their group members (the intraclass correlation), and whether one wants to detect an individual-level effect (e.g., prior knowledge of a student) or a group-level effect (e.g., use of a collaboration script vs. no use of such a script). While currently no established guidelines seem to exist for calculating the sample sizes necessary to detect individual- or group-level effects with sufficient statistical power, several authors have cautioned that over 50 groups may be necessary in circumstances comparable to CSCL studies (e.g., Du and Wang 2016; Maas and Hox 2005). It is our hope that the CSCL community will develop a sensitivity to statistical power while taking into account the complexities of group research when calculating statistical power. Fortunately, researchers in related fields are beginning to develop apps that can be used to determine adequate sample sizes in situations that are close to CSCL situations (cf. Ackerman and Kenny 2016).
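One way to build intuition for these group-related complications is the design effect from the survey and multilevel literature (cf. Snijders 2005), which quantifies how much clustering inflates the sample size needed for a group-level comparison. The group size and intraclass correlation below are illustrative assumptions, not figures from the chapter.

```python
def design_effect(group_size, icc):
    """Kish's design effect: variance inflation due to clustered observations."""
    return 1 + (group_size - 1) * icc

# Illustrative values: groups of 4 students, intraclass correlation of .10.
m, icc = 4, 0.10
deff = design_effect(m, icc)
n_total = 190 * deff          # the 190 participants from the example, inflated
print(round(deff, 2), round(n_total), round(n_total / m))
```

Under these assumptions the 190 participants from the earlier power analysis grow to roughly 247 students in about 62 groups, which is consistent with the warning that over 50 groups may be needed to detect group-level effects.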
3.3 Using Mixed Methods to Enhance CSCL Experiments
Experiments rely on gathering quantitative data to determine the effects of an intervention. However, CSCL researchers often collect qualitative data alongside their quantitative data, for example by carrying out observations, conducting interviews, or analyzing participants’ responses to open-ended questions in a questionnaire (e.g., Strijbos et al. 2007). The purposeful combination of quantitative and qualitative methods, also called mixed methods research, is often employed in CSCL studies. From the earliest days of CSCL research, CSCL researchers have not revealed themselves as either quantitative or qualitative purists (cf. Lincoln and Guba 2000). This is perhaps explained by the fact that CSCL research is often interested in the process of collaboration as it unfolds between group members. These processes can be studied using quantitative methods, but CSCL research has a long tradition of studying group processes using qualitative methods as well (see Zahn et al. this volume; Uttamchandani and Lester this volume). By combining quantitative and qualitative research methods, researchers may be able to utilize the strengths of both methods and compensate for their respective weaknesses (Johnson and Onwuegbuzie 2004). Using a mixed-methods design allows CSCL researchers to reach several goals. Two goals that researchers often pursue are to complement one method with the other and to use both methods to establish triangulation (Creswell 2008). When CSCL researchers use qualitative data to complement quantitative data, they, for example, use the qualitative data to illustrate their quantitative findings by providing rich descriptions of students’ collaboration processes or their perceptions of the learning process. When mixing quantitative and qualitative methods with the purpose of triangulating the findings, researchers seek to corroborate the findings from both methods with the aim of finding out whether they converge on a similar conclusion (Creswell 2008). An example of a CSCL study that employed a mixed-methods methodology with the aim of triangulating the findings is the study by Strijbos et al. (2007), which used students’ grades, questionnaires, content analysis of e-mail communication, and responses to open-ended questions to study the effect of functional role assignment during CSCL. Strijbos et al. found that students in the role condition perceived their collaboration as more effective than students in the nonrole condition. This finding was corroborated by the content analysis, which also showed that students in the role condition were more adept at coordinating their collaboration. Furthermore, the analysis of the open-ended questions also indicated higher perceived effectiveness of the group process in the role condition. This careful combination of research methods allowed Strijbos et al. to have greater confidence in their finding that roles affect students’ collaboration than if they had employed only one method. This example shows the added benefit for CSCL researchers of adding qualitative data collection to an experimental design.
4 The Future: Reproducibility, Replications, and Open Science

Over the last decade, concerns have grown that a large number of reported social research findings may be difficult to replicate by independent researchers (Makel and Plucker 2014) or may even be false (Ioannidis 2005). Several high-profile cases of scientific misconduct and fraud in psychological science (e.g., Levelt Committee 2012) made these concerns acute for the general public as well. Furthermore, several researchers demonstrated the prevalence of questionable research practices or showed how researcher degrees of freedom may dramatically increase the chance that researchers publish false positives (i.e., incorrectly reject the null hypothesis; see Simmons et al. 2011). These and other factors led to appeals for greater attention to the reproducibility of research, for more attention to replicating findings, and in general for more openness in scientific conduct and reporting (e.g., Koole and Lakens 2012; Makel and Plucker 2014; van’t Veer and Giner-Sorolla 2016). What does this all mean for CSCL research? In general, direct replications (van’t Veer and Giner-Sorolla 2016) are scarce in CSCL research (Gress et al. 2010). It is difficult to say what percentage of CSCL studies are replications, but there is no reason to assume that this percentage is substantially higher than the 0.13% of published articles in the education sciences reported by Makel and Plucker (2014). Given the concerns raised above about sample sizes in CSCL research, it would be good if direct replications of important CSCL studies were attempted. It is, however,
important to note that barriers may exist that prevent CSCL researchers from attempting direct replications (e.g., funding, bias against replications from editors/reviewers; cf. Koole and Lakens 2012; Makel and Plucker 2014). Similarly, we think the future of experimental CSCL research will evolve toward open science. In psychological and education science, researchers have, for example, advocated preregistration of research plans to increase the transparency of experimental research and to reduce publication and reporting bias (van der Zee and Reich 2018; van’t Veer and Giner-Sorolla 2016). To support this, journals that serve as outlets for CSCL research could start adopting registered reports. In a registered report, researchers explain the theoretical background and methods of their study along with a detailed analysis plan. This manuscript is then submitted for peer review and may lead to an in-principle acceptance: the editor agrees to publish the study if it is carried out as described, regardless of its outcome. Once the study has been completed, the researchers submit the complete manuscript to the journal, which may then be published if the study was performed as planned. At this moment, however, not many journals that serve as outlets for CSCL studies allow preregistration (exceptions are AERA Open and the Journal of Computer Assisted Learning). We think that for the field of CSCL, a growing awareness of the necessity and utility of replication studies, preregistration, and open science is highly needed. We started this chapter by noting that (quasi-)experimental research constitutes an important part of CSCL research. Although learning environments and research questions have changed since the inception of the field, the basic ideas and promises of (quasi-)experimental designs are still highly valued within the community.
Thus, we are confident that (quasi-)experimental research will continue to make strong contributions to the theoretical and empirical development of the field. On this path, (quasi-)experimental research might especially benefit from the rapid advances in Learning Analytics research (see Wise et al. this volume), as algorithms developed in Learning Analytics are becoming increasingly capable of reliably measuring important process variables and of using this information to adapt the design of scaffolds to the needs of each particular group of learners (Walker et al. 2011). We would like to end this chapter by expressing our hope that future (quasi-)experimental CSCL research will increasingly apply the methodological state of the art we outlined (using multilevel analysis, conducting adequately powered studies, and employing mixed methods when applicable), and that future CSCL studies will follow the current movement toward open and reproducible research.
References

Ackerman, R. A., & Kenny, D. A. (2016). APIMPower: An interactive tool for actor-partner interdependence model power analysis [Computer software]. Retrieved from https://robert-aackerman.shinyapps.io/apimpower/
Bause, I. M., Brich, I. R., Wesslein, A.-K., & Hesse, F. W. (2018). Using technological functions on a multi-touch table and their affordances to counteract biases and foster collaborative problem solving. International Journal of Computer-Supported Collaborative Learning, 13, 7–33. https://doi.org/10.1007/s11412-018-9271-4.
Brom, C., Sisler, V., Slussareff, M., Selmbecherova, T., & Hlavka, Z. (2016). You like it, you learn it: Affectivity and learning in competitive social role play gaming. International Journal of Computer-Supported Collaborative Learning, 11, 313–348. https://doi.org/10.1007/s11412-016-9237-3.
Cesareni, D., Cacciamani, S., & Fujita, N. (2016). Role taking and knowledge building in a blended university course. International Journal of Computer-Supported Collaborative Learning, 11, 9–39. https://doi.org/10.1007/s11412-015-9224-0.
Chen, J., Wang, M., Kirschner, P., & Tsai, C.-C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Rand McNally.
Cress, U. (2008). The need for considering multilevel analysis in CSCL research: An appeal for the use of more advanced statistical methods. International Journal of Computer-Supported Collaborative Learning, 3, 69–84. https://doi.org/10.1007/s11412-007-9032-2.
Creswell, J. W. (2008). Research designs: Quantitative, qualitative, and mixed methods approaches. Thousand Oaks: SAGE.
Cuendet, S., Dehler-Zufferey, J., Ortoleva, G., & Dillenbourg, P. (2015). An integrated way of using a tangible user interface in a classroom. International Journal of Computer-Supported Collaborative Learning, 10, 183–208. https://doi.org/10.1007/s11412-015-9213-3.
De Wever, B., Van Keer, H., Schellens, T., & Valcke, M. (2007). Applying multilevel modelling to content analysis data: Methodological issues in the study of role assignment in asynchronous discussion groups. Learning and Instruction, 17, 436–447. https://doi.org/10.1016/j.learninstruc.2007.04.001.
Du, H., & Wang, L. (2016). The impact of the number of dyads on estimation of dyadic data analysis using multilevel modeling. Methodology, 12, 21–31. https://doi.org/10.1027/1614-2241/a000105.
Erkens, M., Bodemer, D., & Hoppe, H. U. (2016). Improving collaborative learning in the classroom: Text mining based grouping and representing. International Journal of Computer-Supported Collaborative Learning, 11, 387–415. https://doi.org/10.1007/s11412-016-9243-5.
Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Thousand Oaks: SAGE.
Field, A., Miles, J., & Field, Z. (2012). Discovering statistics using R. Thousand Oaks: SAGE.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005.
Fischer, F., & Mandl, H. (2005). Knowledge convergence in computer-supported collaborative learning: The role of external representation tools. Journal of the Learning Sciences, 14, 405–441. https://doi.org/10.1207/s15327809jls1403_3.
Gress, C. L. Z., Fior, M., Hadwin, A. F., & Winne, P. H. (2010). Measurement and assessment in computer-supported collaborative learning. Computers in Human Behavior, 26, 806–814. https://doi.org/10.1016/j.chb.2007.05.012.
Harney, O. M., Hogan, M. J., Broome, B., Hall, T., & Ryan, C. (2015). Investigating the effects of prompts on argumentation style, consensus and perceived efficacy in collaborative learning. International Journal of Computer-Supported Collaborative Learning, 10, 367–394. https://doi.org/10.1007/s11412-015-9223-1.
Harney, O. M., Hogan, M. J., & Quinn, S. (2017). Investigating the effects of peer to peer prompts on collaborative argumentation, consensus and perceived efficacy in collaborative learning. International Journal of Computer-Supported Collaborative Learning, 12, 307–336. https://doi.org/10.1007/s11412-017-9263-9.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X.
Hox, J. (2003). Multilevel analysis: Techniques and applications. Mahwah: Erlbaum.
Inkpen, K., Booth, K., Klawe, M., & Upitis, R. (1995). Playing together beats playing apart, especially for girls. In J. L. Schnase & E. L. Cunnius (Eds.), Proceedings of CSCL ‘95: The first international conference on computer support for collaborative learning (pp. 177–181). Erlbaum. https://doi.org/10.3115/222020.222164.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124. https://doi.org/10.1371/journal.pmed.0020124.
Janssen, J., & Bodemer, D. (2013). Coordinated computer-supported collaborative learning: Awareness and awareness tools. Educational Psychologist, 48(1), 40–55. https://doi.org/10.1080/00461520.2012.749153.
Janssen, J., Cress, U., Erkens, G., & Kirschner, P. A. (2013). Multilevel analysis for the analysis of collaborative learning. In C. E. Hmelo-Silver, C. A. Chinn, C. Chan, & A. M. O’Donnell (Eds.), The international handbook of collaborative learning (pp. 112–115). Routledge.
Jeong, H., & Chi, M. T. H. (2007). Knowledge convergence and collaborative learning. Instructional Science, 35, 287–315. https://doi.org/10.1007/s11251-006-9008-z.
Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9, 305–334. https://doi.org/10.1007/s11412-014-9198-3.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A paradigm whose time has come. Educational Researcher, 33(7), 14–26. https://doi.org/10.3102/0013189X033007014.
Kenny, D. A., Kashy, D. A., & Cook, W. L. (2006). Dyadic data analysis. New York: Guilford Press.
Kollar, I., Ufer, S., Reichersdorfer, E., Vogel, F., Fischer, F., & Reiss, K. (2014). Effects of collaboration scripts and heuristic worked examples on the acquisition of mathematical argumentation skills of teacher students with different levels of prior achievement. Learning and Instruction, 32, 22–36. https://doi.org/10.1016/j.learninstruc.2014.01.003.
Kolloffel, B., Eysink, T. H. S., & de Jong, T. (2011). Comparing the effects of representational tools in collaborative and individual inquiry learning. International Journal of Computer-Supported Collaborative Learning, 6(2), 223–251. https://doi.org/10.1007/s11412-011-9110-3.
Koole, S. L., & Lakens, D. (2012). Rewarding replications: A sure and simple way to improve psychological science. Perspectives on Psychological Science, 7, 608–614. https://doi.org/10.1177/1745691612462586.
Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44, 701–710. https://doi.org/10.1002/ejsp.2023.
Levelt Committee (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel. Retrieved from https://poolux.psychopool.tu-dresden.de/mdcfiles/gwp/Reale%20F%C3%A4lle/Stapel%20-%20Final%20Report.pdf
Lin, J.-W., Mai, L.-J., & Lai, Y.-C. (2015). Peer interaction and social network analysis of online communities with the support of awareness of different contexts. International Journal of Computer-Supported Collaborative Learning, 10, 139–159. https://doi.org/10.1007/s11412-015-9212-4.
Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 163–188). SAGE.
Maas, C. J. M., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1, 86–92. https://doi.org/10.1027/1614-2241.1.3.86.
Makel, M. C., & Plucker, J. A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304–316. https://doi.org/10.3102/0013189X14545513.
Mende, S., Proske, A., Körndle, H., & Narciss, S. (2017). Who benefits from a low versus high guidance CSCL script and why? Instructional Science, 45, 439–468. https://doi.org/10.1007/s11251-017-9411-7.
Resendes, M., Scardamalia, M., Bereiter, C., Chen, B., & Halewood, C. (2015). Group-level formative feedback and metadiscourse. International Journal of Computer-Supported Collaborative Learning, 10, 309–336. https://doi.org/10.1007/s11412-015-9219-x.
Scardamalia, M., & Bereiter, C. (2006). Knowledge building: Theory, pedagogy, and technology. In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 97–115). Cambridge: Cambridge University Press.
Schellens, T., Van Keer, H., & Valcke, M. (2005). The impact of role assignment on knowledge construction in asynchronous discussion groups: A multilevel analysis. Small Group Research, 36, 704–745. https://doi.org/10.1177/1046496405281771.
Schwaighofer, M., Vogel, F., Kollar, I., Ufer, S., Strohmaier, A., Terwedow, I., Ottinger, S., Reiss, K., & Fischer, F. (2017). How to combine collaboration scripts and heuristic worked examples to foster mathematical argumentation—When working memory matters. International Journal of Computer-Supported Collaborative Learning, 12, 281–305. https://doi.org/10.1007/s11412-017-9260-z.
Serrano-Cámara, L. M., Paredes-Velasco, M., Alcover, C.-M., & Velazquez-Iturbide, J. Á. (2014). An evaluation of students’ motivation in computer-supported collaborative learning of programming concepts. Computers in Human Behavior, 31, 499–508. https://doi.org/10.1016/j.chb.2013.04.030.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin Company.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632.
Slavin, R. E., & Smith, D. (2009). The relationship between sample sizes and effect sizes in systematic reviews in education. Educational Evaluation and Policy Analysis, 31, 500–506. https://doi.org/10.3102/0162373709352369.
Snijders, T. A. B. (2005). Power and sample size in multilevel linear models. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of statistics in behavioral science (Vol. 3, pp. 1570–1573). Chichester: Wiley.
Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. Thousand Oaks: SAGE.
Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning: An historical perspective. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences (pp. 409–426). Cambridge: Cambridge University Press.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48, 1467–1478. https://doi.org/10.1037/0022-3514.48.6.1467.
Strijbos, J.-W., Martens, R. L., Jochems, W. M. G., & Broers, N. J. (2007). The effect of functional roles on perceived group efficiency during computer-supported collaborative learning: A matter of triangulation. Computers in Human Behavior, 23, 353–380. https://doi.org/10.1016/j.chb.2004.10.016.
Tegos, S., Demetriadis, S., Papadopoulos, P. M., & Weinberger, A. (2016). Conversational agents for academically productive talk: A comparison of directed and undirected agent interventions. International Journal of Computer-Supported Collaborative Learning, 11, 417–440. https://doi.org/10.1007/s11412-016-9246-2.
Uttamchandani, S., & Lester, J. N. (this volume). Qualitative approaches to language in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
van der Zee, T., & Reich, J. (2018). Open education science. AERA Open, 4, 1–15. https://doi.org/10.1177/2332858418787466.
van’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology: A discussion and suggested template. Journal of Experimental Social Psychology. https://doi.org/10.1016/j.jesp.2016.03.004.
Vogel, F., Wecker, C., Kollar, I., & Fischer, F. (2017). Socio-cognitive scaffolding with collaboration scripts: A meta-analysis. Educational Psychology Review, 29, 477–511. https://doi.org/10.1007/s10648-016-9361-7.
Walker, E., Rummel, N., & Koedinger, K. R. (2011). Designing automated adaptive support to improve student helping behaviors in a peer tutoring activity. International Journal of Computer-Supported Collaborative Learning, 6(2), 279–306. https://doi.org/10.1007/s11412-011-9111-2.
Wang, X., Wallace, M. P., & Wang, Q. (2017). Rewarded and unrewarded competition in a CSCL environment: A coopetition design with a social cognitive perspective using PLS-SEM analysis. Computers in Human Behavior, 72, 140–151. https://doi.org/10.1016/j.chb.2017.02.045.
Weinberger, A., Stegmann, K., & Fischer, F. (2010). Learning to argue online: Scripted groups surpass individuals (unscripted groups do not). Computers in Human Behavior, 26, 506–515. https://doi.org/10.1016/j.chb.2009.08.007.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Yilmaz, F. G. K., & Yilmaz, R. (2019). Impact of pedagogic agent-mediated metacognitive support towards increasing task and group awareness in CSCL. Computers & Education, 134, 1–14. https://doi.org/10.1016/j.compedu.2019.02.001.
Zahn, C., Ruf, A., & Goldman, R. (this volume). Video data collection and video analyses in CSCL research. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Further Readings

Chen, J., Wang, M., Kirschner, P., & Tsai, C.-C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88(6), 799–843. https://doi.org/10.3102/0034654318791584. This article provides a meta-analytical synthesis of prior (quasi-)experimental research in the field of computer-supported collaborative learning (CSCL). It shows that most (quasi-)experimental CSCL research falls into three categories: (a) studies that look at the effects of collaborative versus individual computer-supported learning, (b) studies that investigate the effects of computer use during collaboration (vs. no computer use), and (c) studies that are interested in the effects of purposefully designed learning environments, tools, and scaffolds.
De Wever, B. (n.d.). NAPLeS webinar series: 15 minutes about selecting statistical methods for the learning sciences and reporting their results. Retrieved from http://isls-naples.psy.lmu.de/video-resources/guided-tour/15-minutes-dewever/index.html. In this webinar, Bram De Wever (Ghent University, Belgium) discusses relevant recommendations for conducting quantitative experimental CSCL studies. The webinar, for example, discusses the hierarchical grouping of participants that necessitates the use of multilevel analysis.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin Company. This book describes the theory behind experimentation, causality, and validity. It furthermore offers an exhaustive description of different experimental and quasi-experimental designs and their strengths and weaknesses.
Strijbos, J.-W., Martens, R. L., Jochems, W. M. G., & Broers, N. J. (2004). The effect of functional roles on group efficiency: Using multilevel modeling and content analysis to investigate computer-supported collaboration in small groups. Small Group Research, 35, 195–229. https://doi.org/10.1177/1046496403260843. This study was perhaps the first CSCL study to use multilevel analysis. Strijbos et al. compared role and nonrole groups with respect to grades, efficiency, and collaboration. The article also describes the rationale for using multilevel analysis in a CSCL study.
Weinberger, A., Ertl, B., Fischer, F., & Mandl, H. (2005). Epistemic and social scripts in computer-supported collaborative learning. Instructional Science, 33, 1–30. https://doi.org/10.1007/s11251-004-2322-4. This article presents two classical studies on the effects of computer-supported collaboration scripts on knowledge acquisition. In both studies, epistemic and social scripts were varied in a 2 × 2 design. While Study 1 implemented the different conditions in an asynchronous, text-based learning environment, Study 2 realized a synchronous, video-based environment. Both studies showed positive effects of the social scripts on knowledge acquisition, but no or even negative effects of the epistemic scripts that were used.
Development of Scalable Assessment for Collaborative Problem-Solving

Yigal Rosen, Kristin Stoeffler, Vanessa Simmering, Jiangang Hao, and Alina von Davier
Abstract While the field of computer-supported collaborative learning (CSCL) is focused primarily on the development of computational artifacts and social interaction, the key research advances in the collaborative problem-solving (CPS) domain are associated with competency model development and assessment at scale. Numerous research reports indicate that CPS competency is increasingly important in today’s complex, interconnected world and is therefore of increasing interest for teaching and assessment in schools. However, learning and assessment design, data analytics, and reporting on CPS competency, specifically in CSCL settings, involve multiple opportunities and challenges. This chapter introduces a spectrum of approaches for CPS competency development and scalable assessment to advance the theory and practice of measuring CPS, with a focus on recent work at the nonprofits ACT and Educational Testing Service.

Keywords Collaboration assessment · Problem-solving assessment · Collaborative problem solving · Conversational agent · Collaborative learning
Y. Rosen (*) BrainPOP, New York, NY, USA e-mail: [email protected] K. Stoeffler · V. Simmering ACT, Inc., Iowa City, IA, USA e-mail: kristin.stoeffl[email protected]; [email protected] J. Hao Educational Testing Service, Princeton, NJ, USA e-mail: [email protected] A. von Davier Duolingo, Pittsburgh, PA, USA EdAstra Tech LLC, Boston, MA, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_28
517
518
Y. Rosen et al.
1 Definitions and Scope Solving problems in collaboration with others is a common practice in today's workplace. Typically, peers establish shared understandings of the problem space and their expertise as it relates to the problem space, divide workload and responsibilities, as well as take actions to advance objectives, monitor progress, and provide feedback. The importance of collaboration extends beyond the workplace. Numerous research reports indicate that collaborative problem-solving (CPS) is increasingly important in today's complex and interconnected world and is therefore of increasing interest as a skill to be taught and assessed in students (Binkley et al. 2012; Graesser et al. 2018a; OECD 2017b; World Economic Forum 2015). However, scalable learning and assessment solutions for CPS in school settings remain lacking (Graesser et al. 2018b; Rosen et al. 2020). CPS is a unique and challenging skillset to explore in the classroom, as it typically requires group work and engagement in rich performance tasks not only across a range of educational domains but also with a high level of interaction and interdependence among students. Adding further complexity, CPS has both cognitive and social aspects, and the outcomes of CPS tasks are generally the result of the interaction of both (Liu et al. 2016). Cognitive skills are used to complete tasks both independently and collaboratively with other team members. For example, cognitive dimensions of CPS in the scientific domain typically include conceptual understanding and inquiry skills in science (e.g., data collection, data analysis, prediction making, evidence-based reasoning), which support both independent problem-solving and the processes required to negotiate and create the shared understandings required for collaboration. Social dimensions of CPS skills are often demonstrated in social interactions with peers and can affect both group and individual performance.
1.1 Competency Model Development
The CPS competency has been the focus of many efforts to address the challenges of teaching and measuring twenty-first-century skills (von Davier et al. 2017). The complexities that arise around authenticity in both the assessment design and measurement of CPS (Care and Kim 2018; Rosen 2017), and around our ability to effectively measure both the cognitive and the social components that are required for successful CPS, require a well-defined construct that provides ample opportunity to address these challenges. The importance of these efforts was highlighted by the selection of CPS as a major area of focus by the Organization for Economic Co-operation and Development (OECD) for the 2015 Programme for International Student Assessment (PISA) (OECD 2017a). PISA 2015 defines CPS competency as “the capacity of an individual to effectively engage in a process whereby two or more agents attempt to solve a problem by sharing the understanding and effort required to come to a solution and pooling their knowledge, skills and efforts to
reach that solution” (OECD 2017a). This definition treats the competency as a conjoint dimension of collaboration skills and the skills needed to solve a problem. For assessment design purposes, the focus is on individual capacities within collaborative situations. The effectiveness of CPS depends on the ability of group members to collaborate and to prioritize the success of the group over individual successes. At the same time, this ability is still a trait of each individual member of the group. The competency is assessed by evaluating how well the individual collaborates with agents during the problem-solving process. This includes the key skills of establishing and maintaining shared understanding, taking appropriate actions to solve the problem, and establishing and maintaining group organization. First, CPS as represented in the PISA framework requires students to be able to establish, monitor, and maintain a shared understanding throughout the problem-solving task by responding to requests for information, sending important information to agents about tasks completed, establishing or negotiating shared meanings, verifying what each other knows, and taking actions to repair deficits in shared knowledge. Second, collaboration requires the capability to identify the types of activities that are needed to solve the problem and to follow the appropriate steps to achieve a solution. This process involves exploring and interacting with the problem situation. It includes understanding both the information initially presented in the problem and any information that is uncovered during interactions with the problem. The accumulated information is selected, organized, and integrated in a fashion that is relevant and helpful in solving the particular problem and that is integrated with prior knowledge.
Third, students must be able to help organize the group to solve the problem; consider the talents and resources of group members; understand their own role and the roles of the other agents; follow the rules of engagement for their role; monitor the group organization; reflect on the success of the group organization; and help handle communication breakdowns, conflicts, and obstacles. In addition to the CPS framework from PISA, there are several other frameworks and assessment approaches being developed for assessments of CPS, such as that in the Assessment and Teaching of Twenty-First-Century Skills (ATC21S; Griffin et al. 2012b). According to Griffin et al. (2012a), CPS refers to the ability to recognize the points of view of other persons in a group; to contribute knowledge, experience, and expertise in a constructive way; to identify the need for contributions and how to manage them; to recognize the structure and procedure involved in resolving a problem; and, as a member of the group, to build and develop group knowledge and understanding. In the CPS assessment from ATC21S, two students are assigned to a team to complete several tasks collaboratively via text chats. Each student's CPS is scored primarily based on his or her responses and actions in the tasks (Scoular et al. 2017). Professional assessment and learning companies, such as ACT, Inc. (www.act.org) and Educational Testing Service (ETS, www.ets.org), have developed CPS frameworks for their CPS assessment prototypes. Within the ACT Holistic Framework, the CPS construct is divided into two major categories of skills (Camara et al. 2015): Team Effectiveness and Task Effectiveness. The Team Effectiveness strand encompasses the knowledge and skills that facilitate an inclusive team dynamic, clarity of team and task structure, an open flow of information, and a shared sense of
purpose. The Task Effectiveness strand encompasses the knowledge and skills required to effectively create a negotiated understanding of the problem and the goal, to effectively identify and negotiate a strategy to achieve the goal, and to effectively execute the strategy and monitor its execution. Within the Team and Task Effectiveness strands, there are nine functional categories that outline collections of skills which, when used in concert, fulfill specific functions that contribute to effective CPS. The Team Effectiveness strand includes the functional categories Inclusiveness, Clarity, Communication, and Commitment. The Task Effectiveness strand includes the functional categories Problem Orientation, Goal Orientation, Strategy, Execution, and Monitoring and Evaluating. ETS scientists proposed several CPS frameworks targeting different aspects of CPS activities. For example, Liu et al. (2016) considered CPS as a “process that includes both cognitive and social practices in which two or more peers interact with each other to share and negotiate ideas and prior experiences, jointly regulate and coordinate behaviors and learning activities, and apply social strategies to sustain the interpersonal exchanges to solve a shared problem,” and four CPS constructs (sharing ideas, negotiating ideas, regulating problem-solving, and maintaining communication) have been identified as targets for assessment. Andrews et al. (2017) considered CPS from the perspective of interactions and proposed six types of interaction patterns: collaborative, cooperative, dominant–dominant, dominant–passive, expert–novice, and fake collaboration. While the key research advances in CPS are associated with skill development and assessment, the field of computer-supported collaborative learning (CSCL) has focused primarily on computational artifacts and social interaction (Ludvigsen et al. this volume; Stahl et al. 2006).
This work in CSCL overlaps with CPS when the latter is studied in contexts relying on similar technology. CSCL can be designed to be asynchronous and physically distributed, but can also support face-to-face learning in real time, and can therefore be translated to similar scenarios for CPS. Computational artifacts serve various purposes in CSCL, including applications designed to support communication and shared representation construction, instructional tools that provide scaffolds or scripts to structure collaboration, and final products from group projects (Medina and Stahl this volume; Stahl et al. 2014). Instances of such artifacts that relate to problem-solving can be similarly implemented for CPS. In particular, applications designed to support communication and construction of shared representations in CSCL can serve the same purpose in CPS. Social interactions in CSCL concern not just discussion of content and negotiation of roles, but also generally building interpersonal relationships and understanding others’ perspectives (Kreijns et al. 2003). These are foundational characteristics of collaboration in both CSCL and CPS, and when they are captured in a computerized interface, they can easily be analyzed for research and assessment purposes. A fundamental challenge for both CSCL and CPS is to ensure that collaboration skills in learners are adequately supported and measured to track progress.
2 History and Development Technological advances have made computer-supported collaboration widely adopted in academia and the workplace. Given that computers have permeated almost every part of our lives, they play important roles even in many face-to-face collaborative situations. Multimodal communication and virtual reality technology are further blurring the border between computer-supported and face-to-face communication channels. As such, many of the skills valued in face-to-face collaboration also become highly relevant in computer-supported collaboration, though they may not perfectly overlap. Compared to face-to-face collaboration, computer-supported collaboration reduces the constraints of space and time, making CPS activities more likely to be carried out in a standardized and scalable way. Computer-supported collaboration paves the way toward understanding the statistical properties of collaboration with larger samples, furthering knowledge of collaboration that cannot be attained through small-scale studies. Computer-supported collaboration also has a vast range of use cases in the real world, which makes the relevant research important for guiding real-world practices.
2.1 Assessment Approaches
Recognition that learning and assessment are not isolated activities occurring solely in the individual learner's mind has been the focus of a number of studies (McCaslin and Burross 2011; Salomon 1997). CPS competency has been the focus of many efforts to address the challenges of measuring twenty-first-century skills (von Davier et al. 2017). Student performance in CPS can be assessed through a number of different methods. These include measures of (1) the quality of the solutions and the objects generated during the collaboration (Avouris et al. 2003); (2) analyses of log files that track intermediate results and paths to the solutions (Adejumo et al. 2008), team processes and structure of interactions (O'Neil et al. 1997; von Davier and Halpin 2013), and quality and type of collaborative communication (Cooke et al. 2003; Foltz and Martin 2008; Graesser et al. 2008; Hao et al. 2017a); and (3) statistical modeling of the relationship between features describing the collaboration process and measures characterizing the collaboration outcomes (Hao et al. 2019, 2017b; Hao and Mislevy 2019). To ensure valid measurement at the individual level, each student should be paired with partners displaying various ranges of CPS characteristics (Rosen 2017; Rosen and Foltz 2014). This composition situates each individual student fairly similarly, facilitating his or her ability to demonstrate proficiency in CPS. To ensure fair measurement, each individual student would need to be paired with a similar number of other students who display the same range of characteristics. The human-to-human (H-H) approach provides an authentic human–human interaction, which is a highly familiar situation for students. Students may be more
engaged and motivated to collaborate with their peers than with virtual characters. Additionally, the H-H situation is closer to the CPS situations students will encounter in their personal, educational, professional, and civic activities. However, pairing can be problematic because of individual differences that can significantly affect the CPS process and its outcome. Therefore, the H-H assessment approach to CPS may not provide enough opportunity to cover variations in group composition, diversity of conflict situations, and different team member characteristics in a controlled manner, all of which are essential for assessment at the individual level. Simulated conversational team members with preprogrammed profiles, actions, and communication could potentially ensure that the assessment covers the full range of collaboration skills with sufficient control. In the human-to-agent (H-A) approach, CPS skills are measured by pairing each individual student with one or more computer agents that can be programmed to act as team members with varying characteristics relevant to different CPS situations (Rosen 2017; Rosen and Foltz 2014). In the PISA assessment, CPS competency is measured through student interaction with computer agents. This allows the assessment to standardize the behavior of the other agents in order to isolate the CPS ability of the student being evaluated. Had the student been in a group with other students, his or her performance would have depended on the ability of the other students and the preexisting relationships between them. When trying to solve a problem together through the exchange of ideas and actions, the students in both H-H and H-A assessment settings construct shared meanings that the individual often would not have attained alone.
The collaborative facet of shared understanding in a meaningful CPS process is expressed through raising issues for group discussion and jointly confronting problems and obstacles, the outcome of which is the ongoing process of consensus building. Different methods of CPS assessment can be uniquely effective for different educational purposes. For example, a formative assessment program that has adopted rich teacher training on the communication and collaboration construct may find the H-H approach to CPS assessment more powerful for informing teaching and learning (e.g., ATC21S by Griffin et al. 2012a; the Animalia Project by Rosen 2017), while H-A may be implemented as a scalable formative tool across a large district or in a large-scale standardized assessment such as PISA. The nonavailability of students with a certain CPS level in a class may limit the fulfillment of assessment needs, but technology with computer agents can fill the gaps. On this note, the CSCL community has acknowledged the complexity of developing CPS assessment in the context of standardized large-scale assessments such as PISA and the advantages of the human-to-agent setting as a promising solution (Looi and Dillenbourg 2013; Stahl 2013). According to Stahl (2013), the CPS framework as defined in PISA is a rationalist model, whereas student collaboration is highly situated, tacit, and interactive; student moves and collaboration behaviors are constrained for test purposes, and the communication options for chat are limited.
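The H-A design described above pairs a student with a scripted teammate whose preprogrammed profile and replies keep the collaborative situation standardized, while each of the student's constrained chat choices is scored as evidence of a CPS skill. The following minimal sketch illustrates that idea; the profile names, chat options, skill labels, and scoring rubric are illustrative assumptions, not those of any operational assessment such as PISA.

```python
# Minimal sketch of one turn in a human-to-agent (H-A) CPS assessment:
# a simulated teammate with a preprogrammed profile reacts to the
# student's chat choice, and the choice itself is scored as evidence
# of a CPS skill. All names and scores below are hypothetical.

AGENT_PROFILES = {
    # Profile controls how the simulated teammate behaves.
    "cooperative": {"shares_info": True,  "reply": "Good idea - here is what I found."},
    "withholding": {"shares_info": False, "reply": "I'm not sure, let's just guess."},
}

# Each predefined chat option maps to a CPS skill and a score, so
# every student faces the same standardized, constrained choice set.
CHAT_OPTIONS = {
    "A": {"text": "What information do you have about the task?",
          "skill": "establishing_shared_understanding", "score": 2},
    "B": {"text": "Let's split the work: you check the data, I'll test ideas.",
          "skill": "group_organization", "score": 2},
    "C": {"text": "I'll just answer on my own.",
          "skill": "establishing_shared_understanding", "score": 0},
}

def run_turn(profile_name: str, student_choice: str):
    """Score one student turn and return the agent's scripted reply."""
    option = CHAT_OPTIONS[student_choice]
    profile = AGENT_PROFILES[profile_name]
    evidence = {"skill": option["skill"], "score": option["score"]}
    return evidence, profile["reply"]

evidence, reply = run_turn("cooperative", "A")
print(evidence)  # {'skill': 'establishing_shared_understanding', 'score': 2}
```

Because the agent's behavior is fixed by its profile, every student can be exposed to the same range of collaborative situations, which is exactly the control the H-A approach offers over H-H pairing.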
3 State of the Art as Illustrated by Examples from ACT and ETS Psychometric values such as validity, reliability, comparability, and fairness are the cornerstones of any serious assessment (Messick 1994). Addressing these issues becomes much more challenging in assessments of CPS, as the evidence in collaborative settings is usually the result of the interactions of multiple people rather than of a single person, as in most other assessments. As such, how to properly disentangle the effects due to the partners becomes an important design consideration in addressing the psychometric quality of CPS assessment. In practice, clarifying the core reporting purpose greatly simplifies the assessment design for CPS (Hao et al. 2019). There are two main types of score reporting for assessments. The first and most common is individual score reporting, with typical examples such as the SAT and the ACT. The second is group-score reporting, where only the distribution of competency within a subgroup is of major interest. For example, PISA reports average scores for each country rather than each student's score. To report an individual CPS score with the partner effect removed, one can use “standardized” virtual partners (human–agent collaboration) or, alternatively, a round-robin-like scheme with properly sampled human partners (Hao et al. 2019). On the other hand, to report a group-level CPS score, the design can be significantly simplified, as the partner effect can be balanced by randomly teaming up partners. In this case, both human–human and human–agent tasks can be used. Human–agent collaboration can be considered a generalized adaptive assessment, and the psychometric methodology for adaptive assessment can be applied. For human–human collaboration intended for individual score reporting, an additional step is needed: estimating the CPS score from the work with each partner and averaging across partners.
For human–human collaboration intended for group score reporting, one can consider the unit of analysis to be the team rather than each team member, so that team-level analysis can be carried out using most traditional psychometric methods. In the following section, the authors show some empirical examples. As described in prior sections, developing valid, reliable, and scalable assessments of CPS that can be administered on a regular basis is very challenging (Graesser et al. 2018a; Hao et al. 2017b; Rosen 2017; Stoeffler et al. 2020). In this section, the authors focus on recent work conducted at ACT and ETS aimed at developing scalable assessments of CPS skills, supported with evidence from extensive empirical studies.
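The two reporting purposes above lead to two simple scoring rules: for individual reporting under a round-robin-like scheme, average each student's per-pairing scores across partners to balance out the partner effect; for group reporting with random teaming, pool all pairings. A minimal sketch follows; the per-pairing scores and function names are hypothetical, and real designs would use proper psychometric models rather than plain means.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-pairing CPS scores: (student, partner) -> score.
# In a round-robin-like design each student collaborates with several
# sampled partners, so averaging balances the partner effect.
pairing_scores = {
    ("s1", "s2"): 3.0, ("s1", "s3"): 2.0, ("s1", "s4"): 2.5,
    ("s2", "s1"): 1.0, ("s2", "s3"): 1.5, ("s2", "s4"): 2.0,
}

def individual_cps_scores(scores):
    """Average each student's CPS score over all of their partners."""
    by_student = defaultdict(list)
    for (student, _partner), score in scores.items():
        by_student[student].append(score)
    return {s: mean(vals) for s, vals in by_student.items()}

def group_cps_score(scores):
    """Group-level reporting: pool all pairings; random teaming is
    assumed to balance partner effects, so a simple mean suffices."""
    return mean(scores.values())

print(individual_cps_scores(pairing_scores))  # {'s1': 2.5, 's2': 1.5}
print(group_cps_score(pairing_scores))        # 2.0
```

The contrast in the two functions mirrors the design simplification noted above: group reporting needs no per-partner bookkeeping at all, whereas individual reporting must track who worked with whom.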
3.1 ACT Efforts Toward CPS Assessment
ACT developed learning and assessment prototypes for CPS in human–human and human–agent settings for science classes. In the first study, the authors focused on some key opportunities to make advancements in formative assessment of CPS
competency in a gamified CSCL setting. These advancements were made possible through the design of an evidence-based game experience that capitalized on advancements in conversational agent technologies and the application of computational psychometrics to process and product data (Stoeffler et al. 2020). In this study, a gamified formative assessment, Circuit Runner, was designed to immerse participants in a first-person CPS environment where interdependencies exist between participants. The interdependencies were designed to allow asymmetric allocation of resources and of the specific game functionalities required to solve a range of problem types and to work through a range of difficulties and challenges requiring a range of skill proficiencies for success. The game design focused on neutralizing the effects of prior knowledge and expertise and presented a range of environments, both on-task (problem-solving) and off-task (conversational/reflective). These environments allowed us to explore the range of proficiency with the skills required for effective collaboration and problem-solving under varying contexts and conditions (Stoeffler et al. 2017). The measurement of CPS competency was rooted in the alignment of the response and process data with a robust and detailed construct outlining the skills and their function in effective CPS. The measurement extended beyond item response options focused on an isolated correct answer to response options that allowed participants to express a range of proficiency within a single response set. Information regarding skills was also collected outside the common sets of items experienced by all participants, including opportunities to demonstrate skills through lower-stakes conversational and off-task interactions.
The assessment of items with Item Response Theory (IRT) and correlation analyses for the Circuit Runner game provided preliminary support for the use of the game for assessment of the intended CPS competency and for possible pathways for improvement. In our second study (Rosen et al. 2020), students from five countries participated in the Animalia online minicourse, designed to foster students' CPS skills in the context of complex ecosystems. Students worked in teams with assigned scientific roles to explore cause–effect relationships in Animalia. Each team member had access to role-relevant information about the overall situation, so that determining the root cause of the problems in Animalia required each team member to share his or her information with the others. Digital tools such as shared documents, virtual labs, videoconferencing, and forums were used in support of collaborative problem-solving. In this study, two conditions were created based on the analysis of early-stage inputs to a shared document. Teams of students in each of these conditions were exposed to identical learning tasks and were able to collaborate and solve scientific problems using identical methods and resources. In the distributed condition, groups were able to benefit from an equal or close-to-equal division of effort. In contrast, students in the centralized condition were constantly facing a dominant teammate. Although these skills were further developed through participation in the Animalia project, the findings indicated that students in the centralized condition underperformed in their learning gains compared to their peers in the distributed condition. This research indicated how different approaches to distributed cognition in collaborative learning contexts contribute to learning outcomes, as measured at the end of the CPS learning experience. More specifically,
collaborative patterns identified at early stages of learning were used to determine possible differential effects on learning outcomes. The analysis of collaborative patterns was conducted to identify qualitatively different collaborative behaviors in the second week of collaborative learning, based on the number of inputs to a shared document per day across all the groups. In this analysis, an input is considered to be an appropriate and nonrepetitive contribution of a sentence longer than three words. Two trained research assistants coded students' contributions to the shared document. Each input that met the criteria of appropriateness (i.e., the input is understandable AND the coder can identify a contribution to the collaborative problem-solving activity) and nonrepetitiveness (i.e., the absence of a similar input by the contributor or other teammates) was coded as 1. Next, a cumulative score was assigned to each student across all the groups. Inter-rater reliability reached 94% based on the training subset of students' responses.
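The coding scheme above can be sketched as a few small functions: an input counts (code 1) only if it is longer than three words, judged appropriate by the coder, and not a repeat of an earlier input; codes accumulate into a per-student score; and inter-rater reliability is reported as percent agreement. The data, helper names, and the exact repetition check are illustrative assumptions (the study used human coders, not string matching).

```python
# Sketch of the input-coding scheme: 1 if appropriate AND nonrepetitive
# AND longer than three words, else 0; codes accumulate per student,
# and two coders' codes are compared via percent agreement.

def code_input(text: str, appropriate: bool, prior_inputs: list) -> int:
    """Return 1 if the contribution meets all criteria, else 0.
    'appropriate' stands in for the human coder's judgment."""
    long_enough = len(text.split()) > 3
    nonrepetitive = text.strip().lower() not in (p.strip().lower() for p in prior_inputs)
    return int(long_enough and appropriate and nonrepetitive)

def cumulative_score(codes: list) -> int:
    """Accumulate a student's coded inputs into one score."""
    return sum(codes)

def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Inter-rater reliability as simple percent agreement."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

prior = ["the water level is dropping fast"]
print(code_input("we should test the oxygen levels next", True, prior))  # 1
print(code_input("the water level is dropping fast", True, prior))       # 0 (repetitive)
print(percent_agreement([1, 1, 0, 1], [1, 1, 0, 0]))                     # 75.0
```

Percent agreement is the simplest reliability index; chance-corrected statistics such as Cohen's kappa are a common alternative when the base rates of the codes are skewed.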
3.2 ETS Efforts Toward CPS Assessment
ETS was deeply involved in developing the CPS assessment for PISA 2015. In addition, ETS carried out extensive research activities to explore the possibility of large-scale and standardized assessments of CPS that can be administered on a regular basis and that primarily target formative assessments and group-scoring assessments. A comprehensive research agenda was set forth (von Davier et al. 2017), and a wide range of research projects have been launched since 2013. These research activities can be roughly aligned to four main threads:
• Development and refinement of CPS constructs that are suitable for large-scale assessments (e.g., Andrews et al. 2017; Andrews-Todd and Kerr 2019; Liu et al. 2016).
• Development of technological infrastructure, such as the ETS Platform for Collaborative Assessment and Learning (EPCAL; Hao et al. 2017c), the automated annotation systems (e.g., Flor et al. 2016; Hao et al. 2017a), and the data analytics system (e.g., Hao et al. 2016).
• Development of assessment prototypes across various domains (e.g., Andrews-Todd et al. 2019; Hao et al. 2017b; Martin-Raugh et al. 2020).
• Development of psychometric modeling methodologies (e.g., Halpin et al. 2017; Hao et al. 2016; Hao and Mislevy 2019; von Davier and Halpin 2013; Zhu and Andrews-Todd 2019; Zhu and Zhang 2017).
Based on these extensive research activities, a set of psychometric guidelines regarding CPS assessment design has been summarized to guide large-scale assessment of CPS. Furthermore, a general scoring strategy that emphasizes the joint consideration of collaboration process, outcome, and the mapping between process and outcome has been developed and applied to several large-scale empirical studies. A recent summary of these findings can be found in Hao et al. (2019).
4 The Future Facilitating learners' ability to acquire and cultivate CPS skills over time is essential and depends on our ability to identify, measure, and track proficiency with the skills that support these critical competencies. This is an ambitious endeavor because these multidimensional and complex synchronizations of social and cognitive skills present a number of challenges in construct design, assessment design, and measurement. Our current collective research and development work has focused on some key opportunities to make advancements in this space. These advancements were made possible through the thoughtful design of a robust and well-defined construct, psychometric considerations, and advances in assessment technologies. The CPS assessments described in this chapter allow researchers to explore the range of proficiency with the skills required for effective collaboration and problem-solving under varying contexts and conditions. The measurement of skill proficiency is rooted in the alignment of the response and process data with a robust and detailed construct outlining the skills and their function in effective CPS (Stoeffler et al. 2020). Measurement extended beyond conventional item types to enable opportunities to demonstrate skills through synchronous and asynchronous collaboration and group interactions, using evidence-centered design (ECD), which formulates the process of assessment development to ensure consideration and collection of validity evidence from the onset of test design (Kim et al. 2016; Mislevy et al. 2003). The focus of future work in this space will be to build on previously validated work addressing the exploration of CPS skills in highly authentic environments, involving small groups of students participating in varying group compositions across multiple domains and contexts.
The intention of this work is to further explore the demonstration and learning of these skills over an authentic and extended CPS experience, performing a wide range of CPS assessment tasks, in a formative environment where teams have the opportunity to learn and improve over time. This work will not only provide summative insights regarding students' CPS skills but also serve a diagnostic function in adaptive CPS learning experiences. Collaborative learning in data-intensive digital environments enables the deployment of machine learning techniques to inform teaching and learning in real time. Ultimately, collaborative learning detectors based on “activity logs” of student group and individual activity could constantly produce the learner's behavioral state and the next best activity for the group or individual to focus on (Rosen 2017). The feasibility of creating collaborative learning deficiency detectors of acceptable accuracy would mean that instead of self-report or highly time-consuming instructor-led monitoring of all groups, one could deploy a more scalable real-time “early warning system” that would inform teachers and learners about collaborative deficiencies and call for additional attention to address the needs of struggling groups. Conceptually, an adaptive CPS system is a combination of two parts (Rosen 2017). The first part provides diagnostics driven by an algorithm to dynamically
assess the group's and individual learner's current profile and deficiencies (the current state of knowledge, skills, and affective factors, such as frustration level). The second part is the recommendation engine that informs (i.e., decides or recommends) what the learner should do next, individually or in a group setting. In this way, the collaborative learning system seeks to optimize both individual and group learning experiences based not only on each learner's prior actions but also on the actions of other members of the group. A significant remaining challenge for work on CPS and CSCL concerns the transfer of learning. We can consider transfer not only of what is learned in a collaborative activity but also of how collaboration skills may transfer to new situations. For knowledge transfer, research has established a number of important factors that influence how well a learner will transfer knowledge (e.g., Mestre 2002), and there are reasons to expect the potential for either better or worse outcomes for collaborative learning. One reason to expect a better outcome is that collaboration can induce deeper processing than working alone, as students need to communicate with collaborators about their thinking. However, transfer is often limited when students' knowledge is too closely tied to the context in which it was learned. Context in these types of studies often refers to the content domain or problem type (e.g., physics versus math), but it could extend to differences between working with others versus working alone, and working with versus without computer support. A meta-analysis of transfer from group versus individual learning showed a positive average effect, providing empirical evidence that group learning leads to better transfer (Pai et al. 2015; Sottilare et al. 2018).
The available studies did not allow for separate analysis of the details of the groups, for example, whether the use of technology moderated these effects, and did not specifically address the question of contextual similarity between group learning and the test of transfer. An important consideration for scaling CSCL and CPS across grades is the potential for developmental differences in how students approach and learn from collaboration. Much prior research in these areas includes adolescents and young adults, but recently there has been more interest in testing collaboration in earlier grades. Developmental psychology has found a protracted period of change for collaboration in young children, raising the question of whether approaches to CSCL and CPS can be translated directly from those with older students. Both social and cognitive skills undergo dramatic changes from early childhood through adolescence, so the effects of these changes might combine in complex ways in the context of collaboration. Asterhan et al. (2014) tested the effects of feedback given to ninth-grade students either individually or in dyads with different compositions based on pretest knowledge of how to approach a proportional reasoning task. Students were classified as exhibiting the right approach or one of two wrong approaches, and dyads were then formed in which one type of wrong approach was matched with another student with the same approach, a student with a different wrong approach, or a student with the right approach. Half of the dyads and individuals were provided outcome feedback during their problem-solving. Results showed an overall benefit of feedback, and that students did not generally benefit from working in dyads without feedback. A closer
Y. Rosen et al.
analysis of dyad makeup showed that pairing with a student with the right approach improved learning outcomes of the partner with the wrong approach, but that pairs with the wrong approach, whether same or different, did not learn more than individuals. Furthermore, an interaction between feedback condition and dyad composition showed that significant benefits were limited to students in the right– wrong dyads who received feedback. These results highlight the complexity of understanding how learning occurs in collaboration. It is important to acknowledge that collaborative activities designed for assessments must build in particular constraints and features to assure standardization across students. However, it is unclear how closely these assessments align with students’ peer interactions outside of these structured environments, which may limit how easily students can transfer these skills. Furthermore, because the social components of collaboration can vary widely according to the students involved, real collaborations can fail even if students individually show proficiency in other contexts. As a concrete example, negotiation is an important component of collaboration that is often simplified for practical purposes in assessments, as when an agent with a limited potential dialog is used as the partner or students are required to choose from a set of options rather than respond freely. Similarly, students may treat computer-mediated communications differently from face-to-face, both in terms of their own contributions and in how they interpret others. If collaboration is primarily learned and assessed in computer-supported scenarios, then students may have difficulty translating those skills to communication and negotiation with a peer in an unmediated collaboration. Further research on CPS assessments across settings, domains, and age groups is needed in order to make significant advancements in design and development of these assessments at scale.
References

Adejumo, G., Duimering, P. R., & Zhong, Z. (2008). A balance theory approach to group problem solving. Social Networks, 30(1), 83–99. Andrews, J. J., Kerr, D., Mislevy, R. J., von Davier, A. A., Hao, J., & Liu, L. (2017). Modeling collaborative interaction patterns in a simulation-based task. Journal of Educational Measurement, 54(1), 54–69. Andrews-Todd, J., Jackson, G. T., & Kurzum, C. (2019). Collaborative problem solving assessment in an online mathematics task (research report no. RR-19-24). Educational Testing Service. Andrews-Todd, J., & Kerr, D. (2019). Application of ontologies for assessing collaborative problem solving skills. International Journal of Testing, 19, 172–187. https://doi.org/10.1080/15305058.2019.1573823. Asterhan, C. S. C., Schwartz, B. B., & Cohen-Eliyahu, N. (2014). Outcome feedback during collaborative learning: Contingencies between feedback and dyad composition. Learning and Instruction, 34, 1–10. https://doi.org/10.1016/j.learninstruc.2014.07.003. Avouris, N., Dimitracopoulou, A., & Komis, V. (2003). On analysis of collaborative problem solving: An object-oriented approach. Computers in Human Behavior, 19(2), 147–167.
Development of Scalable Assessment for Collaborative Problem-Solving
Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., Miller-Ricci, M., & Rumble, M. (2012). Defining twenty-first century skills. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and teaching of 21st century skills (pp. 17–66). New York, NY: Springer. Camara, W., O'Connor, R., Mattern, K., & Hanson, M. A. (2015). Beyond academics: A holistic framework for enhancing education and workplace success. ACT Research Report Series, 2015 (4). ACT. Care, E., & Kim, H. (2018). Assessment of twenty-first century skills: The issue of authenticity. In E. Care, P. Griffin, & M. Wilson (Eds.), Assessment and teaching of 21st century skills: Research and applications (pp. 21–39). New York, NY: Springer. Cooke, N. J., Kiekel, P. A., Salas, E., Stout, R., Bowers, C., & Cannon-Bowers, J. (2003). Measuring team knowledge: A window to the cognitive underpinnings of team performance. Group Dynamics: Theory, Research and Practice, 7(3), 179–219. Flor, M., Yoon, S.-Y., Hao, J., Liu, L., & von Davier, A. A. (2016). Automated classification of collaborative problem-solving interactions in simulated science tasks. In Proceedings of the 11th workshop on innovative use of NLP for building educational applications (pp. 31–41). Association for Computational Linguistics. Foltz, P. W., & Martin, M. J. (2008). Automated communication analysis of teams. In E. Salas, G. F. Goodwin, & S. Burke (Eds.), Team effectiveness in complex organizations and systems: Cross-disciplinary perspectives and approaches (pp. 411–431). New York, NY: Routledge. Graesser, A. C., Fiore, S. M., Greiff, S., Andrews-Todd, J., Foltz, P. W., & Hesse, F. W. (2018a). Advancing the science of collaborative problem solving. Psychological Science in the Public Interest, 19, 59–92. Graesser, A. C., Foltz, P. W., Rosen, Y., Shaffer, D. W., Forsyth, C., & Germany, M. (2018b). Challenges of assessing collaborative problem solving. In E. Care, P. Griffin, & M. 
Wilson (Eds.), Assessment and teaching of 21st century skills (pp. 75–91). Springer. https://doi.org/10.1007/978-3-319-65368-6_5. Graesser, A. C., Jeon, M., & Dufty, D. (2008). Agent technologies designed to facilitate interactive knowledge construction. Discourse Processes, 45(4), 298–322. Griffin, P., Care, E., & McGaw, B. (2012a). The changing role of education and schools. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and teaching 21st century skills (pp. 1–15). New York, NY: Springer. Griffin, P., McGaw, B., & Care, E. (Eds.). (2012b). Assessment and teaching of 21st century skills. New York, NY: Springer. Halpin, P. F., von Davier, A. A., Hao, J., & Liu, L. (2017). Measuring student engagement during collaboration. Journal of Educational Measurement, 54(1), 70–84. Hao, J., Chen, L., Flor, M., Liu, L., & von Davier, A. A. (2017a). CPS-rater: Automated sequential annotation for conversations in collaborative problem-solving activities (ETS research report no. RR-17-58). Educational Testing Service. Hao, J., Liu, L., Kyllonen, P., Flor, M., & von Davier, A. A. (2019). Psychometric considerations and a general scoring strategy for assessments of collaborative problem solving (research report no. RR-19-41). Educational Testing Service. Hao, J., Liu, L., von Davier, A. A., & Kyllonen, P. C. (2017b). Initial steps towards a standardized assessment for CPS: Practical challenges and strategies. In A. A. von Davier, M. Zhu, & P. C. Kyllonen (Eds.), Innovative assessment of collaboration. New York, NY: Springer. Hao, J., Liu, L., von Davier, A. A., Lederer, N., Zapata-Rivera, D., Jkal, P., & Bakkenson, M. (2017c). EPCAL: ETS platform for collaborative assessment and learning (ETS research report no. RR-17-49). Educational Testing Service. Hao, J., & Mislevy, R. J. (2019). Characterizing interactive communications in computer-supported collaborative problem-solving tasks: A conditional transition profile approach. Frontiers in Psychology, 10, 1011.
https://doi.org/10.3389/fpsyg.2019.01011. Hao, J., Smith, L., Mislevy, R., von Davier, A. A., & Bauer, M. (2016). Taming log files from game/ simulation-based assessments: Data models and data analysis tools. ETS Research Report Series, 2016(1), 1–17. https://doi.org/10.1002/ets2.12096.
Kim, Y., Almond, R., & Shute, V. (2016). Applying evidence-centered design for the development of game-based assessments in physics playground. International Journal of Testing, 16(2), 142–163. Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: A review of the research. Computers in Human Behavior, 19(3), 335–353. Liu, L., Hao, J., von Davier, A. A., Kyllonen, P., & Zapata-Rivera, D. (2016). A tough nut to crack: Measuring collaborative problem solving. In Y. Rosen, S. Ferrara, & M. Mosharraf (Eds.), Handbook of research on computational tools for real-world skill development. IGI-Global. Looi, C.-K., & Dillenbourg, P. (2013). How will collaborative problem solving be assessed at international scale? (invited panel). In N. Rummel, M. Kapur, M. Nathan, & S. Puntambekar (Eds.), 10th International conference on computer-supported collaborative learning (CSCL) 2013, conference proceedings: Vol. 2. Short papers, panels, posters, Demos & Community Events. ISLS. Ludvigsen, S., Lund, K., & Oshima, J. (this volume). A conceptual stance on CSCL history. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computersupported collaborative learning. Cham: Springer. Martin-Raugh, M. P., Kyllonen, P. C., Hao, J., Bacall, A., Becker, D., Kurzum, C., Yang, Z., Yan, F., & Barnwell, P. (2020). Negotiation as an interpersonal skill: Generalizability of negotiation outcomes and tactics across contexts at the individual and collective levels. Computers in Human Behavior, 104, 105966. McCaslin, M., & Burross, H. L. (2011). Research on individual differences within a sociocultural perspective: Co-regulation and adaptive learning. Teachers College Record, 113(2), 325–349. Medina, R., & Stahl, G. (this volume). Analysis of group practices. In U. Cress, C. Rosé, A. F. Wise, & J. 
Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13–23. Mestre, J. (2002). Transfer of learning: Issues and research agenda (National Science Foundation Report No. NSF03-212). Mislevy, R., Steinberg, L., & Almond, R. (2003). Focus article: On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspective, 1(1), 3–62. O'Neil, H. F., Chung, G., & Brown, R. (1997). Use of networked simulations as a context to measure team competencies. In H. F. O'Neil Jr. (Ed.), Workforce readiness: Competencies and assessment (pp. 411–452). Hillsdale, NJ: Erlbaum. OECD. (2017a). PISA 2015 collaborative problem-solving framework. Paris, France: OECD Publishing. OECD. (2017b). PISA 2015 results: Collaborative problem solving. Paris, France: OECD Publishing. Pai, H., Sears, D. A., & Maeda, Y. (2015). Effects of small-group learning on transfer: A meta-analysis. Educational Psychology Review, 27(1), 79–102. Rosen, Y. (2017). Assessing students in human-to-agent settings to inform collaborative problem solving learning. Journal of Educational Measurement, 54(1), 36–53. Rosen, Y., & Foltz, P. W. (2014). Assessing collaborative problem solving through automated technologies. Research & Practice in Technology Enhanced Learning, 9(3), 389–410. Rosen, Y., Wolf, I., & Stoeffler, K. (2020). Fostering collaborative problem solving skills in science: The Animalia project. Computers in Human Behavior, 104, 105922. https://doi.org/10.1016/j.chb.2019.02.018. Salomon, G. (Ed.). (1997). Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press. Scoular, C., Care, E., & Awwal, N. (2017). An approach to scoring collaboration in online game environments. Electronic Journal of e-Learning, 15(4), 335–342.
Sottilare, R. A., Burke, C. S., Salas, E., Sinatra, A. M., Johnston, J. H., & Gilbert, S. B. (2018). Designing adaptive instruction for teams: A meta-analysis. International Journal of Artificial Intelligence in Education, 28, 225–264. Stahl, G. (2013). Workshop presentation: Assessing the PISA assessment of CSCL. Presented at the International Conference of Computer-Supported Collaborative Learning (CSCL), Madison, WI. Stahl, G., Koschmann, T., & Suthers, D. (2006). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences. Cambridge, UK: Cambridge University Press. Stahl, G., Ludvigsen, S., Law, N., & Cress, U. (2014). CSCL artifacts. International Journal of Computer-Supported Collaborative Learning, 9(3), 237–245. Stoeffler, K., Rosen, Y., Bolsinova, M., & von Davier, A. A. (2020). Gamified performance assessment of collaborative problem solving skills. Computers in Human Behavior, 104, 106036. https://doi.org/10.1016/j.chb.2019.05.033. Stoeffler, K., Rosen, Y., & von Davier, A. A. (2017). Exploring the measurement of collaborative problem solving using a human-agent educational game. In LAK'17: Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 570–571). Association for Computing Machinery. https://doi.org/10.1145/3027385.3029464. von Davier, A. A., & Halpin, P. F. (2013). Collaborative problem solving and the assessment of cognitive skills: Psychometric considerations (research report no. RR-13-41). Educational Testing Service. von Davier, A. A., Hao, J., Liu, L., & Kyllonen, P. (2017). Interdisciplinary research agenda in support of assessment of collaborative problem solving: Lessons learned from developing a collaborative science assessment prototype. Computers in Human Behavior, 76, 631–640. World Economic Forum. (2015). New vision for education: Unlocking the potential of technology. Vancouver, BC: British Columbia Teachers' Federation.
Zhu, M., & Andrews-Todd, J. (2019). Understanding the connections of collaborative problem solving skills in a simulation-based task through network analysis [paper presentation]. International Conference on Computer Supported Collaborative Learning, Lyon, France. Zhu, M., & Zhang, M. (2017). Network analysis of conversation data for engineering professional skills assessment (research report no. RR-17-59). Educational Testing Service.
Further Readings

Care, E., & Griffin, P. (2014). Assessment of collaborative problem solving. Research and Practice in Technology Enhanced Learning, 9, 367–388. This paper outlines the process, within the ATC21S project, of moving from the definition of collaborative problem-solving to its assessment. It describes the development of learning progressions, assessment tools, and scoring methods, tested with 4056 adolescents from six countries: Australia, Costa Rica, USA, Finland, the Netherlands, and Singapore. Care, E., Griffin, P., & Wilson, M. (Eds.). (2017). Assessment and teaching of 21st century skills: Research and application. Springer. Following Griffin and Care (2015), this edited volume includes 15 chapters that showcase research and application related to two 21st-century skills, collaborative problem-solving and learning in digital networks. The research reviewed includes data generated from the ATC21S school trials as well as a case study of preservice teacher education in Finland. Dingler, C., von Davier, A. A., & Hao, J. (2017). Methodological challenges in measuring collaborative problem-solving skills over time. In E. Salas, W. B. Vessey, & L. B. Landon (Eds.), Team dynamics over time (Vol. 18, pp. 51–70). Emerald Publishing. This chapter provides an interdisciplinary survey of recent developments in measurement of teamwork and collaboration in educational contexts. It discusses a range of methods for collecting and
analyzing teamwork data and compares five frameworks for measuring collaborative problem-solving over time (from PISA, ATC21S, ETS, ACT, and von Davier & Halpin, 2013). Preliminary results of the assessments developed from these frameworks show innovative task designs, data-mining techniques, and novel applications of stochastic models. Griffin, P., & Care, E. (Eds.). (2015). Assessment and teaching of 21st century skills. Springer. This edited volume includes 15 chapters on the ATC21S project, providing an overview of the method and framework for assessment, along with considerations for how collaborative tasks are delivered, scored, calibrated, and interpreted. Fieldwork chapters review the project implementation in the participating countries (Australia, Costa Rica, USA, Finland, the Netherlands, and Singapore). The concluding chapters address teaching and policy considerations related to collaborative learning. Song, Y. (2018). Improving primary students' collaborative problem solving competency in project-based science learning with productive failure instructional design in a seamless learning environment. Educational Technology Research and Development, 66, 979–1008. https://doi.org/10.1007/s11423-018-9600-3. This article reports an empirical study using mixed research methods to investigate primary students' collaborative problem-solving competency in project-based learning with versus without productive failure instructional design. Data collection included student products from their coursework, as well as reflections, focus group interviews, and domain pre- and posttests. Results indicated that students in the class with productive failure instructional design showed multiple benefits, suggesting this as a fruitful avenue for further investigation.
Statistical and Stochastic Analysis of Sequence Data

Ming Ming Chiu and Peter Reimann
Abstract Two common CSCL questions regarding analyses of temporal data, such as event sequences, are: (i) What variables are related to event attributes? and (ii) What is the process (or what are the processes) that generated the events? The first question is best answered with statistical methods, the second with stochastic or deterministic process modeling methods. This chapter provides an overview of statistical and stochastic methods of direct relevance to CSCL research. Many of the statistical analyses are integrated into statistical discourse analysis. From the stochastic modeling repertoire, the basic hidden Markov model and recent extensions are introduced, ending with dynamic Bayesian models as the current best integration. Looking into the near future, we identify opportunities for a closer alignment of qualitative with quantitative methods for temporal analysis, afforded by developments such as automation of quantitative methods and advances in computational modeling.

Keywords Statistical discourse analysis · Time analysis · Stochastic models · Process mining
1 Definitions and Scope

In this chapter, we introduce two complementary approaches for the analysis of temporal data, in particular for the analysis of discrete event sequences: statistical and stochastic analysis. The basic distinction is that a stochastic process is what (one assumes) generates the data that statistics analyze.

M. M. Chiu (*) Special Education and Counseling, The Education University of Hong Kong, Tai Po, Hong Kong e-mail: [email protected]
P. Reimann Centre for Research on Learning and Innovation, University of Sydney, Sydney, Australia e-mail: [email protected]
© Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_29

To say that a process is
“stochastic” is to say that at least part of it happens “randomly”; it can be studied using probability theory and/or statistics. The analysis of stochastic processes is the subject of probability theory, which, like statistics, is a field of study in mathematics. In probability theory, we have some given probability distribution and want to determine the probability of some specific event. The following sections introduce regression models as a powerful statistical modeling method and hidden Markov models as an example of a stochastic method. In combination, they can be used for both empirical and theoretical modeling.
1.1 Statistical View of Sequential Processes
Computer-supported collaborative learning (CSCL) researchers often ask five types of questions that involve time: (a) are there common sequences of actions/events (e.g., disagree → explain)? (b) do these sequences have antecedents at various levels? (c) are there pivotal events? (d) do these sequences differ across time periods? and (e) are these sequences related to outcomes? First, are disagreements more likely than other utterances to be followed by explanations in an online forum? These types of questions ask whether one event (e.g., disagree) is more likely than otherwise to be followed by another event (e.g., explain; Chiu 2008). Second, are factors at other levels (e.g., gender of author or recipient; mean writing grade of group) related to the likelihood of a disagree → explain sequence? Such questions help build a comprehensive theoretical model of the different attributes across levels that might influence the likelihood of such sequences (Chiu and Lehmann-Willenbrock 2016). Third, does a pivotal action/process (e.g., summary) change the likelihood of a disagreement → explanation sequence across time (Wise and Chiu 2011)? Such questions seek to identify actions/events that radically change the interaction (pivotal events; Chiu and Lehmann-Willenbrock 2016). Fourth, are disagree → explain sequences more likely at the beginning, middle, or end of a discussion? Such questions ask whether a particular sequence is more likely at different time periods, thereby examining their generality across time (Chiu 2008). Lastly, do groups with more disagree → explain sequences than others show superior group solutions or subsequent individual test scores? Such questions help build a comprehensive theoretical model of the consequences of such sequences for groups/individuals (Chiu 2018). Before proceeding further, we define several terms: sampling unit, session, time period, event, and sequence.
The object under study (group, dyad, or individual) is the sampling unit, which is observable during one or more sessions (occasions). If warranted, we can divide each session into time periods. During a session, we observe one or more learners’ behaviors, which we call events. One or more adjacent events is a sequence. A statistical analysis of data that address the above research questions has three major assumptions (Teddlie and Tashakkori 2009), which we explicate in the
context of online students chatting about designing a paper airplane to stay aloft longer. First, instances of a category (e.g., disagree) with the same value (e.g., disagree vs. not disagree [coded as 1 vs. 0]) are sufficiently similar to be viewed as equivalent. Second, earlier events (disagree in parent message 87) or fixed attributes (e.g., author gender) influence the likelihood of a specific event at a specific time (explanation in message 88). Third, our statistical model fully captures our theoretical model, so that unexplained aspects of the data (residuals) reflect attributes that are not related to our theoretical model.
1.2 The Stochastic View of Sequential Processes
The stochastic perspective of sequential data in CSCL assumes that a recorded sequence—for instance, a sequence of dialogue moves—is produced by a stochastic process. Events are seen as different in kind from processes: Processes produce (generate, bring about) events. While recorded events can be analyzed to identify structure and properties of processes, they are not identical with the latter. The ontological position that processes are different from events is foundational to stochastic (and deterministic) models, but it is not shared by regression models and most other variants of the general linear model, with the exception of structural equation models under a certain interpretation of what latent variables mean (Loehlin 2004). Regression models' variables are ontologically "flat"; the only difference between them is epistemic: the variation in the dependent variable is explained in terms of the covariation with one or more independent variables. Note that multilevel modeling (Cress 2008) does not change the ontological status of the variables included either: The nesting relation in multilevel modeling is different from the generative relation that links structure/process to events. What are the "practical" consequences of this distinction for the learning researcher? For one, stochastic models do not depend on distributional assumptions, such as normality. Second, stochastic modeling makes it possible to simulate the implications of changes to theoretical assumptions; it affords counterfactual ("what if?") reasoning. Third, with this kind of model one can determine the likelihood of an individual event sequence being producible by the process the model describes. Thus, stochastic models are not so much an alternative to statistical models as a way of answering additional questions.
2 History and Development

2.1 Early Statistical Analyses
Early researchers analyzed their data with simple mathematical calculations, namely conditional probabilities. To test hypotheses, researchers developed statistical
Table 1 A comparison of conditional probabilities, sequential analysis, and regressions

Properties                                        | Conditional probability | Sequential analysis | Vector autoregression
Discrete outcomes (explain vs. not)               | ✓                       | ✓                   | ✓
Discrete explanatory variables                    | ✓                       | ✓                   | ✓
Significance test                                 |                         | ✓                   | ✓
Goodness of fit                                   |                         | ✓                   | ✓
Continuous outcomes                               |                         |                     | ✓
Continuous explanatory variables (notably time)   |                         |                     | ✓
Explanatory variables at other levels             |                         |                     | ✓
Nonconsecutive events                             |                         |                     | ✓
Complex models                                    |                         |                     | ✓
Small sample size                                 | ✓                       |                     | ✓
methods, such as sequential analysis and regressions (see Table 1). We explicate these methods using examples from online students chatting about paper airplane design.
2.1.1 Conditional Probability
The probability of an event (e.g., explain) given that another event (disagree) has occurred is its conditional probability (CP; e.g., Farran and Son-Yarbrough 2001). To compute it, we divide the overall probability (OP) of the disagree → explain sequence (e.g., 13%) by the overall probability of disagreeing (e.g., 39%), yielding 33% (= 13%/39% = OP[disagree → explain]/OP[disagree]) via Bayes' theorem. CPs apply to sequences of any length. However, CP has no significance tests or goodness-of-fit measures, so researchers must subjectively decide whether a CP supports or rejects their hypotheses.
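As an illustration, the CP computation above can be sketched in a few lines of code; the event codes and the coded chat sequence below are invented for demonstration, not drawn from an actual CSCL corpus.

```python
# Estimate P(explain | disagree) from a coded event sequence as
# OP(disagree -> explain) / OP(disagree).

def conditional_probability(events, first, second):
    """P(next event is `second`, given that the current event is `first`)."""
    pairs = list(zip(events, events[1:]))              # adjacent event pairs
    op_first = sum(e == first for e in events[:-1]) / len(pairs)
    op_pair = sum(p == (first, second) for p in pairs) / len(pairs)
    return op_pair / op_first

# Invented coded chat about paper-airplane design
events = ["propose", "disagree", "explain", "agree",
          "disagree", "explain", "propose", "disagree", "agree"]
cp = conditional_probability(events, "disagree", "explain")
print(round(cp, 2))  # → 0.67
```

As the text notes, nothing here tells us whether 0.67 differs significantly from chance; that requires sequential analysis or regression.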
2.1.2 Sequential Analysis
To test hypotheses, researchers developed statistical methods, such as sequential analysis (SA). Building on CP, SA supports hypothesis testing. SA models events across time as a discrete process in which the current event (state) determines the probability of the next event (Gottman and Roy 1990). For example, a group in a state of disagreement without explanation is more likely than otherwise to move to a state of disagreement with explanation (Chiu and Lehmann-Willenbrock 2016). SA tests for significant differences (z-score) and evaluates the goodness-of-fit of each explanatory model (via likelihood ratio chi-squared tests, Bakeman and Gottman 1986). Like CP, SA only applies to discrete outcomes and explanatory variables at
the same level (message), requires consecutive events, and can require enormous sample sizes to test somewhat complex explanatory models.
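As a sketch of the core computation, the code below derives a z-score (adjusted residual) for a single lag-1 transition in the spirit of Bakeman and Gottman (1986): it compares the observed count of disagree → explain pairs with the count expected under independence. The coded sequence is invented, and real SA software additionally handles full transition tables, autocorrelation, and model fit.

```python
from math import sqrt

def lag1_zscore(events, antecedent, target):
    """Adjusted residual for the antecedent -> target transition at lag 1."""
    pairs = list(zip(events, events[1:]))
    n = len(pairs)
    observed = sum(p == (antecedent, target) for p in pairs)
    row = sum(a == antecedent for a, _ in pairs)   # pairs starting with antecedent
    col = sum(b == target for _, b in pairs)       # pairs ending with target
    expected = row * col / n                       # count expected by chance
    variance = expected * (1 - row / n) * (1 - col / n)
    return (observed - expected) / sqrt(variance)

events = ["disagree", "explain", "agree", "disagree", "explain",
          "propose", "disagree", "explain", "agree", "propose"]
z = lag1_zscore(events, "disagree", "explain")
```

Conventionally, |z| > 1.96 would indicate a transition occurring more (or less) often than chance at the .05 level.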
2.1.3 Vector Auto-Regression
Addressing these three limitations of CP and SA, vector auto-regressions (VAR; Kennedy 2008) model continuous variables, explanatory variables at different levels, nonconsecutive events, and complex phenomena with small samples. For a continuous outcome variable, an ordinary least squares regression fits a line to the data (or more generally a curve), which enables analyses of outcomes as a function of time (traditional time-series data; Kennedy 2008). For a dichotomous outcome (explanation vs. no explanation), a logit, probit, or gompit regression fits an S-curve to the data (see Fig. 1 for an example with a continuous explanatory variable, age; Cohen et al. 2003). This example also shows how regressions can test explanatory variables at any level (message, person, group, etc.; Kennedy 2008). Furthermore, regressions can model nonconsecutive relations, such as whether a student who disagreed two messages ago (grandparent message: disagree [–2]) raises the likelihood of an explanation in the current message (explain), namely, disagree (–2) →→ explain (Chiu and Lehmann-Willenbrock 2016). In general, we can test whether an attribute of an earlier event is related to an attribute of the current event (Kennedy 2008). Also, a regression can create simpler models of complex phenomena via multidimensional coding. For example, to model sequences with five events from four dimensions with two choices per dimension (e.g., female [vs. male], student
Fig. 1 Logit regression fitting an S-curve to data on age and explanation (x-axis: Age, 0–60; y-axis: Probability of Explaining, 0.00–1.00)
[vs. teacher], disagree [vs. agree], and explain [vs. not]), SA requires a sample size of 5,242,880 (= 5 × [2^4]^5; = 5 × [event combinations]^(sequence length); Gottman and Roy 1990). In contrast, a regression only requires 20 explanatory variables (20 = 4 dimensions × sequence length of 5) and a much smaller sample (Cohen et al. 2003). Applying Greene's (1997) sample size formula for regressions (N > 8 × [1 – R²]/R² + M – 1; with expected explained variance R² = 0.1 and number of explanatory variables M = 20), testing this explanatory model requires a sample size of only 91 (= 8 × (1 – 0.10)/0.10 + 20 – 1). Hence, a multidimensional coding scheme can capture the complexity of the model, reduce the number of needed variables, and reduce the minimum sample size needed for a regression (Chiu and Lehmann-Willenbrock 2016).
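The arithmetic behind the two sample-size figures is easy to verify; the short sketch below simply reproduces the worked numbers from this paragraph as a check.

```python
# Sequential analysis: sequences of 5 events, each event a combination of
# 4 binary dimensions (2**4 = 16 distinct event combinations).
sa_n = 5 * (2 ** 4) ** 5
print(sa_n)  # → 5242880

# Regression, via Greene's (1997) rule N > 8 * (1 - R2) / R2 + M - 1,
# with expected explained variance R2 = 0.10 and M = 20 predictors
# (4 dimensions x sequence length of 5).
r2 = 0.10
m = 4 * 5
reg_n = 8 * (1 - r2) / r2 + m - 1   # minimum N of about 91
```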
2.2 Early Applications of Stochastic Analysis
An important method for modeling temporal data probabilistically is the hidden Markov model (HMM). It has been applied in CSCL research, for example, for analyzing discourse sequences. This formalism is an extension of the (discrete) Markov process model, which we introduce first.
2.2.1 Markov Models
The underlying assumption of probabilistic models is that the event sequence can be characterized as a parametric random process and that the parameters of the stochastic process (the structure, not the event sequence) can be determined (estimated) in a precise, well-defined manner (Rabiner 1989, p. 255). A Markov process model describes a system made out of N distinct states, S1, S2, . . ., SN. At equally spaced discrete times (t = 1, 2, . . .), the system undergoes a change of state, with the state at time t denoted as qt. A full description of the system would require the specification of the current state (time t) as well as all the predecessor states. For the important special case of a discrete first-order Markov chain, it is assumed that this description can be truncated to just the current and the predecessor state, i.e.,

P[qt = Sj | qt–1 = Si, qt–2 = Sk, . . .] = P[qt = Sj | qt–1 = Si].

We further assume that the transitions between states are independent of time, that is, that the system itself doesn't change over time. This leads to a set of state transition probabilities aij of the form

aij = P[qt = Sj | qt–1 = Si], 1 ≤ i, j ≤ N
and with the property that the sum of all transition probabilities across the states Sj is equal to 1:

∑_{j=1}^{N} aij = 1
To provide an example, let's assume we want to describe a (hypothetical) group with three states: (1) forming, (2) storming, or (3) norming (Tuckman 1965). Recording observations of group communication as they unfold, we can describe the system in terms of transition probabilities between these three states:

A = {aij} =
  0.4  0.3  0.3
  0.2  0.6  0.2
  0.1  0.1  0.8

The value in the middle, 0.6, for instance, means that the probability that if the group is in the storming phase at time ti it will be in that phase at time ti+1 as well is 0.6; and the 0.2 to its left refers to the chance of changing from storming back to forming. One question this model can be used to answer is: What is the probability of the group over the next days being "forming–storming–forming–norming. . .," or any other specific sequence? Another question that can be answered from the model is: Given that the model is in a known state, what is the probability that it will stay there for exactly t number of interactions? Note that the transition matrix is also the place where theoretical assumptions can be varied, to the extent that they can be expressed as (transition) probabilities. For instance, to express that it should not be possible to move from forming to norming directly one can set the corresponding transition probability to a very small number.
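Both questions can be answered mechanically from the transition matrix. The sketch below is illustrative code (not from the chapter) encoding the hypothetical forming/storming/norming matrix, with states indexed 0, 1, 2 in that order.

```python
# Transition matrix A: rows give the current state, columns the next state.
A = [[0.4, 0.3, 0.3],   # forming  -> forming, storming, norming
     [0.2, 0.6, 0.2],   # storming -> ...
     [0.1, 0.1, 0.8]]   # norming  -> ...

def sequence_probability(states, A):
    """P(this state sequence | the chain starts in states[0])."""
    p = 1.0
    for i, j in zip(states, states[1:]):
        p *= A[i][j]
    return p

def stay_probability(state, d, A):
    """P(staying in `state` for exactly d steps, then leaving)."""
    a = A[state][state]
    return a ** (d - 1) * (1 - a)

p_seq = sequence_probability([0, 1, 0, 2], A)   # forming-storming-forming-norming
p_stay = stay_probability(1, 3, A)              # exactly 3 steps of storming
```

Here p_seq multiplies 0.3 × 0.2 × 0.3, and p_stay is the geometric-duration probability 0.6² × 0.4 implied by the self-transition 0.6.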
2.2.2 Hidden Markov Models
With HMMs, we can account for the relation between states and observed events by making the observation a probabilistic function of the state. The resulting hidden Markov model is ". . . a doubly embedded stochastic process with an underlying stochastic process that is not observable (it is hidden), but can only be observed through another set of stochastic processes that produce the sequence of observations" (Rabiner 1989, p. 259). To describe an HMM, we need to specify (1) the number of states in the model; (2) the number of distinct observation symbols per state, i.e., the observables; (3) the state transition probability distribution; (4) the observation probability distribution for each state; and (5) the initial state distribution (a vector with the probabilities of the system initially being in state j). Based on this specification, an HMM can be used in three main ways: (a) for generating ("predicting") observations; (b) for modeling how a given observation sequence
M. M. Chiu and P. Reimann
was generated by an appropriate HMM; and (c) for parsing a given observation sequence and thereby deciding whether the observed sequence is covered by (or explained by) the model. The best-known early example of HMM use in CSCL is likely Soller (2004). Here, chat contributions from pairs of learners involved in a problem-solving task were first coded into categories, and HMM models were then trained on the chat sequences (observations) for successful and unsuccessful pairs, respectively. The method proved useful for distinguishing sequences that led to successful knowledge sharing from those that did not. In general, training an HMM on known observations requires specification of the initial state probability distribution as well as the state transition matrix and the observation probability distribution for each state. Based on these initial specifications, programs such as seqHMM (Helske and Helske 2017) can calculate values for the HMM parameters that best fit the observed data. Boyer et al. (2009) used HMMs in a similar fashion. Here the pairs were formed by a student and a tutor, and the dialogue acts were coded in terms of categories relevant for tutor-student discourse. States were interpreted as "dialogue modes." As in the Soller study, the number of states was determined by balancing the number of states against the fit to training data.
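Use (c), scoring a coded sequence against a trained model as in Soller's comparison of successful and unsuccessful pairs, rests on the forward algorithm, which computes P(observation sequence | model) from the five elements listed above. A minimal sketch with invented states, symbols, and probabilities:

```python
import numpy as np

# Hypothetical 2-state HMM over 3 coded chat-act symbols.
N, M = 2, 3                      # (1) states and (2) observation symbols
A  = np.array([[0.7, 0.3],       # (3) state transition probabilities
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],  # (4) observation probabilities per state
               [0.1, 0.3, 0.6]])
pi = np.array([0.8, 0.2])        # (5) initial state distribution

def forward_likelihood(obs):
    """P(observation sequence | model) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]           # joint prob. of first symbol and state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return alpha.sum()

# A higher likelihood under the "successful pairs" model than under the
# "unsuccessful pairs" model would classify a new sequence as successful.
print(forward_likelihood([0, 1, 2]))  # ~0.0446
```

Training (fitting A, B, and pi to observed sequences) is what packages such as seqHMM automate, typically via the Baum-Welch algorithm, which this sketch omits.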
3 State of the Art

3.1 Recent Developments in Statistical Analysis
CP, SA, and simple regressions all assume: (a) single sequences, (b) identical task difficulties, (c) no group/individual differences, (d) no time periods, (e) a single outcome, (f) observed events only, (g) direct effects only, (h) no measurement error, and (i) an immediate outcome (see Table 2). Specifically, researchers addressed them via:

Table 2 Analytic issues and suitable statistical strategies

Analytic issue | Statistical strategy
Parallel chats or trees | Store parent message
Task difficulty | Item response theory (IRT)
Group/individual differences | Multilevel analysis (ML; hierarchical linear models, HLM)
Pivotal event | Breakpoint analysis
Time periods | Breakpoint analysis and multilevel analysis
Multiple target events | Multilevel structural equation modeling (ML-SEM)
Latent processes underlying events; indirect, mediation effects; measurement error | Multilevel structural equation modeling (ML-SEM)
Later group/individual outcomes | Add outcome and its interaction as explanatory variables and multilevel moderation via random effects
(a) stored parent message, (b) item response theory, (c) multilevel analysis, (d) breakpoint analysis, (e, f, g, h) multilevel structural equation model, and (i) multilevel moderation via random effects.
3.1.1 Parallel Chats and Trees
Although much talk occurs in sequence with one speaker after another, sometimes learners separate into parallel conversations or online participants engage with messages according to their thread structure (often trees) rather than in temporal order (Chiu and Lehmann-Willenbrock 2016). To analyze such nonsequential data, researchers can identify and store the previous message of each message in a variable parent message; specifically, a computer program can create this variable by traversing parallel chats/conversations or trees of messages/turns of talk (this program is available in Chen and Chiu 2008).
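Such a traversal can be sketched in a few lines; the message format and field names below are hypothetical, not those of Chen and Chiu's program:

```python
# Hypothetical flat export of threaded messages: each record carries its own
# id and the id of the message it replies to (None for thread starters).
messages = [
    {"id": 1, "reply_to": None, "text": "Proposal"},
    {"id": 2, "reply_to": 1,    "text": "Disagree"},
    {"id": 3, "reply_to": 1,    "text": "Question"},
    {"id": 4, "reply_to": 2,    "text": "Explain"},
]

by_id = {m["id"]: m for m in messages}

# Store each message's parent message so that sequential analyses follow the
# tree structure rather than raw temporal order.
for m in messages:
    parent = by_id.get(m["reply_to"])
    m["parent_message"] = parent["text"] if parent else None

print([m["parent_message"] for m in messages])
# [None, 'Proposal', 'Proposal', 'Disagree']
```

With the parent message stored as a variable, lagged predictors (e.g., "was the parent a disagreement?") can be computed per message regardless of posting order.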
3.1.2 Task Difficulty
Tasks differ in difficulty, so ignoring these differences can mask a student's learning progress (or difficulties). Item response theory simultaneously models the difficulty of each task and each student's overall competence (along with guessing success on multiple-choice questions; Embretson and Reise 2013). An IRT model that incorporates a time parameter enables modeling of learning, i.e., changes across time (additive factors model; Cen et al. 2006).
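The core of such a model can be sketched as follows; the parameter values and the simple additive practice term are illustrative only, not estimates from any dataset:

```python
import math

def p_correct(theta, b, gamma=0.0, opportunities=0):
    """Rasch-style success probability for a student of ability theta on a
    task of difficulty b; the gamma * opportunities term is the additive-
    factors extension that models learning across practice opportunities."""
    logit = theta - b + gamma * opportunities
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical student (theta = 0.5) on a hard task (b = 1.5):
print(round(p_correct(0.5, 1.5), 3))                              # base chance
print(round(p_correct(0.5, 1.5, gamma=0.3, opportunities=4), 3))  # after practice
```

Estimating theta, b, and gamma jointly from response data is what IRT software does; the point here is only that difficulty and competence enter the same equation, so one is not confounded with the other.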
3.1.3 Group/Individual Differences
Groups and individuals likely differ. Specifically, messages written by the same student likely resemble one another more than those by different students. Likewise, messages in the same thread/topic likely resemble one another more than those in different threads/topics. CP and SA cannot model these differences, and a regression would negatively bias the standard errors. Hence, we apply a multilevel analysis to yield unbiased results (Goldstein 2011; also known as hierarchical linear modeling, Bryk and Raudenbush 1992). In general, such nested data (students within groups within classrooms within schools, etc.) require multilevel analysis for accurate results (Goldstein 2011).
3.1.4 Differences Across Time
An outcome (e.g., explanation) might be more likely at the beginning, the middle, the end, or in a specific time interval (Chiu and Lehmann-Willenbrock 2016). Furthermore, the relations among explanatory variables and outcomes might differ
across time (Chiu and Lehmann-Willenbrock 2016). Although humans can decide how to divide a stream of data into time periods, past studies show that such subjective methods are unreliable (e.g., Wolery et al. 2010). In contrast, breakpoint analysis objectively identifies pivotal events that substantially increase (or decrease) the likelihood of an outcome (e.g., explanation; Chiu and Lehmann-Willenbrock 2016). Researchers can then test explanatory models to characterize when these pivotal events occur. For example, discussion summaries were often breakpoints that sharply elevated the quality of online discussions, and students assigned the roles of synthesizer or wrapper were far more likely than others to create discussion summaries (Wise and Chiu 2011). These pivotal events divide the data series into distinct time periods of significantly higher versus lower likelihoods of the outcome (e.g., explanations are much more likely in one time period than another; Chiu and Lehmann-Willenbrock 2016). These time periods provide an additional level to the above multilevel analysis (Chiu and Lehmann-Willenbrock 2016). Researchers can then test whether relations among variables are stronger in some time periods than in others. For example, when groups of high school students worked on an algebra problem, a correct evaluation of a groupmate's idea raised the likelihood of a correct contribution in most time periods, but not all of them; the effect ranged from -0.3% to +9% across time periods (Chiu 2008).
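The core idea of breakpoint detection can be sketched as a likelihood comparison: for each candidate split point, fit a separate outcome probability to each segment and keep the split with the highest combined log-likelihood. The binary outcome series below is invented, and real breakpoint analyses add significance tests and multiple-breakpoint search that this toy omits:

```python
import math

def bernoulli_ll(xs):
    """Log-likelihood of a 0/1 series under its best single success rate."""
    n, k = len(xs), sum(xs)
    p = k / n
    if p in (0.0, 1.0):
        return 0.0  # degenerate segment: likelihood is exactly 1
    return k * math.log(p) + (n - k) * math.log(1 - p)

def best_breakpoint(xs):
    """Split point maximizing the combined log-likelihood of both segments."""
    return max(range(1, len(xs)),
               key=lambda t: bernoulli_ll(xs[:t]) + bernoulli_ll(xs[t:]))

# Hypothetical series: explanations rare early, frequent from message 8 on.
outcome = [0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1]
print(best_breakpoint(outcome))  # 8: start of the high-explanation period
```

The returned index splits the series into the two time periods that would then serve as an additional level in the multilevel analysis.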
3.1.5 Multiple Target Events, Latent Processes, Indirect Effects, and Measurement Error
Often, researchers are interested in how processes affect multiple types of targeted events (e.g., explanations and correct, new ideas [micro-creativity]; Chiu and Lehmann-Willenbrock 2016). As multiple types of target events might be related to one another, standard analyses designed for a single dependent variable can yield biased standard errors (Kennedy 2008). Hence, researchers have developed methods such as multilevel structural equation models (ML-SEM; Joreskog and Sorbom 2015) that simultaneously test multiple dependent variables; in the above algebra group problem-solving example, a justification might yield both another justification and micro-creativity (justification [-1] → justification; justification [-1] → micro-creativity). ML-SEMs also properly test indirect mediation effects [X → M → Y] and combine multiple measures of a single construct into a single index that increases precision, such as tests to measure intelligence (Muthén and Muthén 2018); continuing with the algebra group problem-solving example, a correct evaluation was often followed by another correct evaluation, which in turn was followed by micro-creativity (correct evaluation [-2] → correct evaluation [-1] → micro-creativity).
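The indirect (mediated) effect itself can be sketched with two ordinary regressions on simulated data; full ML-SEM software such as Mplus or LISREL additionally handles nesting and measurement error, which this toy omits, and all coefficients below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated mediation X -> M -> Y (e.g., correct evaluation [-2] ->
# correct evaluation [-1] -> micro-creativity).
n = 500
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # a-path (true a = 0.6)
Y = 0.7 * M + 0.2 * X + rng.normal(scale=0.5, size=n)  # b-path plus direct effect

a = np.polyfit(X, M, 1)[0]                             # estimated a-path slope
b, c_direct = np.linalg.lstsq(np.column_stack([M, X]), Y, rcond=None)[0]

indirect = a * b  # indirect (mediated) effect; true value 0.6 * 0.7 = 0.42
print(round(indirect, 2), round(c_direct, 2))
```

The product a * b is the classic index of mediation; SEM software estimates both paths simultaneously and provides a proper standard error for the product.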
3.1.6 Later Group/Individual Outcomes
In addition to the immediate consequences of processes on target events, researchers are often interested in whether such sequences have longer-term effects, such as the quality of a group's final solution to the current problem or later individual test scores (Chiu 2018). The traditional approach of aggregating event-level data to the individual or group level (or any higher level) discards substantial information and yields inaccurate results (Goldstein 2011). Instead, researchers can use an event-level analysis to utilize all the available data (Chiu 2018). Consider groups of students designing plans to reduce climate change (e.g., reduce cafeteria beef dishes to reduce cow methane). A researcher wants to know if a group that has more disagree → explain sequences than others creates a superior group plan. Chiu (2018) showed how to test this hypothesis via a regression with the dependent variable explain and the following explanatory variables: disagree [-1], group plan, and the interaction term disagree [-1] × group plan. This message-level specification asks, "In groups with higher plan scores, is a disagree message more likely to be followed by an explain message?" The message sequences occur before the group plan, and time cannot flow backward, so the group plan cannot influence the message sequences. Likewise, a researcher can also test whether individuals who participate in more disagree → explain sequences than others have higher subsequent science test scores by adding the following explanatory variables to the above regression specification: test score and disagree [-1] × test score. Hence, this elaborated specification simultaneously tests whether disagree → explain sequences are linked to group plans or individual science test scores. More generally, a regression does not mathematically dictate the direction of causality, so traditional outcomes can serve as independent variables (Chiu 2018).
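Constructing the lagged and interaction terms for such a specification can be sketched as follows; the coded values and plan score are invented, and the resulting design matrix would feed into a (multilevel) regression rather than the plain print shown here:

```python
import numpy as np

# Hypothetical coded messages for one group: 1 = disagree (else 0), and
# whether each message is an explanation; group_plan is the group's plan score.
disagree = np.array([0, 1, 0, 1, 1, 0, 0, 1])
explain  = np.array([0, 0, 1, 0, 1, 1, 0, 0])
group_plan = 3.5  # identical for every message in this group

# Lag the explanatory variable: disagree [-1] predicts the *next* message.
disagree_lag1 = np.roll(disagree, 1)
disagree_lag1[0] = 0  # the first message has no predecessor

# Interaction disagree [-1] x group plan: a positive coefficient would mean
# disagree -> explain sequences are more common in groups with better plans.
interaction = disagree_lag1 * group_plan

design = np.column_stack([disagree_lag1,
                          np.full(len(explain), group_plan),
                          interaction])
print(design.shape)  # (8, 3): one row per message, ready for regression
```

Because group_plan is constant within a group, only the interaction column varies across messages; this is exactly what lets the group-level outcome moderate a message-level sequence effect.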
For nested data (e.g., messages within time periods, see above), modeling such interactions requires multilevel moderation via random effects (Chiu and Lehmann-Willenbrock 2016). In short, statistical methods enable researchers to test hypotheses regarding sequences of events, their antecedents at any level, parallel chats and trees, task difficulty differences, group/individual differences, pivotal events, time periods, multiple target events, latent processes, indirect links, measurement error, and later group/individual outcomes. See Chiu and Lehmann-Willenbrock's (2016) statistical discourse analysis (SDA) regarding the integration of most of the above analyses, along with statistical methods for addressing related issues (e.g., missing data, inter-rater reliability, false positives).
3.2 Recent Developments in Stochastic Modeling

3.2.1 Extensions of Hidden Markov Models

In CSCL research, HMMs have been mainly applied for practical purposes: to provide a compact representation of long interaction sequences, one that is useful for making predictions. Learning is reflected not only in talk and conversation, but also in eye gaze, movement, and gestures. HMMs can be used on such kinds of data as well. This has been made easier as recent years have seen a vast expansion of the use of HMMs, enabled by the introduction of software packages that remove constraints on data modeling. Focusing on what is available in R, HMM packages have been developed that can learn from multiple-channel observation sequences (Visser and Speekenbrink 2010), relevant for instance for cases where eye-tracking is combined with observations of interaction and verbal data (Schneider et al. 2018). In the same package, covariates can be added for initial and transition probabilities. This allows us, for instance, to model cases in which the participants are provided with time-dependent additional information, such as observations of a peer tutor (Walker et al. 2014). Another important extension concerns multiple observation sequences: the hidden states are seen as representing a distribution of states (O'Connell and Højsgaard 2011; Turner and Liu 2014). There are also extensions for modeling continuous-time processes (Jackson 2011), relevant in CSCL for research that includes, for instance, physiological measurements (Mandryk and Inkpen 2004). One of the most comprehensive HMM packages for R currently available, reflecting a range of these extensions, is seqHMM (Helske and Helske 2017).
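The multiple-channel idea can be sketched independently of any particular R package: with conditionally independent channels, the per-step emission probabilities are simply multiplied inside the forward recursion. All states, symbols, and probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical 2-state HMM with two conditionally independent observation
# channels (e.g., coded talk and gaze targets, each with 2 symbols).
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
pi = np.array([0.5, 0.5])
B_talk = np.array([[0.6, 0.4],   # P(talk code | hidden state)
                   [0.2, 0.8]])
B_gaze = np.array([[0.7, 0.3],   # P(gaze target | hidden state)
                   [0.3, 0.7]])

def forward_multichannel(talk, gaze):
    """Forward likelihood; per step, the channels' emissions are multiplied."""
    emit = lambda t, g: B_talk[:, t] * B_gaze[:, g]
    alpha = pi * emit(talk[0], gaze[0])
    for t, g in zip(talk[1:], gaze[1:]):
        alpha = (alpha @ A) * emit(t, g)
    return alpha.sum()

print(forward_multichannel([0, 1, 1], [0, 0, 1]))
```

Packages such as depmixS4 and seqHMM implement this idea (plus parameter estimation and covariates) in full generality.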
In CSCL research, HMMs have been mainly applied for practical purposes: to provide a compact representation of long interaction sequences, one that is useful for making predictions. Learning is reflected not only in talk and conversation, but also in eye gaze, movement, and gestures. HMMs can be used on such kinds of data as well. This has been made easier as recent years have seen a vast expansion of the use of HMMs, enabled by the introduction of software packages that remove constraints on data modeling. Focusing on what is available in R, HMM packages have been developed that can learn from multiple channel observation sequences (Visser and Speekenbrink 2010), relevant for instance for cases where eye-tracking is combined with observations of interaction and verbal data (Schneider et al. 2018). In the same package, covariates can be added for initial and transition probabilities. This allows us, for instance, to model cases in which the participants are provided with time-dependent additional information, such as observations on a peer tutor (Walker et al. 2014). Another important extension concerns multiple observation sequences: the hidden states are seen as representing a distribution of states (O’Connell and Højsgaard 2011; Turner and Liu 2014). There are also extensions for modeling continuous time processes (Jackson 2011), relevant in CSCL for research that includes, for instance, physiological measurements (Mandryk and Inkpen 2004). One of the most comprehensive HMM packages for R currently available that reflects a range of these extensions is seqHMM (Helske and Helske 2017).
3.2.2 Dynamic Bayesian Networks
An important development in stochastic modeling of temporal processes is dynamic Bayesian networks (DBNs). They provide a perspective for probabilistic reasoning over time that unifies (hidden) Markov modeling in all its variants with the Bayesian approach to modeling diagnostic reasoning, decision-making, and measurement. The ontology of a DBN is such that the world is a series of snapshots, or time slices, each of which contains a number of (unobservable) state variables and a number of observable variables that are indicators for states. For the simplest case of a DBN, we assume that the variables and their links are exactly replicated from slice to slice and that the DBN itself represents a first-order Markov process: each variable has "parents" (is linked to other variables) only in its own slice and/or the immediately preceding slice (Russell and Norvig 2016, p. 590). To provide an example, the study on math learning by peer tutoring described by Bergner et al. (2017) uses an input-output HMM to model the relation between tutor input, the tutee's capability (the hidden state), and the correctness of observed tutee actions (see Fig. 2). This model makes it explicit that a capability increase on the side of
Fig. 2 DBN model of learning in a tutorial dialogue after Bergner et al. (2017)
the tutee depends on the tutor as well as the tutee. Linking the variables of a system makes it computationally much more tractable than when only the set of variables is provided. Because a DBN can express more structure than an HMM, it can be computed more efficiently and it allows expression of a wider set of theoretical assumptions. As regards software support, the bnlearn package in R, for instance, can be used to construct DBNs in a theory-driven manner or to learn them from data (Nagarajan et al. 2013).
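The flavor of such a model can be sketched with a hand-coded two-slice propagation; the conditional probability tables below are invented, and observed correctness is not conditioned on (pure prediction), unlike a full DBN inference:

```python
# Two-slice sketch of an input-output HMM in the spirit of Bergner et al.
# (2017); all probabilities are made up for illustration.
p_capable_next = {  # P(capable at t | capable at t-1, tutor gave a hint)
    (False, False): 0.10,
    (False, True):  0.40,
    (True,  False): 0.85,
    (True,  True):  0.95,
}
p_correct_given = {False: 0.2, True: 0.8}  # P(correct tutee action | capability)

def step(p_capable, hint):
    """Propagate the belief about the tutee's hidden capability one slice."""
    return (p_capable * p_capable_next[(True, hint)]
            + (1 - p_capable) * p_capable_next[(False, hint)])

belief = 0.2                      # prior: tutee probably not yet capable
for hint in [True, True, False]:  # hypothetical tutor inputs over three slices
    belief = step(belief, hint)

p_correct = (belief * p_correct_given[True]
             + (1 - belief) * p_correct_given[False])
print(round(belief, 3), round(p_correct, 3))  # 0.61 0.566
```

The capability-transition table depends on the tutor's input as well as the previous capability, which is exactly the "depends on the tutor as well as the tutee" structure made explicit above.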
4 The Future

On the horizon are dynamic social network analysis, massive data, automatic analyses, and qualitative/quantitative analysis cycles. While social network analyses can examine attributes of fixed networks of learners or ideas (epistemic network analysis), dynamic social network analysis offers the promise of examining how networks of learners, ideas, or both change over time (Oshima et al. 2018; Sarkar and Moore 2006; Shaffer et al. 2009). Massive data (colloquially, big data) encompass sharply greater volume, complexity, and velocity (e.g., from massive, open, online courses, or MOOCs; National Research Council 2013). The increasing embedding of computer chips in objects around us (colloquially, the internet of things) and their embrace by educators to aid student learning is creating voluminous amounts of electronic data (Picciano 2012). Greater volumes of data largely enhance statistical analyses and enable greater precision in the results (Cohen et al. 2003). Although some data are in the familiar form of numbers, much of it is text, images, or videos (Gandomi and Haider 2015). These data require substantial effort before conversion into numbers for statistical analyses (e.g., 1 for presence of an image vs. 0 for its absence), so collaborations among experts in computational linguistics, image processing, acoustics, and statistics will likely become necessary (Bello-Orgaz et al. 2016). Also, high-velocity data collection entails repeated dynamic analyses to yield updated results (each day, hour, minute, etc.; Zikopoulos and Eaton 2011).
The growing size, complexity, and velocity of massive data, and the accompanying demand for comprehensive, nuanced, updated analyses of them, exceed human capacity, so they motivate automated computer programs to take over increasingly greater statistical responsibilities (Assunção et al. 2015). After computer programs informed by computational linguistics, image processing, and acoustics create the databases (Bello-Orgaz et al. 2016), artificial intelligence expert systems can select and run the statistical analyses (repeatedly for high-velocity incoming data), interpret the results, and produce reports for humans (Korb and Nicholson 2010). The automation of statistical analyses also frees up human time for detailed qualitative analyses, so that both analyses mutually inform each other's subsequent analyses, provide mutually supportive evidence, and complement each other's strengths and weaknesses (Teddlie and Tashakkori 2009). For example, an initial, qualitative case study can select and scrutinize important phenomena in context to develop theory by identifying constructs, operationalizing them, recognizing patterns, and specifying hypotheses (possibly aided by data mining; Feldman and Sanger 2007). Next, a statistical analysis tests these hypotheses, identifies pivotal breakpoints, and pinpoints instances in the data that fit the theory extremely well or extremely poorly (Chiu 2013). The hypothesis-testing results, breakpoints, well-fitting instances, and poorly fitting instances target specific data for another round of qualitative analysis (Chiu 2013). This qualitative analysis can refine the hypotheses, develop new ones for breakpoints or poorly fitting instances for another round of statistical analyses, and so on (Teddlie and Tashakkori 2009). Researchers can flexibly start or stop at any point in the above multistep qualitative/quantitative cycle (Chiu 2013).
Also interesting for bringing qualitative and quantitative approaches into closer contact are deterministic process models. Deterministic modeling applies when the structure of the process is known and one is interested in the behavior of the process under certain conditions. Deterministic models are particularly relevant for CSCL research when processes are designed, such as for the study of collaboration and argumentation scripts (Weinberger and Fischer 2006). A deterministic process differs from a fixed, invariant sequence of steps (activities, events): it is a finite set of states and activities (or actions), with known start and end states, that can yield an infinite number of different event sequences (e.g., chess). Computer science and operations research have examined deterministic process models expressed in forms such as finite-state machines and Petri nets (Reimann 2009). As these models can represent choice and parallelism, they can help answer questions such as: Is an observed sequence of events alignable with a particular designed process? Given a set of sequences of events, can a single deterministic model describe them? Deterministic models are also relevant in situations where the learners involved have knowledge about the process as a whole; for instance, participants in a formal discussion know the "moves" allowed as well as the end state (Schwarz and Baker 2016). We can therefore assume that their behavior in the discussion will to some extent be guided by this knowledge, by a sense of well-formedness. In human affairs, such situations abound, from social conduct in general to work processes. Although
obviously relevant for CSCL research, applications have been rare so far (Bannert et al. 2014; Reimann et al. 2009). The same can be said about the type of deterministic models that represent knowledge and beliefs of individual agents and simulate the interaction with other agents and resources, such as agent-based models. While of high relevance to phenomena studied in CSCL, applications are very rare. To appreciate the role they could play, Abrahamson et al.’s model of stratification of learning zones in the collaborative (math) classroom provides an excellent example (Abrahamson et al. 2007). In conclusion, a wide range of methods for analyzing and modeling temporal data is available to CSCL researchers, ranging from stochastic and statistical to deterministic computational. Our recommendation is to embrace the notions of model and modeling to a much deeper and much more comprehensive extent than has been the case in the past, by exploiting the potential that lies in combining theoretical with empirical modeling. We hope this chapter will make a small contribution to this widening of minds.
References

Abrahamson, D., Blikstein, P., & Wilensky, U. (2007). Classroom model, model classroom: Computer-supported methodology for investigating collaborative-learning pedagogy. In C. Chinn, G. Erkens, & S. Puntambekar (Eds.), Proceedings of the 8th international conference on computer supported collaborative learning (CSCL) (Vol. 8, part 1, pp. 49–58). International Society of the Learning Sciences.
Assunção, M. D., Calheiros, R. N., Bianchi, S., Netto, M. A., & Buyya, R. (2015). Big data computing and clouds: Trends and future directions. Journal of Parallel and Distributed Computing, 79, 3–15.
Bakeman, R., & Gottman, J. M. (1986). Observing interaction: An introduction to sequential analysis. Cambridge: Cambridge University Press.
Bannert, M., Reimann, P., & Sonnenberg, C. (2014). Process mining techniques for analysing patterns and strategies in students' self-regulated learning. Metacognition and Learning, 9(2), 161–185.
Bello-Orgaz, G., Jung, J. J., & Camacho, D. (2016). Social big data: Recent achievements and new challenges. Information Fusion, 28, 45–59.
Bergner, Y., Walker, E., & Ogan, A. (2017). Dynamic Bayesian network models for peer tutoring interactions. In A. A. von Davier, M. Zhu, & P. C. Kyllonen (Eds.), Innovative assessment of collaboration (pp. 249–268). New York: Springer.
Boyer, K. E., Ha, E. Y., Phillips, R., Wallis, M. D., Vouk, M. A., & Lester, J. (2009). Inferring tutorial dialogue structure with hidden Markov modeling. In Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, EdAppsNLP '09 (pp. 19–26). Association for Computational Linguistics.
Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models. London: Sage.
Cen, H., Koedinger, K., & Junker, B. (2006). Learning factors analysis: A general method for cognitive model evaluation and improvement. In M. Ikeda, K. D. Ashley, & T. W. Chan (Eds.), Intelligent tutoring systems, lecture notes in computer science (Vol. 4053, pp.
164–175). New York: Springer.
Chen, G., & Chiu, M. M. (2008). Online discussion processes: Effects of earlier messages' evaluations, knowledge content, social cues and personal information on later messages. Computers and Education, 50, 678–692.
Chiu, M. M. (2008). Flowing toward correct contributions during groups' mathematics problem solving: A statistical discourse analysis. Journal of the Learning Sciences, 17(3), 415–463. https://doi.org/10.1080/10508400802224830.
Chiu, M. M. (2013). Cycles of discourse analysis statistical discourse analysis. In 10th International conference on computer supported collaborative learning, Madison, WI, USA.
Chiu, M. M. (2018). Statistically modelling effects of dynamic processes on outcomes: An example of discourse sequences and group solutions. Journal of Learning Analytics, 5(1), 75–91.
Chiu, M. M., & Lehmann-Willenbrock, N. (2016). Statistical discourse analysis: Modeling sequences of individual behaviors during group interactions across time. Group Dynamics: Theory, Research, and Practice, 20(3), 242–258. https://doi.org/10.1037/gdn0000048.
Cohen, J., West, S. G., Aiken, L., & Cohen, P. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah, NJ: Lawrence Erlbaum.
Cress, U. (2008). The need for considering multilevel analysis in CSCL research: An appeal for the use of more advanced statistical methods. International Journal of Computer-Supported Collaborative Learning, 3, 69–84.
Embretson, S. E., & Reise, S. P. (2013). Item response theory. Hove, East Sussex, UK: Psychology Press.
Farran, D. C., & Son-Yarbrough, W. (2001). Title I funded preschools as a developmental context for children's play and verbal behaviors. Early Childhood Research Quarterly, 16(2), 245–262.
Feldman, R., & Sanger, J. (2007). The text mining handbook: Advanced approaches in analyzing unstructured data. Cambridge: Cambridge University Press.
Gandomi, A., & Haider, M. (2015). Beyond the hype: Big data concepts, methods, and analytics. International Journal of Information Management, 35(2), 137–144.
Goldstein, H. (2011). Multilevel statistical models. London: Edward Arnold.
Gottman, J. M., & Roy, A. K. (1990). Sequential analysis: A guide for behavioral researchers. Cambridge: Cambridge University Press.
Greene, W. H. (1997). Econometric analysis (3rd ed.). London: Prentice-Hall.
Helske, S., & Helske, J. (2017). Mixture hidden Markov models for sequence data: The seqHMM package in R. Retrieved from http://arxiv.org/abs/1704.00543
Jackson, C. H. (2011). Multi-state models for panel data: The msm package for R. Journal of Statistical Software, 38(8), 1–29.
Joreskog, K., & Sorbom, D. (2015). LISREL 9.2. New York: Scientific Software International.
Kennedy, P. (2008). Guide to econometrics. New York: Wiley-Blackwell.
Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. Boca Raton, FL: CRC Press.
Loehlin, C. (2004). Latent variable models: An introduction to factor, path, and structural equation analysis. Hove, East Sussex, UK: Psychology Press.
Mandryk, R. L., & Inkpen, K. M. (2004). Physiological indicators for the evaluation of co-located collaborative play. In Proceedings of the 2004 ACM conference on Computer Supported Cooperative Work, CSCW '04 (pp. 102–111). Association for Computing Machinery.
Muthén, L. K., & Muthén, B. O. (2018). Mplus 8.1. Los Angeles, CA: Muthén & Muthén.
Nagarajan, R., Scutari, M., & Lèbre, S. (2013). Bayesian networks in R. New York: Springer.
National Research Council. (2013). Frontiers in massive data analysis. Washington, DC: National Academies Press.
O'Connell, J., & Højsgaard, S. (2011). Hidden semi Markov models for multiple observation sequences: The mhsmm package for R. Journal of Statistical Software, 39(4), 1–22.
Oshima, J., Oshima, R., & Fujita, W. (2018). A mixed-methods approach to analyze shared epistemic agency in jigsaw instruction at multiple scales of temporality. Journal of Learning Analytics, 5(1), 10–24.
Picciano, A. G. (2012). The evolution of big data and learning analytics in American higher education. Journal of Asynchronous Learning Networks, 16(3), 9–20.
Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257–286.
Reimann, P. (2009). Time is precious: Variable- and event-centred approaches to process analysis in CSCL research. International Journal of Computer-Supported Collaborative Learning, 4, 239–257.
Reimann, P., Frerejean, J., & Thompson, K. (2009). Using process mining to identify models of group decision making processes in chat data. In C. O'Malley, D. Suthers, P. Reimann, & A. Dimitracopoulou (Eds.), Computer-supported collaborative learning practices: CSCL2009 conference proceedings (pp. 98–107). International Society for the Learning Sciences.
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (global edition). London: Prentice-Hall.
Sarkar, P., & Moore, A. W. (2006). Dynamic social network analysis using latent space models. In Y. Weiss, B. Scholkopf, & J. Platt (Eds.), Advances in neural information processing systems 18 (pp. 1145–1152). Cambridge, MA: MIT Press.
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2018). Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups. International Journal of Computer-Supported Collaborative Learning, 13(3), 241–261.
Schwarz, B., & Baker, M. (2016). Dialogue, argumentation and education. Cambridge: Cambridge University Press.
Shaffer, D. W., Hatfield, D., Svarovsky, G. N., Nash, P., Nulty, A., Bagley, E., Frank, K., Rupp, A. A., & Mislevy, R. (2009). Epistemic network analysis: A prototype for 21st-century assessment of learning. International Journal of Learning and Media, 1(2), 33–53.
Soller, A. (2004). Computational modeling and analysis of knowledge sharing in collaborative distance learning. User Modeling and User-Adapted Interaction, 14, 351–381.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. London: Sage.
Tuckman, B. W. (1965). Developmental sequence in small groups. Psychological Bulletin, 63(6), 384–399.
Turner, R., & Liu, L. (2014). hmm.discnp: Hidden Markov models with discrete non-parametric observation distributions. R package version 0.2-3. Retrieved from http://CRAN.R-project.org/package=hmm.discnp
Visser, I., & Speekenbrink, M. (2010). depmixS4: An R package for hidden Markov models. Journal of Statistical Software, 36, 1–21.
Walker, E., Rummel, N., & Koedinger, K. R. (2014). Adaptive intelligent support to improve peer tutoring in algebra. International Journal of Artificial Intelligence in Education, 24(1), 33–61.
Weinberger, A., & Fischer, F. (2006). A framework to analyze argumentative knowledge construction in computer-supported collaborative learning. Computers & Education, 46(1), 71–95.
Wise, A., & Chiu, M. M. (2011). Analyzing temporal patterns of knowledge construction in a role-based online discussion. International Journal of Computer-Supported Collaborative Learning, 6, 445–470.
Wolery, M., Busick, M., Reichow, B., & Barton, E. E. (2010). Comparison of overlap methods for quantitatively synthesizing single-subject data. The Journal of Special Education, 44(1), 18–28.
Zikopoulos, P., & Eaton, C. (2011). Understanding big data: Analytics for enterprise class Hadoop and streaming data. New York: McGraw-Hill Osborne Media.
Further Readings

Abrahamson, D., Blikstein, P., & Wilensky, U. (2007). Classroom model, model classroom: Computer-supported methodology for investigating collaborative-learning pedagogy. In C. Chinn, G. Erkens, & S. Puntambekar (Eds.), Proceedings of the eighth International Conference on Computer Supported Collaborative Learning (CSCL) (Vol. 8, Part 1, pp. 49–58). International Society of the Learning Sciences. A powerful demonstration of how
(deterministic) computational modeling can interact with empirical (classroom) research. Using the agent-based modeling tool NetLogo, the authors provide an analysis of the mechanisms that lead to the emergence of stratified learning zones in a prototypical collaborative classroom activity. Also important because it highlights the tension between collaboratively solving problems and learning from collaboration.

Bergner, Y., Walker, E., & Ogan, A. (2017). Dynamic Bayesian network models for peer tutoring interactions. In A. A. von Davier, M. Zhu, & P. C. Kyllonen (Eds.), Innovative assessment of collaboration (pp. 249–268). Springer. This chapter provides a nice illustration of the use of modern HMM approaches to analyzing (peer) tutorial dialogue. While an important area of collaborative learning, research on tutor–tutee dialogue is only partially reflected in the CSCL literature, with this chapter providing a welcome connection between CSCL, AI in Education, and assessment research. It includes an application in the context of an empirical study.

Chiu, M. M. (2008). Flowing toward correct contributions during groups' mathematics problem solving: A statistical discourse analysis. Journal of the Learning Sciences, 17(3), 415–463. This empirical study applied statistical discourse analysis to test whether (a) groups that created more correct, new ideas (micro-creativity) were more likely to solve a problem and (b) students' recent actions (microtime context of evaluations, questions, justifications, politeness, and status differences) increased subsequent micro-creativity.

Chiu, M. M., & Lehmann-Willenbrock, N. (2016). Statistical discourse analysis: Modeling sequences of individual behaviors during group interactions across time. Group Dynamics: Theory, Research, and Practice, 20(3), 242–258.
This article showcases statistical discourse analysis, a method that integrates most of the above methods (parallel chats, trees, group/individual differences, pivotal events, time periods, multiple target events, indirect effects, later group outcomes) and addresses related issues (e.g., missing data, inter-rater reliability, false positives, etc.).

Reimann, P. (2009). Time is precious: Variable- and event-centred approaches to process analysis in CSCL research. International Journal of Computer-Supported Collaborative Learning, 4, 239–257. This methodological paper provides an overview of qualitative, quantitative, and computational methods for analyzing temporal data in CSCL. It argues that there is a rather fundamental difference between explaining collaboration over time in terms of variables versus explaining it in terms of events. Implications for doing temporal analysis are discussed.
NAPLeS Video

Chiu, M. M. (2018). How to statistically model processes? Statistical discourse analysis. Network of Academic Programs in the Learning Sciences (NAPLeS) webinar. http://isls-naples.psy.lmu.de/intro/all-webinars/chiu/index.html
Artifact Analysis

Stefan Trausan-Matu and James D. Slotta
Abstract

Artifacts are constructed as a result of human activity. They are the tools for further activities and the basis for communication and collaboration. Within any given learning context, artifacts may be produced and can serve as a basis for assessment as well as resources for subsequent activity, acting as semiotic mediators. Learning scientists analyze artifacts as a method of evaluating their own interventions and of informing their understanding of learning processes. This chapter provides a short review of relevant theoretical perspectives and prior research and describes different forms of language and text artifact analysis that are presently applied within the learning sciences. These include dialog analysis; conversation analysis; content analysis of verbal, textual, and other forms of data; social network analysis; and polyphonic analysis. Applications to the analysis of online discussions and classroom discourse are discussed, as well as future directions for research.

Keywords Artifacts · Computer-supported collaborative learning · Analysis of conversations · Natural language processing · Social network analysis · Polyphonic model
S. Trausan-Matu (*)
Department of Computer Science and Engineering, University Politehnica of Bucharest, Bucharest, Romania
e-mail: [email protected]

J. D. Slotta
Ontario Institute for Studies in Education, University of Toronto, Toronto, Canada

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_30

1 Definitions and Scope

Artifacts are constructed as a result of human activity. They may be physical objects and may also have a conceptual nature. Artifacts are often the building blocks for constructing other artifacts, serving as tools for continuing the learning activities in which they were constructed (e.g., as the basis for communication and
collaboration), or as resources for further activities, for example, interpreting human endeavors. From a semiotic perspective, many artifacts may be considered signs, one remarkable example being words, which are signs (and, at the same time, tools for communication). However, not all signs are artifacts; for example, smoke is a sign of fire, but it is not, in general, an artifact (unless a human generated the smoke to signal something or to smoke food). Conversely, artifacts often include more than one sign; for example, a text is an artifact that contains many words, each of which is itself both an artifact and a sign. Moreover, there are artifacts that are not signs, for example, tool artifacts (which, however, may include signs). Nevertheless, in learning settings, artifacts are semiotic mediators (Mariotti 2009). The specificity of the analysis of artifacts is due to their defining features: they are constructed by human beings, they may include other artifacts, and they further mediate between humans and objects or between humans and other humans, often being both signs and tools. Therefore, in the case of computer-supported collaborative learning (CSCL), the focus of artifact analysis should be on mediated human activities: communication, collaboration, semiosis, and knowledge building. Artifacts have various manifestations, including physical, technological, digital, positioning, conceptual, discursive, pictorial, behavioral abstraction, and others (Bereiter 2002; Halatchliyski et al. 2014; Holtz et al. 2018; Ludvigsen et al. 2015; Simpson et al. 2017; Stahl 2013; Stahl et al. 2014). They may be interconnected, forming artifact networks, and complex artifacts are often built up from simpler ones (Halatchliyski et al. 2014). Artifacts are essential to the research and practice of computer-supported collaborative learning and can play a multitude of key roles: artifacts . . .
can provide communication media that support collaboration . . . they may structure the representations that groups of students use in building their intersubjective knowledge, making it visible, shared and persistent . . . the group efforts may be oriented toward co-construction of an artifact: a work of art, a design, a plan, a report or a summary text . . . the CSCL process may result in such a knowledge artifact, as the group product. (Stahl et al. 2014)
Consequently, artifacts are tools for supporting communication and intersubjective knowledge building. CSCL sessions produce several kinds of artifacts, which are systems of signs that may be used for the analysis of the outcomes of learning. Gerry Stahl (2006, p. 15) mentions the classification of artifacts introduced by Vygotsky: signs (words, gestures) and tools (instruments). Therefore, in CSCL processes, a first distinction may be made between these two main categories of artifacts: (1) signs—recorded spoken or written words (recorded speech, logs of online conversations, discussion forums, or collaboratively built texts, for example, wikis), videos of collaborative sessions, drawings on whiteboards, and gestures in collaborative sessions; (2) tools used in collaborative sessions. In CSCL, tool artifacts are software environments that mediate group discourse, facilitating collaboration (Stahl 2006). For example, the ConcertChat environment (Holmer et al. 2006), with its facility for explicitly referencing previous utterances, provides tool artifacts that empower polyphonic parallel discussion threads (Trausan-Matu
2010), in addition to the obvious role of these threads as sign artifacts of related utterances. It should be emphasized that sign artifacts may also become tool artifacts when, for example, a recorded learning session is used as a tool in another learning setting, and even during the normal development of an ongoing session, where artifacts take the role of semiotic mediators (Mariotti 2009). In addition to signs and tools, a third category of artifacts may be identified by taking into account that one of the main aims of CSCL is to achieve joint knowledge building through group interaction. These are (3) knowledge artifacts (Stahl et al. 2014), constructed as a result of CSCL sessions, which could include anything from drawings, to computer code, to physical constructions, and written or spoken discourse (e.g., arguments; online discussions). They may be seen both as signs (results) and as tool artifacts, to be used in further activities. An important component of any analysis of a CSCL session is to obtain data about the process of constructing knowledge artifacts, both from the perspective of the contribution of each participant and from that of the group as a whole. This includes methods of manual or automated content analysis of textual artifacts (e.g., using natural language processing, text/data mining), interaction analysis, social network analysis, and machine learning approaches. Although this artifact-centric perspective is very much in line with a constructivist understanding of (collaborative) learning, analytic approaches of this type are still less prominent than process- or interaction-oriented methods, which are often based on action logs as primary data. Some researchers consider that there is also a class of conceptual artifacts (Bereiter 2002), including several types developed by students, as enumerated by Stahl (2006): theory, model, diagnosis, conceptual map, mathematical proof, or presentation.
To this list, we would also add concepts, conceptual positions or arguments, designs, ideas, or voices in the extended sense considered in the polyphonic model (Trausan-Matu 2010). However, there remains a question whether such artifacts might be viewed as being encompassed by the class of knowledge artifacts discussed by Stahl et al. (2014). Finally, because such conceptual/knowledge artifacts are developed with the help of sign artifacts (e.g., language), their construction can be analyzed from recorded data with computerized tools such as natural language processing, image analysis, data mining, etc. Returning now to the class of sign artifacts, we may further refine their classification into three categories according to the communication medium: (1) spoken or written words (i.e., speech or text-based artifacts), (2) physical interactions (e.g., gestures or gaze—see Schneider et al. this volume), and (3) digital artifacts (e.g., sequences of interactions—see Halatchliyski et al. 2014) or digital knowledge bases (Holtz et al. 2018; Slotta et al. 2013). Spoken language and text-based artifacts may include: (a) utterances, recorded or transcribed verbal data of face-to-face discussions, (b) log files of online or offline conversations in various forms (text chats, online courses, group assignments, etc.), (c) text in collaboratively constructed documents (e.g., wikis or shared documents such as Google Docs), (d) turn-taking/role-playing in online discussions, (e) repertoires of annotations designed collaboratively, (f) conceptual maps for
annotations collaboratively developed, (g) notes added in collaborative knowledge-building environments, (h) comments added as annotations to notes, documents, or other artifacts. These kinds of spoken language and text-based artifacts may be grouped into interactional (e.g., conversations), textual (e.g., collaboratively built documents such as wikis), and annotational (e.g., notes, comments, etc., added in CSCL sessions). Artifacts from physical interactions may also take various forms, including gestural (either video recorded or captured by computer vision tools), behavioral (as in serious game play, captured on video, or as software interaction data), physical body movement (i.e., within VR or other embodied learning environments), physical constructions of objects in collaborative tasks, observational datasets collected in collaborative tasks, and interactions (e.g., artifacts constructed or outcomes of inquiry) on collaborative tabletops or touch surfaces. The third group, digital artifacts, may comprise: collaboratively constructed user-contributed knowledge bases (wikis, Pinterest), tags and votes in taxonomic, emergent collections, click traces in software environments (e.g., on whiteboards, in game play, or in simulations), interactions with models and simulations, learner-generated programs and computational models, videos and annotations to videos, classifications and taxonomies, and conceptual maps. The classifications made in this chapter are represented in Table 1. Regarding the various classes and subclasses of artifacts, we should note that some artifacts may belong in several categories. For example, a word (sign) in a CSCL chat log (digital artifact) may become a conceptual artifact that is included in a mathematical proof (knowledge artifact) and used as a tool (artifact) for learning new concepts.
Having presented the main ideas about artifacts and their various forms, the remainder of this chapter focuses on the analysis of language or text-based artifacts and, related to them, on knowledge/conceptual artifacts. Other kinds of artifacts in CSCL research are discussed in other chapters of this handbook. For example, Zahn et al. (this volume) present video data collection and video analyses in CSCL research. Multimodal data in CSCL (gesture and gaze, including motion capture/tracking) are considered by Schneider et al. (this volume).
2 History and Development

The analysis of language and text-based artifacts has a long history. A grammar for the Sanskrit language was developed more than 2500 years ago (Whitney 1879). Other notable precursors were Plato and Aristotle, the latter being known for his works on logic. In turn, modern logicians such as Frege, Boole, and the Vienna Circle developed theoretical foundations used for building tools for analyzing natural language. Modern linguistics was born in the eighteenth century and has seen a surge of interest in recent decades with the appearance of artificial intelligence, leading to the specialized domain of natural language processing and the associated field of
Table 1 A classification of artifacts

Signs
  Spoken or written words
    Interactional (e.g., conversations)
      - Utterances
      - Recorded or transcribed verbal data of face-to-face discussions
      - Log files of online or offline conversations
    Textual
      - Text in collaboratively constructed documents such as Google Docs
      - Wikis
    Annotational
      - Repertoires of annotations designed collaboratively
      - Conceptual maps for annotations collaboratively developed
      - Notes added in collaborative knowledge-building environments
      - Comments added as annotations to notes, documents, other artifacts
  Physical interactions
    - Gestures
    - Behaviors
    - Physical body movements
    - Physical constructions of objects in collaborative tasks
    - Observational data sets collected in collaborative tasks
    - Interactions
  Digital artifacts
    - Collaboratively constructed user-contributed knowledge bases
    - Tags and votes in taxonomic, emergent collections
    - Click traces in software environments
    - Interactions with models and simulations
    - Learner-generated programs and computational models
    - Videos and annotations to videos
    - Classifications and taxonomies
    - Conceptual maps
Tools
  - Environments that mediate group discourse, facilitating collaboration
  - Conceptual/knowledge artifacts used as tools
Knowledge artifacts
  - Theories
  - Models
  - Diagnoses
  - Mathematical proofs or presentations
Conceptual artifacts
  - Concepts
  - Conceptual maps
  - Conceptual positions or arguments
  - Ideas
  - Voices in the polyphonic model
computational linguistics. The huge amounts of language and text in digital format have also empowered statistical machine learning approaches. These technological advances were the basis for computer programs for knowledge extraction from texts (text mining), opinion mining, automatic summarization, machine translation, etc. (Jurafsky and Martin 2009). Natural language processing was used for developing powerful tools for assisting the analysis of language and text artifacts, which previously could be performed only manually. Many computational linguistics approaches are based on the ideas of Ferdinand de Saussure and on Gottlob Frege’s compositional semantics paradigm, in which the meaning of a phrase is a combination of the meanings of its component words (Jurafsky and Martin 2009). Ferdinand de Saussure, one of the fathers of modern linguistics, structuralism, and semiotics, made a clear distinction between the language system (“langue”) and individual speech acts (“parole”). He suggested that only the former should be considered in a systematic theoretical framework. He also introduced the semiotic position that considers words as arbitrary signs (de Saussure 1983). An alternative foundation of semiotics is that of Charles Sanders Peirce. Signs, in his triadic semiotic perspective, are conceptually very close to artifacts, both mediating between subjects and objects: “by ‘semiosis’ I mean, on the contrary, an action, or influence, which is, or involves, a cooperation of three subjects, such as a sign, its object, and its interpretant, this tri-relative influence not being in any way resolvable into actions between pairs” (Peirce 1998). In critiquing the ideas of de Saussure, Mikhail Bakhtin argued that conversations, dialogs, “paroles” should be the center of analysis, stating that everything is a dialog: “I hear voices in everything and dialogic relations among them” (Bakhtin 1986, p. 169), “everything in life is dialogue, that is, dialogic opposition” (Bakhtin 1984, p. 42).
Moreover, the multitude of voices, seen in an extended sense, follows the rules of counterpoint, the conceptual artifact that drives the weaving of polyphonic music: “Everything in life is counterpoint, that is, opposition” (Bakhtin 1984, p. 42). Mikhail Bakhtin’s dialogism, multi-vocality, and polyphony are considered by several theoreticians and researchers as fundamental for the CSCL domain (Koschmann 1999; Stahl 2006; Trausan-Matu 2010). Moreover, these ideas have also been used as a basis for developing computer tools for analyzing conversations (Dascalu et al. 2015; Dong 2005; Trausan-Matu et al. 2014).
3 State of the Art

One of the most important ways of constructing knowledge in CSCL is through conversations, using either speech or online instant messengers (chat). The utterances produced by the participants in conversations are artifacts, which in turn serve as tools for building knowledge/conceptual artifacts, the outcomes of the learning process. The analysis of how they are built, how they are structured, and how they mediate learning may be performed manually, automatically, or semiautomatically.
When the input is recorded spoken language, transcription software is typically used in (semi-)automatic analysis to obtain a textual format, which is easier to process. The audio recordings may also be processed to extract data about prosodic features that are not typically accessible through transcription, although some such features may be included in transcriptions using the Jeffersonian system of notation, which captures details such as overlapping speech, falling or rising pitch, pauses, audible exhalation or inhalation, annotation of nonverbal activity, etc. (Jefferson 1984). More complex analysis may be performed on video recordings, by analyzing both transcriptions and gestures (individual or group) as a whole process (Trausan-Matu 2013). The analysis of conversation logs is generally performed using approaches from several domains and from different perspectives: linguistics, sociology, dialogism, natural language processing, and social network analysis. There are qualitative and quantitative analysis methods, performed manually or (semi-)automatically. They usually start from the components of the conversation, the utterances, and differ in their outcomes: many produce classifications and statistics, while others go further and apply machine learning for automatic classification. Some of them, however, try to take a holistic perspective on the building of knowledge artifacts. We discuss below some of these methods from the perspective of artifact analysis. In linguistics, conversation transcriptions or logs (textual interactional artifacts) are the subject of dialog analysis, which is based on Speech Act Theory (Jurafsky and Martin 2009) and other pragmatics approaches, such as conversational implicatures (Jurafsky and Martin 2009). These approaches are focused on the classification of utterances (artifacts) as acts (e.g., engagements, declarations, etc.)
and indirect speech acts (illocutionary and perlocutionary; conversational implicatures). Conversation analysis (CA) (Sacks 1992) is a sociological approach inspired by ethnomethodology (Garfinkel 1967), which aims to identify social interaction within a conversation (i.e., “talk in interaction”). The main investigation topics in CA are how turns take place, detecting adjacency pairs, analyzing repairs, silences, how endings are prepared, etc. (Sacks et al. 1974), all of them being in fact artifacts used as signs and tools for coping with the conversation. CA is a qualitative method that does not use computer-based support, but some of its concepts and results (e.g., adjacency pairs) are also used in natural language processing. Computer-based analysis of conversational artifacts can take several forms, including statistics (e.g., in “code and count” approaches), graph algorithms (as used in social network analysis), and (semi-)automatic methods focused on the content of utterances and their interrelations (as used in natural language processing or dialogic/polyphonic models). Below, we discuss some of these approaches, some of which also use machine learning algorithms.
3.1 Code and Count Approaches
“Code and count” approaches generally comprise two steps. First, utterances, text strings, or other elements of the artifacts are manually assigned codes from a repertoire tailored to the phenomenon to be analyzed. There may be several dimensions of coding in the same conversation, for example, up to nine in the Virtual Math Teams project (http://gerrystahl.net/vmt/): conversation thread, conversation dimension (speech acts), problem-solving thread, problem-solving, etc. (Çakir et al. 2009; Chiu 2013). In the second step, the distribution of the codes is analyzed using statistical means. The outputs are statistical data about the artifacts (Chi 1997; Slotta et al. 1995; Zemel et al. 2009), some specific data (Chiu 2013) or, using machine learning, a model of the annotated phenomenon, which may be used for annotating an unannotated conversation, as, for example, in TagHelper (Rosé et al. 2008).
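To make the counting step concrete, it can be sketched in a few lines of Python. The speakers, utterance codes, and the coding repertoire below are invented for illustration and do not reproduce the VMT scheme or TagHelper:

```python
from collections import Counter

# Hypothetical manually coded utterances: (speaker, code) pairs.
coded_utterances = [
    ("ana", "question"), ("bob", "explanation"), ("ana", "agreement"),
    ("bob", "question"), ("cyd", "explanation"), ("cyd", "explanation"),
    ("ana", "question"), ("bob", "agreement"),
]

# Second ("count") step: distribution of codes, overall and per participant.
overall = Counter(code for _, code in coded_utterances)
per_speaker = {}
for speaker, code in coded_utterances:
    per_speaker.setdefault(speaker, Counter())[code] += 1

total = sum(overall.values())
for code, n in overall.most_common():
    print(f"{code}: {n} ({n / total:.0%})")
```

Real analyses would go on to apply statistical tests or train a classifier on the coded data; this sketch only shows the shape of the code-distribution output.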
3.2 Social Network Analysis
Social network analysis is used to identify individual and global/community features in graphs that capture various social relations in chats and discussion forums. To apply this kind of analysis to natural language artifacts (i.e., those constructed in CSCL sessions), the recorded and transcribed speech or written texts (e.g., instant messenger conversations, discussion forums, or collaboratively constructed texts) are abstracted and represented as several kinds of graphs, and algorithms are applied to compute metrics that reflect the collaboration processes, the individuals’ contributions, and their roles in conversations or discussion forums. The sociogram (Suthers and Rosen 2011), also known as the interaction graph (Trausan-Matu et al. 2014), has the participants as nodes and the exchanged utterances or messages as arcs within the graph. In contrast, another representation—the graph of utterances—uses utterances as the nodes and explicit or implicit links among them as arcs (Trausan-Matu et al. 2014). Utterances are captured at various levels of granularity, ranging “from a short (single-word) rejoinder in everyday dialogue to the large novel or scientific treatise” (Bakhtin 1986). The cohesion graph extends the graph of utterances with nodes for each sentence and with three new kinds of links: hierarchical links are generated between a sentence and the utterance to which it belongs; mandatory links connect adjacent contributions and sentences; relevant links are optionally added between semantically related nodes (Dascalu et al. 2015), the latter corresponding to the implicit links in the utterance graph. The cohesion graph may also be used for the analysis of the discourse of any text, such as essays. In addition to sociograms, Suthers and Rosen (2011) have introduced several other types of graphs. They suggested that the contingency and uptake graphs are especially relevant to the analysis of CSCL sessions. The contingency graph relates
events that may be contingent, whereas the uptake graph is a subgraph of the contingency graph, its arcs being clear interactions between events. Other graphs considered by Suthers and Rosen (2011) are process traces, associograms (mediation models), which are bipartite graphs constructed from participants and the objects through which they interact, entity–relation graphs (domain models), and event models (Suthers and Rosen 2011). The metrics that may be computed on social networks (Carrington et al. 2005) reflect features of individual nodes (participants or utterances) or of the network as a whole, for example, which participants were the most active, or whether in a session there were clusters of participants that collaborated intensely. In the first case, the importance of a node, its so-called centrality, representing the contribution of the node (participant or utterance), may be determined from the number of incoming or outgoing arcs (in-degree and out-degree), or by the eigenvector centrality, computed with an algorithm similar to Google’s PageRank, which considers that an important node has links from important nodes (Carrington et al. 2005). Closeness and betweenness centrality are variations that take into account the positioning of a node in relation to the others. There are also metrics of the network as a whole, such as centrality, density, etc. (Carrington et al. 2005).
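As a minimal sketch, in-degree, out-degree, and eigenvector centrality can be computed on a toy sociogram. The interaction data are invented, and the power iteration stands in for the eigenvector computation provided by SNA packages:

```python
# Toy sociogram: arcs point from a speaker to the participant addressed.
# The conversation itself is invented for illustration.
edges = [("ana", "bob"), ("bob", "ana"), ("cyd", "ana"),
         ("cyd", "bob"), ("ana", "cyd"), ("bob", "ana")]
nodes = sorted({n for e in edges for n in e})

out_degree = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
in_degree = {n: sum(1 for _, t in edges if t == n) for n in nodes}

# Eigenvector centrality by power iteration: a node is important if it
# receives links from important nodes (the core idea behind PageRank).
score = {n: 1.0 for n in nodes}
for _ in range(100):
    new = {n: sum(score[s] for s, t in edges if t == n) for n in nodes}
    norm = sum(new.values())  # normalize so the scores sum to 1
    new = {n: v / norm for n, v in new.items()}
    if max(abs(new[n] - score[n]) for n in nodes) < 1e-9:
        score = new
        break
    score = new
```

Here "ana" ends up with the highest centrality: she has the most incoming arcs, and they come from the other well-connected participants.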
3.3 Content Analysis of Verbal and Textual Data
Building on early contributions from cognitive psychology (e.g., Chi 1997; Ericsson and Simon 1984), content analysis can be performed on transcribed talk in interaction or on textual data built collaboratively (for example, wikis). This may be done manually, using, for example, conversation analysis (Sacks 1992) and discourse analysis (Tannen 2007), or (semi-)automatically, using natural language processing techniques. In the latter case, several approaches may be identified, depending on the depth of the analysis. The simplest approach is the identification of the frequent roots of content words (after stemming and the elimination of frequent, content-free “stop words”), continuing with more complex processing, such as part-of-speech tagging, parsing, and semantic disambiguation (Jurafsky and Martin 2009). Semantic distances among concepts, utterances, and posts in a text, conversation, or forum may be computed using lexical databases like WordNet (Miller 1995), algebraic methods such as latent semantic analysis (Landauer and Dumais 1997), and probabilistic methods such as latent Dirichlet allocation (Blei et al. 2003). The analysis of artifacts resulting from knowledge construction in verbal or text-based collaborative activities necessitates more complex forms of linguistic analysis, such as discourse analysis. This may be done by computing adjacency pairs, argumentation links, coreferences, rhetorical relations (Jurafsky and Martin 2009), and other discourse relations. The main feature of a good discourse, including conversations and discussion forums, is coherence (Jurafsky and Martin 2009). However, coherence is hard to measure because, in fact, it is also hard to define. Instead, cohesion is used for analyzing how well a discourse is formed, because it may be
measured considering lexical chains (Budanitsky and Hirst 2006) and other cohesive devices (Halliday and Hasan 1976). Several implemented systems perform content analysis (Dascalu et al. 2015; Chi 1997; Rosé et al. 2008; Suthers and Desiato 2012; Trausan-Matu et al. 2014).
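A deliberately crude illustration of cohesion measurement is lexical overlap between adjacent utterances after stop-word removal. Implemented systems would instead use lexical chains or the semantic measures cited above (WordNet, LSA, LDA); the stop-word list and chat excerpt here are invented:

```python
import re

# Tiny illustrative stop-word list; real pipelines use much larger ones.
STOP_WORDS = {"the", "a", "an", "is", "are", "it", "of", "to", "and",
              "we", "you", "i", "that", "this", "in", "on", "so"}

def content_words(utterance):
    """Lowercase, tokenize, and drop stop words (no real stemming here)."""
    return {w for w in re.findall(r"[a-z]+", utterance.lower())
            if w not in STOP_WORDS}

def cohesion(u1, u2):
    """Jaccard overlap of content words: a crude stand-in for the
    semantic distances (WordNet, LSA, LDA) mentioned in the text."""
    a, b = content_words(u1), content_words(u2)
    return len(a & b) / len(a | b) if a | b else 0.0

chat = ["The slope is rise over run",
        "So we compute the rise between the two points",
        "I had pizza yesterday"]
pairs = [cohesion(chat[i], chat[i + 1]) for i in range(len(chat) - 1)]
```

The first pair of utterances shares the content word "rise" and so scores above zero, while the off-topic third utterance scores zero against its predecessor, the kind of signal a cohesion graph builds on.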
3.4 Polyphonic Analysis
Polyphonic analysis, based on the polyphonic model of discourse (Trausan-Matu 2010), is a powerful method for investigating the dialogical dimension of natural language artifacts. It integrates content analysis and elements of social network analysis, combining analysis on the longitudinal dimension (i.e., time) with analysis of the multivocal, transversal interactions, providing the means to reveal the process by which knowledge artifacts are constructed during CSCL sessions. This is done through the identification of individual contributions, involvement (Tannen 2007), and overall collaboration. Content analysis can be used to verify that learners have discussed the expected topics and to reveal patterns of repetition and rhythm (Tannen 2007). Social network analysis can reveal involvement and patterns of interaction or exchange. In addition to these two methods, polyphonic analysis allows the identification of “interanimation”: not just whether different voices are concurrently participating and what they are saying, but also how they enter into a dialog, interact through divergences and convergences, and jointly contribute to the collaborative construction of knowledge artifacts. One useful analogy is to compare the interanimation of voices in a successful collaboration in a CSCL conversation to an improvisation in polyphonic jazz music (e.g., in the New Orleans style of jazz), which can be seen as a rich form of musical conversation (Sawyer 2006). While content analysis could provide a summary of which instruments are involved and what they are playing, and social network analysis could capture the exchanges among the various instruments and which ones played together or successively, polyphonic analysis would be needed to really address the quality of the construction.
It would blend the examination of social relations (i.e., between any participating instruments), what they contributed, and how they influenced one another, allowing for a coherent, holistic assessment of the quality of the performance. The polyphonic model of discourse is grounded in the dialogism perspective, which states that every textual artifact is a dialog, even novels (Bakhtin 1984). The model is based on the idea that polyphonic music and natural language artifacts have many common aspects (Bakhtin 1984), particularly artifacts resulting from conversations (Trausan-Matu 2010). Polyphonic music is characterized by the sequential development of multiple interanimating voices in parallel, starting from a common theme. Each voice has its own “personality” and “plays” distinctly within a given common theme. Dissonances are therefore inherently generated, but a coherent whole is achieved. In fact, these dissonances are very important, because
they assure variation and novelty and, at the same time, induce a need for achieving consonances (i.e., because dissonance is perceived, it raises the goal of achieving greater consonance). In such a fashion, an interanimation process takes place, similar to the divergence/convergence thinking steps in fostering creativity (Csikszentmihalyi 1996), which encourages the critical and creative thinking needed for knowledge construction. A similar idea is that of productive friction, which triggers knowledge construction processes (Holtz et al. 2018). The concept of “voice” has a generalized sense in polyphonic music: it is a conceptual artifact, a distinct position having a longitudinal dimension, not limited to the vocal emissions of one person. For example, a fugue composed by J. S. Bach may have six voices, but it may be played by a single organist. At the same time, a fugue may be orchestrated with each voice played by a whole group of instruments. Similarly, the polyphonic model of discourse considers that, in addition to the obvious perspective in which each participant within a joint discursive activity is a voice, threads of ideas or concepts interanimate in any linguistic or textual artifact, also behaving like voices in music. Trausan-Matu et al. (2014) describe the seven steps of the polyphonic method for the analysis of language and text artifacts, which may be supported by natural language processing technology. The first step consists of the identification of the artifact’s main concepts, either stated explicitly as the subjects of discussion or detected with content analysis techniques.
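This first step can be sketched as frequency-based extraction of candidate main concepts from an invented chat excerpt. The suffix-stripping "stemmer" and stop-word list below are simplistic stand-ins for the natural language processing pipelines of systems such as ReaderBench:

```python
import re
from collections import Counter

# Invented stop-word list and chat excerpt, for illustration only.
STOP = {"the", "a", "an", "to", "of", "and", "we", "is", "it", "so",
        "that", "this", "i", "you", "in", "on"}

def stem(word):
    """Very crude suffix stripping; a stand-in for a real stemmer."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def main_concepts(utterances, top_n=3):
    """Step one: rank repeated content-word stems as candidate concepts."""
    counts = Counter(
        stem(w)
        for u in utterances
        for w in re.findall(r"[a-z]+", u.lower())
        if w not in STOP)
    return [concept for concept, _ in counts.most_common(top_n)]

chat = ["We should graph the equation",
        "Graphing it needs two points",
        "Pick points and graph them"]
top = main_concepts(chat)
```

The repeated stems ("graph", "point") surface as candidate main concepts; in the later steps, threads of links built on such repetitions become candidates for voices.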
The second step has the goal of delimiting each participant’s utterances, followed by the third step, in which links among utterances are detected starting from repetitions of words and phrases, semantically related words, explicit references, adjacency pairs (Jurafsky and Martin 2009), argumentation links (detected from cue phrases—discourse markers—such as “therefore,” “consequently,” “nevertheless,” etc.), and co-references. The fourth step is the identification of candidates for voices, in which threads of links containing repeated word stems or phrases are considered as candidates for concepts and ideas. Voices may be proposed automatically, using systems such as ReaderBench (Dascalu et al. 2015), or semiautomatically, using other software tools. The identification of voices may be viewed as a matter of detecting knowledge (conceptual) artifacts within interactional (spoken or textual) artifacts. The fifth and sixth steps consist of the detection of voices’ interanimation by their co-occurrence and by the presence of links containing discourse markers that indicate divergences or convergences in the same or related utterances or sentences. Natural language processing and machine learning techniques may be used to this end. The seventh and final step is dedicated to the analysis of various aspects of discourse building: identification of conceptual artifacts (meaning making), investigating pivotal moments (Trausan-Matu 2013), rhythm as an indicator of involvement (Tannen 2007), collaboration regions, and instantaneous and overall levels of collaboration. A series of systems that implement the polyphonic method of analysis, integrating content analysis and social network analysis, have been developed. The most elaborated are PolyCAFe (Trausan-Matu et al. 2014) and ReaderBench (Dascalu et al. 2015). Both systems perform a series of processes: identify main concepts, compute their importance and their semantic distance to the other concepts, generate visualizations
562
S. Trausan-Matu and J. D. Slotta
Fig. 1 A conceptual map generated by ReaderBench for a CSCL conversation
of the utterance graph and sociogram, visualizations of voices’ threading, and compute metrics for the individual and overall collaboration for each utterance. ReaderBench automatically generates conceptual maps from a textual artifact, in which the diameter of the disc associated with a concept is proportional to its importance and the length of the edge between two concepts reflects the semantic distance between them (see Fig. 1). It also proposes candidates for voices using latent Dirichlet allocation (Blei et al. 2003). Figure 2a presents a screen capture from a PolyCAFe visualization, with the top part representing the utterance graph of a conversation and the middle part a numerical evaluation of the degree of collaboration, computed from interanimation and other features. The vertical dimension shows the five participants and the horizontal axis the utterance sequence. The links in the utterance graph are either explicit (indicated explicitly by the user (Holmer et al. 2006)) or implicit (detected with natural language processing tools). Figure 2b shows the weaving through the conversation of three voice candidates selected by the user and displayed by PolyCAFe.
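The link- and voice-detection steps described above (repetitions of word stems forming threads across utterances) can be illustrated with a minimal sketch. The crude suffix-stripping stemmer, the stopword list, and the link window are illustrative assumptions, far simpler than what PolyCAFe or ReaderBench actually implement:

```python
from collections import defaultdict
import re

STOPWORDS = {"the", "a", "an", "is", "are", "i", "it", "to", "of", "and", "we", "you"}

def stem(word):
    # Crude suffix stripping; a real system would use a proper stemmer.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def detect_links(utterances, window=3):
    """Link utterance j back to an earlier utterance i when they share
    a (stemmed) content word and lie within `window` turns of each other."""
    stems_per_utt = []
    for text in utterances:
        words = re.findall(r"[a-z]+", text.lower())
        stems_per_utt.append({stem(w) for w in words if w not in STOPWORDS})
    links = []
    for j in range(1, len(utterances)):
        for i in range(max(0, j - window), j):
            shared = stems_per_utt[i] & stems_per_utt[j]
            if shared:
                links.append((i, j, shared))
    return links

def voice_candidates(links, min_occurrences=2):
    """Stems recurring across several links form threads: candidate voices."""
    counts = defaultdict(int)
    for _, _, shared in links:
        for s in shared:
            counts[s] += 1
    return {s for s, c in counts.items() if c >= min_occurrences}
```

Running this over a short chat about slopes, for instance, links the turns that repeat “slope” and proposes that stem as a voice candidate, mirroring steps three and four of the method.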
4 The Future: Research Areas and Future Directions

The analysis of artifacts resulting from CSCL sessions is driven by the needs of research and of evaluating knowledge construction in collaborative sessions. As our questions become focused on the cognitive and discursive processes that occur within the interventions, we require analyses that illuminate those processes. It is not sufficient just to perform a content analysis of the final product of a group, when
Fig. 2 (a) Visualization of the utterance graph of a conversation and numerical values estimating collaboration; (b) visualization of threading of the candidates for voices
the research is concerned with how the individuals within the group interact to produce that artifact. Still, the artifact contains a wealth of information that can allow strong inferences about such processes, particularly if we have captured a record of its construction, along with any other data sources (e.g., gestures or spoken discourse that occur during written construction). At present, several research areas and themes are well suited for the application of the artifact analyses outlined above, as well as for the gradual evolution of derivative forms of analysis. These include:
• Indicators of user epistemologies and conceptual change
• Group cognition
• Embodied learning
• Awareness support (e.g., “cognitive group awareness”)
• Role modeling
• Process modeling, scripting, and orchestration
For future research, we believe that content-based approaches should be combined with process-based analytics to enhance the quality and scope of analytic approaches for CSCL. As dynamic multiuser editing of written and graphical documents becomes increasingly prominent at all levels of education (e.g., in Google Docs and many other available software applications), CSCL research will have correspondingly improved access to rich data that includes not only the content produced, and social information about its production, but also the fine-grained log files that contain nuanced records of its creation process. When combined with ever-improving methods of natural language processing, we anticipate improved capacity and reliability of methods for polyphonic and other forms of multivocal analysis of such artifacts. This could include real-time analysis fed back into the CSCL session itself as a source of guidance for students and teachers (e.g., for group-level engagement, or to connect groups according to their respective content and patterns of participation). Artifacts will become more than just a product of CSCL interventions, and artifact analysis will become more than just a means of inference about those interventions. They will ultimately become an element of the interventions themselves.
References

Bakhtin, M. M. (1984). Problems of Dostoevsky’s poetics (C. Emerson, Ed.; C. Emerson, Trans.). Minneapolis: University of Minnesota Press.
Bakhtin, M. M. (1986). Speech genres and other late essays. Austin: University of Texas Press.
Bereiter, C. (2002). Education and mind in the knowledge age. Mahwah, NJ: Lawrence Erlbaum.
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(4–5), 993–1022.
Budanitsky, A., & Hirst, G. (2006). Evaluating WordNet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1), 13–47.
Çakir, M., Xhafa, F., & Zhou, N. (2009). Thread-based analysis of patterns in VMT. In G. Stahl (Ed.), Studying virtual math teams (pp. 359–371). New York, NY: Springer.
Carrington, P., Scott, J., & Wasserman, S. (2005). Models and methods in social network analysis (Structural Analysis in the Social Sciences). Cambridge: Cambridge University Press.
Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6(3), 271–315.
Chiu, M. M. (2013). Social metacognition, micro-creativity and justifications: Statistical discourse analysis of a mathematics classroom conversation. In D. D. Suthers, K. Lund, C. Penstein Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of collaborative learning (pp. 141–160). Springer.
Csikszentmihalyi, M. (1996). Creativity: Flow and the psychology of discovery and invention. New York: Harper Collins.
Dascalu, M., Trausan-Matu, S., McNamara, D. S., & Dessus, P. (2015). ReaderBench—Automated evaluation of collaboration based on cohesion and dialogism. International Journal of Computer-Supported Collaborative Learning, 10(4), 395–423.
Saussure, F. de (1983). Course in general linguistics (C. Bally & A. Sechehaye, Eds.; R. Harris, Trans.). Open Court. (Original work published 1916).
Dong, A. (2005). The latent semantic approach to studying design team communication. Design Studies, 26(5), 445–461.
Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice-Hall.
Halatchliyski, I., Moskaliuk, J., Kimmerle, J., & Cress, U. (2014). Explaining authors’ contribution to pivotal artifacts during mass collaboration in the Wikipedia’s knowledge base. International Journal of Computer-Supported Collaborative Learning, 9(1), 97–115.
Halliday, M. A. K., & Hasan, R. (1976). Cohesion in English. London: Longman.
Holmer, T., Kienle, A., & Wessner, M. (2006). Explicit referencing in learning chats: Needs and acceptance. In W. Nejdl & K. Tochtermann (Eds.), Innovative approaches for learning and knowledge sharing, first European conference on technology enhanced learning, EC-TEL 2006 (pp. 170–184). Springer.
Holtz, P., Kimmerle, J., & Cress, U. (2018). Using big data techniques for measuring productive friction in mass collaboration online environments. International Journal of Computer-Supported Collaborative Learning, 13, 439–456.
Jefferson, G. (1984). Transcription notation. In J. Atkinson & J. Heritage (Eds.), Structures of social interaction. Cambridge: Cambridge University Press.
Jurafsky, D., & Martin, J. H. (2009). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Koschmann, T. (1999). Toward a dialogic theory of learning: Bakhtin’s contribution to learning in settings of collaboration. In Proceedings of the 1999 conference on computer support for collaborative learning (pp. 308–313). International Society of the Learning Sciences.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2), 211–240.
Ludvigsen, S., Stahl, G., Law, N., & Cress, U. (2015). Collaboration and the formation of new knowledge artifacts. International Journal of Computer-Supported Collaborative Learning, 10(1), 1–6.
Mariotti, M. A. (2009). Artifacts and signs after a Vygotskian perspective: The role of the teacher. ZDM Mathematics Education, 41, 427–440.
Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41.
Peirce, C. S. (1998). Pragmatism. In Peirce Edition Project (Ed.), The essential Peirce: Selected philosophical writings (Vol. 2, p. 411). Indiana University Press.
Rosé, C. P., Wang, Y. C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., & Fischer, F. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 3(3), 237–271.
Sacks, H. (1992). Lectures on conversation. Oxford: Blackwell.
Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simplest systematics for the organization of turn taking for conversation. Language, 50(4), 696–735.
Sawyer, R. K. (2006). Group creativity: Musical performance and collaboration. Psychology of Music, 34(2), 148–165.
Schneider, B., Worsley, M., & Martinez-Maldonado, R. (this volume). Gesture and gaze: Multimodal data in dyadic interactions. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Simpson, A., Bannister, N., & Matthews, G. (2017). Cracking her codes: Understanding shared technology resources as positioning artifacts for power and status in CSCL environments. International Journal of Computer-Supported Collaborative Learning, 12(3), 221–249.
Slotta, J. D., Chi, M. T. H., & Joram, E. (1995). Assessing students’ misclassifications of physics concepts: An ontological basis for conceptual change. Cognition and Instruction, 13(3), 373–400.
Slotta, J. D., Tissenbaum, M., & Lui, M. (2013). Orchestrating of complex inquiry: Three roles for learning analytics in a smart classroom infrastructure. In Proceedings of the third international conference on learning analytics and knowledge (pp. 270–274). ACM.
Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge, MA: MIT Press.
Stahl, G. (2013). Learning across levels. International Journal of Computer-Supported Collaborative Learning, 8(1), 1–12.
Stahl, G., Ludvigsen, S., Law, N., & Cress, U. (2014). CSCL artifacts.
International Journal of Computer-Supported Collaborative Learning, 9(3), 237–245.
Suthers, D., & Desiato, C. (2012). Exposing chat features through analysis of uptake between contributions. In Proceedings of the 45th Hawaii international conference on system sciences (pp. 3368–3377). IEEE Computer Society.
Suthers, D., & Rosen, D. (2011). A unified framework for multi-level analysis of distributed learning. In Proceedings of the first international conference on learning analytics and knowledge (pp. 64–74). Association for Computing Machinery.
Tannen, D. (2007). Talking voices: Repetition, dialogue, and imagery in conversational discourse (2nd ed.). Cambridge: Cambridge University Press.
Trausan-Matu, S. (2010). The polyphonic model of hybrid and collaborative learning. In F. L. Wang, J. Fong, & R. Kwan (Eds.), Handbook of research on hybrid learning models: Advanced tools, technologies, and applications (pp. 466–486). Hershey, PA: Information Science Publishing.
Trausan-Matu, S. (2013). Collaborative and differential utterances, pivotal moments, and polyphony. In D. D. Suthers, K. Lund, C. Penstein Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of collaborative learning (pp. 123–139). New York: Springer.
Trausan-Matu, S., Dascalu, M., & Rebedea, T. (2014). PolyCAFe—Automatic support for the polyphonic analysis of CSCL chats. International Journal of Computer-Supported Collaborative Learning, 9(2), 127–156.
Whitney, W. D. (1879). A Sanskrit grammar, including both the classical language, and the older dialects, of Veda and Brahmana. Retrieved October 14, 2018, from Internet Archive website https://archive.org/details/1941sanskritgram00whituoft/page/n0
Zahn, C., Ruf, A., & Goldman, R. (this volume). Video data collection and video analyses in CSCL research. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Zemel, A., Xhafa, F., & Çakir, M. (2009).
Combining coding and conversation analysis of VMT chats. In G. Stahl (Ed.), Studying virtual math teams (pp. 421–450). New York, NY: Springer.
Further Readings

Dascalu, M., McNamara, D. S., Trausan-Matu, S., & Allen, L. K. (2018). Cohesion network analysis of CSCL participation. Behavior Research Methods, 50(2), 604–619. The paper presents an empirical validation of the automated analysis of CSCL conversations based on the cohesion network, implemented in the open-source ReaderBench system (http://readerbench.com/), which integrates content analysis, social network analysis, and polyphonic analysis. The theoretical aspects underlying ReaderBench are also presented. The validation was done with ten chat conversations, in which three to eight students debated the advantages and disadvantages of CSCL technologies. Human annotators scored each participant’s contributions regarding whether they covered the central concepts of the conversation. Social network analysis was applied to compute metrics from the cohesion network analysis sociogram, in order to assess the degree of participation of each student. The results showed a strong correlation between the computed values and the human evaluations of the conversations. Moreover, after a stepwise regression analysis, the computed indices collectively predicted 54% of the variance in the human ratings of participation.

Holtz, P., Kimmerle, J., & Cress, U. (2018). Using big data techniques for measuring productive friction in mass collaboration online environments. International Journal of Computer-Supported Collaborative Learning, 13, 439–456. The main idea of the paper is that knowledge building is triggered by productive friction, at both individual and social levels of learning processes. The paper analyzes how productive friction operates in Wikipedia. Three approaches are examined: automatic classification of text, social network analysis, and cluster analysis, considering also an artifact-mediated collaboration perspective.

Stahl, G. (2009). Studying virtual math teams. New York, NY: Springer.
This book contains a comprehensive presentation of the theoretical aspects of group cognition in CSCL conversations and several approaches to analysis: conversation analysis, code and count, statistical analysis, content analysis, and polyphonic analysis. The theoretical framework and the examples of the various analysis methods were developed in the Virtual Math Teams NSF project (http://gerrystahl.net/vmt/) at Drexel University, Philadelphia, PA, whose subject was the analysis of CSCL instant-messenger (chat) sessions of students solving mathematical problems proposed as “Problems of the week” at mathforum.org. A website of the book may be accessed at http://gerrystahl.net/elibrary/svmt/.

Suthers, D., & Rosen, D. (2011). A unified framework for multi-level analysis of distributed learning. In Proceedings of the first international conference on learning analytics and knowledge (pp. 64–74). Association for Computing Machinery. A series of graphs and other artifacts that can be used for learning analytics are presented: process traces, contingency graphs, uptake graphs, sociograms, asociograms, and entity–relationship graphs.

Trausan-Matu, S. (2012). Repetition as artifact generation in polyphonic CSCL chats. In Proceedings of the third international conference on emerging intelligent data and web technologies (pp. 194–198). IEEE Computer Society. The paper analyzes CSCL chats in which K–12 students solve mathematics problems together in the context of the Virtual Math Teams NSF project (http://gerrystahl.net/vmt/). The analysis applies the polyphonic model of discourse inspired by the musical analogy, with emphasis on repetition and rhythm. Four cases are analyzed, in which the repetition of words, phrases, notation, and numbers transforms them into artifacts that drove the solution of the problems.
Finding Meaning in Log-File Data

Jun Oshima and H. Ulrich Hoppe
Abstract This chapter will start with a characterization of log-file data and related examples and then elaborate on ensuing levels of processing, interpretation/meaning-making, and finally support for decision-making and action (“actionable insights”). In line with the characterization of log files as sequences of action descriptions, we will set our focus on what has been called “Action Analysis” as compared to “Discourse Analysis.” Following up on the characterization of input data, we will review computational techniques that support the analysis of log files. Techniques of interest include process-oriented approaches (such as process mining, sequence analysis, or sequential pattern mining) as well as approaches based on social network analysis (SNA). Such techniques will be further discussed regarding their contribution to data interpretation and meaning-making. Finally, the future direction of log-file analysis is discussed considering the development of new technologies to analyze spoken conversation and nonverbal behaviors as part of action-log data.

Keywords Log-file data · Action analysis · Discourse analysis · Process-oriented approach · Social network analysis
J. Oshima (*) Research and Education Center for the Learning Sciences, Shizuoka University, Shizuoka-shi, Japan e-mail: [email protected] H. U. Hoppe Department of Computer Science and Applied Cognitive Science, University of Duisburg-Essen, Duisburg, Germany e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_31
1 Definitions and Scope

CSCL is based on digital environments that enable the sharing of resources and synchronous or asynchronous multiuser interactions. In recent years, web-based technologies have facilitated access to such environments using general-purpose technologies. User activities in such spaces leave digital traces that can be subjected to computational or partially automated sequential analyses to discover temporal patterns of interaction. Using the same type of source data, social network analysis allows for detecting relational structures that can be interpreted as roles or community substructures. These analytic methods rely on a well-adapted interplay between the representation of traces and ensuing processing techniques. Computational methods can typically help to (pre-)structure the data for further interpretation, but rich interpretation and “meaning-making” are also rooted in theoretical foundations, including assumptions about the epistemology of learning, and in methodological premises. Lund and Suthers (2013) distinguish several dimensions of approaches to analyzing collaborative learning. Beyond the theoretical background and the overall purpose as general givens, units of (inter-)action and analysis as well as representations and manipulations are of specific interest in the context of this chapter. Log files can be conceived as sequential data streams in which every single element corresponds to an action executed in a technical system or component and can be described by attributes such as user (as originator), action type (such as “create,” “delete,” etc.), and objects or resources involved, together with a timestamp and other contextual attributes. In discourse-oriented systems, such as chat environments or forums, actions may correspond to textual contributions or utterances. Still, the basic characterization as data streams applies as well, yet the ensuing analyses will typically tie in with established methods of discourse analysis.
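The attribute structure described above (user as originator, action type, object or resource, timestamp) can be made concrete with a small sketch. The field names and the CSV layout are illustrative assumptions, not a standard log format:

```python
import csv
from dataclasses import dataclass
from datetime import datetime
from io import StringIO

@dataclass
class LogEvent:
    timestamp: datetime
    user: str        # originator of the action
    action: str      # action type, e.g. "create", "delete", "edit"
    obj: str         # object or resource the action refers to

def parse_log(csv_text):
    """Parse a simple CSV action log into a time-ordered event stream."""
    reader = csv.DictReader(StringIO(csv_text))
    events = [
        LogEvent(
            timestamp=datetime.fromisoformat(row["timestamp"]),
            user=row["user"],
            action=row["action"],
            obj=row["object"],
        )
        for row in reader
    ]
    # Logging order is not guaranteed, so sort into a sequential data stream.
    return sorted(events, key=lambda e: e.timestamp)

raw = """timestamp,user,action,object
2021-03-01T10:02:00,bob,edit,note-7
2021-03-01T10:00:00,alice,create,note-7
"""
stream = parse_log(raw)
```

Once a log is normalized into such a stream, the network- and process-oriented analyses discussed in this chapter can be applied to it.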
Nowadays, many of the technical tools used to support CSCL scenarios, such as learning platforms, shared workspace tools for co-construction, as well as computer-mediated communication tools, have built-in logging facilities that can be used as input to further analyses. Given that these data already reside in a computational environment, it is of particular interest to explore the potential of (partially) automated computational techniques in the analysis. However, the desired end point of the analysis would always be a meaningful interpretation of the data in the given practice scenario based on a certain theoretical context.
2 History and Development

In the CSCL tradition, adequate representations and data models of user traces, related processing techniques, and interpretation frameworks have been investigated under the notion of “(group) interaction analysis.” This line of research has not only been pursued by research groups but also through coordinated efforts in the CSCL community. One such line of joint research has been conducted in several European projects including the Kaleidoscope Network of Excellence (Balacheff et al. 2009). Whereas Choquet et al. (2009) target the analysis of navigation patterns and trails
through linked educational content materials, the work reported by Harrer et al. (2009) is focused on the analysis of collaborative interactions from user data with action logs as the standard case. Earlier work by Reffay and Chanier (2003) had used forum messages as a source for the study of cohesion in learning groups. Among other aspects, this second line of research was also directed toward standardizing data formats and interfaces for action-log data, aiming at technical interoperability and exchange. We see a similar tendency in the current development of learning analytics techniques with specifications, e.g., around “Tin Can API” or xAPI (cf. Kitto et al. 2015, for a discussion of these approaches). Although we can identify predecessors of this work in the CSCL context, current approaches will rely on these more recent developments in learning analytics. As a worldwide international cooperation of CSCL researchers, a series of workshops has been organized under the heading of “productive multivocality in the analysis of group interactions” (Suthers et al. 2013). This activity was conceived as a joint project involving multiple disciplinary perspectives, seeking a joint focus and synergy through the provision of case materials (i.e., data collections from prior experiences). Here, the focus was not on developing/standardizing technologies but on integrating different contributions, based variously on computational methods and on quantitative and qualitative approaches, to improve and enrich the understanding of the cases at hand. A variety of tools and methods were applied to a limited set of examples (including transcripts and log files) so that it was possible to see how these could complement each other. In this context, the tool Tatiana (Dyke et al.
2010) was exploited for its capability to analyze, in an integrated way, combinations of input data including action logs, video recordings, and time-stamped transcripts, supported by a visual environment. The Tatiana tool occupies an intermediate position between coding and analysis tools supporting qualitative research and largely automated tools supporting quantitative methods. Under the notion of “contingency analysis,” Suthers et al. (2010) have proposed a conceptual and formal framework for representing and analyzing interactions and ensuing dependencies in collaborative discourse structures. The original data source was a huge archive of educational chat dialogues. It is typical for such dialogues that inter-referencing of contributions is ambiguous, especially in the sense that several dialogue threads can overlap. Contingency analysis comes with a number of heuristics that allow for constructing so-called uptake graphs (as a representational basis), which in turn can be used as an input for applying network analysis techniques. The more recent developments in the field of learning analytics not only provide standardized approaches to data logging (as mentioned above) but have also contextualized and adapted various general techniques of analytics and data mining to the study of learning interactions. This is an important reference that should also be taken into account in CSCL.
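To make the idea of uptake graphs concrete, the following sketch links a chat contribution to a recent earlier one when it addresses its author by name or reuses enough of its vocabulary. These two heuristics are simplified stand-ins for illustration, not the actual heuristics of Suthers et al. (2010):

```python
def uptake_graph(messages, window=4):
    """messages: list of (author, text) pairs in temporal order.
    Returns a set of directed edges (i, j) meaning that contribution j
    "takes up" the earlier contribution i. Two crude heuristics: addressing
    the earlier author by name, or reusing at least two of the earlier
    contribution's content words, within a window of recent turns."""
    stop = {"the", "a", "i", "it", "is"}
    tokens = [set(text.lower().split()) - stop for _, text in messages]
    edges = set()
    for j, (_, text_j) in enumerate(messages):
        for i in range(max(0, j - window), j):
            author_i = messages[i][0]
            if author_i.lower() in text_j.lower().split() or len(tokens[i] & tokens[j]) >= 2:
                edges.add((i, j))
    return edges
```

The resulting edge set can then serve as the representational basis for network analysis techniques, as described above.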
3 State of the Art

3.1 Computational Methods
Following Hoppe (2017), three main types of computational methods for the analysis of learning and knowledge-building communities can be distinguished: (1) methods for detecting network structures including actor–actor (social) networks but also actor–artifact networks, (2) approaches that reveal time-dependent process structures, and (3) artifact analyses using text mining or other techniques of computational content analysis. Log-file data can serve particularly well as an input for social network analysis (type 1) and various kinds of process or sequence analysis methods (type 2). Such analyses can be additionally supported by considering also the artifacts (especially learner-generated artifacts), but techniques for analyzing textual and other artifacts are dealt with in other chapters of this handbook (Borge and Rosé this volume; Trausan-Matu and Slotta this volume). Although both approaches (types 1 and 2) may rely on action-log data, they are of very different nature in terms of the output constructs that they generate: A single social network collapses action or communication data over a certain time period or “window” into a relational network structure made up of nodes (actors) and connections representing relations between actors (for a basic introduction to social network analysis, cf. Wasserman and Faust 1994). The basic network structure no longer represents time dependencies in the sense that a certain connection was established before another one, although the original data logs typically contain such information. In contrast, the basic entities of process-oriented analyses are actions or events together with temporal dependencies of the type “happened before/after” or “followed by” as basic structures. A dynamic interpretation of social networks is possible on the basis of time series of single networks as “snapshots” taken over consecutive time windows. There is already a long tradition of adopting social network analysis (SNA) in CSCL research.
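The window-based collapsing of an action log into network “snapshots” can be sketched as follows. The record layout (timestamp, actor, addressee) and the window size are illustrative assumptions:

```python
from collections import Counter

def network_snapshots(events, window_size=10):
    """events: time-ordered (timestamp, actor, addressee) interaction records.
    Collapses each consecutive time window into an undirected, weighted
    actor-actor network; within-window ordering is deliberately lost."""
    snapshots = {}
    for t, a, b in events:
        w = t // window_size              # index of the window containing t
        edge = tuple(sorted((a, b)))      # undirected: ignore direction
        snapshots.setdefault(w, Counter())[edge] += 1
    return snapshots

def degree_sum(network):
    """Sum of weighted degrees over all actors, a coarse indicator whose
    changes across consecutive snapshots can flag shifts in interaction."""
    return 2 * sum(network.values())
```

Comparing a measure such as degree_sum over consecutive snapshots yields a simple time series, in the spirit of interpreting changes in network measures as indicators of developments in learning groups.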
As early adopters, Reffay and Chanier (2003) relied on SNA in their study of cohesion in learning groups using a shared forum. Martínez et al. (2003) presented an evaluation method combining SNA with traditional sources of data and analyses in blended collaborative learning scenarios. In the context of the “productive multivocality” initiative (see above), Oshima et al. (2013) have proposed the usage of SNA techniques to analyze structural patterns in the discourse and knowledge evolution based on transcripts from classroom interactions. In this work, network measures are not just aggregated over a given time period, but changes in network measures (esp. the sum of degree centralities) are interpreted as indicators of significant developments in the learning groups. Meanwhile, there are also a few meta-level analyses related to the usage of SNA in CSCL research that are based on the analysis of publication data (Dado et al. 2017; Tang et al. 2014). These results show that there is potential for enriching the palette of network-analytic methods beyond current standards of adoption. The majority of applications use standard centrality measures or subgroup detection methods. There
is a large unexploited potential in using affiliation networks (i.e., network structures with two types of entities, such as learner-resource networks) and block modeling techniques for detecting roles of actors in communities. Process-oriented analysis methods have been introduced to CSCL in the already mentioned context of interaction analysis. One of the early example applications is the automatic detection of certain collaboration patterns in user traces from CSCL environments (Zumbach et al. 2002). Nowadays, CSCL applications can and should benefit from the advancement of general techniques for data mining and analytics. As for process-oriented methods, related techniques include “sequential pattern mining” (Fournier-Viger et al. 2014) or “process mining” (van der Aalst 2011). Sequential pattern mining has been included in the LeMo tool suite to analyze activities on online learning platforms (Elkina et al. 2013), whereas Bannert et al. (2014) have adopted “process mining” to characterize patterns and strategies in the context of self-regulated learning. In a recent analysis of the performance of small groups working on collaborative writing exercises in larger online courses, Doberstein et al. (2017) have used sequence analysis techniques derived from the analysis of DNA sequences in bioinformatics (Abbott and Tsay 2000). Here, sequence alignment was utilized to characterize learning groups with similar collaboration patterns. The ensuing clusters (based on sequential similarity) showed different characteristics in terms of the quality of cooperation and productivity. It could be shown that lower performance was more strongly related to missing early coordination than just to inactivity (Hoppe et al. 2020). Still, process-oriented analysis methods are less prominent than SNA in current CSCL research.
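A bioinformatics-style comparison of group activity sequences, as in the work cited above, can be sketched with a standard global-alignment (Needleman–Wunsch) score. The action labels and the scoring parameters are illustrative assumptions, not those of the cited studies:

```python
def alignment_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch style) alignment score between two action
    sequences, e.g. ["chat", "edit", "edit", "chat"]. Higher scores mean
    more similar sequences; the score can feed a pairwise similarity
    matrix for clustering groups with similar collaboration patterns."""
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap             # leading gaps in seq_b
    for j in range(1, m + 1):
        score[0][j] = j * gap             # leading gaps in seq_a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

Pairwise scores over all groups can then be clustered to obtain groups with similar collaboration patterns, along the lines of the sequential-similarity clusters described above.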
Combining and integrating different types of computational analysis techniques is one of the major challenges of current CSCL research, from both a technical and a conceptual point of view.
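The sequence-alignment idea used in the group-work analyses above can be illustrated with a minimal edit-distance sketch; the action codes and group sequences below are invented, and real analyses would use more refined alignment costs (e.g., optimal matching):

```python
def edit_distance(seq_a, seq_b):
    """Alignment-based distance between two action sequences
    (insertions, deletions, and substitutions each cost 1)."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match/substitution
    return dp[m][n]

# Invented per-group action sequences (C = coordinate, W = write,
# R = revise, I = idle), as might be derived from platform logs.
groups = {
    "g1": "CCWWRR",   # early coordination, then writing
    "g2": "CWWCRR",
    "g3": "IIWWWR",   # no early coordination
}
dist = {(a, b): edit_distance(groups[a], groups[b])
        for a in groups for b in groups if a < b}
```

Groups with small pairwise distances would then be clustered together, as in the characterization of well- and poorly coordinated groups described above.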
3.2 Interpretation and Meaning-Making
In this section, we discuss how log-file data analyses are related to different epistemic stances. We extend the definition of log-file data to include action logs captured in face-to-face contexts. Although not yet widely available, sensing and speech-recognition technologies could be used to capture learners’ action logs in synchronous CSCL, including video-chat systems. Enyedy and Stevens (2014) discuss four different epistemic stances for analyzing collaboration: (1) collaboration-as-a-window, (2) collaboration-for-distal-outcomes, (3) collaboration-for-proximal-outcomes, and (4) collaboration-as-learning. These stances come with corresponding methodological approaches. Researchers in the collaboration-as-a-window group focus on changes in individual minds as the target of their analyses. They are interested in how internal representations, such as knowledge in individual minds, are constructed through
their interaction with the environment around them, including other humans. Collaboration is thus seen as a context (or an experimental setting) in which learners externalize their prior knowledge and learning strategies. An example method is “constructive interaction” (Miyake 1986). In her study of how a pair of graduate students collaborated to figure out how a sewing machine works, Miyake used this method to detect individuals’ thoughts in their “natural” protocols. In an individual think-aloud experiment, subjects are forced to talk about thoughts that would otherwise remain internal. Constructive interaction, by contrast, is a setting in which a pair of people “naturally explains not only what they have been thinking about, but why they think it” (Miyake 1986, p. 159). Beyond this category of research on collaboration as a tool or window onto thinking, CSCL studies have been more concerned with collaboration-for-distal-outcomes, collaboration-for-proximal-outcomes, and collaboration-as-learning. In the collaboration-for-distal-outcomes group, researchers are concerned with the specific patterns that lead to better or worse distal learning outcomes, i.e., with how different types of collaboration lead learners to different levels of outcomes after their collaboration. They therefore attempt to identify interaction patterns correlated with levels of student learning outcomes; this type of approach has a long history in CSCL. In the early years of CSCL research, Oshima et al. (1996) analyzed log-file data, including elementary students’ written discourse and their operations such as searching and commenting in CSILE (Computer-Supported Intentional Learning Environments), to examine differences in the cognitive processes of students who made high or low conceptual progress. Oshima et al.
manually traced all the log-file data to recreate each student’s cognitive process of learning and coded their written discourse and graphics into different types of knowledge items: problem-centered, reference-centered, and metacognitive. Comparative analyses of the frequencies of coded actions revealed that students made more progress in their conceptual understanding when dealing with problem-centered knowledge in interaction with reference-centered knowledge. Based on this analysis, Oshima et al. proposed cognitive models of high and low conceptual progress students. In CSCL studies of collaboration-for-distal-outcomes such as this one, collaboration has been examined through process data and learning outcomes through independent measures such as standardized tests administered after the collaboration. One thing to be aware of when comparing “collaboration-for-distal-outcomes” studies with “collaboration-as-a-window” studies is that the former treat collaboration as a sequence of actions by multiple persons, such as interaction patterns. Oshima et al. (1996) did not treat students’ actions in CSILE as discrete units but as units linked to each other: every action was coded according to how it was influenced by previous activities, for example, by widening or deepening previous knowledge. Studies in the “collaboration-as-a-window” group, on the other hand, code every unit of analysis as a discrete action. So, even if both groups of studies count their coded actions, the meaning they derive from doing so differs according to their epistemic stances.
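The frequency-comparison step in such coding-and-counting analyses can be sketched as follows; the coded logs are invented, and the three categories only loosely echo those of Oshima et al.:

```python
from collections import Counter

# Invented coded action logs, one code per logged action
# (P = problem-centered, R = reference-centered, M = metacognitive),
# grouped by students' level of conceptual progress.
coded_logs = {
    "high": ["P", "R", "P", "M", "P", "R", "P"],
    "low":  ["R", "R", "P", "R", "R", "M"],
}

# Relative frequency profile of coded actions per group, so that
# groups of different sizes can be compared directly.
profiles = {}
for group, codes in coded_logs.items():
    counts = Counter(codes)
    total = sum(counts.values())
    profiles[group] = {code: counts[code] / total for code in "PRM"}
```

On real data, such profiles would be compared with an appropriate statistical test rather than by inspection.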
In the collaboration-for-proximal-outcomes group, researchers are interested in how learners engage in the co-construction of knowledge during their collaboration, but not necessarily in how this engagement is related to distal learning outcomes. The epistemic stance behind notions such as collaboration as the joint construction of meaning has been prominent in the field of CSCL (Stahl et al. 2014). Since researchers focus on the details of collaboration processes for their proximal outcomes, e.g., how a pair of learners attains “intersubjectivity,” the methodological approach taken in this group differs critically from the two previous ones. The microgenetic approach to examining the interaction process in collaboration is the most frequently used method. Roschelle (1992) is an excellent example of a study in this epistemic group. To investigate the interaction patterns leading to conceptual change in physics, he developed a computer-based simulation, the Envisioning Machine, in which multiple learners shared a screen showing physics events while planning and conducting experiments. He video-recorded pairs of students who engaged in the Envisioning Machine challenges and transcribed their discourse and actions using the transcript conventions of conversation analysis. His interpretations, based on detailed descriptions of students interacting with each other in front of a computer, revealed how the students engaged in collaboration to develop their own conceptual understanding of physics. The students were able to “construct increasingly sophisticated approximations to scientific concepts collaboratively, through gradual refinement of ambiguous, figurative, partial meanings” (1992, p. 237). The notion of “convergent conceptual change” discovered through his conversation analysis approach has been a basis for later studies developing CSCL systems as well as instructional designs for their use.
More recent work related to collaboration-for-proximal-outcomes includes Stahl’s (2016) “group cognition” and Cress’s co-evolution model of individuals and social systems (Cress and Kimmerle 2008), among others. Studies such as Cress’s attempt to apply a computational approach to collaboration-for-proximal-outcomes in order to handle big data. Finally, in the collaboration-as-learning group, researchers assume that collaboration is a cultural practice coordinated by the participants themselves. In the first three epistemic groups, researchers analyzed collaboration from a canonical view held in their studies. In the collaboration-for-distal-outcomes group, Oshima et al. searched for the interaction patterns connected to high conceptual progress; the criteria for high conceptual progress were determined by a canonical notion of appropriate scientific knowledge or conceptual understanding. In the collaboration-for-proximal-outcomes group, Roschelle used a physicist’s interpretation of the phenomena appearing on the computer screen as the criterion for judging how a pair of high school students became involved in convergent conceptual change. Koschmann and Zemel (2009) reanalyzed Roschelle’s transcript data from the perspective of collaboration-as-learning. They proposed an approach called “discovery as occasioned production.” In their reanalysis of Roschelle’s transcripts, they first identified some matter discovered in the students’ own sense, worked backward to find where that matter first appeared in the conversation, and
finally traced forward from that initial point to demonstrate “how the proposal for a possible discovery was ultimately transformed into a discovery achieved” (p. 213). The collaboration-as-learning approach is exploratory and descriptive rather than prescriptive, in contrast to the first three approaches. Learning through collaboration is seen as the improvement of community practices in ill-structured task contexts. As Enyedy and Stevens (2014) argued, this fourth approach would be more appropriate for examining collaboration in informal contexts, where learners intentionally find their peers and dynamically collaborate in groups. We also see the value of this approach in its application to knowledge creation practices. When community members collaborate to come up with new ideas or solutions to complex problems or challenges, they have to engage in creating new cultural practices of learning (Scardamalia and Bereiter 2014). What we need to analyze in learning as knowledge creation is how new ideas emerge through learners’ interaction in collaboration. We cannot fix clear criteria for evaluating ideas beforehand. All we can do is identify the occasioned discovery and then trace backward and forward to describe which actors facilitate or influence the discovery process. Developing a methodological approach to evaluate collaboration-as-learning is still an active research question. We have not yet established a method to measure community development or change in cultural practices based on log-file data. Several studies propose new statistical or computational methods to represent community knowledge and activities related to participants’ practices. However, few studies articulate how the proposed methods are grounded in their epistemic stances on learning.
We hope that researchers in CSCL and learning analytics will collaborate intensively to develop new lines of research that integrate epistemologies and methodologies. We discuss some studies related to this new stream of research in the next section.
4 The Future

While most studies of advanced computational log-file analysis are reported in the field of learning analytics, these approaches have also been taken up in CSCL and learning sciences studies in recent years. In this final section, we discuss how advanced learning analytics technologies can support decision-making, scaffolding, and reflection in CSCL studies from the perspective of Sandoval’s (2014) conjecture mapping, a prominent methodological framework for design-based research in the field. In design-based research on CSCL learning environments, researchers have to handle the many design embodiments (components) included in these environments, such as educational materials, instructions, technologies, and participatory structures. These components should be interwoven according to the epistemic stance behind the researchers’ learning theories. Conjecture mapping conceptualizes the relationship between theories, design components, and the learning outcomes induced through the design.

Fig. 1 Diagram of conjecture mapping in design-based research (Sandoval 2014)

Once they have mapped their conjectures in the framework proposed by
Sandoval (2014), researchers can systematically and progressively refine the learning environment by examining smaller parts of its design. In design-based research, researchers start from high-level conjectures about how to support learning in a specific context and consider which design embodiments to implement in an environment. The design embodiments are tools and materials, task structures, participatory structures, and discursive practices, and they are expected to facilitate specific mediating processes that ultimately lead to learning outcomes (Fig. 1). The first part of the conjecture, concerning whether the design embodiments facilitate the mediating processes, is called the “design conjecture.” In examining the design conjecture, we may apply the collaboration-for-proximal-outcomes stance: as described in the previous section, we can examine through microgenetic approaches whether our design embodiments facilitate the expected mediating processes, such as intersubjectivity. We can also be more exploratory and take the collaboration-as-learning approach to search for mediating processes that learners construct during their collaboration, as Koschmann and Zemel (2009) argued. The design conjecture should be further linked to the second part, the “theoretical conjecture,” by examining whether the effort to facilitate mediating processes during learners’ collaboration successfully develops their conceptual understanding, epistemic beliefs, motivation toward learning, and so on. The learning outcomes we focus on in the theoretical conjecture can be either distal or proximal, depending on researchers’ theoretical orientations. Several research projects pay attention to advanced log-file analytics in the design conjecture. Studies of learning analytics related to knowledge building theory are one example.
Combining a text-mining technique with the social network analysis of discourse developed by Oshima et al. (2012), Lee and Tan (2017) proposed an algorithm to evaluate students’ idea improvement through their knowledge-building discourse in Knowledge Forum. They used temporal analytics together with unsupervised machine learning techniques. To identify promising ideas, they first constructed a discourse unit network, using a text-mining algorithm (SOBEK) to detect keywords in
the units based on learners’ use of vocabulary in their discourse. Using text mining for keyword selection is a good step toward the collaboration-as-learning approach to log-file data analytics, compared with previous studies. Earlier studies such as Oshima et al. (2012) applied the collaboration-for-proximal-outcomes approach: they constructed the word list for the discourse unit network based on a predefined criterion of understanding taken from the teacher’s perspective, not from the learners’ perspective. Lee and Tan (2017), on the other hand, constructed the word list by computationally examining which words learners focused on when describing their ideas in discourse. Their approach is promising for discussing how learners engage in collective knowledge advancement through their discourse. Furthermore, they attempted to classify discourse units as promising, potential, or trivial ideas by using an unsupervised machine learning technique, k-means clustering. Each discourse unit was plotted in a degree centrality–betweenness centrality graph and then categorized into one of the three clusters. Lee and Tan examined the validity of their approach through a qualitative analysis of the discourse units identified as promising and found previously undiscovered discourse units related to promising ideas. Thus, the combination of NLP techniques and social network analysis might be a way to develop a framework of log-file data analytics for the design conjecture of CSCL environments. The second example of the collaboration-as-learning approach to log-file data analytics has been discussed as the theory-based approach in the field of learning analytics. Wise and Shaffer (2016) argued that it is dangerous to analyze learning without theory: many researchers in the field assume that the data can speak for themselves if we have a large enough amount of information.
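Lee and Tan’s clustering step, k-means over (degree centrality, betweenness centrality) features of discourse units, can be sketched as follows. The feature values are invented, and this stdlib-only k-means is a toy stand-in for a library implementation:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """A minimal k-means for 2-D feature vectors (stdlib only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centers[c][0]) ** 2
                            + (p[1] - centers[c][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Invented (degree centrality, betweenness centrality) features of
# discourse units; the three clusters stand in for "promising",
# "potential", and "trivial" ideas.
features = [(0.90, 0.80), (0.85, 0.75), (0.50, 0.20),
            (0.45, 0.25), (0.10, 0.05), (0.05, 0.10)]
centers, clusters = kmeans(features, k=3)
```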
In their argument, Wise and Shaffer insist that the more data we have, the more important theory becomes in the analysis. Building on this argument, Shaffer and Ruis (2017) proposed a theory-based approach. They first presented their epistemic stance, epistemic frame theory, as a way to conceptualize learning practices in a specific community of practice. They then demonstrated their technique, epistemic network analysis, for analyzing discourse and other data based on this epistemic theory. Epistemic frame theory (Shaffer 2012) is a theoretical perspective on thinking, acting, and being in a community of practice (Lave and Wenger 1991). In a community of practice, members have knowledge and skills situated in their cultural practices, values that guide how these can be used, and processes of decision-making and justification. Becoming a member of the community requires a person to adapt to a particular way of talking (discourse). Shaffer (2012) defines an epistemic frame as the grammar of this discourse. Having such a grammar of discourse (or becoming a member of a community of practice) does not simply mean that a person has the knowledge, skills, and values relevant to the community; rather, the discourse grammar represents a particular configuration of them. For examining learners’ cultural practices in collaboration, among the most promising methodological approaches have been ethnomethodology and conversation analysis, as discussed in the previous section. Shaffer (2012) proposed
“Epistemic Network Analysis” (ENA), a computational approach that extracts and quantifies connections among elements in coded data and visualizes them in dynamic network models, thus illustrating their interrelation and strength over time. With ENA, researchers can qualitatively and quantitatively examine the cultural practices that participants engage in, such as engineering projects (e.g., Svarovsky 2011), through their discourse. It is a challenge in itself to interpret the results of automatic, computational analyses as a meaningful contribution to a better understanding of the examined practice and to connect this interpretation to a theoretical framework. By building on principles of ethnographic research, namely the introduction of “codes,” as premises of the computational analysis of potentially massive data, and by providing dynamic visual representations, ENA facilitates the process of meaningful interpretation following the analytics. This is why the approach has been called “Quantitative Ethnography” (Shaffer 2017). It provides a basis for bridging between data analysis and meaning-making and combines the perspectives of learning analytics and CSCL, which is a general challenge that arises with log-file data analysis.

A deeper integration and synergy of CSCL and learning analytics would also lead to further progress in the analysis of log-file data. Especially multimodal learning analytics (Worsley and Blikstein 2014) opens the input channel of analysis to different communication modalities and media: these extensions include spoken language (through speech recognition and automatic transcription), gesture and body tracking to identify trajectories in physical group learning scenarios, the detection of emotions based on facial expressions, and the analysis of eye gaze to identify the focus of attention. The potential for deepening our analysis and understanding of human learning through these approaches has recently been summarized by Ochoa (2017), who particularly elaborates on body language and (physical) actions. Body language can be seen as a combination of posture (body position), gestures, and motion, each component carrying different types of information. The analysis of body language can help us to identify intentions from physical expressions in addition to what we can extract from language-bound interactions, and it may reveal unconscious aspects of a learner’s inner state during the learning process. In multimodal learning analytics, body language has been studied by capturing video with 2-D and 3-D sensors and applying computer vision algorithms to detect critical features. In the traditional understanding of log-file analysis, actions are typically conceived as changes in a (digital) system environment. Multimodal analytics adds actions in physical space to this analysis, which may partly involve computerized tools but also direct physical action. Actions as purposeful changes in the environment can serve as indicators of how well learners have acquired certain skills, measured through the sequence or correctness of the corresponding actions. In this sense, multimodal action-log data have been used to estimate learners’ expertise, for example in engineering (Worsley and Blikstein 2014) or mathematical problem-solving (Ochoa et al. 2013). Researchers have also used video-recorded data to analyze actions; depending on the type of action, they focus on specific body movements (e.g., the position and angle of the calculator in mathematical problem-solving) to sequentially categorize the actions.

As in other fields, the computational analysis of log files from educational settings relies more and more on complex and partially (or even largely) opaque algorithms. This is particularly true for predictive models based on multilayer artificial neural networks, but also for other methods of data mining or network analysis. Such algorithms, as well as the prior sampling of data, are not neutral or “objective”: they introduce specific biases. This has led to a new discussion under the heading of “accountability of algorithms” (see, e.g., Diakopoulos 2016; Martin 2019). This perspective calls for thorough meta-analyses of the premises as well as the systematic preferences and distortions inherent in certain algorithms. Only rarely will it be possible to spell out these dependencies exactly and in detail. What is needed is an understanding of the premises and pitfalls, similar to the applicability conditions of traditional statistical methods, which are part of the
established professional knowledge. This is another new challenge for data-intensive CSCL research.
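The connection-counting step at the core of ENA can be illustrated with a toy example. The codes and utterances below are invented, and real ENA adds normalization and dimensional reduction that this sketch omits:

```python
from collections import Counter
from itertools import combinations

# Invented coded discourse: each utterance is tagged with epistemic
# frame codes (e.g., skills, values, justifications).
coded_utterances = [
    {"skill", "justification"},
    {"skill", "value"},
    {"value"},
    {"skill", "justification"},
    {"justification", "value"},
]

# ENA-like connection counts: how often two codes co-occur within a
# moving window of `window` consecutive utterances.
window = 2
cooccur = Counter()
for start in range(len(coded_utterances) - window + 1):
    seen = set().union(*coded_utterances[start:start + window])
    for a, b in combinations(sorted(seen), 2):
        cooccur[(a, b)] += 1
```

The resulting counts would form the weighted network that ENA then visualizes and compares across groups or over time.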
References Abbott, A., & Tsay, A. (2000). Sequence analysis and optimal matching methods in sociology review and prospect. Sociological Methods & Research, 29, 3–33. Balacheff, N., Ludvigsen, S., De Jong, T., Lazonder, A., Barnes, S. A., & Montandon, L. (2009). Technology-enhanced learning. Springer. Bannert, M., Reimann, P., & Sonnenberg, C. (2014). Process mining techniques for analysing patterns and strategies in students’ self-regulated learning. Metacognition and Learning, 9(2), 161–185. Borge, M., & Rosé, C. P. (this volume). Quantitative approaches to language in CSCL. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer. Choquet, C., Iksal, S., Levene, M., & Schoonenboom, J. (2009). Users’ data: Trails analysis. In N. Balacheff, S. Ludvigsen, T. De Jong, A. Lazonder, S. A. Barnes, & L. Montandon (Eds.), Technology-enhanced learning (pp. 195–211). Springer. Cress, U., & Kimmerle, J. (2008). A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 3 (2), 105–122. Dado, M., Hecking, T., Bodemer, D., & Hoppe, H. U. (2017). On the adoption of social network analysis methods in CSCL research—A network analysis. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th international conference on computer supported collaborative learning (CSCL) 2017 (Vol. 1). Philadelphia, PA: International Society of the Learning Sciences. https://doi.org/10.22318/ cscl2017.40. Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62. Doberstein, D., Hecking, T., & Hoppe, H. U. (2017). Sequence patterns in small group work within a large online course. In Proceedings of CRIWG 2017 (CYTED-RITOS International Workshop on Groupware, August 2017) (pp. 104–117). Springer. 
Dyke, G., Lund, K., & Girardot, J. J. (2010). Tatiana, un environnement d’aide à l’analyse de traces d’interactions humaines. Technique et Science Informatiques, 29(10), 1179–1205. Elkina, M., Fortenbacher, A., & Merceron, A. (2013). The learning analytics application LeMo— Rationals and first results. International Journal of Computing, 12(3), 226–234. Enyedy, N., & Stevens, R. (2014). Analyzing collaboration. In K. R. Sayer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 191–212). Cambridge University Press. Fournier-Viger, P., Gomariz, A., Gueniche, T., Soltani, A., Wu, C. W., & Tseng, V. S. (2014). SPMF: A Java open-source pattern mining library. Journal of Machine Learning Research, 15 (1), 3389–3393. Harrer, A., Martínez-Monés, A., & Dimitracopoulou, A. (2009). Users’ data: Collaborative and social analysis. In N. Balacheff, S. Ludvigsen, T. De Jong, A. Lazonder, S. A. Barnes, & L. Montandon (Eds.), Technology-enhanced learning—Principles and products (pp. 175–193). Springer. Hoppe, H. U. (2017). Computational methods for the analysis of learning and knowledge building communities. In C. Lang, G. Siemens, A. F. Wise, & D. Gasevic (Eds.), The handbook of learning analytics (1st ed., pp. 23–33). Society for Learning Analytics Research (SoLAR). dooi: https://doi.org/10.18608/hla17.002
582
J. Oshima and H. U. Hoppe
Hoppe, H. U., Doberstein, D., & Hecking, T. (2020). Using sequence analysis to determine the well-functioning of small groups in large online courses. International Journal of Artificial Intelligence in Education, 1–20. https://doi.org/10.1007/s40593-020-00229-9
Kitto, K., Cross, S., Waters, Z., & Lupton, M. (2015). Learning analytics beyond the LMS: The connected learning analytics toolkit. In Proceedings of the 5th International Conference on Learning Analytics and Knowledge (pp. 11–15). ACM.
Koschmann, T., & Zemel, A. (2009). Optical pulsars and black arrows: Discoveries as occasioned productions. The Journal of the Learning Sciences, 18(2), 200–246.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Lee, A. V. Y., & Tan, S. C. (2017). Promising ideas for collective advancement of communal knowledge using temporal analytics and cluster analysis. Journal of Learning Analytics, 4(3), 76–101.
Lund, K., & Suthers, D. D. (2013). Methodological dimensions. In D. D. Suthers, K. Lund, C. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 21–35). Springer.
Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of Business Ethics, 160, 835–850. https://doi.org/10.1007/s10551-018-3921-3
Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E., & De La Fuente, P. (2003). Combining qualitative evaluation and social network analysis for the study of classroom social interactions. Computers & Education, 41(4), 353–368.
Miyake, N. (1986). Constructive interaction and the iterative process of understanding. Cognitive Science, 10, 151–177.
Ochoa, X. (2017). Multimodal learning analytics. In C. Lang, G. Siemens, A. F. Wise, & D. Gašević (Eds.), Handbook of learning analytics (pp. 129–141). Society for Learning Analytics Research.
Ochoa, X., Chiluiza, K., Méndez, G., Luzardo, G., Guamán, B., & Castells, J. (2013). Expertise estimation based on simple multimodal features. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13) (pp. 583–590). ACM.
Oshima, J., Matsuzawa, Y., Oshima, R., & Niihara, Y. (2013). Application of social network analysis to collaborative problem solving discourse: An attempt to capture dynamics of collective knowledge advancement. In D. D. Suthers, K. Lund, C. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions (pp. 225–242). Springer.
Oshima, J., Oshima, R., & Matsuzawa, Y. (2012). Knowledge building discourse explorer: A social network analysis application for knowledge building discourse. Educational Technology Research and Development, 60(5), 903–921.
Oshima, J., Scardamalia, M., & Bereiter, C. (1996). Collaborative learning processes associated with high and low conceptual progress. Instructional Science, 24, 125–155.
Reffay, C., & Chanier, T. (2003). How social network analysis can help to measure cohesion in collaborative distance-learning. In B. Wasson, S. Ludvigsen, & U. Hoppe (Eds.), Designing for change in networked learning environments (pp. 343–352). Springer.
Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. Journal of the Learning Sciences, 2(3), 235–276.
Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36.
Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 397–417). Cambridge University Press.
Shaffer, D. W. (2012). Models of situated action: Computer games and the problem of transfer. In C. Steinkuehler, K. D. Squire, & S. A. Barab (Eds.), Games, learning, and society: Learning and meaning in the digital age (pp. 403–431). Cambridge University Press.
Shaffer, D. W. (2017). Quantitative ethnography. Cathcart Press.
Finding Meaning in Log-File Data
Shaffer, D. W., & Ruis, A. R. (2017). Epistemic network analysis: A worked example of theory-based learning analytics. In C. Lang, G. Siemens, A. F. Wise, & D. Gašević (Eds.), The handbook of learning analytics (1st ed., pp. 175–187). Society for Learning Analytics Research (SoLAR).
Stahl, G. (2016). Constructing dynamic triangles together: The development of mathematical group cognition. Cambridge University Press.
Stahl, G., Koschmann, T., & Suthers, D. (2014). Computer-supported collaborative learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed., pp. 479–500). Cambridge University Press.
Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42.
Suthers, D. D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (2013). Productive multivocality in the analysis of group interactions. Springer.
Svarovsky, G. N. (2011). Exploring complex engineering learning over time with epistemic network analysis. Journal of Pre-College Engineering Education Research, 1(2), 19–30.
Tang, K. Y., Tsai, C. C., & Lin, T. C. (2014). Contemporary intellectual structure of CSCL research (2006–2013): A co-citation network analysis with an education focus. International Journal of Computer-Supported Collaborative Learning, 9(3), 335–363.
Trausan-Matu, S., & Slotta, J. D. (this volume). Artifact analysis. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
van der Aalst, W. (2011). Process mining: Discovery, conformance and enhancement of business processes (Vol. 2). Springer.
Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge University Press.
Wise, A. F., & Shaffer, D. W. (2016). Why theory matters more than ever in the age of big data. Journal of Learning Analytics, 2(2), 5–13.
Worsley, M., & Blikstein, P. (2014). Deciphering the practices and affordances of different reasoning strategies through multimodal learning analytics. In Proceedings of the 2014 ACM Workshop on Multimodal Learning Analytics Workshop and Grand Challenge (MLA '14) (pp. 21–27). ACM.
Zumbach, J., Muehlenbrock, M., Jansen, M., Reimann, P., & Hoppe, H. U. (2002). Multidimensional tracking in virtual learning teams: An exploratory study. In G. Stahl (Ed.), Computer support for collaborative learning: Foundations for a CSCL community (pp. 650–651). International Society of the Learning Sciences.
Further Readings

Harrer, A., Martínez-Monés, A., & Dimitracopoulou, A. (2009). Users' data: Collaborative and social analysis. In N. Balacheff, S. Ludvigsen, T. De Jong, A. Lazonder, S. A. Barnes, & L. Montandon (Eds.), Technology-enhanced learning—Principles and products (pp. 175–193). Springer. This article documents and summarizes earlier discussions (before the advent of learning analytics) about standardizing action-log formats, very much from a CSCL perspective, under the notion of "interaction analysis."

Hoppe, H. U. (2017). Computational methods for the analysis of learning and knowledge building communities. In C. Lang, G. Siemens, A. F. Wise, & D. Gašević (Eds.), The handbook of learning analytics (1st ed., pp. 23–33). Society for Learning Analytics Research (SoLAR). https://doi.org/10.18608/hla17.002. This article classifies and characterizes different types of computational analysis techniques relevant to the analysis of log files (and also learner-generated artifacts). It elaborates on certain distinctions also used in this chapter from a computer science perspective.

Oshima, J., Oshima, R., & Fujita, W. (2018). A mixed-methods approach to analyze shared epistemic agency in jigsaw instruction at multiple scales of temporality. Journal of Learning Analytics, 5(1), 10–24. This empirical study discusses how to mix traditional discourse analysis with a new computational approach in the CSCL field. Based on dialogism, the authors examine student collaborative discourse from two analytical points of view on how meaning is jointly constructed.

Shaffer, D. W. (2017). Quantitative ethnography. Cathcart Press. In this book, Shaffer proposes a new discipline called quantitative ethnography. Recent developments in information technologies and computer science make it possible to treat big data in the CSCL field. Along with a robust epistemic stance, Shaffer explores a new direction of analyzing collaboration computationally.

Suthers, D. D., Dwyer, N., Medina, R., & Vatrapu, R. (2010). A framework for conceptualizing, representing, and analyzing distributed interaction. International Journal of Computer-Supported Collaborative Learning, 5(1), 5–42. This is a good example of how theory-building in the analysis of traces from educational conversations goes together with the development of formal representations and analysis methods.
Quantitative Approaches to Language in CSCL

Marcela Borge and Carolyn Rosé
Abstract In this chapter, we provide a survey of language quantification practices in CSCL. We begin by defining quantification of language and providing an overview of the different purposes it serves. We situate language quantification within the spectrum of more to less quantitative research designs to help the reader understand that both quantitative and qualitative researchers can quantify language. We then provide a review of articles published in IJCSCL from 2006 to 2018 to give the reader an understanding of who is quantifying language, in what contexts, how they are quantifying it, and for what purpose. The articles were sorted by theoretical stance to show how theoretical leanings influence (1) how researchers perceive language as a tool for learning and (2) how they quantify language. Finally, we discuss the future of language quantification in CSCL, including grand challenges we face, emerging practices, and new directions.

Keywords Quantification of language · Computer-supported collaborative learning · Quantitative methods · Learning theories · Theoretical frameworks · Methodological frameworks · Survey
M. Borge (*) Department of Learning and Performance Systems, The Pennsylvania State University, University Park, State College, USA e-mail: [email protected] C. Rosé School of Computer Science, Carnegie Mellon University, Pittsburgh, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_32
1 Definitions and Scope: Quantification of Language in CSCL

What sets CSCL apart from simply CL is the technology component and the important focus on the impact of technology on collaborative processes. This chapter surveys the concept of quantification of language in CSCL, situated within the spectrum of theoretical stances and methods that are used to examine research aims. An earlier chapter focused on quantification of language in collaborative learning more broadly (Howley et al. 2013), and we draw a connection with that earlier work. Given the dependency that CSCL has on language frameworks, and given that the very act of creating discrete coding categories to apply to language is a form of quantification, it is important to define what we mean by quantification of language. For the purposes of this chapter, the quantification of language goes beyond the discretization or abstraction of language. It is conceptualized as the quantification of verbal or written communications between individuals, drawn from video, audio, online chat, or other textual data, for the purpose of measuring, comparing, or describing patterns in language or language-dependent activities. This quantification can be accomplished through frequency counts, an analysis of when in time events occurred, statistical analysis, or computational modeling practices. Both quantitative and qualitative researchers can quantify language. Quantitative researchers may quantify language as a means to look for mathematical patterns, whereas qualitative researchers may quantify language as a means to select representative data or look for cultural patterns. Quantification of language can serve many purposes.
It can provide a way to measure the quality of learner interactions or changes in learner activity; identify patterns between different phenomena through case comparisons or statistical analysis; describe activity as it occurs in a case study or empirical study; or examine the effectiveness of tools and interventions. Associated with each purpose, the act of quantification provides researchers with distinctive methodological capabilities. As a measurement tool, quantification becomes an end in itself, as researchers quantify language for the purpose of creating new methods for evaluating collaborative interactions or learning outcomes. For example, Tissenbaum et al. (2017) proposed a method for assessing divergent learning at interactive tabletops. Others have proposed ways to assess the quality of collaborative processes in synchronous online discussion contexts (Borge et al. 2018; Meier et al. 2007) and have broadly disseminated computer-aided coding and rating methods (Dascalu et al. 2015; de Laat et al. 2007; Erkens et al. 2016; Erkens and Janssen 2008; Rosé et al. 2008). As a pattern recognition tool, quantification becomes a means to an end. Quantification can be used to identify common trends and problems that occur during collaboration, or associations between specific language patterns and their impact on learning processes and outcomes. For example, we can count the frequency of certain types of discussion problems and see whether these problems interfere with learning outcomes. Alternatively, we can look for the prevalence of higher order thinking events to see whether they increase around specific collaborative activities.
Thus, we are not measuring the quality of different language events, but rather looking for relationships between language patterns and different processes. As a descriptive tool, quantification can help unpack important activity and develop a deeper understanding of it. For example, researchers quantified patterns of argumentation that occurred in a popular massive online gaming context to show how players engaged in these practices (Alagoz 2013). Others have quantified language to describe technology-enhanced interactions associated with higher quality group products (Damsa 2014) or to illustrate types of talk that led to richer collaborative learning when working with interactive tabletops (Falcão and Price 2011; Martinez-Maldonado et al. 2013). Beyond description, quantification of language can serve as a tool for evaluation. It can help researchers examine whether an intervention or tool succeeds in changing collaborative interactions in a desired way. For example, it is common in CSCL for researchers to examine the impact of software tools or features on the frequency, pattern, or quality of collaborative argumentation acts (Lund et al. 2007; Nussbaum et al. 2007; Schwarz and Glassner 2007; Schwarz et al. 2011). When examining the effectiveness of an intervention or tool, we expect that its use will lead to a desired change. For this reason, we look to see if and how the desired change occurred as a means to evaluate or improve the intervention or tool. Regardless of how it is used, the quantification of language allows researchers to present readers with discernable language patterns to support research claims. It is a way to ensure that patterns seen by a researcher are not due to cognitive bias, but are a product of identifiable and countable patterns that exist in the data and can be found by others.
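The simplest of these quantification moves, counting coded events per group and checking whether the counts co-vary with an outcome, can be sketched in a few lines. The following is a toy illustration, not taken from any study discussed here; all counts and scores are hypothetical, and Pearson's correlation is used as the association measure:

```python
# Hypothetical data: number of coded "higher order thinking" events per
# group session, alongside each group's learning-outcome score.
events = [2, 5, 1, 7, 4, 6]
scores = [55, 70, 50, 85, 65, 80]

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(events, scores)  # close to 1.0 here: more events, higher scores
```

In practice researchers would use a statistics package and test significance; the point is only that the quantified codes, rather than the researcher's impressions, carry the evidential weight of the claim.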
2 History and Development: Using Quantification as a Means to Address Issues of Bias

Historically, qualitative and quantitative camps have existed in tension with one another. Quantitative research is sometimes criticized as shallow or even atheoretical, while qualitative research is sometimes dismissed as lacking rigor. However, both quantitative and qualitative research have strengths and weaknesses, and, as Creswell and Creswell (2017) explain, they exist on a continuum. Researchers on the more quantitative end of the spectrum, largely influenced by psychology, prioritize the testing of theories and rely on causal models. They isolate a variable or mechanism and, if its level can be manipulated, expect its manipulation to affect the outcome of interest. Researchers on the qualitative end of the spectrum, largely influenced by anthropology and sociology, do not rely on causal models. They may produce new frameworks, principles, or models as a product of their research, but prioritize the characterization of human experience over the testing of theories (Creswell and Creswell 2017). The field of the learning sciences has both strong qualitative and quantitative branches. The CSCL community, in particular, has made great efforts to bridge
these communities to find productive synergies between them (Suthers et al. 2013). Of course, much research in CSCL is neither purely quantitative nor purely qualitative, but lies somewhere in between. This is because even our most quantitative researchers tend to rely heavily on qualitative forms of data like transcripts of verbal interactions. Regardless of where in the spectrum researchers may fit, they can quantify language as a means to reduce potential biases in their research. There are a variety of ways that biases can impact research when using qualitative data. Brown (1992) and Jordan and Henderson (1995) point out that when working with qualitative data, researchers can suffer from selection bias, where they only analyze events that show positive aspects of their interventions or beliefs. Cognitive biases can also impact data analyses, as a researcher may see patterns in the data that others, even those with similar expertise, may not. Thus, there is a need to establish that the coding of qualitative data has been done reliably and that research results could be reproduced by others carrying out similar methods. To address these problems, researchers on the more quantitative end of the spectrum use a variety of strategies to reduce the subjective nature of qualitative data. One classic example is provided by Chi (1997), who produced a guide for the quantification of qualitative data as a means to use language to objectively determine what a learner knows. Chi's approach developed a quantitative way to tabulate the content of what is said so as to evaluate it. Quantitative researchers will also compare codes between raters mathematically to ensure that different coders can reliably code the same data the same way, i.e., establish inter-rater reliability. Inter-rater reliability can be established by using a variety of statistical techniques, such as Pearson's correlation, intra-class correlation, Kappa, and percent agreement.
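As a minimal sketch of two of these reliability measures, the following computes percent agreement and Cohen's kappa for two raters; the codes and category labels are hypothetical:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of segments that the two raters coded identically."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: probability both raters pick a category independently
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes two raters assigned to the same ten talk turns
rater_a = ["cog", "meta", "cog", "off", "cog", "meta", "cog", "cog", "off", "meta"]
rater_b = ["cog", "meta", "cog", "cog", "cog", "meta", "off", "cog", "off", "meta"]

agreement = percent_agreement(rater_a, rater_b)  # 0.8
kappa = cohens_kappa(rater_a, rater_b)           # about 0.68
```

Kappa comes out lower than raw agreement because some agreement is expected by chance alone, which is why it is often preferred when coding categories are unevenly distributed.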
But what about researchers on the more qualitative end of the spectrum? Do they quantify language? You may be surprised to know that they too quantify language in various ways to reduce bias and organize their thinking. For example, content logging, the practice of summarizing qualitative data at specific time intervals, is a form of language quantification in that it orders and simplifies language acts over time into more easily digestible representations of activity. What is more, Jordan and Henderson (1995) recommend using content logs as a means to select discourse-rich sessions and filter the data down to those sessions that contain the largest quantity of activity related to the research questions. These types of logs also help qualitative researchers select representative examples for their narratives and organize how those narratives unfold. Jordan and Henderson also provide guidelines for reliably identifying phenomena without relying on mathematical comparisons of codes between raters. When using this approach, a researcher can use content logs to find critical episodes, and where in time they occurred, so as to play them for a group. The group then examines the episode together to audit what the initial viewer perceived and either confirm that there is shared agreement or send the researcher back to reanalyze the episode. So, while qualitative researchers may not quantify language as part of their analysis, they may do so for data selection, peer auditing, and data-reporting purposes.
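Content logging as just described has a simple computational analogue: bucketing timestamped summaries into fixed time intervals. A sketch (the timestamps and summaries below are hypothetical):

```python
from collections import defaultdict

# Hypothetical content log entries: (minute into session, short summary)
entries = [
    (12, "group reads task prompt"),
    (14, "disagreement about variables"),
    (31, "teacher check-in"),
    (33, "group revises plan"),
]

INTERVAL = 5  # summarize activity in 5-minute intervals

log = defaultdict(list)
for minute, summary in entries:
    # Map each entry to the start of the interval it falls in (0, 5, 10, ...)
    log[(minute // INTERVAL) * INTERVAL].append(summary)

# log now maps each interval start to the summaries that fall inside it,
# e.g. the interval starting at minute 10 holds the first two entries.
```

Such a log lets a researcher see at a glance which intervals are discourse-rich and jump straight to them, without any statistics being computed.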
3 State of the Art: Common Language Quantification Frameworks in CSCL

In this section, we draw on full articles published in the International Journal of Computer-Supported Collaborative Learning (IJCSCL) from March 2006 to September 2018 that met the following four criteria:

1. The authors examined language that occurred synchronously or asynchronously as part of dialogue, defined by Enyedy and Hoadley (2006, pp. 243–244) as "an interaction in which participation is distributed across individuals, and where the production of meaning is dynamically negotiated within and dependent on the current context."
2. The authors included a coding framework, i.e., a formal construct map or description of codes and definitions.
3. The authors quantitatively described, measured, identified, or compared patterns in dialogue by using frequency counts, referring to specific points in time as a means to denote change, conducting statistical analysis, or using mathematically aided automated methods.
4. The authors analyzed the content of what was said during dialogue and did not merely measure or count the amount or direction of participation, or provide a meta-analysis of how others analyzed dialogue.

Out of the 236 full articles that were examined, 105, or 44.5%, met our criteria. We used these papers as a sample to describe the most common methodological practices associated with quantification of language. To look for patterns in language quantification, we created a database with a row for each article and columns for its characteristics. Characteristics included research questions and aims, year published, type of data, learning context, unit of analysis for language coding, level of analysis for process/outcome measures, type of technology used, and the city and country to which the authors belong.
We then examined the characteristics in the database alongside the full article to code each article by type of research paper, primary theoretical stance, and research question themes. In determining the primary theoretical stance an article emphasized, we considered what the authors identified as their main influence, the types of papers cited in their literature review to support their perspectives on learning, how they discussed the purpose of learning and indicators for learning, and the literature cited in their methods section to justify their methodological approach. We then sorted papers by primary theoretical predisposition first and research paper type second to look for similarities across papers. We begin broadly by discussing where our sample of papers came from geographically and contextually and what types of research papers were most prominent. We then describe the primary ways researchers quantified language according to their theoretical stances.
3.1 Geographical and Contextual Representation
The first and last authors of the articles came from a variety of countries across five continents (see Fig. 1). The majority of authors came from four countries: the United States (27 articles), Germany (13 articles), China (11 articles), and the Netherlands (six articles). The articles covered a range of research contexts, but the majority of the data came from formal classroom contexts in the authors' home countries (see Table 1). The next most common context was laboratory-based studies, which included data from experimental studies conducted in lab settings and quasi-experimental studies where classrooms would visit lab settings as part of course activity. The least represented contexts were studies conducted in informal learning and professional training contexts. The one article that focused on work training occurred at a professional workshop; there were no studies that were conducted in real-world work contexts. Articles were then classified into five types of research papers: descriptive studies, methods papers, quasi-experimental studies, experimental studies, and design-based research papers (see Fig. 2 for frequencies of these papers over time).
Fig. 1 Location of first and last authors of representative papers

Table 1 Frequency counts of the different contexts represented in the papers

Research context (count)        Subcontext       Total   Percent
Informal learning (6)           Afterschool      3       5.71%
                                Museum           2
                                Summer camp      1
Professional development (1)   Work training    1       0.95%
Formal learning (73)            Classroom        58      69.52%
                                Online course    14
Experimental (20)               Laboratory       21      19.05%
Unknown (5)                     N/A              5       4.76%
Fig. 2 The frequency of different types of research papers in IJCSCL that quantified language, over three time periods
This allowed us to see what types of papers were most likely to quantify language that occurred during collaboration. The two most common types were descriptive studies (37.1% of papers) and quasi-experimental studies (27.6% of papers). Both types have remained strong throughout the life of the journal and were conducted in real-world contexts (i.e., classrooms, online courses, or museums), where random assignment was not possible. Descriptive studies did not compare or test variables, whereas quasi-experimental studies focused on comparing the effects of different variables (i.e., interventions, software features, specific processes, or digital tools) on collaborative discourse practices and other outcomes. Methods papers were the next most common type of article, accounting for 15.2% of all identified papers. Methods papers were more common in earlier publications (2006–2010) than in more recent ones. These papers proposed ways to segment and categorize language manually or with computer assistance. Experimental studies accounted for 12.4% of all identified papers but have increased in frequency in recent years (2015–2018). These studies were conducted in laboratory settings where participants were not completing aspects of regular, graded course activity and were randomly assigned to conditions. Finally, design-based papers accounted for 7.6% of our sample and have decreased in frequency since 2010. These papers described the root concept and evolution of a technology or curriculum and shared multiple iterations of user testing or classroom implementations for the purpose of redesign or documentation.
3.2 Primary Theoretical Stance
In CSCL, research has a social learning component, but within that social learning framework, the application of theories exists on a scale from more cognitive to more sociocultural. These theoretical stances also coincide with the spectrum of more quantitative (cognitive) to less quantitative (sociocultural) research designs. For the purposes of this paper, we categorized papers on this spectrum by examining their literature reviews and methods (see Table 2 for a description of the categories on the spectrum).

Table 2 Description of the three main theoretical frameworks that are used to capture the spectrum of theoretical stances within the field of CSCL

Cognitivism
  Theoretical assumptions: Learning occurs in the head through cognitive restructuring of psychological and biological mechanisms in the brain.
  How language is treated: As a stimulus for triggering cognitive processes; language is quantified as a means to determine whether desired cognitive processes occurred or whether specific information was transferred.
  Primary unit of analysis: The individual.
  Examples: Concept mapping, conceptual change, scripting/scaffolding of individual thought processes, etc.

Social constructivism
  Theoretical assumptions: Learning occurs in different forms (beliefs, values, interaction patterns, strategies, procedures, etc.) at multiple levels.
  How language is treated: As a method for promoting more sophisticated social, cognitive processes; language is quantified as a means to determine the sophistication of cognitive activity that occurred during social processes, or changes in different forms of learning.
  Primary unit of analysis: The individual and interactions between individuals.
  Examples: Scripting/scaffolding of collaborative processes, argumentation, peer tutoring, teacher–student interactions, socio-metacognition, etc.

Socioculturalism
  Theoretical assumptions: Learning occurs in the social plane as a function of the gradual internalization/appropriation of social processes: cultural beliefs, norms, expectations, practices, and value systems.
  How language is treated: As a method for characterizing forms of cultural activity; language is quantified as a means to determine where in time an activity took place, how common an occurrence it was, or how it changed.
  Primary unit of analysis: Interactions between individuals, between individuals and objects, between individuals and groups, within groups, or within communities.
  Examples: Dialogic theory, group cognition, identity, community knowledge building, diffusion of knowledge, positioning and participation, etc.

We examined how they conceptualized social learning to determine
Fig. 3 Number of paper types by theoretical stance
which of the three theories on the spectrum they emphasized and what types of papers were most commonly produced. We then examined how they quantified language. In doing so, we found that differences in how language was treated within different theoretical camps aligned with epistemological commitments stemming from the influence of the two fields that have most shaped CSCL: psychology and anthropology. Psychology's influence is more prevalent on the cognitive end of the spectrum, whereas anthropology's influence is more pronounced on the sociocultural end. As such, the purpose and methods for quantifying language were highly influenced by theoretical stances. This was particularly evident in the types of papers that each stance produced and how they used language quantification to support claims. Epistemological differences were apparent in the types of papers produced by researchers, with experimental and quasi-experimental papers being produced more by those toward the cognitive end and more descriptive papers toward the sociocultural end (see Fig. 3 for counts of paper types by theoretical stance). Of the 105 papers in our sample, 2.9% were classified as being on the cognitive end of the spectrum and included quasi-experimental and experimental papers. The largest share of papers, 66.7%, was classified under social constructivism. These papers were fairly evenly split between quasi-experimental and descriptive papers, but also included 12 of the 13 experimental papers. Just under a quarter of our sample (22.9%) was categorized under socioculturalism, which primarily produced descriptive papers. Finally, eight papers (7.6%) were not classified because they were more technologically or methodologically oriented than they were oriented toward theories of learning. Theoretical and epistemological differences between these papers, often at odds with each other, impacted how language was perceived and quantified.
This is true even in our sample, which required that language be quantified as part of process analysis. At the more cognitive end of the theoretical spectrum, learning is conceptualized as cognitive restructuring and typically studied with the individual student as the
level of analysis. Common theoretical frameworks include mental models (Johnson-Laird 1980), conceptual change (Nersessian 1989; Posner et al. 1982; White 1993), scripting or scaffolding of individual thought processes or behaviors (Bruner 1978; Schank 1980; Schank and Abelson 2013), and metacognition (Brown 1978; Brown 1987; Flavell 1979). A common way language was treated within this paradigm was as a stimulus for triggering individual cognitive processes. Researchers on this end of the spectrum focused on evaluating conversational patterns with respect to which cognitive processes they triggered and why. Influenced by methods in psychology, these researchers categorized and defined conversational patterns in a reproducible way so that they could be counted and analyzed alongside variables to determine whether there was a statistical relationship between them. For example, Molenaar et al. (2011) wanted to examine the relationships between different scaffolds in an agent-based collaborative environment, metacognitive discussion activity, and individual learning outcomes. To do so, they segmented online communication from synchronous agent-led discussions by turns of talk and then classified the turns by type of activity: cognitive, metacognitive, relational, procedural, off-task, etc. They calculated the proportion of talk for each student in each category in relation to total turns of talk in the team. They then connected the proportion of individual talk to individual learning outcomes to show that higher proportions of individual metacognitive talk led to higher levels of individual domain and metacognitive knowledge. At the sociocultural end of the spectrum, we find communities of practice and apprenticeship learning (Lave 1996; Lave and Wenger 1991; Rogoff et al. 1995). Language is treated more generally as a socialization tool, and the unit of interest broadens from the individual within a community to the community's common practices as a whole.
This end of the spectrum contains a range of research aims, as learning is conceived as different types of cultural endeavors: (1) a collective, dialogic endeavor; (2) a knowledge creation endeavor; or (3) a participatory endeavor. At this end of the spectrum, there is less emphasis on the cognitive factors associated with learning and more on cultural factors, like how people use cultural tools to mediate their activity (Morrow and Brown 1994) or how cultural tools are appropriated in interactions within a community. Cultural tools can include physical tools like computers, psychological tools like beliefs, as well as other people (Kozulin 1998). The analysis of conversation also goes beyond the Vygotskian apprenticeship model to examine how cultural practices are transmitted, mutated, and transferred from one person or context to the next. Researchers on the sociocultural end of the spectrum aligned more with anthropology and were, therefore, more qualitative, prioritizing realism and ecological validity over precision of measurement and control of variables. As such, they depended more on the rich description of participants, contexts, and unfolding events. They emphasized the enhancement of individual cognitive processes less and placed more emphasis on cultural tendencies and practices. These papers, accordingly, often included case studies and quantified language for the purpose of showing how events unfolded temporally by counting the number of times an event occurred, pinpointing when in time an event occurred, or characterizing the
length of time of an event (e.g., see Fields and Kafai 2009; Kershner et al. 2010; Oner 2016; Simpson et al. 2017). The majority of these papers also coded language at the level of the small group or community by coding community notes, collective products, or episodes of joint talk. When examining learning at an individual level, sociocultural papers did not emphasize what individuals learned, but rather how and why it was learned. For example, Öztok (2016) examined how personal identity mediated knowledge construction during online discourse. To do so, he built on previous coding frameworks (Gunawardena et al. 1997; Wise and Chiu 2011) to code online discussion threads by the knowledge construction phase they pertained to. After coding threads of discussion by individual contribution, discussion threads were examined to show how individuals drew on different aspects of their identity to engage in knowledge construction. In one of the analyzed cases, one person enacted her teacher identity to discuss issues of students' psychological safety in online environments, but then drew on her identity as a political activist to discuss larger issues related to the politics of social media use. Thus, Öztok's main reason for quantifying language was to describe context and connect different phases of talk to a psychological tool, identity. No statistical analysis was conducted, nor were descriptive statistics used. The main forms of language quantification were rating the type of exchanges according to knowledge construction phase and counting the number of notes in a thread. In between the two theoretical poles of cognitive and sociocultural theory, we find social constructivism.
This framework often bridges these two ends of the spectrum by combining cognitive and sociocultural theories, e.g., scripting or scaffolding of collaborative processes (Baker and Lund 1997; Dillenbourg 1999; Fischer and Mandl 2001; Rummel and Spada 2005), cognitive apprenticeship (Collins et al. 1991), and Fostering Communities of Learners (Brown and Campione 1996). Papers that were categorized under social constructivism tended to emphasize the cognitive end of the spectrum more than the sociocultural end and therefore shared more similarities with cognitive papers on our spectrum than sociocultural ones. Similar to the cognitive stance, social constructivists conceptualized learning as existing in the heads of individuals, but also as facilitated by cognitive activities that occurred in different aspects of the social realm. For this reason, they described learning as an inherently social process that existed in multiple forms and scales of analysis, but often examined social learning processes in relation to individual learning outcomes. A primary pedagogical aim for this group was to use language as a means to create more sophisticated cognitive opportunities for learners and thereby enhance individual learning outcomes. As such, they relied heavily on defining and analyzing conversational patterns in reproducible ways and on quantifying different aspects of the analytical process to facilitate comparisons between process and outcome variables. Papers in this category focused on identifying content-based cognitive activity that occurred during collaborative discourse so as to determine whether there were mathematical relationships between identified patterns and interventions, technological features, or learning. The codification of conversational processes focused on
M. Borge and C. Rosé
identification and classification of knowledge co-construction processes like distribution of cognitive roles, argumentation, and help exchange, but primarily at the level of the individual message or utterance. For example, Cesareni et al. (2016) focused on examining the impact of assigned cognitive roles on the amount of individual participation and whether the content of discourse varied between role-takers and non-role-takers. They devised a coding scheme that synthesized sociocultural and cognitive theories to focus on four “global conversational functions” that included (a) introducing new problems or content, (b) taking up or revising previous information or ideas, (c) evaluating or reflecting, and (d) fostering and/or maintaining relations. To look for statistical differences between role conditions, Cesareni et al. (2016) segmented online communication into sentence-based segments and coded each segment of the online communication according to the type of conversational function. After coding, they used chi-square analysis to determine if there were significant differences between role-takers and non-role-takers in the distribution of the coded categories and whether there were differences between the conversational moves made by different role-takers. So, while they examined language that occurred within a group, they coded language at an individual level, i.e., how individuals contributed to the group conversation. Coding language at the level of the individual contribution was the most common methodological approach within social constructivism, especially in quasi-experimental papers. Only 4 out of the 27 quasi-experimental papers in this category coded language at an interactional or group level. Three of these papers devised coding schemes to identify how team members responded to others’ discussion moves (Mercier et al. 2014; Rummel et al. 2012; Su et al. 
2018) and one paper examined patterns of interactions between group members for entire discussions over time (Borge et al. 2018). As an illustration, Mercier et al. (2014) coded for specific leadership acts at the level of the individual turn, but also coded the success of these moves by examining whether team member turns that followed accepted them. They then examined the frequency of general leadership moves and successful leadership moves across tasks and children. Borge et al. (2018) devised rubrics to assess the quality of interactions that occurred between individuals throughout an entire discussion session. Their coding scheme accounted for different types of sense-making, number of speakers, and length of episodes. They then relied on statistical analysis of these patterns to see whether the use of an instructional intervention helped teams significantly improve on these patterns over time and to what extent different conditions of support facilitated higher quality content-based discourse and regulation. Even descriptive papers in this category relied heavily on the codification and quantitative analysis of language. These papers coded language as a means to describe the types of cognitive processes that occurred during collaboration and the relationships that existed between these processes and other social conditions. They then used descriptive statistics as a means to highlight these patterns. For example, Wise et al. (2014) examined online posts in asynchronous discussions to examine relationships between a variety of “listening” and “speaking” behaviors and knowledge co-construction behaviors. Listening behaviors were characterized by
Table 3 Overview of one of the coding schemes used by Wise et al. (2014), with Cohen’s kappa for each scale

Discursiveness
  Responsiveness (κ = 0.71): 0 None; 1 Acknowledging; 2 Responding to an idea; 3 Responding to multiple ideas
  Elicitation (κ = 0.91): 0 None; 1 Questions not clearly directed at anyone; 2 Questions directed to one person; 3 Questions directed to the group
Content
  Argumentation (κ = 0.74): 0 No argumentation; 1 Unsupported argumentation (position only); 2 Simple argumentation (position + reasoning); 3 Complex argumentation (position + reasoning + qualifier/preemptive rebuttal)
Reflectivity
  Reflection on individual process (κ = 0.83): 0 No individual reflection; 1 Shallow individual reflection; 2 Deep individual reflection
  Reflection on group processes (κ = 0.75): 0 No group reflection; 1 Shallow group reflection; 2 Deep group reflection
different types of log-based behavior, such as number of “unique posts made by others that a student viewed divided by the total # of posts made by others,” with some listening behaviors showing evidence for richer engagement (Wise et al. 2014, p. 195). The authors then examined the relationship between different types of “listening” behaviors and different quality posts, or “speaking” behaviors, which they conceptualized as the presence of common knowledge co-construction behaviors like argumentation, idea-building, and questioning behaviors. To look for relationships between listening and speaking behaviors, 479 posts were extracted from an online asynchronous discussion-based tool and scored for five different speaking variables (see Table 3). These individual posts were used to examine relationships between individual and community “listening” and “speaking” behaviors. Statistical models of community activity were based on variable averages per discussion week. As such, language itself was coded at the level of the individual post, but was examined at multiple levels. Our analysis of papers demonstrates that theoretical stances have a large impact on how we use language as a tool for understanding social processes and supporting research claims. It also shows that there is a range of how we quantify language that, like our theoretical stances, exists on a spectrum. Our analysis also shows that the majority of researchers who are quantifying process language are social constructivists. Therefore, most of what we know about collaborative processes derived from
quantification of language in ijCSCL was derived from examining how individuals contribute to and learn from collaborative interactions. In ijCSCL, very little has been published that quantitatively examines collaborative interactions at the level of the interactional group or community. Moreover, our sample did not adequately represent the diverse contexts and people that engage in computer-supported collaborative learning across the globe. To understand these processes in a quantitative way, we need more contextual and cultural representation. We also need to expand how we examine collaborative interactions at the process level so as to account for interactions between multiple collaborators over time.
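Two quantification steps recur throughout the studies reviewed above: checking inter-rater reliability of a coding scheme (e.g., the Cohen’s kappa values reported by Wise et al. 2014 in Table 3) and testing whether coded categories are distributed differently across conditions (e.g., the chi-square analysis of Cesareni et al. 2016). As a rough sketch with invented counts, not data from any of the cited studies, both computations fit in a few lines of Python:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders
    who labeled the same segments."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Agreement expected by chance, from each coder's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (observed - expected) / (1 - expected)

def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows = conditions, columns = coded categories)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    return sum(
        (obs - row_totals[i] * col_totals[j] / grand) ** 2
        / (row_totals[i] * col_totals[j] / grand)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )

# Invented labels for eight segments coded by two raters using four
# conversational-function categories (a)-(d).
rater_1 = ["a", "b", "b", "c", "d", "a", "c", "b"]
rater_2 = ["a", "b", "c", "c", "d", "a", "c", "b"]
kappa = cohens_kappa(rater_1, rater_2)

# Invented counts of the four functions for two role conditions.
counts = [[30, 45, 20, 5],   # role-takers
          [40, 25, 10, 25]]  # non-role-takers
stat = chi_square_statistic(counts)
# With (2 - 1) * (4 - 1) = 3 degrees of freedom, the statistic is
# significant at alpha = .05 if it exceeds the critical value 7.815.
```

Studies in our sample typically relied on statistical packages for these tests; the point of the sketch is only to show how little machinery separates a coded transcript from a reliability estimate and a test of distributional differences.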
4 The Future: Emerging Trends and Important Considerations

In recent years, the CSCL community has engaged in community-wide reflection on its scope and aims, including the recent Wise and Schwarz article with eight provocations for the field (Wise and Schwarz 2017). As the field continues to evolve, some emerging trends have particular implications for the quantification of language. One of the grand challenges for the quantification of language going forward is accounting for cultural differences in language use. In the field of sociolinguistics, accounting for these differences has been of keen interest both in quantitative work on variationist approaches and in qualitative work in interactional sociolinguistics. Despite the vast amount of work on these issues, cross-cultural comparisons remain methodologically problematic, though they are critical to our field. Weinberger et al. (2013) published a careful quantification of differences in discourse practices in collaboration between students in Germany and in Finland engaging in the same task, as well as differences in response to the same intervention. This flagship study paves the way for more work in the CSCL community in this important area going forward. A running theme in the quantification of language related to collaboration is the extent to which social meaning in language is highly contextual and creative (Lakoff and Johnson 1980), which poses challenges for quantification, including modeling via machine learning (Nguyen et al. 2016). Much of the field of computational linguistics has focused on levels of linguistic analysis that are largely principle-based and thus can be specified in a highly regular, top-down fashion. However, social meaning is more organic (Gee 2004). 
Though patterns of meaning have some regularities, they have notable irregularities that highlight the extent to which social meaning is a performance, and identification through language patterns is a choice (Goffman 1959; Martin and Rose 2003, 2007; Martin and White 2005). As characterization of collaboration involves the interplay of both cognitive and social processes, we cannot escape the complexity of constructs that have some regularity, but not to the extent that discretization would ideally rely on. Thus, we find ourselves in constant tension to specify what it is we are looking for. The meaning of social
signals embedded in language is community-specific. But communities and identification are moving targets. As the CSCL community moves forward to examine collaboration at multiple time scales, these questions will become richer and more challenging to address. For many reasons, we predict a growing interest in this area going forward, especially as the desire to employ machine learning modeling in the work of the field grows. For one thing, as equity grows as an emphasis in the work of the community, understanding differences in communication, and how these differences impact participation in learning and positioning with respect to advancement, becomes a topic of keen interest. With the large representation of sociocultural approaches to research in the CSCL community that we have highlighted in the review above, we are well positioned to take on this challenge, and thus the need grows for partnership across methodological camps within the field and for interdisciplinary collaborations with modeling experts. A related emerging area of emphasis is informal learning in social media, where we already see a great deal of quantification in published research, especially quantification employing large-scale statistical modeling and machine learning. Here again the open domain poses challenges, as does the opportunity to study learning over longer time periods. An additional challenge is that we have only partial information about the interactions we observe in social media. In the absence of experimental control and complete awareness of the contexts where users are positioned, we lack an anchor from which to account for many sources of variance in language choices beyond those variables we are specifically studying. On the one hand, we have access to vast numbers of participants. 
But on the other hand, we have a Zipfian distribution on the quantity of their participation, which means that for the great majority of those users, we actually have extremely little visibility into their participation. Populations are far more diverse, and yet we lack the signal that would enable us to account for that diversity. Thus, many important sources of variance in language behavior have an influence but are otherwise invisible to us—the invisible hand. A final but critical emerging direction is a growing emphasis on learning at work. Surveying the articles from the ijCSCL journal since its inception that have featured quantification of language, we find that none has focused in this direction. As concerns mount about the changing landscape of employment and the growing need to re-educate those who are employed full time but are either in danger of losing their jobs or unable to earn their subsistence, and in the wake of the failure of Massive Open Online Courses (MOOCs) to provide a panacea, we expect to see more studies of learning embedded in work scenarios, including collaborative learning scenarios. In work settings, the tasks are more complex and varied, and social processes are similarly more complex, especially since more is at stake and working relationships exist over far longer spans of time. Efficiency as well as the trade-offs between learning and performance must be reconsidered. Learning is harder to quantify in the absence of a formal curriculum. These characteristics pose challenges for the science of learning in many aspects of methodology, including the quantification of language. With adult learners, we can expect more sophistication in their language in general in
comparison with typical classroom corpora. Reflection of social processes will be far more subtle. Social hierarchies will also play a larger role. The interplay between professional roles and roles within the collaboration will be an added dimension of complexity.
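The heavy-tailed participation pattern noted above for social media can be made concrete with a toy simulation. Assuming post counts follow an idealized Zipf law (the user at activity rank r contributes roughly 1/r as many posts as the most active user), the vast majority of users are observed through only a post or two, which is exactly the visibility problem described. The numbers below are illustrative and not drawn from any real platform:

```python
def zipf_post_counts(n_users, top_user_posts):
    """Posts per user, by activity rank, under an idealized Zipf law:
    the rank-r user posts about top_user_posts / r times (minimum one post)."""
    return [max(1, round(top_user_posts / rank)) for rank in range(1, n_users + 1)]

counts = zipf_post_counts(n_users=10_000, top_user_posts=5_000)
total = sum(counts)

# Share of all posts contributed by the 100 most active users (the top 1%).
top_share = sum(counts[:100]) / total
# Fraction of users we observe through at most two posts.
thin_fraction = sum(1 for c in counts if c <= 2) / len(counts)

print(f"top 1% of users contribute {top_share:.0%} of posts")
print(f"{thin_fraction:.0%} of users have two posts or fewer")
```

Under these assumptions, a handful of prolific users dominate the corpus while roughly four out of five users leave almost no linguistic trace, so any language-based model is effectively fit to the most active minority.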
References

Alagoz, E. (2013). Social argumentation in online synchronous communication. International Journal of Computer-Supported Collaborative Learning, 8(4), 399–426.
Baker, M., & Lund, K. (1997). Promoting reflective interactions in a CSCL environment. Journal of Computer Assisted Learning, 13(3), 175–193.
Borge, M., Ong, Y. S., & Rosé, C. P. (2018). Learning to monitor and regulate collective thinking processes. International Journal of Computer-Supported Collaborative Learning, 13(1), 61–92.
Brown, A. L. (1978). Knowing when, where, and how to remember: A problem of metacognition. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 1, pp. 77–165). Mahwah, NJ: Lawrence Erlbaum.
Brown, A. L. (1987). Metacognition, executive control, self-regulation and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65–116). Mahwah, NJ: Lawrence Erlbaum.
Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141–178.
Brown, A. L., & Campione, J. C. (1996). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. Mahwah, NJ: Lawrence Erlbaum.
Bruner, J. (1978). The role of dialogue in language acquisition. The Child’s Conception of Language, 2(3), 241–256.
Cesareni, D., Cacciamani, S., & Fujita, N. (2016). Role taking and knowledge building in a blended university course. International Journal of Computer-Supported Collaborative Learning, 11(1), 9–39.
Chi, M. T. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6(3), 271–315.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 15(3), 6–11.
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
Damsa, C. I. (2014). The multi-layered nature of small-group learning: Productive interactions in object-oriented collaboration. International Journal of Computer-Supported Collaborative Learning, 9(3), 247–281.
Dascalu, M., Trausan-Matu, S., McNamara, D. S., & Dessus, P. (2015). ReaderBench: Automated evaluation of collaboration based on cohesion and dialogism. International Journal of Computer-Supported Collaborative Learning, 10(4), 395–423.
de Laat, M., Lally, V., Lipponen, L., & Simons, R. (2007). Investigating patterns of interaction in networked learning and computer-supported collaborative learning: A role for social network analysis. International Journal of Computer-Supported Collaborative Learning, 2(1), 87–103.
Dillenbourg, P. (1999). What do you mean by “collaborative learning”? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 1–16). Elsevier Science.
Enyedy, N., & Hoadley, C. M. (2006). From dialogue to monologue and back: Middle spaces in computer-mediated learning. International Journal of Computer-Supported Collaborative Learning, 1(4), 413–439.
Erkens, G., & Janssen, J. (2008). Automatic coding of dialogue acts in collaboration protocols. International Journal of Computer-Supported Collaborative Learning, 3(4), 447–470.
Erkens, M., Bodemer, D., & Hoppe, H. U. (2016). Improving collaborative learning in the classroom: Text mining based grouping and representing. International Journal of Computer-Supported Collaborative Learning, 11(4), 387–415.
Falcão, T. P., & Price, S. (2011). Interfering and resolving: How tabletop interaction facilitates co-construction of argumentative knowledge. International Journal of Computer-Supported Collaborative Learning, 6(4), 539–559.
Fields, D. A., & Kafai, Y. B. (2009). A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world. International Journal of Computer-Supported Collaborative Learning, 4(1), 47–68.
Fischer, F., & Mandl, H. (2001). Facilitating the construction of shared knowledge with graphical representation tools in face-to-face and computer-mediated scenarios. In P. Dillenbourg, A. Eurelings, & K. Hakkarainen (Eds.), Proceedings of Euro-CSCL 2001 (pp. 230–236).
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906.
Gee, J. P. (2004). An introduction to discourse analysis: Theory and method. New York: Routledge.
Goffman, E. (1959). The presentation of self in everyday life. Harmondsworth: Doubleday.
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397–431.
Howley, I., Mayfield, E., & Rosé, C. P. (2013). Linguistic analysis methods for studying small groups. In C. Hmelo-Silver, A. O’Donnell, C. Chan, & C. Chin (Eds.), International handbook of collaborative learning. Hoboken: Taylor and Francis.
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4(1), 71–115.
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103.
Kershner, R., Mercer, N., Warwick, P., & Staarman, J. K. (2010). Can the interactive whiteboard support young children’s collaborative communication and thinking in classroom science activities? International Journal of Computer-Supported Collaborative Learning, 5(4), 359–383.
Kozulin, A. (1998). Psychological tools: A sociocultural approach to education. Cambridge: Harvard University Press.
Lakoff, G. J., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
Lave, J. (1996). Teaching, as learning, in practice. Mind, Culture, and Activity, 3(3), 149–164.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.
Lund, K., Molinari, G., Séjourné, A., & Baker, M. (2007). How do argumentation diagrams compare when student pairs use them as a means for debate or as a tool for representing debate? International Journal of Computer-Supported Collaborative Learning, 2(2–3), 273–295.
Martin, J. R., & Rose, D. (2003). Working with discourse: Meaning beyond the clause. Bloomsbury Publishing.
Martin, J. R., & Rose, D. (2007). Interacting with text: The role of dialogue in learning to read and write. Foreign Languages in China, 4(5), 66–80.
Martin, J. R., & White, P. R. R. (2005). The language of evaluation: Appraisal in English. Palgrave Macmillan.
Martinez-Maldonado, R., Dimitriadis, Y., Martinez-Monés, A., Kay, J., & Yacef, K. (2013). Capturing and analyzing verbal and physical collaborative learning interactions at an enriched interactive tabletop. International Journal of Computer-Supported Collaborative Learning, 8(4), 455–485.
Meier, A., Spada, H., & Rummel, N. (2007). A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning, 2(1), 63–86.
Mercier, E. M., Higgins, S. E., & da Costa, L. (2014). Different leaders: Emergent organizational and intellectual leadership in children’s collaborative learning groups. International Journal of Computer-Supported Collaborative Learning, 9(4), 397–432.
Molenaar, I., Chiu, M. M., Sleegers, P., & van Boxtel, C. (2011). Scaffolding of small groups’ metacognitive activities with an avatar. International Journal of Computer-Supported Collaborative Learning, 6(4), 601–624.
Morrow, R. A., & Brown, D. D. (1994). Critical theory and methodology (Vol. 3). London: Sage.
Nersessian, N. J. (1989). Conceptual change in science and in science education. Synthese, 80(1), 163–183.
Nguyen, D., Dogruöz, A. S., Rosé, C. P., & de Jong, F. (2016). Computational sociolinguistics: A survey. Computational Linguistics, 42(3), 537–593.
Nussbaum, E. M., Winsor, D. L., Aqui, Y. M., & Poliquin, A. M. (2007). Putting the pieces together: Online argumentation vee diagrams enhance thinking during discussions. International Journal of Computer-Supported Collaborative Learning, 2(4), 479–500.
Oner, D. (2016). Tracing the change in discourse in a collaborative dynamic geometry environment: From visual to more mathematical. International Journal of Computer-Supported Collaborative Learning, 11(1), 59–88.
Öztok, M. (2016). Cultural ways of constructing knowledge: The role of identities in online group discussions. International Journal of Computer-Supported Collaborative Learning, 11(2), 157–186.
Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66(2), 211–227.
Rogoff, B., Baker-Sennett, J., Lacasa, P., & Goldsmith, D. (1995). Development through participation in sociocultural activity. New Directions for Child and Adolescent Development, 1995(67), 45–65.
Rosé, C. P., Wang, Y. C., Cui, Y., Arguello, J., Stegmann, K., Weinberger, A., & Fischer, F. (2008). Analyzing collaborative learning processes automatically: Exploiting the advances of computational linguistics in computer-supported collaborative learning. International Journal of Computer-Supported Collaborative Learning, 3(3), 237–271.
Rummel, N., Mullins, D., & Spada, H. (2012). Scripted collaborative learning with the cognitive tutor algebra. International Journal of Computer-Supported Collaborative Learning, 7(2), 307–339.
Rummel, N., & Spada, H. (2005). Learning to collaborate: An instructional approach to promoting collaborative problem solving in computer-mediated settings. The Journal of the Learning Sciences, 14(2), 201–241.
Schank, R. C. (1980). Language and memory. Cognitive Science, 4(3), 243–284.
Schank, R. C., & Abelson, R. P. (2013). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale: Psychology Press.
Schwarz, B. B., & Glassner, A. (2007). The role of floor control and of ontology in argumentative activities with discussion-based tools. International Journal of Computer-Supported Collaborative Learning, 2(4), 449–478.
Schwarz, B. B., Schur, Y., Pensso, H., & Tayer, N. (2011). Perspective taking and synchronous argumentation for learning the day/night cycle. International Journal of Computer-Supported Collaborative Learning, 6(1), 113–138.
Simpson, A., Bannister, N., & Matthews, G. (2017). Cracking her codes: Understanding shared technology resources as positioning artifacts for power and status in CSCL environments. International Journal of Computer-Supported Collaborative Learning, 12(3), 221–249.
Su, Y., Li, Y., Hu, H., & Rosé, C. P. (2018). Exploring college English language learners’ self and social regulation of learning during wiki-supported collaborative reading activities. International Journal of Computer-Supported Collaborative Learning, 13(1), 35–60.
Suthers, D., Lund, K., Rosé, C. P., Teplovs, C., & Law, N. (Eds.). (2013). Productive multivocality in the analysis of group interactions. New York: Springer.
Tissenbaum, M., Berland, M., & Lyons, L. (2017). DCLM framework: Understanding collaboration in open-ended tabletop learning environments. International Journal of Computer-Supported Collaborative Learning, 12(1), 35–64.
Weinberger, A., Marttunen, M., Laurinen, L., & Stegmann, K. (2013). Inducing socio-cognitive conflict in Finnish and German groups of online learners by CSCL script. International Journal of Computer-Supported Collaborative Learning, 8(3), 333–349.
White, B. Y. (1993). ThinkerTools: Causal models, conceptual change, and science education. Cognition and Instruction, 10(1), 1–100.
Wise, A. F., & Chiu, M. M. (2011). Analyzing temporal patterns of knowledge construction in a role-based online discussion. International Journal of Computer-Supported Collaborative Learning, 6(3), 445–470.
Wise, A. F., Hausknecht, S. N., & Zhao, Y. (2014). Attending to others’ posts in asynchronous discussions: Learners’ online “listening” and its relationship to speaking. International Journal of Computer-Supported Collaborative Learning, 9(2), 185–209.
Wise, A. F., & Schwarz, B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Further Readings

Chi, M. T. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6(3), 271–315. This article is a well-known and often-cited guide for analyzing the content of what people say in an objective and quantifiable way. In the article, Chi explains this type of verbal analysis, distinguishes it from other forms of analysis, and then provides a step-by-step guide for carrying it out.

Csanadi, A., Eagan, B., Kollar, I., Shaffer, D. W., & Fischer, F. (2018). When coding-and-counting is not enough: Using epistemic network analysis (ENA) to analyze verbal data in CSCL research. International Journal of Computer-Supported Collaborative Learning, 13(4), 419–438. This paper was not included in our original sample because it was published after our analysis was concluded. It provides an interesting critique of common coding-and-counting techniques and conducts an empirical investigation comparing a traditional coding-and-counting technique to one that uses temporal analysis, epistemic network analysis (ENA). The authors ask “which technique provides the best explanation of group differences with respect to learners’ engagement in different learning actions?” They found that ENA provided a deeper understanding of how cognitive activities unfolded and the relationships between thinking processes.

Derry, S. J., Pea, R. D., Barron, B., Engle, R. A., Erickson, F., Goldman, R., Hall, R., Koschmann, T., Lemke, J. L., Gamoran Sherin, M., & Sherin, B. L. (2010). Conducting video research in the learning sciences: Guidance on selection, analysis, technology, and ethics. The Journal of the Learning Sciences, 19(1), 3–53. This article draws on the combined expertise of many learning scientists to point out major challenges in using video data. 
They go through and explain four sets of challenges, including data selection, data analysis, technology to support qualitative data analysis, and ethics surrounding the analysis and sharing of qualitative data. In doing so, they provide suggestions to help researchers address challenges and conduct high-quality and ethically sound work.

Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103. This article provides a good foundation for qualitative research. It has examples of how to transcribe video in different ways, how to create content logs of video as a means to objectively filter data, and also describes a process for reducing bias in video analysis through peer auditing and collective viewing of video.

Lund, K., & Suthers, D. D. (2013). Methodological dimensions. In D. Suthers, K. Lund, C. Rosé, C. Teplovs, & N. Law (Eds.), Productive multivocality in the analysis of group interactions
(pp. 21–35). New York: Springer. This chapter, along with the rest of the book, stems from a larger project aiming to build bridges between researchers with different philosophical assumptions and methodological predispositions. This chapter explains important concepts related to research, like methodological assumptions, ontology, and epistemology. It also explains how our beliefs about how people learn, what learning is, and what can be known influence the purpose of our research and the approaches we take to make sense of it.
Qualitative Approaches to Language in CSCL

Suraj Uttamchandani and Jessica Nina Lester
Abstract

In this chapter, we discuss qualitative approaches to the study of language and discourse and their potential relevance for CSCL researchers. We begin by overviewing these approaches generally. Next, we discuss how language-based methodologies have historically been used in CSCL. We contextualize two of the more common methodological approaches in the field: conversation analysis and interaction analysis. Next, we discuss two methodological approaches to discourse analysis that have not yet seen wide use in CSCL but that we argue are of relevance to the field: critical discourse analysis and discursive psychology. For each approach, we briefly outline its history, analytic process, and quality markers and provide an illustrative example. We conclude by discussing the challenges and possibilities for using qualitative approaches to language in CSCL research.

Keywords Computer-mediated communication · Interaction analysis · Conversation analysis · Critical discourse analysis · Discursive psychology
1 Definitions and Scope: The Landscape of Discourse Analysis

This chapter introduces the reader to qualitatively-oriented language-based methodologies and methods, specifically those which we argue are either relatively common (e.g., interaction analysis) or less common but particularly promising for scholars in CSCL (e.g., discursive psychology). Generally, language-based methodologies
S. Uttamchandani (*) Center for Research on Learning and Technology, Indiana University, Bloomington, IN, USA e-mail: [email protected] J. N. Lester Counseling & Educational Psychology, Indiana University, Bloomington, USA e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_33
605
include those qualitative methodologies and methods, such as discourse analytic approaches and conversation analysis, which focus on the close study of language, with language conceived of as including modes beyond linguistic/verbal communication. Within the landscape of language-based methodologies, discourse and conversation analysts have long provided perspectives on how researchers might go about studying talk (defined broadly) and texts (also defined broadly) (Jørgensen and Phillips 2002). Given that this methodological area is rather vast, in this chapter we highlight the place of conversation analysis and interaction analysis in CSCL scholarship and then introduce two discourse analytic perspectives (i.e., critical discourse analysis and discursive psychology) that may be newer to CSCL researchers.

Discourse analysis (DA) does not have a single definition and is perhaps best described as an umbrella term that includes within it a range of theories and qualitative approaches focused on the study of language. Within this range of approaches and perspectives, there are also varied meanings of “language” (e.g., embodied interactions, digital discourse, materiality, etc.) and the kinds of “language” that might be conceived of as being relevant to a discourse analytic study. Broadly, discourse analytic approaches focus on the study of talk (e.g., classroom conversations, dinnertime conversations, etc.) and texts (e.g., blog posts, Twitter feeds, Facebook chats, asynchronous discussion forums, etc.) as produced in everyday social life (Potter 2012) wherein language is not assumed to be solely representative of inner thoughts; rather, it is assumed that language is always doing something. For instance, stating “will you come with me” is more than simply a stream of words; the way this utterance is structured also makes it a question or invitation to do something. 
That is, the statement “will you come with me” is structured grammatically in a particular way and this very structuring allows the statement to be heard as a question or invitation. In other words, language is presumed to be performative. Such ideas related to language are not new, as they can be traced back to linguistic philosophers such as Wittgenstein (1958), Winch (1967), and others (see Lester 2011, for a discussion of the history of discourse analysis as it relates to cognition). In the 1980s, however, there was a proliferation of discourse analytic perspectives across many disciplines that resulted in a range of discourse analytic approaches, including Bakhtinian discourse analysis (Bakhtin 1981), the discourse analysis model (Sinclair and Coulthard 1975, 1992), discursive psychology (Edwards and Potter 1992), Foucauldian discourse analysis (Arribas-Ayllon and Walkerdine 2008), and interactional sociolinguistics (Gumperz 1999), to name only a few. Many of these discourse analytic approaches arose in direct response to the concerns and interests of a particular discipline but have since been widely disseminated and used in multidisciplinary ways. For instance, in the 1980s and early 1990s, discursive psychology emerged as scholars working within social psychology began offering a critique of how language was conceptualized by that field (see Potter and Wetherell 1987, for further discussion). In the past decade, however, discursive psychology has been used within a range of disciplines.

Across the many unique discourse analytic perspectives, there are several shared assumptions (Jørgensen and Phillips 2002). First, as noted above, language is presumed to be performative; that is, it is in and through language that people accomplish things. Language, for example, can be used to make a complaint or to
ascribe a particular identity. Notably, qualitative research writ large often studies people’s views, experiences, and perspectives via the language they produce to represent them. Yet, one of the things that distinguish language-based methodologies and methods from some of the other approaches to qualitative research is the orientation to language as performative. This particular focus is one that leads analysts to attend to what the language is doing. For instance, rather than researchers simply asking people to talk about their “identities” in an educational context, a discourse analyst would seek to empirically understand how identities are produced in and through language choices. For example, Benwell and Stokoe (2006) did not assume that all internet discussion forums have “newbies” and “regular participants,” but instead they analyzed how some people constructed their discourse to identify as and perform the role of a newcomer.

Second, since discourse analysts assume that it is through language that the social world is built, it is perhaps unsurprising that a social constructionist position underlies many discourse analytic perspectives. Thus, across many of these methodological perspectives, the notion of absolute knowledge is rejected and language is positioned as being central to the generation of knowledge (Berger and Luckmann 1967; Burr 2003). Accordingly, language is viewed as constitutive, rather than merely reflective of inner, mental workings. In this way, language is not positioned as being directly correlated with people’s mental schema.

Third, while views on criticality and critical theory vary across discourse analytic perspectives, there is a common commitment to critiquing that which is taken for granted and orienting to knowledge as culturally and historically specific. 
This view of knowledge is consistent with sociocultural approaches that view knowing and learning as culturally and historically situated (Lave and Wenger 1991) and related to issues of power and politics (Bang and Vossoughi 2016). Alongside these shared assumptions are a set of distinct views on the meaning of discourse, preferred data sources, and analytic foci. Thus, when drawing upon a particular discourse analytic perspective (or conversation analysis, for that matter), it is paramount to become familiar with the assumptions and philosophical underpinnings of that given perspective.
2 History and Development: The Place of Language-Based Methodologies in CSCL

Foundationally in the learning sciences, language and discourse are understood to be the primary mediators of behavior (Vygotsky 1978). In this way, language-in-use shapes how individuals think and do work in the world, even as people’s goals and social actions dialectically shape the language they use. Disciplinary learning, then, can be thought of through a participatory metaphor as learning the (linguistic) cultures of a particular domain, where specific terminology, ways of talking, and discourses are a key aspect of such enculturation (Brown et al. 1989; Gee 2007;
Sfard 1998). From this sociocultural perspective, Sfard and Cobb (2014) suggested that discourses themselves may be “the primary objects of change in the learning of mathematics” (p. 547) and learning in general. This is consistent with interaction analytic approaches in the learning sciences, which treat learning “as a distributed, ongoing social process” (Jordan and Henderson 1995, p. 42). Sociocultural and social constructivist approaches agree that discursive and communicative interactions are both vital to and indicative of learning (Gutierrez et al. 1995; Palincsar 1998). Therefore, discourse analytic methods are an appropriate approach to studying learning as a concept.

Within the CSCL literature base, it has been noted that many published studies continue to rely on primarily quantitative methodologies and methods, many of which serve to fragment (for the purposes of coding and counting) talk and text (Jeong et al. 2014). Indeed, there are reasons that such approaches are useful; however, in this chapter, we seek to highlight other less commonly applied methodological perspectives that orient to language in a bottom-up, inductive fashion. Unlike approaches that employ coding or counting of types of utterances (e.g., Chi 1997), the approaches we discuss here seek to situate individual utterances in their larger discursive context.

Jeong et al.’s (2014) literature review of methodologies used in CSCL research found that while the field “eagerly embraced” qualitative methods, the majority (86%) of published studies used quantitative methods (sometimes but not always alongside qualitative methods). Significantly, only 8% of published studies claimed to use discourse analytic approaches, interaction analysis, or conversation analysis. Of the studies included in their review, Jeong et al. described 30% of methodologies employed as “loosely defined qualitative analysis” (p. 313), with this “looseness” positioned as a key methodological challenge of CSCL research. 
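The contrast drawn above can be made concrete with a small sketch. The transcript, speakers, and codes below are invented for illustration (they are not drawn from Chi 1997 or any actual study): a coding-and-counting analysis reduces talk to tallies of utterance types, whereas the approaches discussed in this chapter keep each utterance paired with the turn it responds to.

```python
from collections import Counter

# Hypothetical coded transcript: each utterance has been assigned one code.
# Speakers, utterances, and codes are all invented for illustration.
coded_utterances = [
    ("Teacher", "What do you notice about the graph?", "open_question"),
    ("Student A", "It goes up and then flattens out.", "observation"),
    ("Student B", "Because the water stopped heating?", "explanation"),
    ("Teacher", "Good. Why would that flatten it out?", "open_question"),
    ("Student A", "The temperature can't go past boiling.", "explanation"),
]

# Coding-and-counting fragments the talk into tallies of utterance types.
code_counts = Counter(code for _, _, code in coded_utterances)
print(code_counts)

# A discourse analytic orientation instead keeps each utterance in its
# sequential context: what is this turn doing in response to the prior turn?
adjacency = [
    (coded_utterances[i - 1][1], utt)
    for i, (_, utt, _) in enumerate(coded_utterances)
    if i > 0
]
```

The tallies discard exactly what the sequential pairing preserves: the interactional work each utterance performs relative to its neighbors.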
Of the qualitatively-oriented, language-based methodologies that have been employed within CSCL, interaction analysis (discussed in detail below) is perhaps the most recognized and utilized.

Similarly, in writing this chapter, we conducted a systematic literature review, focusing on articles published in the International Journal of Computer-Supported Collaborative Learning from its inception in 2006 through the summer of 2018. We searched the literature using the terms “critical discourse analysis,” “discursive psychology,” and “interactional sociolinguistics,” none of which resulted in any hits. We also searched the literature using the terms “conversation analysis” (40 hits), “interaction analysis” (60 hits), and “discourse analysis” (44 hits). This resulted in 114 unique hits. Many of these articles were commentaries, used qualitative coding-based approaches to discourse, used quantitative approaches to study discourse, or otherwise used discourse analysis as an umbrella term (rather than a specific subform of discourse analysis, e.g., critical discourse analysis). Therefore, we concluded that discourse analysis broadly is a well-accepted methodological approach in CSCL research, but that few empirical published studies of CSCL position themselves explicitly within one specific established discourse analytic approach. While this may indicate that CSCL scholars work productively across multiple approaches to discourse, we see value for CSCL in drawing on the insights from well-defined discourse analytic traditions. Doing so allows researchers to (a) align theoretical and methodological approaches in their research, (b) draw on
existing scholarship to justify claims and generate insights about learners’ talk and text, and (c) consider new possibilities for how CSCL scholars might go about examining talk and text—ways that might offer novel insights on constructs of interest to the field (e.g., disciplinary learning, collaboration, engagement). For example, for each approach we discuss below, a sample research question relevant to CSCL could be:

• Conversation analysis—“How can two (or more) people construct shared meanings for conversations, concepts, and experiences?” (Roschelle 1992, p. 245).
• Interaction analysis—“What kinds of resources were recruited by students, and how were they deployed? In what ways, if any, was the setting transformed to support students’ conceptual agency in mathematics activity and learning?” (Ma 2017, pp. 342–343).
• Critical discourse analysis—When and how are learners’ political alignments made relevant in their discussion of climate change?
• Discursive psychology—In what ways do students’ uses of emojis in a virtual reality simulation in their chatroom make learning visible?

CSCL’s minimal use of, and familiarity with, language-based methodologies and methods is perhaps linked to what Stahl (2015) described as “positivist conceptions of rigor” within the learning sciences more broadly. As Stahl noted, the majority of research tends to rely “upon pre/post-tests of individuals or coding of individual utterances/postings” (p. 338). Notably, Stahl also highlighted how methodologies such as discourse analytic approaches and conversation analysis require “extensive training and adoption of new practices . . . resulting in reports that may be harder for reviewers . . . to assess” (p. 338). Thus, it is our intent within this chapter to offer an overview of several methodologies that, while infrequently used within CSCL, are potentially fruitful.
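Mechanically, the deduplication step in the literature search described in this section (per-term hits totaling 144, reducing to 114 unique articles) is a set union. The article identifiers below are invented; only the mechanics are illustrated, not the actual review data.

```python
# Invented identifiers standing in for search results from three terms;
# the same article can surface under more than one term.
hits = {
    "conversation analysis": {"a01", "a02", "a03", "a04"},
    "interaction analysis": {"a03", "a04", "a05", "a06", "a07"},
    "discourse analysis": {"a01", "a06", "a08"},
}

# Raw per-term totals double-count articles found by multiple terms.
total_hits = sum(len(results) for results in hits.values())

# The union collapses the overlap to the set of unique articles to screen.
unique_articles = set().union(*hits.values())
print(total_hits, len(unique_articles))
```

With these toy data, 12 raw hits reduce to 8 unique articles; the screening steps that follow (excluding commentaries, coding-based studies, etc.) are then applied to the deduplicated set.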
2.1 Mainstay Possibilities for CSCL: Conversation Analysis and Interaction Analysis
Within CSCL, the use of conversation analysis (CA) (Sacks 1992), and the closely related interaction analysis (IA) (Jordan and Henderson 1995), has been slowly growing. Notably, CA can be conceptualized as a distinct qualitative methodology that focuses on studying talk-in-interaction in everyday (e.g., dinnertime conversations) and/or institutional contexts (e.g., schools). While there are similarities between CA and discourse analytic approaches, particularly discursive psychology (DP), there are some core differences. In particular, CA’s micro-orientation to the study of talk makes it quite distinct from many discourse analytic approaches that define and study discourse as it relates to broader social conditions or structure. Further, it was distinctively
developed as a standalone qualitative methodology (see Koschmann and Schwarz, this volume, for a discussion of CA’s ethnomethodological foundations). More particularly, CA attends to the sequentiality and orderliness of interactions (ten Have 2007). The primary focus of CA is generally to examine how talk is organized and, more specifically, how people interacting go about making sense of a given interaction.

CA arose from the field of sociology, with Harvey Sacks and collaborators (Gail Jefferson and Emanuel Schegloff) credited as its originators. The underlying assumptions of CA have been informed by a range of scholars and disciplinary perspectives, including ethnomethodology, linguistic philosophy, and ethnography, among others. Most specifically, Garfinkel’s (1967) writing around ethnomethodology has had a significant influence on CA given its attention to the methods people use to manage their everyday business.

Conversation analysts make sense of patterns of interaction by attending closely to conversational structures within a particular interaction, which include structures such as repair, turn design, openings/closings, etc. Further, it is important to recognize that CA focuses on talk-in-interaction, rather than “discourse” broadly conceived. This focus is one that highlights CA’s concentration on what talk is doing in the interaction rather than what the talk is simply about (Schegloff 1999). That is, conversation analysts are particularly interested in the how of an interaction (e.g., an interaction may include a range of intonation shifts and significant gaps between conversational turns), not simply what is being talked about (e.g., climate change). As such, people who employ CA focus on the details of the organization of an interaction, attending to the function of things such as silences, inbreaths, intonation, emphasis, etc. 
Accordingly, a specialized transcription system, the Jefferson method (Jefferson 2004), is used to represent not simply what is said but how it is said (see Hepburn and Bolden 2017, for a discussion of the transcription processes and practices in CA).

Notably, conversation analysts generally favor naturally occurring data rather than data produced for the purposes of research (e.g., interviews, focus groups). The focus on naturally occurring data is aligned with Sacks’ (1992) claim that:

If we are to understand and analyze participants’ own concepts and accounts, then we have to find and analyze them not in response to our research questions, but in the places they ordinarily and functionally occur . . . in the activities in which they’re employed. (p. 27)
CA scholars thus emphasize the value of collecting online interactions or video/audio-recordings of people going about their everyday and institutional activities. This is in contrast to collecting data wherein people are asked to talk about or reflect on their social practices. Nonetheless, some scholars have argued for the value of analyzing interviews when conducting CA studies (see Roulston 2006, for further discussion).

To carry out a CA-informed analysis, specialized training and close collaboration with CA-trained scholars are helpful, as this approach to analysis is generally described as complex. And, similar to discourse analytic approaches in general, the analytic process is not conceived of as a stepwise, linear process. Rather, the
process is inductive and iterative. Nonetheless, Seedhouse (2004) offered an overview of five general stages of the analysis process:

1. An analyst engages in unmotivated looking to identify patterns of interaction without preconceptions (e.g., a specific theoretical framework) guiding their “looking.”
2. Once a key pattern has been identified, an analyst searches the entire interactional dataset for instances wherein this pattern is present.
3. An analyst comes to understand a pattern in the dataset by studying how it is produced and made sense of by interactants.
4. An analyst carries out a line-by-line analysis of the various instances of the pattern, while also considering deviant cases.
5. An analyst interprets the primary social action(s) produced by/within a given pattern, thereby showing how the pattern relates to the broader interaction.

Historically, the application of CA has centered around hearable (i.e., collected via audio- or video-recordings) and/or viewable (i.e., collected via video-recordings) interactions, with some of the earliest CA studies focused on the analysis of telephone conversations (Sacks 1992). However, CA has now been employed with data collected from a range of contexts, including online contexts (Paulus et al. 2016). In fact, studies of synchronous (e.g., Steensen 2014), quasi-synchronous (e.g., Meredith and Stokoe 2014), and asynchronous (e.g., Lester and Paulus 2011) interactions have drawn upon CA.

In a position paper, Giles et al. (2015) argued for the value of a digital approach to CA, noting that while online interactions may look different than face-to-face interactions, CA’s underlying focus on the sequentiality of talk is particularly useful when studying interactions in online contexts. Building upon Giles et al.’s position paper, in a 2017 special issue of the Journal of Pragmatics focused on the microanalysis of online data, Giles et al. (2017) noted that as . . . 
the social sciences and humanities are turning to digital phenomena as their substantive objects of interest, it is becoming increasingly clear that traditional methods of inquiry need considerable adjustment to fully understand the kinds of interaction that are taking place in online environments. (p. 37).
Many of the articles within this special issue illustrate how traditional approaches to CA might be adapted to understand social interaction in online contexts. Indeed, there is a growing body of empirical work that has leveraged the analytic tools of CA to make sense of a range of phenomena in online contexts.
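As one concrete illustration of the collection-building stage described above (the second of Seedhouse's stages), an analyst working with Jefferson-style transcripts, in which timed silences are marked in parentheses such as (0.5), might programmatically gather every instance of a candidate phenomenon before studying each by hand. The transcript lines below are invented for illustration.

```python
import re

# Invented Jefferson-style lines; "(0.5)" marks a half-second silence.
transcript = [
    "A: will you come with me (0.5) to the library",
    "B: (1.2) uh I guess so",
    "A: great.",
]

# Matches timed pauses such as (0.5) or (1.2).
PAUSE = re.compile(r"\((\d+\.\d+)\)")

# Build a collection of every instance of the candidate phenomenon,
# keeping each instance attached to the line it occurred in.
collection = [
    (line, float(match.group(1)))
    for line in transcript
    for match in PAUSE.finditer(line)
]

# The analyst might then focus on a subset, e.g., silences of 1 s or more.
long_pauses = [(line, secs) for line, secs in collection if secs >= 1.0]
print(long_pauses)
```

This kind of script only assembles the collection; interpreting what each silence is doing in its sequential context remains manual analytic work.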
2.1.1 Interaction Analysis
In CSCL, IA, which we suggest is methodologically aligned with CA, has become commonly employed. IA, which arguably draws upon some of the analytic tools of CA both implicitly and explicitly, helps to “bridge the gap” between CA and the object of study, learning, which can be a concern for CSCL researchers interested in
using interpretive approaches (Wise and Schwarz 2017). Specifically, learning sciences scholars have often drawn upon IA to study the “interaction of human beings with each other and with objects in their environment” (Jordan and Henderson 1995, p. 39). Historically, much like contemporary CA (Nevile 2015), IA has emphasized the analysis of video data and embodied interactions (see Barron 2000 and Stevens and Hall 1998, for early examples), with a wide range of research foci including the study of the structure of collaborative partner interactions (Simpson et al. 2017), augmented reality environments (Enyedy et al. 2015), and learning in middle school classrooms (Enyedy 2003), among others. More recently, diSessa et al. (2016) published an edited volume that brought together commentary and empirical examples of how interaction analysis and knowledge analysis might intersect when studying “knowledge in use” (Hall and Stevens 2016, p. 72) and epistemic cognition. While IA has therefore been generative for the study of learning, we believe that other discourse analytic approaches can also target constructs of relevance to the CSCL community and that these approaches may offer new insights on the relationship between discourse and collaborative learning in particular.
3 State of the Art: Discourse Analytic Perspectives of Relevance to CSCL

As noted above, there is a multitude of distinct approaches to DA (Jørgensen and Phillips 2002), with some focused on more “macro”-oriented perspectives to language (e.g., the discourse of racial inequity) and others focused more on micro, everyday interactions (e.g., the way in which “question formulations” make visible how teachers position some students as knowledgeable and others as unknowledgeable). These varied approaches afford researchers analytic flexibility in studying a variety of relevant topics across times and contexts. And, quite importantly, they bring unique theoretical perspectives and positions on the meaning(s) of discourse, data, and analysis.

While we next offer an overview of two distinct discourse analytic perspectives which we position as being particularly useful for CSCL scholars, we do not offer a stepwise discussion of how to design and ultimately carry out a DA study. Thus, what we offer is not comprehensive, but rather highlights key aspects of each methodological approach with the intent of inviting readers to “dive deeper” into studying the philosophical underpinnings and analytic practices of each approach through continued engagement. For indeed, as Stahl (2015) noted, these methodologies do require training and close study. More specifically, we highlight next: (1) critical discourse analysis, which includes a broad range of approaches (Fairclough et al. 2011); and (2) discursive psychology (Edwards and Potter 1992), a lesser known and used methodological approach in CSCL but widely used in other disciplines.
3.1 Critical Discourse Analysis
Critical discourse analysis (CDA) is a form of discourse analysis that explicitly seeks to understand the relationship between discourse and issues of power, inequality, and hegemony. It takes as foundational that there exist social inequities and that these inequities are visible in and constituted (to a degree) by discourse. As an approach, it is “critical” in that it rejects the idea of “objective” research and instead positions the researcher as a social actor and CDA as an openly political project (Wodak and Meyer 2001). At the level of theory, CDA typically draws from other critical social perspectives (e.g., feminist theory, critical race theory, Marxism), with Foucault’s (1980) notion of power informing many scholars who take up CDA. Critical discourse analysts are concerned both with the way that social inequality is made manifest in discourse, as well as the role discursive practices have in (re)producing social inequality (van Dijk 2003). This lens is brought to bear in the theory and method of this approach.
3.1.1 Key Features of CDA
CDA comprises a variety of approaches that are focused on the relationship between discourse and social inequality. Fairclough et al. (2011) identified six families of approaches to CDA, which include socio-cognitive approaches, corpus-based or computer-mediated approaches, critical linguistics, Fairclough’s approach (see Fairclough 1992), a discourse-historical approach (see Wodak 2001), and argumentation and rhetoric. Across these approaches, “discourse” is treated broadly. Discourse can include words, pictures, symbols, gestures, social practices, and meaning-making resources broadly (Fairclough et al. 2011). Discourse is considered one of many social aspects involved in human organization and therefore these other social aspects, like government and law, cultural traditions, and physical spaces, are assumed to both shape and be shaped by discourse.

CDA centers issues of ideology and power, and it is therefore adept at linking the social as manifested at the micro-level (discourses) with the macro-level (sociopolitical and cultural–historical contexts). Therefore, CDA scholars consider both micro and macro discourses in their analyses. Employing CDA in the study of learning would benefit from linking critical social theory to learning sciences concepts (see Esmonde and Booker 2017, for a fuller discussion of how this might be done).

Two of the approaches to CDA are particularly relevant to the CSCL community: the socio-cognitive approach and corpus-based CDA. First, the socio-cognitive approach to CDA (van Dijk 2008) is based on a discourse–cognition–society triangle, in which cognition is treated as a mediator between discourse and social situations and structures, as discourse can only influence the social when it is filtered through individuals’ cognition and vice versa (van Dijk 2015). Using this approach to study racist discourse, for example, would involve synthesizing across discourses,
people’s underlying ideologies and other cognitive features, and macro-level factors like politics and power (see van Dijk 2015 for extended discussion of this example). This theory of cognition is consistent with many learning sciences approaches (e.g., conceptual change research) and therefore might be particularly relevant to the study of learning. Second, corpus-based CDA allows analysts to work with large datasets and combine both quantitative and qualitative analyses. In addition, some scholars have argued that engaging corpus-based techniques reduces researcher bias (Marko 2008).

Analytically, research in the CDA tradition begins with the identification of a social issue or topic to investigate (e.g., education policy reform, achievement inequities, civil rights). The analyst considers how critical social theorists have discussed the topic. They then narrow in on the methods and data that might be most effective for making sense of the topic as their own understanding changes through such reading. Data in CDA studies are as varied as historical texts, multimodal video data, audio recordings, and the written law. From this point, CDA is quite methodologically flexible, which perhaps contributes to the wide range of approaches that scholars bring to CDA.

Notably, CDA is not without critique, particularly given its “top-down” orientation. A CDA approach is one that assumes a priori categories, such as race, gender, and ethnicity, are relevant. In contrast, some scholars have argued that it is the individuals involved in a given interaction who make a particular social category relevant (Benwell and Stokoe 2006).
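To make the corpus-based strand concrete, the sketch below computes a crude keyness score (how much more frequent a word is in a target corpus than in a reference corpus) over two invented toy corpora. Published corpus-based CDA typically works with large corpora and uses statistical keyness measures such as log-likelihood; this is only a minimal illustration of the idea, with a simple smoothing floor for words absent from the reference corpus.

```python
from collections import Counter

# Toy corpora invented for illustration; real corpus-based CDA works with
# much larger collections (news archives, policy documents, etc.).
target = ("reform failing schools demand reform "
          "accountability reform").split()
reference = "schools teachers students learning schools community".split()

def rel_freq(tokens):
    """Relative frequency of each word in a token list."""
    counts = Counter(tokens)
    return {word: n / len(tokens) for word, n in counts.items()}

t_freq, r_freq = rel_freq(target), rel_freq(reference)

# Crude keyness: ratio of target to reference frequency, with a smoothing
# floor so words missing from the reference corpus do not divide by zero.
floor = 1 / (len(reference) + 1)
keyness = {word: t_freq[word] / r_freq.get(word, floor) for word in t_freq}

top_keyword = max(keyness, key=keyness.get)
print(top_keyword)
```

Words that a keyness ranking foregrounds (here, "reform") are candidates for the qualitative stage of the analysis, where the analyst examines how those words are actually used in context.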
3.1.2 Quality Markers of a CDA Study
Several criteria have been noted as important considerations for assuring the quality or validity of a CDA study (Meyer 2001). First, the completeness of a CDA study is often considered, with “the results of a study” being viewed as complete “if new data and the analysis of new linguistic devices reveal no new findings” (Meyer 2001, p. 29). Second, accessibility of findings has been noted as important, as this particular criterion takes into account CDA’s pragmatic aims of generating findings that can be attended to by the very “social groups under investigation” (Meyer 2001, p. 29). Third, triangulation has been positioned as useful. With respect to triangulation, Wodak (1999) noted the importance of exploring “the interconnectedness of discursive practices and extralinguistic social structures” (p. 188). Scollon (2001) also pointed to the importance of triangulation in CDA research, stating that “clear triangulation procedures are essential in drawing inferences about observations and in producing interpretations” (p. 181). Triangulation in this sense is understood as potentially involving multiple forms of data and inviting participants to respond to emergent findings.
3.1.3 Example of a CDA Study Relevant to CSCL
Menard-Warwick (2008) took up CDA to analyze social positioning and its relationship to learning in the context of a class for adult learners of English. Menard-Warwick drew on audiotapes of classroom observations as well as audio recordings of interviews she conducted with students and teachers. She used critical discourse analysis to understand power relations in the classroom. Drawing on Davies and Harré’s (1990) discussion of social positioning, she connected linguistic resources (e.g., claims of knowledge, corrective feedback, interruptions) to identity construction in the classroom.

Grounding her analysis in close examination of several episodes, Menard-Warwick concluded that the teacher and students in the class drew on common discourses about employment and gender in order to position themselves and each other (e.g., as a homemaker). Importantly, students attempted to resist such limiting positions at times. Menard-Warwick, drawing on the literature about language learning, then demonstrated that these positionings affected learning in terms of how they afforded and constrained socialization into the practices that were presented as those of English-language speakers. Such analyses are useful in CSCL approaches that explore issues of socialization and collaborative coordination, as they can make visible how identity, positioning, and power negotiations can afford or constrain learning opportunities (e.g., in group work).
3.2 Discursive Psychology
A related but distinct form of qualitative discourse analysis is discursive psychology (DP) (Edwards and Potter 1992). DP is both a methodological and theoretical framework for understanding human activity through analysis of discourse. It focuses on the ways that traditionally psychologized concepts, like attitudes, preferences, and cognition, are constructed, are made relevant, and are deployed in talk and text. In this way, “the issue of cognition is treated as an analytical object (something we study without first making assumptions about what it is) rather than an analytical framework (something we make assumptions about and which then directs what we study)” (Wiggins 2017, p. 5). According to Potter (2012), DP contains three substrands: (1) a focus on interview data and repertoires (patterns) in talk; (2) an analysis of how psychological constructs traditionally treated as mental activity can be understood instead as socially situated in talk (referred to as “respecification” of the construct); and (3) a focus on the sequential nature and action orientation of talk.
3.2.1 Key Features of DP
Like other forms of discourse analysis, DP calls attention to the nature of talk as constructive (i.e., making visible particular versions of the social world at the expense of others) and constructed (i.e., deliberately designed to function conversationally) (Potter and Hepburn 2008). This approach centers a more relativist worldview (Edwards et al. 1995). DP considers itself a branch of psychology but notes that “DP is not a threat to psychology, and should instead be regarded as a different way of doing psychology” (Wiggins 2017, p. 6). In contrast to survey and lab-based approaches, DP focuses on the microlevel treatment of talk as situated in interaction. For example, Wiggins et al. (2001) re-specified the psychological construct of “attitudes towards food.” The study of these attitudes, they argue, traditionally has used methods such as having participants taste a food and rate on a numeric scale how full or satiated they are. By contrast, the authors use discursive psychology to analyze audio recordings of dinner-table talk among families. Their findings reveal how speakers discursively construct themselves as having particular attitudes toward food in order to perform a social action (such as delicately explaining why an eater has left food on their plate). DP research has also highlighted the social nature of constructs like self-conception of masculinity (Wetherell and Edley 2014), memory (Edwards and Potter 1992), and verbal fluency (Muskett et al. 2013). Taken together, this work shows that DP has been a rather productive method to understand these constructs by embracing the inherent instability of talk, that is, assuming that talk does not reflect what is “true” in a person’s mind, but rather that it functions to accomplish something interactionally. DP thus abandons the idea that people’s talk neutrally reflects underlying mental architecture; rather, the talk (not the mind) becomes the target of the research. 
This leads to the analysis of a construct (like attitudes toward food) that focuses not on finding the “true” construct as it is hidden in the mind, but rather on how the construct is used to accomplish something discursively. Therefore, discursive psychologists “do not expect that an individual’s discourse will be consistent and coherent. Rather, the focus is on the discourse itself; how it is organized and what it is doing” (Potter and Wetherell 1987, p. 49, emphasis original). To study this, discursive psychologists tend to draw on insights from CA, but with a specific analytic construct (e.g., identity as a vegan; Sneijder and te Molder 2009) in mind as the object of study.
3.2.2 Quality Markers of a DP Study
Potter (2012) described a framework for assessing the quality of DP research as being built into the approach itself. He noted that the distinction, then, between “validation” and analysis is somewhat blurred, as a central aspect of validating the findings lies in attending to the details of the analysis. He also proposed four specific areas for consideration. First, in alignment with its close association with CA, a DP analysis
Qualitative Approaches to Language in CSCL
fundamentally considers the orientations of participants within a given interaction. A key assumption is that any utterance is better understood by considering the preceding turn of talk, as well as the subsequent utterance (Heritage 1984). Within DP, it is argued that by attending to participants’ orientations (as made visible in the turn-by-turn sequence of talk), the “interpretative gap” is reduced, as the analyst aims to stay close to the participants’ utterances (Edwards 2012). Second, scholars who employ DP intentionally seek out alternative cases and explanations (Potter 2004). In doing so, the analyst aims to attend deliberately to inconsistencies and diversity within the participants’ talk (Potter and Wetherell 1987). Third, a DP analysis aims to illustrate coherence (or not) with other research on similar conversational features, a practice understood as serving to substantiate and bolster the interpretation (Potter 2012). Finally, a DP study’s findings are written in a way that allows for reader evaluation. By thoroughly and transparently presenting how each analytic claim is supported by excerpts from the larger corpus of data, the analyst provides space for the reader to evaluate their claims (Potter 1996).
3.2.3 Example of a DP Study
Lester and Paulus (2011) deployed DP to analyze conversations in a CSCL environment, specifically blog posts students created in a university-level nutrition science course. In a two-week unit on dietary supplements, students were required to make at least one post and five comments on other students’ posts before each lecture. Following the lecture, students were required to make at least one additional post and five more comments, intended to have them reflect on what they had learned. The authors’ analysis was informed by three broad DP analytic questions: “(1) What are the students doing/accomplishing with their language?, (2) How are they constructing their language in order to achieve this?, and (3) What resources are being used to perform these tasks?” (p. 5). The DP analysis illustrated that students used a variety of discursive devices to manage appearing knowledgeable, such as employing disclaimers (e.g., “I don’t know”) and following an “academic script” (i.e., using topic sentences and explicitly defining terms). Although the blog post instructions foregrounded speaking “informally” by focusing on personal experiences and beliefs, these findings demonstrated that students still oriented to the task as an institutional, school-type task and therefore engaged in school-style discourse. This type of analysis is useful in CSCL contexts in that it demonstrates how students, at the level of discourse, take up learning tasks.
4 The Future: Challenges and Possibilities for Discourse Analytic Approaches to CSCL Research

Although CA, IA, CDA, and DP are methodological approaches that can be illuminating for the study of learning, they have not yet seen wide use in CSCL research. As the relevance of multivocal and multimethod approaches to methodology in CSCL becomes increasingly important for the robustness of the larger CSCL project (cf. Wise and Schwarz 2017), these language-based methodologies have the potential to offer uniquely deep insights into the nature of discourse, collaboration, and learning. While in practice CA has a preference for naturally occurring data, we believe that when interventionist and design-based research contexts are conceptualized as “institutional talk” (Antaki 2011), they remain robust research sites from a discourse-analytic perspective. CSCL and these named discourse analytic traditions have a great deal to learn from one another. While both CDA and DP have primarily studied face-to-face interaction, which may be of relevance to many CSCL researchers, studying text-based and computer-mediated discourse has also seen great success in DP (e.g., Goodman and Rowe 2014; Sneijder and te Molder 2009) and is an area of interest in CDA (e.g., Mautner 2005; Weir 2005). CSCL researchers can build on and contribute to these methodologies through increased attention to computer-supported data (Paulus and Wise 2019). With regard to collaboration, these methodologies are particularly well-suited to conceptualizations of collaboration that prioritize the co-construction of knowledge by focusing on group cognition, collaborative knowledge building, and joint engagement in shared discursive spaces (Hakkarainen et al. 2013; Stahl 2007).
Taking up these approaches can help address contemporary CSCL challenges, such as developing microecological approaches (Borge and Mercier 2019), and can offer the “different methodological approaches [that] are needed to tackle the challenge of exploring and mapping the landscape of CSCL support and to work towards a comprehensive framework of CSCL support” (Rummel 2018, p. 128). Furthermore, CSCL’s rich history of clarifying what is meant by “learning” lends itself to novel insights for these methodologies, refining their conceptualizations of knowledge and discursive change. Theoretically speaking, some have argued that such interpretive (rather than analytic) approaches are too methodologically rigid to study constructs of interest to CSCL researchers (cf. Wise and Schwarz 2017). We, however, argue that CA, CDA, and DP are useful approaches for studying learning (particularly when learning is viewed as a change in discourse), collaboration (particularly when collaboration is understood to require intersubjectivity as achieved through discourse), learner identity (particularly when identity is seen as a joint accomplishment between a learner and their environment, including other people; Hand and Gresalfi 2015), and other important constructs in CSCL research. In addition, as issues of power and privilege become especially relevant to the learning sciences (Esmonde and Booker 2017; Politics of Learning Writing Collective 2017), as we have illustrated, qualitative language-based methodologies and methods can offer a
rigorous approach to studying the taken-for-granted and issues of power. As CSCL research continues to engage with emergent phenomena of interest, we envision qualitative language-based methodologies and methods playing an important role in unearthing new understandings of constructs of interest to the field.
References

Antaki, C. (2011). Applied conversation analysis: Intervention and change in institutional talk. Springer.
Arribas-Ayllón, M., & Walkerdine, V. (2008). Foucauldian discourse analysis. In C. Willig & W. Stainton-Rogers (Eds.), The SAGE handbook of qualitative research in psychology (pp. 91–108). Sage.
Bakhtin, M. M. (1981). Discourse in the novel. In M. Holquist (Ed.), The dialogic imagination: Four essays by M. M. Bakhtin (pp. 259–422). University of Texas Press.
Bang, M., & Vossoughi, S. (Eds.). (2016). Participatory design research and educational justice: Studying learning and relations within social change making [special issue]. Cognition and Instruction, 34(3), 173–193.
Barron, B. (2000). Achieving coordination in collaborative problem-solving groups. The Journal of the Learning Sciences, 9(4), 403–436.
Benwell, B., & Stokoe, E. (2006). Discourse and identity. Edinburgh University Press.
Berger, P. L., & Luckmann, T. (1967). The social construction of reality: A treatise in the sociology of knowledge. Garden City.
Borge, M., & Mercier, E. (2019). Towards a micro-ecological approach to CSCL. International Journal of Computer-Supported Collaborative Learning, 14, 219–235.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Burr, V. (2003). Social constructionism (2nd ed.). Routledge.
Chi, M. T. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6(3), 271–315.
Davies, B., & Harré, R. (1990). Positioning: The discursive production of selves. Journal for the Theory of Social Behaviour, 20(1), 43–63.
diSessa, A., Levin, M., & Brown, N. J. (2016). Knowledge and interaction: A synthetic agenda for the learning sciences. Routledge.
Edwards, D. (2012). Discursive and scientific psychology. British Journal of Social Psychology, 51(3), 425–435.
Edwards, D., Ashmore, M., & Potter, J. (1995). Death and furniture: The rhetoric, politics and theology of bottom line arguments against relativism. History of the Human Sciences, 8(2), 25–49.
Edwards, D., & Potter, J. (1992). Discursive psychology. Sage Publications.
Enyedy, N. (2003). Knowledge construction and collective practice: At the intersection of learning, talk, and social configurations in a computer-mediated mathematics classroom. The Journal of the Learning Sciences, 12(3), 361–407.
Enyedy, N., Danish, J. A., & DeLiema, D. (2015). Constructing liminal blends in a collaborative augmented-reality learning environment. International Journal of Computer-Supported Collaborative Learning, 10(1), 7–34.
Esmonde, I., & Booker, A. N. (2017). Power and privilege in the learning sciences: Critical and sociocultural theories of learning. Routledge.
Fairclough, N. (1992). Discourse and social change. Polity Press.
Fairclough, N., Mulderrig, J., & Wodak, R. (2011). Critical discourse analysis. In T. van Dijk (Ed.), Discourse studies: A multidisciplinary introduction (pp. 357–378). Sage.
Foucault, M. (1980). Power/knowledge: Selected interviews and other writings, 1972–1977. Pantheon.
Garfinkel, H. (1967). Studies in ethnomethodology. Prentice-Hall.
Gee, J. P. (2007). Social linguistics and literacies: Ideology in discourses (3rd ed.). Routledge.
Giles, D. C., Stommel, W., & Paulus, T. M. (2017). Introduction: The microanalysis of online data: The next stage. Journal of Pragmatics, 115, 37–41.
Giles, D. C., Stommel, W., Paulus, T. M., Lester, J. N., & Reed, D. (2015). Microanalysis of online data: The methodological development of “digital CA”. Discourse, Context, & Media, 7, 45–51.
Goodman, S., & Rowe, L. (2014). ‘Maybe it is prejudice. . . but it is NOT racism’: Negotiating racism in discussion forums about gypsies. Discourse & Society, 25(1), 32–46.
Gumperz, J. (1999). On interactional sociolinguistic method. In S. Sarangi & C. Roberts (Eds.), Talk, work and institutional order: Discourse in medical, mediation and management settings (pp. 453–471). Mouton de Gruyter.
Gutierrez, K., Rymes, B., & Larson, J. (1995). Script, counterscript, and underlife in the classroom: James Brown versus Brown v. Board of Education. Harvard Educational Review, 65(3), 445–472.
Hakkarainen, K., Paavola, S., Kangas, K., & Seitamaa-Hakkarainen, P. (2013). Sociocultural perspectives on collaborative learning: Toward collaborative knowledge creation. In C. Hmelo-Silver, C. A. Chinn, C. K. K. Chan, & A. M. O’Donnell (Eds.), The international handbook of collaborative learning (pp. 57–73). Routledge.
Hall, R., & Stevens, R. (2016). Developing approaches to interaction analysis of knowledge in use. In A. A. diSessa, M. Levin, & N. J. S. Brown (Eds.), Knowledge and interaction: A synthetic agenda for the learning sciences. Routledge.
Hand, V., & Gresalfi, M. (2015). The joint accomplishment of identity. Educational Psychologist, 50(3), 190–203.
Hepburn, A., & Bolden, G. B. (2017). Transcribing for social research. Sage.
Heritage, J. (1984). Garfinkel and ethnomethodology. John Wiley & Sons.
Jefferson, G. (2004). Glossary of transcript symbols with an introduction. In G. H. Lerner (Ed.), Conversation analysis: Studies from the first generation (pp. 13–34). Benjamins.
Jeong, H., Hmelo-Silver, C. E., & Yu, Y. (2014). An examination of CSCL methodological practices and the influence of theoretical frameworks 2005–2009. International Journal of Computer-Supported Collaborative Learning, 9(3), 305–334. https://doi.org/10.1007/s11412-014-9198-3
Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103.
Jørgensen, M., & Phillips, L. (2002). Discourse analysis as theory and method. Sage.
Koschmann, T., & Schwarz, B. B. (this volume). Case studies in theory and practice. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Lester, J. N. (2011). Exploring the borders of cognitive and discursive psychology: A methodological reconceptualization of cognition and discourse. Journal of Cognitive Education and Psychology, 10(3), 280–293.
Lester, J. N., & Paulus, T. M. (2011). Accountability and public displays of knowing in an undergraduate computer-mediated communication context. Discourse Studies, 13(6), 671–686.
Ma, J. Y. (2017). Multi-party, whole-body interactions in mathematical activity. Cognition and Instruction, 35(2), 141–164.
Marko, G. (2008). Penetrating language: A critical discourse analysis of pornography. Gunter Narr.
Mautner, G. (2005). Time to get wired: Using web-based corpora in critical discourse analysis. Discourse & Society, 16(6), 809–828.
Menard-Warwick, J. (2008). ‘Because she made beds. Every day.’ Social positioning, classroom discourse, and language learning. Applied Linguistics, 29(2), 267–289.
Meredith, J., & Stokoe, E. (2014). Repair: Comparing Facebook ‘chat’ with spoken interaction. Discourse & Communication, 8(2), 181–207.
Meyer, M. (2001). Between theory, method, and politics: Positioning of the approaches to CDA. In R. Wodak & M. Meyer (Eds.), Methods of critical discourse analysis (pp. 14–31). Sage.
Muskett, T., Body, R., & Perkins, M. (2013). A discursive psychology critique of semantic verbal fluency assessment and its interpretation. Theory & Psychology, 23(2), 205–226.
Nevile, M. (2015). The embodied turn in research on language and social interaction. Research on Language and Social Interaction, 48(2), 121–151.
Palincsar, A. S. (1998). Social constructivist perspectives on teaching and learning. Annual Review of Psychology, 49(1), 345–375.
Paulus, T., Warren, A., & Lester, J. N. (2016). Applying conversation analysis methods to online talk: A literature review. Discourse, Context & Media, 12, 1–10.
Paulus, T. M., & Wise, A. F. (2019). Looking for insight, transformation, and learning in online talk. Routledge.
Politics of Learning Writing Collective. (2017). The learning sciences in a new era of US nationalism. Cognition & Instruction, 35(2), 91–102.
Potter, J. (1996). Attitudes, social representations and discursive psychology. In M. Wetherell (Ed.), Identities, groups and social issues (pp. 119–173). Sage.
Potter, J. (2004). Discourse analysis. In M. A. Hardy & A. Bryman (Eds.), Handbook of data analysis (pp. 607–624). Sage.
Potter, J. (2012). Discourse analysis and discursive psychology. In H. Cooper (Ed.), APA handbook of research methods in psychology: Vol. 2. Quantitative, qualitative, neuropsychological, and biological (pp. 111–130). American Psychological Association Press.
Potter, J., & Hepburn, A. (2008). Discursive constructionism. In J. A. Holstein & J. F. Gubrium (Eds.), Handbook of constructionist research (pp. 275–293). Guildford Press.
Potter, J., & Wetherell, M. (1987). Discourse and social psychology: Beyond attitudes and behaviors. Sage Publications.
Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. The Journal of the Learning Sciences, 2(3), 235–276.
Roulston, K. (2006). Close encounters of the ‘CA’ kind: A review of literature analysing talk in research interviews. Qualitative Research, 6(4), 515–534.
Rummel, N. (2018). One framework to rule them all? Carrying forward the conversation started by Wise and Schwarz. International Journal of Computer-Supported Collaborative Learning, 13(1), 123–129.
Sacks, H. (With Schegloff, E.). (1992). Lectures on conversation (Vols. 1–2, G. Jefferson, Ed.). Basil Blackwell.
Schegloff, E. (1999). Discourse, pragmatics, conversation analysis. Discourse Studies, 1(4), 405–435.
Scollon, R. (2001). Action and text: Toward an integrated understanding of the place of text in social (inter)action, mediated discourse analysis and the problem of social action. In R. Wodak & M. Meyer (Eds.), Methods of critical discourse analysis (pp. 139–184). Sage.
Seedhouse, P. (2004). Conversation analysis methodology. Language Learning, 54(S1), 1–54.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(2), 4–13.
Sfard, A., & Cobb, P. (2014). Research in mathematics education: What can it teach us about human learning? In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 545–563). Cambridge University Press.
Simpson, A., Bannister, N., & Matthews, G. (2017). Cracking her codes: Understanding shared technology resources as positioning artifacts for power and status in CSCL environments. International Journal of Computer-Supported Collaborative Learning, 12(3), 221–249.
Sinclair, J., & Coulthard, M. (1992). Towards an analysis of discourse. In M. Coulthard (Ed.), Advances in spoken discourse analysis (pp. 1–34). London: Routledge.
Sinclair, J. M., & Coulthard, M. (1975). Towards an analysis of discourse: The English used by teachers and pupils. London: Oxford University Press.
Sneijder, P., & te Molder, H. (2009). Normalizing ideological food choice and eating practices: Identity work in online discussions on veganism. Appetite, 52(3), 621–630.
Stahl, G. (2007). Scripting group cognition. In F. Fischer, I. Kollar, H. Mandl, & J. M. Haake (Eds.), Scripting computer-supported collaborative learning (pp. 327–336). New York: Springer.
Stahl, G. (2015). A decade of CSCL. International Journal of Computer-Supported Collaborative Learning, 10(4), 337–344.
Steensen, S. (2014). Conversing the audience: A methodological exploration of how conversation analysis can contribute to the analysis of interactive journalism. New Media & Society, 16(8), 1197–1213.
Stevens, R., & Hall, R. (1998). Disciplined perception: Learning to see in technoscience. In M. Lampert & M. Blunk (Eds.), Talking mathematics in school: Studies of teaching and learning (Learning in doing: Social, cognitive and computational perspectives) (pp. 107–149). Cambridge University Press.
ten Have, P. (2007). Doing conversation analysis: A practical guide (2nd ed.). London: Sage Publications.
van Dijk, T. A. (2003). Introduction: What is critical discourse analysis? In D. Schiffrin, D. Tannen, & H. E. Hamilton (Eds.), The handbook of discourse analysis (pp. 352–371). Oxford: Blackwell.
van Dijk, T. A. (2008). Discourse and context: A sociocognitive approach. New York: Cambridge University Press.
van Dijk, T. A. (2015). Critical discourse studies: A sociocognitive approach. In R. Wodak & M. Meyer (Eds.), Methods of critical discourse studies (pp. 63–74). London: Sage.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.
Weir, K. (2005). Critical discourse analysis and internet research. Critical Studies in Education, 46(2), 67–86.
Wetherell, M., & Edley, N. (2014). A discursive psychological framework for analyzing men and masculinities. Psychology of Men & Masculinity, 15(4), 355.
Wiggins, S. (2017). Discursive psychology: Theory, method and applications. Los Angeles: Sage.
Wiggins, S., Potter, J., & Wildsmith, A. (2001). Eating your words: Discursive psychology and the reconstruction of eating practices. Journal of Health Psychology, 6(1), 5–15.
Winch, P. (1967). The idea of a social science and its relation to philosophy. Routledge & Kegan Paul.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Wittgenstein, L. (1958). Philosophical investigations (2nd ed.). Oxford: Basil Blackwell.
Wodak, R. (1999). Critical discourse analysis at the end of the 20th century. Research on Language & Social Interaction, 32(1–2), 185–193.
Wodak, R. (2001). The discourse-historical approach. In R. Wodak & M. Meyer (Eds.), Methods of critical discourse analysis (pp. 63–95). Sage.
Wodak, R., & Meyer, M. (2001). Methods of critical discourse analysis. Sage.
Further Readings

Fairclough, N. (2003). Analysing discourse: Textual analysis for social research. Psychology Press.
Fairclough’s text, Analysing discourse: Textual analysis for social research (2003), is particularly useful for making sense of core ideas related to CDA. The text is useful for those unfamiliar with CDA, as well as those with some level of familiarity, as it offers a pragmatic approach for designing and carrying out a CDA study.

Paulus, T., Warren, A., & Lester, J. (2018). Using conversation analysis to understand how agreements, personal experiences, and cognition verbs function in online discussions. Language@Internet, 15, article 1.
Paulus, Warren, and Lester’s (2018) article illustrates how CA can be used to study learning in online asynchronous discussion forums; the authors analyze online discussion data generated by students enrolled in a nutrition class at an American university. They show how CA can be employed to understand the social functions of three conversational features (agreements, personal experiences, and cognition verbs). Of particular relevance to CSCL scholars, the authors argue that CA offers insights about online data that differ from those of the predominant analytic frameworks that apply scripts and scaffolds to online talk.

ten Have, P. (2007). Doing conversation analysis: A practical guide (2nd ed.). London: Sage Publications.
To become generally familiar with CA, ten Have’s text, Doing conversation analysis: A practical guide (2007), is a useful starting point. This text covers a range of topics related to CA, including its history, core features, and ways to design and carry out a study.

Wiggins, S. (2017). Discursive psychology: Theory, method and applications. Sage.
Wiggins’ text, Discursive psychology: Theory, method and applications (2017), is the premier text offering a theoretically grounded and pragmatic perspective on DP. The text includes discussion of how DP compares to other discourse analytic perspectives, as well as key considerations for conceptualizing, designing, and carrying out a DP study.

Wooffitt, R. (2005). Conversation analysis and discourse analysis: A comparative and critical introduction. Sage.
Wooffitt’s text, Conversation analysis and discourse analysis, provides a general overview of discourse analytic methods and CA. It is a useful starting point for those less familiar with the assumptions of language-based methodologies, such as CA and DA. The author writes in an interdisciplinary way and provides numerous empirical examples and exercises throughout.
Gesture and Gaze: Multimodal Data in Dyadic Interactions

Bertrand Schneider, Marcelo Worsley, and Roberto Martinez-Maldonado
Abstract  With the advent of new and affordable sensing technologies, CSCL researchers are able to automatically capture collaborative interactions with unprecedented levels of accuracy. This development opens new opportunities and challenges for the field. In this chapter, we describe empirical studies and theoretical frameworks that leverage multimodal sensors to study dyadic interactions. More specifically, we focus on gaze and gesture sensing and how these measures can be associated with constructs such as learning, interaction, and collaboration strategies in colocated settings. We briefly describe the history of the development of multimodal analytics methodologies in CSCL, the state of the art of this area of research, and how data fusion and human-centered techniques are most needed to give meaning to multimodal data when studying collaborative learning groups. We conclude by discussing the future of these developments and their implications for CSCL researchers.

Keywords  Multimodal sensing · Learning analytics · Eye-tracking · Motion sensing · Colocated collaborative learning · Computational models
B. Schneider (*) Graduate School of Education, Harvard University, Cambridge, MA, USA e-mail: [email protected] M. Worsley Learning Sciences and Computer Science, Northwestern University, Evanston, IL, USA e-mail: [email protected] R. Martinez-Maldonado Faculty of Information Technologies, Monash University, Melbourne, Australia e-mail: [email protected] © Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_34
B. Schneider et al.
1 Definitions and Scope

Educational researchers have argued for decades that the field needs better ways to capture process data (Werner 1937). More recently in CSCL, Dillenbourg et al. (1996) noted that “empirical studies have started to focus less on establishing parameters for effective collaboration and more on trying to understand the role which such variables play in mediating interaction. This shift to a more process-oriented account requires new tools for analyzing and modeling interactions.” Multimodal learning analytics (MMLA; Blikstein and Worsley 2016) is about creating new tools to automatically generate fine-grained process data from multimodal sensors. More specifically, the focus of this chapter is on gesture and gaze data collected in colocated interactions. We recognize that collaboration is the result of subtle microbehaviors, such as learners’ body position, gestures, head orientation, visual attention, and discourse. These actions are complex and intertwined, and they result in a rich choreography of behaviors that creates sophisticated social interactions. Figure 1 provides a visual representation of the key constructs of this chapter: The first column shows modalities studied by CSCL researchers (e.g., gaze, gestures, speech, dialogue). These modalities provide “Raw Measures” of users’ gaze or body postures. These data are then used to capture specific “Observable Behaviors,” such as joint visual attention (JVA) or body similarity. We can use these behaviors as proxies for “Theoretical Constructs” (Wise et al. this volume), for example, the quality of a group’s common ground (Clark and Brennan 1991) or the extent to which group members mimic each other (Chartrand and Bargh 1999). The raw measures, observable behaviors, and constructs can be used to predict outcomes of interest (e.g., how well a group is collaborating), model collaborative
Fig. 1 How different sensor modalities can help CSCL researchers capture constructs relevant to collaborative learning, and how this can be used to predict, model, explain, and support productive behaviors. In this chapter, we focus on gaze and gestures (even though other modalities—such as speech—are highly relevant in CSCL settings)
processes (e.g., how social interactions change over time), explain them (e.g., contribute to theories of collaboration), or support collaboration (e.g., design interventions that use sensor data to support learning). In the sections below, we describe the history and development of MMLA. We then offer additional definitions for the constructs in Fig. 1 and give concrete examples of their use.
2 History and Development

While MMLA seems to be a new and exciting methodological development, there has been a long tradition of designing multimodal devices to capture human behavior. At the beginning of the twentieth century, Huey (1908) designed the first eye-tracker by having participants wear contact lenses with a small opening for the pupil. Because a pointer was attached to it, Huey was able to make new discoveries on effective reading behaviors. In the 1920s, a German pedagogue, Dr. Kurt Johnen, created a device to measure expert piano players’ breathing and muscular tension as a way to design better instruction for novices (Johnen 1929). In 1977, Manfred Clynes built a device called a “sentograph,” which attempted to detect emotions by measuring the duration and force of presses on a pressure-sensitive finger rest (Clynes 1977). There are many other examples of early “sensors” designed to capture human behaviors. Over the last decade, however, the affordability and accessibility of multimodal sensing have opened new doors for monitoring, analyzing, visualizing, and regulating a variety of learning processes. Depth cameras such as the Microsoft Kinect can collect information about a person’s body joints (x, y, z coordinates), their facial expressions, and their speech 30 times per second. Researchers can obtain more than 100 variables from this sensor, which represents more than 3,000 data points per second for one person. This translates to roughly 10 million data points for an hour of data collection. Multiply this figure by the number of sensors (e.g., eye-trackers, galvanic skin response sensors, emotion detection tools, speech features) and the number of learners to get a sense of the possibilities and challenges of combining sensor data with data mining techniques.
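The back-of-the-envelope arithmetic behind these figures can be made explicit. The sketch below uses the counts and rates stated above; the scaling scenario at the end (a triad, two sensors per learner) is purely illustrative:

```python
# Arithmetic behind the Kinect figures cited above.
variables_per_frame = 100  # body joints, facial expressions, speech features, ...
frames_per_second = 30     # Kinect sampling rate

points_per_second = variables_per_frame * frames_per_second
points_per_hour = points_per_second * 60 * 60

print(points_per_second)  # 3000 data points per second for one person
print(points_per_hour)    # 10800000 -- "roughly 10 million" per hour

# Scaling up (illustrative): three learners, each tracked by the Kinect
# and wearing one additional sensor (e.g., an eye-tracker)
n_learners, n_sensors = 3, 2
print(points_per_hour * n_learners * n_sensors)  # 64800000
```

Even this conservative scenario yields tens of millions of data points per hour, which is why data fusion and mining techniques become unavoidable at this scale.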
3 State of the Art

In this section, we describe the state-of-the-art research methods for analyzing gaze and motion data from small groups in educational settings. We start with some definitions, conventions, and findings from the CSCL community and beyond. We conclude this chapter with a comparison of the state of the field for gaze and gesture sensing, comments on the future of associated methodologies, and implications for CSCL researchers.
B. Schneider et al.

3.1 Gaze Sensing in CSCL
With sensing devices becoming more affordable, the last decades have seen an increasing number of CSCL researchers taking advantage of eye-trackers to study small collaborative groups. This line of work is grounded in the literature on joint visual attention (Tomasello 1995). Joint attention is an important mechanism for building a common ground (i.e., “grounding,” which allows group members to anticipate and prevent misunderstanding; Clark and Brennan 1991). Educational researchers have built on this idea and extended it to learning scenarios: “From the viewpoint of collaborative learning, misunderstanding is a learning opportunity. In order to repair misunderstandings, partners have to engage in constructive activities: they will build explanations, justify themselves, make explicit some knowledge which would otherwise remain tacit and therefore reflect on their own knowledge, and so forth. This extra effort for grounding, even if it slows down interaction, may lead to better understanding of the task” (Dillenbourg and Traum 2006). In other words, educational researchers go beyond the psycholinguistic definition of grounding to focus on shared meaning making (Stahl 2007). Shared meaning making is associated with “the increased cognitive-interactional effort involved in the transition from learning to understand each other to learning to understand the meanings of the semiotic tools that constitute the mediators of interpersonal interaction” (Baker et al. 1999, p.31). It gradually leads to the construction of new meanings and results in conceptual change. There is some evidence suggesting that groups with high levels of joint visual attention are more likely to iteratively sustain and refine their common understanding of a shared problem space (Barron 2003). Because eye-trackers can provide a rigorous measure of joint visual attention, gaze sensing has become an attractive methodology for studying grounding in collaborative learning groups. 
The state of the art of CSCL gaze sensing is a dual eye-tracking methodology where pairs of learners solve a problem together and learn from a shared set of resources. Early studies had two participants, each looking at a separate computer screen equipped with an eye-tracker (Jermann et al. 2001). Participants can communicate through an audio channel and have access to the same interface. For dyadic analysis, the two eye-tracking devices need to be synchronized so that the resulting datasets can be combined to compute measures of joint visual attention (JVA). After the data are acquired, there are established methodologies for computing JVA measures. Cross-recurrence graphs (Richardson et al. 2007) are commonly used to visually inspect the combined eye-tracking datasets and identify missing data. JVA is then computed following Richardson and Dale (2005), who found that dyad members are rarely perfectly synchronized; it takes participants ±2 s to react to an offer of joint visual attention and respond to it. Thus, for a particular gaze point to count as joint visual attention, researchers usually look at a 4 s time window to check whether the other participant was paying attention to the same location. This methodology provides an overall measure of attentional alignment for dyads.
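The windowed computation can be sketched as follows. This is an illustrative implementation, not the published pipeline: the distance criterion, radius, and sampling rate are assumptions; the ±2 s window follows the convention cited above.

```python
import numpy as np

def jva_proportion(gaze_a, gaze_b, radius=70.0, half_window=60):
    """Proportion of samples that count as joint visual attention (JVA).

    gaze_a, gaze_b: (n, 2) arrays of synchronized (x, y) gaze samples,
    one row per time step. A sample of participant A counts as JVA if
    participant B's gaze falls within `radius` pixels of it at any lag
    inside +/- `half_window` samples (the +/-2 s convention of Richardson
    and Dale (2005) gives half_window = 2 s * sampling rate, here 30 Hz).
    """
    n = len(gaze_a)
    joint = 0
    for t in range(n):
        lo, hi = max(0, t - half_window), min(n, t + half_window + 1)
        dists = np.linalg.norm(gaze_b[lo:hi] - gaze_a[t], axis=1)
        if np.min(dists) <= radius:
            joint += 1
    return joint / n
```

Two identical gaze streams yield a proportion of 1.0; streams that never approach each other yield 0.0.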
Gesture and Gaze: Multimodal Data in Dyadic Interactions
One common finding is that levels of joint visual attention are positively associated with constructs that the CSCL community cares about. For example, researchers have used established coding schemes to evaluate the quality of a dyad’s collaboration and correlated it with measures of JVA. Meier et al. (2007) developed a coding scheme that characterizes collaboration across nine subdimensions: sustaining mutual understanding, dialogue management, information pooling, reaching consensus, task division, time management, technical coordination, reciprocal interaction, and individual task orientation. Among those subdimensions, JVA has been repeatedly found to be significantly associated with a group’s ability to sustain mutual understanding (e.g., Schneider et al. 2015; Schneider and Pea 2013). Other studies have also found positive correlations between JVA and learning gains (Schneider and Pea 2013), which suggests that this type of collaborative process is beneficial not just to collaboration, but also to learning. This shows that, to some extent, JVA measures can be used to predict collaboration quality and learning. Additional measures of JVA have been developed for specific contexts. For example, “with-me-ness” was developed to measure whether students are following along with a teacher’s instruction (Sharma et al. 2014). This measure is calculated by aggregating three features of gaze data: entry time, first fixation duration, and the number of revisits. Entry time is the temporal lag between the time a reference pointer (gaze) appears on the screen and stops at the referred location (x, y) and the time the student first looks at that location. First fixation duration is how long the student’s gaze stopped at the referred location for the first time, and revisits are the number of times the student’s gaze comes back to the referred location within 4 s.
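The three with-me-ness features can be sketched as below. This is a simplified reading of the definitions above, not the authors’ implementation: gaze-on-target episodes are represented as (start, end) intervals, and the revisit window is measured from the end of the first fixation, which is an assumption.

```python
def with_me_ness_features(ref_onset, student_hits, revisit_window=4.0):
    """Gaze features used for "with-me-ness" (Sharma et al. 2014).

    ref_onset: time (s) when the teacher's reference settles on a location.
    student_hits: sorted list of (start, end) intervals (s) during which
    the student's gaze was inside the referred area of interest.
    Returns (entry_time, first_fixation_duration, n_revisits).
    """
    if not student_hits:
        return None, 0.0, 0
    first_start, first_end = student_hits[0]
    entry_time = first_start - ref_onset
    first_fixation_duration = first_end - first_start
    # Revisits: later returns to the referred location within the window
    # (counted from the end of the first fixation; an assumption).
    n_revisits = sum(1 for start, _ in student_hits[1:]
                     if start - first_end <= revisit_window)
    return entry_time, first_fixation_duration, n_revisits
```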
In addition to these measures of JVA, CSCL researchers have also looked at the “attentional similarity” between participants (Sharma et al. 2013). For a given time window (e.g., 5 s), the proportion of time spent on different Areas of Interest (AOIs) is computed and compared across participants using a similarity metric (e.g., the cosine similarity between two vectors). Papavlasopoulou et al. (2017) found that in a pair programming task, teenagers (13- to 17-year-old participants) spent more time overall working together (higher gaze similarity) than younger kids (8–12 years old). While this measure is similar to others described above, it uses a less conservative operationalization of joint visual attention. These measures provide alternative ways of modeling joint visual attention in small groups. It is also possible to detect asymmetrical collaboration from the eye-tracking data (Schneider et al. 2018). For each moment of joint attention, one can look at which participant initiated this episode (i.e., the person whose gaze was first present in this area during the previous 2 s) and which student responded to it (i.e., the person whose gaze was there next). The absolute value of the difference between the number of moments that each participant initiated and responded to represents the (im)balance of a group’s “visual leadership.” As an illustration, a group may achieve joint attention during 25% of their time collaborating together; let us say that one student initiated 5% of those moments of JVA, while the other student initiated 20% of those moments. Schneider et al. (2018) found this measure to be negatively correlated with learning gains—meaning that groups in which one person tended
Fig. 2 (Reproduced from Schneider 2019): An example of using dual mobile eye-tracking to capture joint visual attention in a colocated setting (in this particular case, pairs of participants had to program a robot to solve a variety of mazes). The two images on the right show the perspective of the two participants; the left image shows a ground truth where gaze points are remapped using the location of the fiducial markers detected on each image (the white lines connect identical markers)
to always initiate or respond to an offer of joint visual attention were less likely to achieve high learning gains. These findings can help us explain how specific collaborative behaviors contribute to learning. Additionally, researchers have started to go beyond remote collaboration and use dual eye-tracking in colocated settings using mobile eye-trackers (Schneider et al. 2018). In this type of setup, there is an extra step of spatially synchronizing the two eye-tracking datasets, which is usually done by remapping participants’ gaze onto a ground truth (i.e., a common scene that both participants look at). The remapping process is usually accomplished by placing fiducial markers in the environment and using this shared set of coordinates to translate between each participant’s point of view and the ground truth (Fig. 2). When the two gaze points are remapped onto the ground truth, one can reuse the methodology described above for remote interactions and compute the same measure of joint visual attention. Finally, there are practical implications of using dual eye-tracking methodologies beyond quantitatively capturing collaborative processes. The last decade has seen a nascent interest in designing shared gaze visualizations—i.e., displaying the gaze of one’s partner on a computer screen to support joint visual attention (see review by d’Angelo and Schneider under review). Shared gaze visualizations have been found to facilitate communication through deictic references, disambiguate vague utterances, and help participants anticipate their partner’s verbal contributions. This is an
exciting new line of research because this work goes beyond descriptive measures of collaboration and suggests interventions to support collaboration. While the study of JVA through gaze sensing is reaching some maturity, there are obvious gaps in this area of research. Dual eye-tracking tends to be used in live remote collaboration, which is not the most ecologically valid setting from an educational perspective. Most students still work in colocated spaces, where they work together face-to-face or side-by-side. This gap is slowly being addressed by new methodologies using mobile eye-trackers, which bring more ecological validity to this field of research.
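The marker-based remapping described above can be sketched as follows. Real pipelines typically fit a full projective homography (e.g., with OpenCV) between each participant’s view and the ground-truth scene; the affine least-squares model here is a simplified, self-contained stand-in, and the marker coordinates are illustrative.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src points to dst points.

    src_pts, dst_pts: (n, 2) arrays of matched fiducial-marker positions
    (n >= 3) in a participant's egocentric view and in the ground-truth
    scene. Returns the 3x2 parameter matrix A with [x y 1] @ A = [x' y'].
    """
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    design = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return params

def remap_gaze(gaze_xy, params):
    """Map an egocentric gaze point into ground-truth coordinates."""
    x, y = gaze_xy
    return np.array([x, y, 1.0]) @ params
```

Once both participants’ gaze points live in the same ground-truth coordinate frame, the JVA computation for remote setups applies unchanged.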
3.2 Gesture Sensing in CSCL
In contrast to eye-tracking, where researchers are looking at the x, y coordinates of a participant’s gaze, gesture tracking (and more generally motion sensing in CSCL) is operationalized at varying levels of granularity. These levels of analysis range from the mere quantification of movement, to the more complex identification of specific gestures in dyadic interactions, to localizing people in physical learning spaces. Part of this breadth in levels of analysis reflects the relative infancy of this area of study. Researchers are in the process of determining the appropriate measures and theoretical grounding for gesture sensing. In this section, we present examples along this spectrum and further note how these approaches are utilized to examine and support collaboration. As is the case with eye-tracking, the availability of low-cost gesture tracking technology has enabled researchers to develop and create interfaces that incorporate human gestures. Initially, many of these technological systems relied on an infrared camera (e.g., the Nintendo Wiimote) and an infrared source (e.g., an infrared pen or television remote). This setup was used, for example, for the Mathematical Inquiry Trainer (Howison et al. 2011), a system that supports embodied learning of fractions. The next wave of gesture technology was heavily fueled by the Microsoft Kinect sensor and its supporting SDK. The Kinect sensor V2 uses a depth camera to provide a computer vision-based solution to track upper and lower body joints—as well as finger movement, head position, and even the amount of force applied to each appendage. Leong et al. (2015) provide an in-depth comparison of different depth cameras and their capabilities. More recently, advances in computer vision have eliminated the need for specialized data capture hardware. Instead, OpenPose (Cao et al. 2017; Simon et al. 2017; Wei et al. 2016) and DensePose (Güler et al. 
2018), for example, train deep neural networks to estimate human body pose from standard webcam images or videos. As an example, Ochoa et al. (2018) use OpenPose to provide feedback to users about their body posture during oral presentation training. The result of these technological developments is a growing opportunity to use gesture sensing to study collaborative learning environments without the need for expensive or invasive wearables.
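As a practical note, working with OpenPose output is straightforward: the tool writes one JSON file per video frame with a "people" list whose "pose_keypoints_2d" field is a flat [x, y, confidence, …] array (25 triplets for the BODY_25 model). The confidence threshold below is an illustrative choice.

```python
import json
import numpy as np

def load_openpose_frame(json_text, min_confidence=0.1):
    """Parse one OpenPose output frame into per-person keypoint arrays.

    Returns a list with one (n_joints, 2) array of x, y positions per
    detected person; joints whose detection confidence falls below
    `min_confidence` are masked as NaN so downstream features can skip
    unreliable joints.
    """
    frame = json.loads(json_text)
    people = []
    for person in frame.get("people", []):
        kp = np.array(person["pose_keypoints_2d"], float).reshape(-1, 3)
        xy = kp[:, :2].copy()
        xy[kp[:, 2] < min_confidence] = np.nan  # mask unreliable joints
        people.append(xy)
    return people
```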
As previously noted, research on motion sensing in CSCL operates at different levels of complexity (i.e., individual learning, small group interactions, and localizing a larger number of participants in open spaces). Some studies merely look to quantify the amount of movement; others examine body synchrony, while still others are concerned with recognizing specific types of gestures or body movements. The specific approaches utilized, as well as how they are operationalized, are necessarily shaped by the research questions being explored. At the individual level, several studies have looked at the potential of motion sensing for understanding learning and constructing models of the student learning experience. Schneider and Blikstein (2015), for example, tackled this question by examining prototypical body positions among pairs of learners completing an activity with a tangible user interface. The researchers categorized body postures using unsupervised machine learning algorithms and identified three prototypical states: an “active” posture (positively correlated with learning gains), a “semi-active” posture, and a “passive” posture (negatively correlated with learning gains). Interestingly, the best predictor of learning was the number of times that participants transitioned between those states, suggesting that learning benefits from a higher number of iterations between “thinking” about the problem and “acting” on it. Researchers interested in intelligent tutoring systems (ITSs) have also used motion and affective sensing to predict levels of engagement, frustration, and learning using supervised machine learning algorithms. Grafsgaard et al. (2014), for example, found indicators of engagement and frustration by leveraging features about face and gesture (e.g., hand-to-face gestures) and indicators of learning by using face and posture features.
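The clustering-plus-transitions analysis just described can be sketched as below. The minimal k-means here stands in for the unsupervised algorithms the authors used; the feature encoding, number of clusters, and initialization are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def cluster_postures(frames, k=3, iters=20, seed=0):
    """Assign each posture frame to one of k prototypical states.

    frames: (n, d) array of posture features (e.g., flattened joint
    coordinates per frame). A bare-bones k-means with random
    initialization; fine for illustration, not for production use.
    """
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(frames[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = frames[labels == j].mean(axis=0)
    return labels

def count_transitions(labels):
    """Number of state changes between adjacent frames (the transition
    count used as a learning predictor above)."""
    labels = np.asarray(labels)
    return int(np.sum(labels[1:] != labels[:-1]))
```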
These two papers highlight the opportunity for motion sensing to help us better identify patterns of engagement that may be indicative of improved learning or certain affective states. Specifically, gesture sensing can help researchers predict learning gains or affective states. At the group level, the most basic uses of gesture data involve the quantification of bodily movement among pairs of students collaborating on a given task. For example, Martinez-Maldonado et al. (2017) presented an application of the Kinect by mounting it above an interactive tabletop to associate actions logged by the multitouch interface with the author of each touch. The authors applied a sequential pattern mining algorithm on these logs to detect patterns that distinguished high- from low-performing small groups in a collaborative concept mapping task. Worsley and Blikstein (2013) used hand/wrist joint movement data to extract patterns of multimodal behaviors of dyads completing an engineering design activity. The gestural data, when taken in conjunction with audio and electrodermal activation data, were beneficial in codifying the types of actions students were taking at different phases of the building activity. Such information about student gestural engagement could also be used in a way that is analogous to analyses of turn-taking. Moreover, it can help answer questions about the extent of each participant’s physical contributions to a given learning activity, or the patterns of participation that emerge between participants as they collaborate with one another. In the same vein, Won et al. (2014b) found that body movements captured by a Kinect sensor could predict learning with 85.7% accuracy in a teacher–student dyad; the top three features were the standard
deviation of the head and torso of the teacher, the skewness of the students’ head and torso, and the mean of the teacher’s left arm. Other studies have looked at the relationship between body synchronization and group interaction. Won et al. (2014a), for example, found that nonverbal synchrony predicted creativity in 52 collaborative dyads. Models trained with synchrony scores could predict low or high scores of creativity with 86.7% accuracy. In educational contexts, Schneider and Blikstein (2015) looked for the salience of body synchronization by considering the correlation between body position similarity and learning gains. However, the results indicated no correlation between learning and body synchronization in this context. Similarly, Spikol et al. (2017) paired a number of computer vision systems to detect the wrist movement and face orientation of triads of students performing an electronic toy prototyping task. Results indicated that some features, such as the distance between all learners’ hands and the number of times they look at a shared screen, are promising in helping to identify physical engagement, synchronicity, and accountability of students’ actions. Concretely, motion sensing among groups of learners can be used to explain success within a given collaborative experience as determined through the relative participation of each individual and their level of synchrony or proximity to their peers. Researchers are also finding ways to leverage gestural data as a means of streamlining and improving the data analysis process. In a study that involved pairs of students completing engineering design tasks, Worsley et al. (2015) were able to show that using body posture information to automatically segment data into meaningful chunks led to analyses that provided stronger correlations with student performance and student learning.
In this particular study, the authors used automatically detected changes in head pose relative to learners’ partners to demarcate the beginning of a new phase. This approach was compared to human annotation of phases and to a fixed-window approach, with the body-position-based segmentation proving to be quite beneficial. Hence, the utility of gesture data does not necessarily have to be restricted to a final correlation with learning or performance. It can, instead, be used to more adequately group chunks of data into meaningful representations. In this line of work, computational methods provide ways to model students’ behaviors. In another emerging body of work, researchers are exploring the use of gestures, in conjunction with other modalities, to better understand embodied learning in mathematics and science. For example, Abrahamson’s Mathematical Inquiry Trainers (Howison et al. 2011) and Robb Lindgren’s ELASTICS (Kang et al. 2018) platforms represent computer-supported tools that help facilitate student learning with the assistance of a more knowledgeable interviewer. In both instances, the interviewer serves as a collaborator to help guide the student toward learning and articulating mathematical or scientific ideas. In the case of Abrahamson’s work, students use their hands to reason about fractions, either through a touch screen interface, a Nintendo Wiimote, or a Kinect sensor. In the case of ELASTICS, students use gestures to instantiate different mathematical operations. For example, in Kang et al. (2018), participants determine a gestural sequence that will allow them to produce a value of 431. In order to reach this value, students can complete gestures that correspond to add 1, subtract 1, multiply by 10, or divide by 10. These subtasks
634
B. Schneider et al.
exist within a larger task of helping students reason about exponential growth. Crucial to both Abrahamson’s and Lindgren’s work is the opportunity to create gestural interfaces that allow for embodied experiences, and the availability of visual representations that individuals and/or pairs can utilize to refine their thinking and that serve as a context for discussion. This kind of work exemplifies the potential of motion sensor data to support novel, embodied, collaborative learning. These different examples suggest that while there are some similarities and accepted practices in how to analyze gesture data (e.g., the use of joint angles as opposed to three-dimensional x, y, z data), there are still several areas where new innovations and ideas are emerging. A construct analogous to joint visual attention, for example, does not yet seem to exist within the gesture space. Instead, researchers have found and explored different metrics that aim to characterize the nature of collaboration among groups or pairs of learners.
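One of the recurring metrics above, body synchrony, can be sketched as a windowed correlation between two participants’ movement time series. The window size, sampling rate, and use of Pearson correlation are illustrative choices, not the exact metrics of the studies cited.

```python
import numpy as np

def windowed_synchrony(motion_a, motion_b, window=30):
    """Mean windowed correlation between two movement time series.

    motion_a, motion_b: equal-length 1-D arrays of a movement quantity
    per frame (e.g., head displacement). The series are split into
    non-overlapping windows (e.g., 1 s at 30 Hz) and the per-window
    Pearson correlations are averaged; windows without variance in
    either series are skipped, since correlation is undefined there.
    """
    a, b = np.asarray(motion_a, float), np.asarray(motion_b, float)
    scores = []
    for start in range(0, len(a) - window + 1, window):
        wa, wb = a[start:start + window], b[start:start + window]
        if wa.std() > 0 and wb.std() > 0:
            scores.append(np.corrcoef(wa, wb)[0, 1])
    return float(np.mean(scores)) if scores else 0.0
```

Identical movement yields a score near 1, mirror-opposite movement a score near -1.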
3.3 Comparison Between Gaze Sensing and Gesture Sensing
In this section, we compare the state of the field in gesture and gaze sensing to illustrate opportunities and challenges in studying small collaborative groups using gaze and motion sensing. Both areas of research have evolved at different paces and have contributed unique findings to the study of collaborative learning. Table 1 summarizes the main commonalities and differences across those two methodologies:

Table 1 A comparison of the state of research using gaze and motion sensing based on the work reviewed in this chapter

Raw measures. Gaze sensing: x, y coordinates of gaze in a 2D space (e.g., remote or mobile eye-tracker). Motion sensing: x, y, z coordinates of dozens of body joints in a 3D space (e.g., Kinect sensor).

Accuracy. Gaze sensing: accurate, depending on the eye-tracker used. Motion sensing: noisier and susceptible to occlusion.

Constructs. Gaze sensing: joint visual attention (Schneider and Pea 2013), attentional similarity (Sharma et al. 2013). Motion sensing: body movement (Worsley and Blikstein 2013), prototypical states (Schneider and Blikstein 2015), physical synchrony (Won et al. 2014a).

Methodology. Gaze sensing: well established, with strong conventions (Richardson and Dale 2005). Motion sensing: in development; currently no strong conventions.

Models. Gaze sensing: glass-box traditional statistical models (e.g., Sharma et al. 2014); higher explainability, lower predictive value. Motion sensing: black-box machine learning models (e.g., Won et al. 2014a, b); lower explainability, higher predictive value.

Theoretical basis. Gaze sensing: well documented and specific, from developmental (Tomasello 1995) and social (Richardson et al. 2007) psychology. Motion sensing: emerging and less prescriptive, e.g., embodied cognition (Howison et al. 2011).
A striking difference between those two fields of research is that gaze sensing—through the study of joint visual attention—has developed well-established conventions for visualizing and capturing collaborative processes. This work leverages foundational theories in developmental psychology and has specific hypotheses about the role of visual synchronization in social interactions. Because the raw measures are simpler and the theory is more prescriptive, this has allowed researchers to use more transparent (“glass-box”) statistical models (e.g., Richardson et al. 2007) and design innovative interventions to support collaborative processes—for example, by building systems where participants’ gaze can be displayed in real time and shared within the group (Schneider and Pea 2013). Motion sensing, on the other hand, offers larger and more complex datasets. Because its theoretical frameworks are less specific (e.g., embodied cognition), there is a wider variety of measures and models being used, with more researchers leveraging “black-box” models (i.e., supervised machine learning algorithms) to predict collaborative processes (e.g., Won et al. 2014b). While those models are designed to provide accurate predictions, they tend to be less transparent and offer fewer opportunities for designing interventions. In summary, gaze sensing has benefited from simpler constructs, more prescriptive theoretical frameworks, and accurate sensors to reach a certain level of maturity. Motion sensing, on the other hand, has untapped potential: the technology is rapidly improving and there are new opportunities to make theoretical contributions, develop innovative measures of group interaction, and design interventions to support collaborative learning processes.
3.4 Fusion
While most of the current body of work has looked at gaze and motion sensing in isolation, there is a growing interest in combining multiple sources of data to provide a more complete depiction of complex social aspects of human activity that would be hard to model with only one modality of group interaction. In the examples discussed above, multiple data sources have been used to model different aspects of collaborative learning. For instance, gaze sensing is commonly paired with information generated by the learning systems or with transcripts (Schneider and Pea 2015). Gestural data have been enriched by combining them with quantitative traces of speech, such as sound level (Spikol et al. 2017) or turn-taking patterns (Martinez-Maldonado et al. 2017), to give meaning to gestures and poses. However, the process of fusing across data streams brings a number of challenges, from low-level technical issues, such as data modeling and pattern extraction, to higher-level aspects, such as sensemaking, data interpretation, and the identification of implications for teaching, learning, or collaboration. Some low-level challenges in fusing gaze, gesture, and other sources of data are associated with deciding what features to extract from the data, and how to segment or group the multiple data streams with the purpose of jointly modeling a meaningful
indicator of collaboration or learning. In terms of multifeature extraction, researchers often overlook the opportunity to extract multiple pieces of information from a single data source. In the case of gaze data, for example, multifeature extraction includes determining fixations, saccades, and pupil dilation from the single data source (i.e., the eye tracker). From skeletal tracking information, one might extract pointwise velocity, angular displacement, or distance between body points. The challenge here is in giving interpretative meaning to the selected features that can be obtained from the data for particular contexts. This challenge also applies to how the data is grouped or segmented. Summary statistics represent a simple approach for investigating multimodal data. In principle, this approach merges all of the data from a given modality into a single representation. Researchers commonly use values of mean, median, mode, range, maximum, and minimum. This accomplishes fusion across time, but can grossly oversimplify the data representation. Instead, researchers may wish to “group” data into meaningful segments. Within this paradigm, data can be segmented into chunks that range in size from the entire dataset all the way down to individual data points. One advantage of segmentation is that it can help surface patterns and trends that are localized to particular segments. For example, Worsley and Blikstein (2017) explored the affordances of segmentation by comparing three different approaches. These authors ultimately found that having a combination of semantically meaningful segments and a large number of segments yielded the most meaningful results. At a higher level, there are challenges in giving meaning to fused data across streams and participants. Fundamental to multimodal learning analytics is the idea that a given data stream can only be interpreted in the context of other data streams. 
However, a key question remains: on what basis can low-level indicators serve as proxies for higher order collaborative learning constructs? From a research perspective, this is a fundamental modeling problem that involves encoding low-level events in data representations that contain a certain amount of contextual information to facilitate higher level abstraction. This is manifested in the learning analytics and educational data mining communities in various forms, such as stealth assessment (Shute and Ventura 2013) and evidence-centered design (Mislevy et al. 2012). At the intersection between CSCL and learning analytics, this challenge has been described as mapping “from clicks to constructs” (Wise et al. this volume). From a teaching and learning perspective, modeling group constructs from multiple data streams is a prerequisite for creating interfaces that are intelligible to teachers and learners, who commonly do not have a strong analytical background. Until now, most multimodal analytics for group activity have remained the preserve of researchers (Ochoa 2017). Imbuing traces of gaze, gesture, and other sources of data with contextual meaning can bring teachers and students into the sensemaking and interpretation loop. One promising approach is that of Echeverria et al. (2019), who proposed a modeling representation that encodes each modality of data into one or more of the n columns of a matrix and segments that contain instances of group behaviors into the m rows. From this representation, a set of group visualizations was proposed, each presenting information related to one modality of teamwork, namely speech, arousal, positioning, and logged actions.
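The multifeature extraction described earlier in this section (e.g., pointwise velocity or inter-joint distance from skeletal data) can be sketched as below; the Kinect-style array layout and the joint indices chosen for the two hands are illustrative assumptions.

```python
import numpy as np

def skeletal_features(joints):
    """Extract simple low-level features from skeletal tracking data.

    joints: (t, j, 3) array of x, y, z positions for j body joints over
    t frames (Kinect-style output). Returns per-joint point-wise
    velocities between consecutive frames, plus the frame-wise distance
    between two hand joints.
    """
    joints = np.asarray(joints, float)
    # Point-wise velocity: displacement of each joint between frames.
    velocity = np.linalg.norm(np.diff(joints, axis=0), axis=2)
    # Inter-joint distance, e.g., between the two hands at each frame.
    left_hand, right_hand = 7, 11  # hypothetical joint indices
    hand_distance = np.linalg.norm(
        joints[:, left_hand] - joints[:, right_hand], axis=1)
    return velocity, hand_distance
```

Features like these form the columns of the modality-by-segment matrix representation discussed above.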
In summary, there are numerous technical and sensemaking-related challenges in combining multiple data sources that need to be addressed. However, the potential benefits, such as the possibility of creating interpretable group models, generating a deeper understanding of collaborative learning, and deploying user interfaces that can provide tailored feedback on colocated activities, outweigh these challenges.
4 The Future

The last decade has seen an increasing number of research projects involving gaze and motion sensing. This is a positive development for the CSCL community. These methodologies provide researchers with large amounts of process data and new tools to analyze them. Not only do they help automate time-consuming analyses, but they also provide a new perspective for understanding collaborative processes. Additionally, they provide researchers with opportunities to develop real-time interventions (e.g., through dashboards or awareness tools; Schneider and Pea 2013). These advances are not without challenges. For example, most of the work presented in this chapter is about dyads, whereas collaborative groups are often larger than two participants. This poses new opportunities for adapting multimodal measures of collaboration to larger groups (e.g., does JVA require all participants, or just two group members, to look at the same place at the same time?). Researchers are slowly starting to look at larger social contexts, but this is currently an understudied area of research. Another major area of work is the contribution of multimodal studies to theory. Researchers are designing more sophisticated measures of visual synchronization and collaboration (e.g., leadership behaviors, with-me-ness) and turning dual eye-tracking setups into interventions to support collaborative processes. However, these kinds of empirical studies need to be replicated and refined before they can be established as significant theoretical contributions to the field of CSCL. More importantly, theories of collaboration have not yet benefited from more fine-grained multimodal measures of collaborative processes. Finally, it should be noted that most studies are unimodal or only combine two data streams.
Very few projects have attempted to combine data sources; data fusion presents new opportunities for studying collaborative learning groups and capturing more sophisticated constructs. With these new opportunities also come increased concerns about data privacy: how should we handle questions around the collection, storage, and analysis of potentially sensitive datasets? It will be important for CSCL researchers to carefully reflect on these concerns as they look to drive innovation and advance knowledge. In the coming decade, we expect to see more affordable and accurate sensors emerge, as well as easy-to-use toolkits for analyzing multimodal datasets. With an increased focus on data-driven approaches, we believe that multimodal sensing will become a common tool for educational researchers. Those new tools will provide new ways to build theories of collaboration and design interventions to support social interactions. We agree with Wise and Schwarz (2017), who argue that CSCL has to embrace these new methods if it wants to stay relevant in an increasingly data-driven world.
References

Baker, M., Hansen, T., Joiner, R., & Traum, D. (1999). The role of grounding in collaborative learning tasks. In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 31–63; 223–225). Elsevier.
Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307–359.
Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.
Cao, Z., Simon, T., Wei, S.-E., & Sheikh, Y. (2017). Realtime multi-person 2D pose estimation using part affinity fields. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7291–7299.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception–behavior link and social interaction. Journal of Personality and Social Psychology, 76(6), 893–910.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. Perspectives on Socially Shared Cognition, 13(1991), 127–149.
Clynes, M. (1977). Sentics: The touch of emotions. Garden City: Anchor Press.
d’Angelo, S., & Schneider, B. (under review). Shared gaze visualizations in collaborative work: Past, present and future [Manuscript submitted for publication].
Dillenbourg, P., Baker, M., Blaye, A., & O’Malley, C. (1996). The evolution of research on collaborative learning. In P. Reimann & H. Spada (Eds.), Learning in humans and machine: Towards an interdisciplinary learning science (pp. 189–211). Emerald.
Dillenbourg, P., & Traum, D. (2006). Sharing solutions: Persistence and grounding in multimodal collaborative problem solving. The Journal of the Learning Sciences, 15(1), 121–151.
Echeverria, V., Martinez-Maldonado, R., & Buckingham Shum, S. (2019). Towards collaboration translucence: Giving meaning to multimodal group data. In Proceedings of the 2019 CHI conference on human factors in computing systems (paper 39, pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3290605.3300269.
Grafsgaard, J. F., Wiggins, J. B., Vail, A. K., Boyer, K. E., Wiebe, E. N., & Lester, J. C. (2014). The additive value of multimodal features for predicting engagement, frustration, and learning during tutoring. In Proceedings of the sixteenth ACM international conference on multimodal interaction (pp. 42–49). Association for Computing Machinery. https://doi.org/10.1145/2663204.2663264.
Güler, R. A., Neverova, N., & Kokkinos, I. (2018). DensePose: Dense human pose estimation in the wild. In Proceedings of the 2018 IEEE/CVF conference on computer vision and pattern recognition (pp. 7297–7306).
Howison, M., Trninic, D., Reinholz, D., & Abrahamson, D. (2011). The mathematical imagery trainer: From embodied interaction to conceptual learning. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1989–1998). Association for Computing Machinery. https://doi.org/10.1145/1978942.1979230.
Huey, E. B. (1908). The psychology and pedagogy of reading. New York: The Macmillan Company.
Jermann, P., Mullins, D., Nüssli, M.-A., & Dillenbourg, P. (2011). Collaborative gaze footprints: Correlates of interaction quality. In H. Spada, G. Stahl, N. Miyake, & N. Law (Eds.), Connecting computer-supported collaborative learning to policy and practice: CSCL2011 conference proceedings (Vol. 1, pp. 184–191). International Society of the Learning Sciences.
Johnen, K. (1929). Measures energy used in piano. Popular Science Monthly, 69.
Kang, J., Lindgren, R., & Planey, J. (2018). Exploring emergent features of student interaction within an embodied science learning simulation. Multimodal Technologies and Interaction, 2(3), 39.
Leong, C. W., Chen, L., Feng, G., Lee, C. M., & Mulholland, M. (2015). Utilizing depth sensors for analyzing multimodal presentations: Hardware, software and toolkits. In ICMI 2015—Proceedings of the 2015 ACM international conference on multimodal interaction (pp. 547–556). Association for Computing Machinery. https://doi.org/10.1145/2818346.2830605.
Martinez-Maldonado, R., Kay, J., Buckingham Shum, S., & Yacef, K. (2017). Collocated collaboration analytics: Principles and dilemmas for mining multimodal interaction data. Human-Computer Interaction, 34(1), 1–50.
Meier, A., Spada, H., & Rummel, N. (2007). A rating scheme for assessing the quality of computer-supported collaboration processes. International Journal of Computer-Supported Collaborative Learning, 2(1), 63–86.
Mislevy, R. J., Behrens, J. T., Dicerbo, K. E., & Levy, R. (2012). Design and discovery in educational assessment: Evidence-centered design, psychometrics, and educational data mining. Journal of Educational Data Mining, 4(1), 11–48.
Ochoa, X. (2017). Multimodal learning analytics. In C. Lang, G. Siemens, A. F. Wise, & D. Gašević (Eds.), The handbook of learning analytics (pp. 129–141). SOLAR.
Ochoa, X., Dominguez, F., Guamán, B., Maya, R., Falcones, G., & Castells, J. (2018). The RAP system: Automatic feedback of oral presentation skills using multimodal analysis and low-cost sensors. In Proceedings of the 8th international conference on learning analytics and knowledge (pp. 360–364). ACM. https://doi.org/10.1145/3170358.3170406.
Papavlasopoulou, S., Sharma, K., Giannakos, M., & Jaccheri, L. (2017). Using eye-tracking to unveil differences between kids and teens in coding activities. In Proceedings of the 2017 conference on interaction design and children (pp. 171–181). ACM.
Richardson, D. C., & Dale, R. (2005). Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cognitive Science, 29(6), 1045–1060.
Richardson, D. C., Dale, R., & Kirkham, N. Z. (2007). The art of conversation is coordination: Common ground and the coupling of eye movements during dialogue. Psychological Science, 18(5), 407–413.
Schneider, B. (2019). Unpacking collaborative learning processes during hands-on activities using mobile eye-tracking. In The 13th international conference on computer supported collaborative learning (Vol. 1, pp. 41–48). International Society of the Learning Sciences.
Schneider, B., & Blikstein, P. (2015). Unraveling students’ interaction around a tangible interface using multimodal learning analytics. Journal of Educational Data Mining, 7(3), 89–116.
Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397.
Schneider, B., & Pea, R. (2015). Does seeing one another’s gaze affect group dialogue? A computational approach. Journal of Learning Analytics, 2(2), 107–133. https://doi.org/10.18608/jla.2015.22.9.
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2015). 3D tangibles facilitate joint visual attention in dyads. In Proceedings of the 11th international conference on computer supported collaborative learning (Vol. 1, pp. 158–165). International Society of the Learning Sciences.
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2018). Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups. International Journal of Computer-Supported Collaborative Learning, 13(3), 241–261.
Sharma, K., Jermann, P., & Dillenbourg, P. (2014). “With-me-ness”: A gaze-measure for students’ attention in MOOCs. In Proceedings of the 11th international conference of the learning sciences (pp. 1017–1022). ISLS.
Sharma, K., Jermann, P., Nüssli, M. A., & Dillenbourg, P. (2013). Understanding collaborative program comprehension: Interlacing gaze and dialogues. In N. Rummel, M. Kapur, M. Nathan, & S. Puntambekar (Eds.), To see the world and a grain of sand: Learning across levels of space, time, and scale: CSCL 2013 conference proceedings: Volume 1. Full papers & symposia (pp. 430–437). International Society of the Learning Sciences.
Shute, V. J., & Ventura, M. (2013). Stealth assessment: Measuring and supporting learning in video games. London: MIT Press.
Simon, T., Joo, H., Matthews, I., & Sheikh, Y. (2017). Hand keypoint detection in single images using multiview bootstrapping. In Proceedings of the 2017 IEEE conference on computer vision and pattern recognition (pp. 1145–1153). IEEE.
Spikol, D., Ruffaldi, E., & Cukurova, M. (2017). Using multimodal learning analytics to identify aspects of collaboration in project-based learning. International Society of the Learning Sciences.
Stahl, G. (2007). Meaning making in CSCL: Conditions and preconditions for cognitive processes by groups. In Proceedings of the 8th international conference on computer supported collaborative learning (pp. 652–661). ACM.
Tomasello, M. (1995). Joint attention as social cognition. In C. Moore & P. J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 103–130). Hillsdale, NJ: Lawrence Erlbaum.
Wei, S.-E., Ramakrishna, V., Kanade, T., & Sheikh, Y. (2016). Convolutional pose machines. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4724–4732.
Werner, H. (1937). Process and achievement—A basic problem of education and developmental psychology. Harvard Educational Review, 7, 353–368.
Wise, A. F., Knight, S., & Buckingham Shum, S. (this volume). Collaborative learning analytics. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Wise, A. F., & Schwarz, B. B. (2017). Visions of CSCL: Eight provocations for the future of the field. International Journal of Computer-Supported Collaborative Learning, 12(4), 423–467.
Won, A. S., Bailenson, J. N., & Janssen, J. H. (2014b). Automatic detection of nonverbal behavior predicts learning in dyadic interactions. IEEE Transactions on Affective Computing, 5(2), 112–125.
Won, A. S., Bailenson, J. N., Stathatos, S. C., & Dai, W. (2014a). Automatically detected nonverbal behavior predicts creativity in collaborating dyads. Journal of Nonverbal Behavior, 38(3), 389–408.
Worsley, M., & Blikstein, P. (2013). Towards the development of multimodal action based assessment. In Proceedings of the third international conference on learning analytics and knowledge (LAK ’13) (pp. 94–101). ACM. https://doi.org/10.1145/2460296.2460315.
Worsley, M., & Blikstein, P. (2017). A multimodal analysis of making. International Journal of Artificial Intelligence in Education, 28(3), 385–419.
Worsley, M., Scherer, S., Morency, L.-P., & Blikstein, P. (2015). Exploring behavior representation for learning analytics. In ICMI 2015—Proceedings of the 2015 ACM international conference on multimodal interaction (pp. 251–258). ACM. https://doi.org/10.1145/2818346.2820737.
Further Readings

Abrahamson, D., Shayan, S., Bakker, A., & van der Schaaf, M. (2015). Eye-tracking Piaget: Capturing the emergence of attentional anchors in the coordination of proportional motor action. Human Development, 58, 218–244. https://doi.org/10.1159/000443153. This paper presents an empirical analysis that combines gaze and a gesture-based interface to surface the attentional anchors that students utilize when trying to represent fractions using their hands. The learning context is a Piagetian-style interview, in which a pupil and an adult discuss fractions. The pupil is faced with a challenge, and the adult serves as a learning partner with whom the student may engage as they describe their thoughts and actions. However, the experience is also supported by a computational interface that allows the student to use gestures to control a visual display. By coupling eye tracking with the computer-supported learning interface, the authors are able to see the otherwise invisible attentional anchors that students utilize to recognize proportions and reduce the complexity of the task.

Cukurova, M., Luckin, R., Millán, E., & Mavrikis, M. (2018). The NISPI framework: Analysing collaborative problem-solving from students’ physical interactions. Computers and Education, 116, 93–109. https://doi.org/10.1016/j.compedu.2017.08.007. The Non-Verbal Index of Students’ Physical Interactivity (NISPI) framework utilizes multimodal data to model students’ collaborative problem-solving. The authors operationalize synchrony, individual accountability, equality, and intra-individual variability. Each construct is based on the point-wise classification of student activity as active, semi-active, or passive, and each construct uses these point-wise classifications in a different way. For example, synchrony looks at the extent to which all participants in the group are exhibiting the same level of activity, which, in this paper, is based on the active state. Intra-individual variability looks at changes in activity between adjacent data points. The authors find that automatic annotation of these different activity states has utility for examining a number of different constructs related to collaborative problem-solving, and that these automatic codes closely align with human-generated annotations.
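To make this kind of aggregation concrete, the sketch below shows one way point-wise activity codes could be combined into group- and individual-level indices. It is our own illustrative simplification under stated assumptions, not the authors' implementation, and the function names and toy codes are hypothetical.

```python
def synchrony(codes, target="active"):
    """Fraction of time points at which ALL group members share the target
    activity level (one simplified reading of a synchrony index)."""
    members = list(codes.values())
    n = len(members[0])
    return sum(all(m[t] == target for m in members) for t in range(n)) / n

def intra_individual_variability(series):
    """Fraction of adjacent time points at which one learner's
    activity level changes."""
    changes = sum(a != b for a, b in zip(series, series[1:]))
    return changes / (len(series) - 1)

# Toy point-wise codes for a dyad over four time points.
codes = {
    "s1": ["active", "active", "passive", "active"],
    "s2": ["active", "semi-active", "passive", "active"],
}
sync = synchrony(codes)                            # all-active at t=0 and t=3
var2 = intra_individual_variability(codes["s2"])   # s2 changes at every step
```

The point of the sketch is only that a single stream of coarse activity codes can feed several distinct collaboration constructs, depending on how it is aggregated.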
Martinez-Maldonado, R., Kay, J., Buckingham Shum, S., & Yacef, K. (2019). Collocated collaboration analytics: Principles and dilemmas for mining multimodal interaction data. Human-Computer Interaction, 34(1), 1–50. In this paper, Martinez-Maldonado, Kay, Buckingham Shum, and Yacef describe six studies that showcase different ways to study colocated group work, considering the meaning of gestures over interactive surfaces and the embodied strategies of groups of learners in physical space. In some of these studies, they combine multiple streams of data, including speech detection, activity logs, and interactions with physical objects, in both experimental and authentic classroom settings. The authors recommend a series of principles that can be applied to multimodal analytics and also describe a series of dilemmas in terms of data modeling, analysis, and sensemaking.

Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2018). Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups. International Journal of Computer-Supported Collaborative Learning, 13(3), 241–261. In this ijCSCL paper, Schneider, Sharma, Cuendet, Zufferey, Dillenbourg, and Pea describe a methodology for studying colocated groups: mobile eye-trackers. The authors provide a comprehensive description of the data collection and analysis processes so that other researchers can take advantage of this cutting-edge technology for capturing collaborative processes. They provide empirical findings showing that imbalances in leadership behaviors (captured by eye movements) are significantly correlated with learning gains. They conclude with some implications for automatically analyzing students’ interactions using dual eye-trackers.

Stahl, G. (2006). Group cognition: Computer support for building collaborative knowledge. Cambridge: MIT Press.
To facilitate the interpretability of the results of multimodal analytics of group activity, it has been recommended to build the analysis and design on foundational CSCL work. One such foundation, which considers the evidence generated by learners during their interactions as the core unit of analysis, is group cognition. Group cognition is focused on understanding how two or more people communicate, via spoken language and other channels of communication, and interact with artifacts within a sociocultural setting. This work can underpin the analysis and fusion of multiple sources of evidence to understand how knowledge is built as a group, and how to connect low-level data with higher-level constructs in the learning sciences.
Video Data Collection and Video Analyses in CSCL Research Carmen Zahn, Alessia Ruf, and Ricki Goldman
Abstract The purpose of this chapter is to examine significant advances in the collection and analysis of video data in computer-supported collaborative learning (CSCL) research. We demonstrate how video-based studies create robust and dynamic research processes. The chapter starts with an overview of how video analysis developed within CSCL by way of its pioneering roots. Linked throughout the chapter are the theoretical, methodological, and technological advances that keep advancing CSCL research. Specific empirical and experimental research examples will illustrate current and future advances in data collection, transformation, coding, and analysis. Research benefits and challenges, including the current state of understanding from observations of single, multiple, or 360° camera recordings, will also be featured. In addition, eye-tracking and virtual reality environments for collecting and analyzing video data are discussed as they become new foci for future CSCL research.

Keywords Video data · Video analysis · Learning research · Group research · Psychological methods
1 Definitions and Scope

The particularity of rich video data compared to other data-gathering methods in the learning sciences is that video data make both verbal and nonverbal social interactions in learning situations enduringly visible and audible to researchers. In this regard, video data differ from outcome data (e.g., quantitative data gathered in learning experiments systematically examining treatments and their effects), because they can open the “black box” of collaborative learning processes. The scope of this chapter is to illuminate the scholarly understanding of existing and future methods for video data collection and data analysis in CSCL research in a practical fashion. The chapter maps past, present, and future innovative advances, with specific examples selected to demonstrate the methods of video data collection and data analysis that learning science and CSCL researchers in a range of fields (e.g., Zheng et al. 2014) have been using for a better understanding of complex collaborative learning processes. CSCL video methods span the entire spectrum of the social sciences (Brauner et al. 2018), which includes qualitative research methods such as case-based fieldwork and video ethnographic accounts, as well as quantitative methods such as experimental and data-driven statistical research, which includes learning analytics accounts. The majority of CSCL research articles in the International Journal of Computer-Supported Collaborative Learning (IJCSCL), as well as other related journals and volumes, tend to consist of mixed-methods studies, using both case-based and (quasi-)experimental research methods (e.g., Sinha et al. 2015; Zahn 2017). In this chapter, we will provide rich examples of how researchers can use video data both for deep qualitative case studies and with advanced and automated methods for complex visual analyses. Over time, we propose, they may also be used along with learning analytics. We open the chapter with an historical overview of pioneering analog and digital video researchers in the learning sciences and CSCL. We also delve into current research in CSCL video research to explore the benefits and challenges that exist now and will likely exist in the coming years.

C. Zahn (*) · A. Ruf
University of Applied Sciences and Arts Northwestern Switzerland, School of Applied Psychology, Olten, Switzerland
e-mail: [email protected]; [email protected]

R. Goldman
NYU Steinhardt - Educational Communication and Technology, New York, NY, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3_35
Questions about data collection, data transformation, data analysis, and interpretation will be followed by three examples of contemporary research studies. We will also present new approaches to using video data that allow for deeper post hoc observations of recorded learning interactions and for digging into the details of knowledge co-construction and knowledge building in and beyond CSCL research. For example, certain collaborative theoretical approaches, such as complex qualitative interaction analyses (Rack et al. 2019), focus on group coordination and collaboration processes. This interactional approach is especially enhanced by collecting and analyzing video data, which can, if needed, be linked to ethnographic video accounts. The closing sections of this chapter address the current understanding of video data as observations from single or multiple cameras or 360° camera recordings. They also look at the emergence of video data as ways of “looking through people’s eyes” when eye-tracking or virtual reality tools are used for collecting and analyzing video data (Greenwald et al. 2017; Sharma et al. 2017). Such tools represent promising areas for future developments. A deeper understanding of how a range of theories and collaborative methods and tools influence the research process can be found in Goldman (2007a, b, Parts 1 and 4), Derry et al. (2010), as well as in Goldman et al. (2014).
2 History and Development: Pioneering Video Research

The twentieth century ushered in a range of new visual media forms such as social documentary, fictional photography, and ethnographic filmmaking. To study this topic more deeply, refer to the AMC filmsite called The History of Film (https://www.filmsite.org/pre20sintro2.html). The affordances of both photography and film were soon adopted by sociologists, anthropologists, and ethnographers around the world as tools for studying the lives of people at home, school, work, or play, in places both near and far. For example, anthropologist Margaret Mead and cybernetician Gregory Bateson used the film camera as a tool for social and cultural documentation, producing a film called Bathing Babies in Three Cultures in 1951 based on Mead’s research comparing the bathing practices of mothers in three countries—New Guinea, Bali, and the United States. Mead, ever the futurist, imagined a time when there would be 360° cameras (Mead 1973). She thought it would take 10 years. It took 40!
2.1 Foundational Analogue and Digital Video Studies in LS and CSCL
Erickson (2011), looking back on his own early video observations of learning processes in groups, emphasizes the central advantage that made him rely on audiovisual records to study learning in small groups: “. . . I could see who the speakers were addressing as they spoke—a particular individual, a subset of the group, or the whole group. . . . A multimodal and multiparty analysis of the locally situated ecological processes of interaction and meaning making became possible. . . .” (p. 181). The camera he used weighed about 25 pounds, and recording was done on reels that were about 16 inches in diameter.

One of the earliest breakthrough collaborative classroom studies of interpreting digital video data was conducted by Goldman-Segall (1998) at the MIT Media Lab. For over 3 years, her digital video ethnography at a Boston magnet school included videotaping computer activities of, and conversations with, grades 5 and 6 youth and their teachers. During the decade, Goldman-Segall developed the Points of Viewing Theory (1998) and the Perspectivity Methodology (Goldman 2007b), based on Clifford Geertz’s (1973) notion of layering data to build thick description. Goldman, along with Dong (Goldman and Dong 2007), advanced the ethnographic use of thick description into thick interpretations, which were built collaboratively by layering the diverse views of researchers, teachers, and students. For more than two decades, she designed digital video analysis environments with each new research study. The first environment was a simple HyperCard tool that enabled Goldman to establish categories gleaned from thematically arranged video excerpts that had been transferred onto videodiscs. By using her new tool, called Learning Constellations, collaborating teachers and researchers could annotate, rate, analyze, and interpret the video (1998). Learning Constellations was followed by WebConstellations in 1997 and then by Orion, an online digital video analysis tool for changing our perspectives as an interpretive community (Goldman 2007a). Each of these collaborative studies, and the methods and tools, are described in articles found in the references.

Modern technologies also allowed researchers to be increasingly flexible in studying more complex learning situations comprehensively. For instance, Cobb and colleagues (e.g., Cobb and Whitenack 1996) studied children’s mathematical development in long-term social contexts in a classroom study. Two cameras captured pairs of children collaborating on mathematics problem-solving over a course of 27 lessons. The authors articulate a three-stage method that begins with interpretive episode-by-episode analyses and meta-analyses, resulting in integrated chronologies of children’s social and mathematical developments. Another comprehensive study, the Third International Mathematics and Science Study (TIMSS; Stigler et al. 1999), collected videotaped records of classroom instruction from classrooms around the world. This video-based comparative study aimed at drawing comparisons between national samples. It set a standard for international sampling and video-based methods (Seidel et al. 2005): 231 eighth-grade mathematics lessons from Germany, Japan, and the United States were observed. In each classroom, one lesson was videotaped. The tapes were then encoded, transcribed, and analyzed based on a number of criteria. Analysis focused on the content and organization of the mathematics lessons and on the teaching practices, using software especially developed for this study. According to Stigler et al., an advantage of video compared to real-time observation is that it makes it possible for observers to work collaboratively on the video data. A further advantage is that video facilitates the communication of research results.
Similar advantages were achieved with new approaches when digital video tools entered the scene. Pea and Hoffert (2007) presented a comprehensive video workflow model for using video data in learning science research in ways that allow it to be shared. Their process model goes from the strategic planning of video research to a preproduction phase, and then to the phases of video capturing, coding, storing, and chunking. Analysis then turns into collections of video segments, further statistical analyses, or case descriptions. The model moves from creating video as a means of observation and data collection toward decomposing video for analysis and then toward recomposing video for shared interpretation, collaboration, and discussion in a group or larger community of researchers. Pea and Hoffert thereby suggest that staying as close as possible to the video data during the research process, instead of translating results back and forth, keeps the whole process grounded in the data, including the sharing, presenting, and discussing of results. The authors introduce “WebDIVER,” a streaming media interface for “web-based diving” into the video. Koschmann et al. (2007) applied ethnomethodology to mini-chunks of video data to closely examine how learners form and act in collaborative communities. Their narrative methods used video to compose analytic narratives and stories from their footage.
Powell et al. (2003) developed a seven-step method through their longitudinal study of children’s mathematical development within constructivist learning environments. The method starts with the researcher attentively viewing micro-video, proceeds through stages of identifying critical events, transcribing, and coding, and ends with composing analytic narratives. We will now discuss lessons that have been learned and how to integrate those lessons into future research practices.
3 State of the Art

Video analysis is now a common practice in learning science and CSCL research that spans methodological approaches, be they experimental, quasi-experimental, field research, or case studies (see Derry et al. 2010). Video data are used to capture social and/or human-computer interactions, present moments of learning, and, in qualitative case studies, produce “collaborative learning accounts” (Barron 2003). In this section, we first take a generic methodological perspective that tackles the general challenges and practices of applying video analysis in the learning sciences. Then we highlight specific problems and solutions that arise when video data are used with qualitative, quantitative, or mixed-methods research in CSCL settings, and provide examples of CSCL video collections and analyses. Ethnographic, narrative, problem-based, and design-based methods are also included.
3.1 Benefits and Challenges of Using Video Methods in Learning Science Research
From a methodological viewpoint, researchers agree that video-based research provides highly valuable data on learning processes in collaborative settings. For instance, videos provide detailed process data that can be analyzed in an event-based, but also in a time-sequence-based, approach (for analysis of discrete event sequences, see Chiu and Reimann, this volume). At the same time, such research is highly selective, and researchers’ decisions determine what is being recorded and analyzed. Researchers’ decisions precede the production of video data, adding their points of viewing at all stages of the research (Goldman-Segall 1998). On the one hand, video technologies can be beneficial in that they represent powerful ways of collecting video data with easy-to-use, relatively lightweight, and affordable cameras. They also constitute well-designed web platforms for storing and sharing video data with other researchers, and they are effective tools for deeper analysis and video editing. Despite these notable advantages, as Derry et al. (2010) specify, there are challenges posed to researchers who collect and use video records to conduct research in complex learning environments. These challenges include developing or finding appropriate analytical frameworks and practices for given research goals; identifying available technologies and new tools for reporting and sharing videos; and protecting the data and rights of participants, i.e., ethics and privacy issues. Blikstad-Balas (2017) adds further key challenges: contextualization (getting close enough to a situation to detect details, while always keeping an extra eye on the context); magnification (magnifying small details that might be irrelevant for learners in the situation, even if they may be critically important to researchers); and representation (presenting data in a way that lets others understand and follow scientific interpretations). With respect to the tension between the aforementioned benefits and challenges, Derry et al. (2010) suggest careful consideration of the different phases in the practice of video analysis and interpretation of results. In each of these phases, researchers must be aware of the consequences of their selections, decisions, and the procedures they apply.
3.2 Specification for CSCL Research—Selected Research Examples
In CSCL, we consider specific issues related to the use of video data in computer-supported and collaborative learning. An additional challenge for CSCL settings is that researchers have to integrate or synchronize data streams on social interactions or conversations (recorded in a physical space) with further data (e.g., screen recordings or logs of human–computer interactions). How can this be accomplished in practice? The following three examples illustrate possible solutions: first, a case study offering a qualitative and in-depth analysis of the collaboration process based on online verbal communication, where video was used as an additive; second, an exploratory study following N = 5 groups over the span of a 6-week course, where video analysis based on coding and counting was central and was both conducted and reported in an exemplary way; third, an example from experimental research with a sample of N = 24 pairs of learners, where video analysis was used in a complex and multileveled mixed-methods approach. In the first example, Vogler et al. (2017) report a case study on the emerging complexity of online interactions and the way participants contribute through meaning making in a classroom discussion that took place in a CSCL environment. The research question was how meaning emerges from the collective interactions of individuals. In particular, the researchers investigated how the small groups introduced, sustained, and eventually closed a discussion topic. To this end, computer-mediated discussions of small student groups in class were analyzed. Data were collected by means of screen recording (Camtasia software) to capture the participants’ activities on the computer (e.g., any changes that occurred on the screen display, typing, deleting, or opening of online resources). Furthermore, the researchers used four video cameras to capture the activities and interactions that took
Video Data Collection and Video Analyses in CSCL Research
place in the physical classroom—i.e., the small groups of two to three participants were recorded (e.g., eye gaze away from the screen, body movements, and accessing offline materials). In addition, trained observers took ethnographic notes. From the collected online conversations, the researchers created transcripts, coherence maps (for an example, see Fig. 1), and then spreadsheets showing how individual comments were connected and how threads and topics evolved (Vogler et al. 2017). The authors report microanalyses of those learners’ discourses and present a detailed analysis of the life cycles of two selected discussion threads. The video recordings from the four classroom cameras were used as additional data together with the researchers’ observations. The data streams were synchronized in a tedious process that had to be done manually prior to analysis. It would have been interesting to couple the different data sources (screen recordings, written discussion threads, and video recordings of nonverbal behaviors) using complex and elaborate visual analysis methods, a point to which we will return below. In the second example, Näykki et al. (2017) examined, in an exploratory study, the role of CSCL scripts for regulating discussions during a 6-week-long environmental science course in teacher education. The scripts (i.e., prompts presented on tablet computers) aimed at supporting the planning and reflection of the collaborative process. The authors compared processes of scripted and non-scripted collaborative learning, asking how socio-cognitive and socio-emotional monitoring would emerge in groups depending on the (more or less) active use of such scripts. They also investigated how monitoring activities would transfer to subsequent task work. The study took place in a classroom-like research space, and video data were collected by means of a 360° recording method (for details, see: https://www.oulu.fi/leaf-eng/node/41543).
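Such synchronization of independently recorded data streams usually amounts to timestamp alignment against a common reference event (e.g., a clapperboard visible in both recordings). A minimal sketch, with invented event labels and a hypothetical clock offset, not the procedure actually used in any of the studies cited above:

```python
"""Sketch: align two independently recorded data streams by timestamp.

Hypothetical example: screen-recording events and classroom-video annotations
each carry their own clock; a known offset (estimated from a reference event
visible in both streams) maps one clock onto the other."""

def align_streams(screen_events, video_events, offset_s):
    """Shift video-event timestamps by `offset_s` and merge both streams
    into one chronologically ordered list of (time, source, label) tuples."""
    merged = [(t, "screen", label) for t, label in screen_events]
    merged += [(t + offset_s, "video", label) for t, label in video_events]
    return sorted(merged)

# Invented data: the screen recorder started 12.5 s before the classroom camera.
screen = [(0.0, "opens resource"), (20.0, "types comment")]
video = [(3.0, "looks away from screen"), (9.0, "points at partner's screen")]

timeline = align_streams(screen, video, offset_s=12.5)
for t, source, label in timeline:
    print(f"{t:6.1f}s  [{source}]  {label}")
```

Once merged, the joint timeline can be read as a single chronology of on-screen and in-room behavior, which is essentially what an integrated activity transcript represents.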
The authors extracted 30 h of video data (discussions, movements, and gestures) from five student groups, each captured five times. A multistep analysis method was applied: the video data were first segmented into 30-s events. Each 30-s segment was annotated by a researcher with a description of what had occurred within the segment, resulting in a content log of each video (e.g., group finishes task; one person shows their created mind map to the others; group discusses task completion and suggestions on how to proceed). The content log of each video was complemented with a comprehensive memo of the most salient observations. In a second step, each 30-s segment was examined to see whether group members showed socio-cognitive and socio-emotional monitoring (i.e., behaviors associated with understanding and progress of the study task, content understanding, and socio-emotional support). The subsequent development of categories and the coding procedure are described thoroughly in Näykki et al. (2017). Twenty-five percent of the video data were also coded by an independent coder. Based on these data, frequency analyses were applied for statistical hypothesis testing. Time-based video segmentation was also applied by Sinha et al. (2015) in studying collaborative engagement in CSCL groups, but here the video segments were subjected to observer ratings of the quality of collaborative engagement in small groups (high, moderate, or low) and used for qualitative case studies. In the third example, N = 24 pairs of students were investigated when learning with advanced digital tools in history lessons (Zahn et al. 2010). Two conditions
Fig. 1 Example of a coherence graph, reproduced with kind permission of Jane Vogler
supporting collaborative learning were compared: one in which students used an advanced web-based video tool (WebDIVER, see Pea and Hoffert 2007) and one in which students used a simple video player and a text tool (controls). The advanced tool allowed cutting details out of video sequences and extracting these “pieces of video” in order to comment on them. Students’ interactions with the technology were captured by means of screen recording (Camtasia Studio by TechSmith), and dyadic social interactions were recorded by means of a webcam. To analyze these data, a mixed-methods strategy was applied, combining both types of data in a two-step coding procedure (for subsequent quantitative analyses) and in integrated activity transcripts (for subsequent qualitative case studies). Trained observers first watched the video recordings of social interaction to identify emergent behavior categories and then applied a process of coding and counting. Eight categories of verbal interactions were found in this process (e.g., content-related talk, video-related talk, talk about technical issues, and help seeking). The relative amounts of time spent talking in each category, as proportions of total talking time, were then calculated and compared between conditions. Transcripts of learning episodes were produced for deep analyses of selected cases and specific categories (e.g., content-related talk) from the different conditions. The transcripts synchronized the students’ conversations with their interactions with the digital tools (e.g., typing, submitting comments, playing video, watching, stopping video, rewinding, and making marks with an advanced video function).
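The “coding and counting” step just described can be illustrated with a small sketch: given coded talk episodes as (category, duration) pairs, compute each category’s share of total talking time so that dyads or conditions can be compared. Category names and durations are invented for illustration and do not reproduce the Zahn et al. (2010) data:

```python
"""Sketch: relative talk time per coded category for one dyad."""
from collections import defaultdict

def relative_talk_time(episodes):
    """Map category -> proportion of total talking time."""
    totals = defaultdict(float)
    for category, duration_s in episodes:
        totals[category] += duration_s
    grand_total = sum(totals.values())
    return {c: t / grand_total for c, t in totals.items()}

# Invented coded episodes: (category, duration in seconds).
dyad = [("content", 120.0), ("video", 45.0), ("technical", 15.0), ("content", 60.0)]
shares = relative_talk_time(dyad)
for category, proportion in sorted(shares.items()):
    print(f"{category:10s} {proportion:.1%}")
```

Normalizing by total talking time, rather than comparing raw durations, is what makes dyads with different session lengths comparable across conditions.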
The transcripts were analyzed according to Barron (2003) as “localized accounts” of “successful learning.” Building on this qualitative approach, it would be interesting in further research to return to a quantitative strategy by counting collaboration patterns in dyads from both conditions and comparing their prevalence statistically, thereby testing for significance. Yet, limited resources often force researchers to forgo such mixed-methods approaches. Future developments such as automated analyses, however, could render this option feasible. In sum, these examples show how many decisions are made in the phases of video data collection and analysis: the number and types of cameras used, as well as their placement in the investigated scene; the number of groups and the group sizes under scrutiny; the duration and frequency of video data collection; whether to use transcripts for qualitative in-depth analysis, to develop categories to be coded and counted, or both; whether to use extra visualizations or verbal comments for data exploration; and the selection of results to be presented in a scholarly publication.
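When, as in the Näykki et al. (2017) example, a portion of the material is double-coded by an independent coder, a chance-corrected agreement statistic such as Cohen’s kappa is the usual reliability check. A minimal sketch of fixed-length segmentation plus kappa, with invented codes and durations (not the authors’ actual categories or data):

```python
"""Sketch: fixed 30-s video segmentation and inter-rater agreement (Cohen's kappa)."""
from collections import Counter

def segment_bounds(duration_s, seg_len=30):
    """Return (start, end) bounds of consecutive fixed-length segments."""
    return [(s, min(s + seg_len, duration_s)) for s in range(0, duration_s, seg_len)]

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same segments."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

bounds = segment_bounds(150)                   # a 2.5-min clip -> five 30-s segments
coder1 = ["cog", "cog", "emo", "none", "cog"]  # invented codes, one per segment
coder2 = ["cog", "emo", "emo", "none", "cog"]
print(bounds)
print(cohens_kappa(coder1, coder2))
```

Kappa discounts the agreement expected by chance from the codes’ marginal frequencies, which is why it is preferred over raw percent agreement for coded video segments.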
4 The Future

Video analysis has evolved rapidly alongside recent technological progress (e.g., mobile eye-tracking, social computing, virtual reality). In this section, we will look ahead and include developments such as tracking and automatic data analysis methods from social computing technologies.
4.1 Eye-Tracking in CSCL Research
Eye-tracking as a method to investigate learning behaviors has been widely used in individual learning settings in recent years (for an overview, see Alemdag and Cagiltay 2018; Lai et al. 2013). Mobile eye-tracking, for example, was applied to research on informal learning in museums (Mayr et al. 2009; Wessel et al. 2007), where researchers could reflect on eye-tracking videos afterward together with visitors in order to gain insights into motivational factors and possible effects of exhibition design on learning during a museum visit (vom Lehn and Heath 2007). Although using eye-tracking in CSCL is not unknown (e.g., Stahl et al. 2013), it still seems rather uncommon. Since 2013, only a few studies have been published that used eye-tracking as a method in CSCL research. Among these are the studies by Schneider et al. (2016), Schneider and Pea (2013, 2014), Sharma et al. (2017), and Stahl et al. (2013), which emphasize the advantages and possibilities of eye-tracking as a method to support and research collaboration. Schneider and Pea (2013) investigated collaborative problem-solving situations in which dyads saw the eye gaze of their learning partner on a screen. The authors found that this mediated joint visual attention helped dyads achieve a higher quality of collaboration and increased learning gains. These results indicate that joint visual attention is of great importance in collaborative problem-solving settings, as it fosters an equal understanding of the problem (Stahl et al. 2013; Zemel and Koschmann 2013). In a follow-up study, Schneider and Pea (2014) examined collaborative learning processes in dyads working remotely in different rooms. Similar to their previous study (Schneider and Pea 2013), participants were able to see the gaze of their learning partner on the screen. Using eye-tracking data, Schneider and Pea (2014) could roughly predict collaboration quality with an accuracy between 85% and 100%.
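One common operationalization of joint visual attention in dual eye-tracking setups is the share of synchronized samples in which both partners’ gaze points on a shared screen fall within a fixed pixel radius of each other. A minimal sketch with an invented radius and invented gaze samples (not the exact measure or thresholds of the studies cited above):

```python
"""Sketch: joint visual attention as the fraction of synchronized gaze samples
in which two partners' on-screen gaze points lie within a pixel radius."""
import math

def joint_attention_ratio(gaze_a, gaze_b, radius_px=70):
    """Fraction of paired samples whose gaze points are closer than `radius_px`."""
    joint = sum(math.dist(p, q) <= radius_px for p, q in zip(gaze_a, gaze_b))
    return joint / len(gaze_a)

# Four invented synchronized gaze samples (x, y) per partner.
partner_a = [(100, 100), (400, 300), (410, 310), (700, 90)]
partner_b = [(130, 140), (900, 500), (400, 340), (120, 600)]
print(joint_attention_ratio(partner_a, partner_b))
```

A higher ratio indicates that the partners attend to the same screen region more often, which is the kind of gaze-based variable that has been related to collaboration quality.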
Hence, joint attention (which involves gaze) is an important nonverbal predictor and indicator of successful collaboration. In addition, Schneider et al. (2016) investigated the way users memorize, analyze, collaborate, and learn new concepts on a tangible user interface (TUI) in a 2D versus a 3D interactive simulation of a warehouse. Eye-tracking goggles were used to further investigate collaboration processes in colocated settings. Results suggested that 3D interfaces fostered joint visual attention, which significantly predicted task performance and learning gains. The little existing research on using gaze in CSCL has demonstrated that eye-tracking data contribute highly relevant insights into collaborative processes (see also Schneider et al. this volume). Sharma et al. (2017) elaborated that “eye-tracking provides an automatic way of analyzing and assessing collaboration, which could gain deeper and richer understandings of collaborative cognition. With the increasing number of eye-tracking studies, in collaborative settings, there is a need to create a shared body of knowledge about relations found between gaze-based variables and cognitive constructs” (p. 727). With eye-tracking devices, especially mobile eye-trackers, becoming cheaper and more widely available, we expect an increase in eye-tracking studies in future CSCL research. For this reason, theoretical frameworks for eye-tracking research in
CSCL in both colocated and online-remote settings are needed. Moreover, creating and sharing digital mini-movies for “looking through people’s eyes” during collaborative learning would be an interesting open-data option for CSCL researchers. In addition, future developments in the automatic analysis of eye-tracking and other data on collaborative learning processes will facilitate behavior-pattern-based CSCL research. We elaborate on this latter point in the next section on related tracking methods from small group research. In fact, corresponding technology, such as the Tobii Pro Glasses, is already in development. Such new mobile eye-trackers provide not only multiple cameras (such as infrared cameras recording eye movements and scene cameras recording the participant’s view) but also microphones and other tracking technologies. Furthermore, Li et al. (2019), for instance, proposed a smart eye-tracking system for VR devices that is able to detect eye movements in real time in a VR simulation.
4.2 Related Tracking and Automatic Analyses Methods
Studies in small group research have developed related methods, such as the tracking of head and body activities during social interactions. These methods include integration with other behavioral data, such as recordings of verbal conversations or nonverbal cues in communication (e.g., speaking turns, interruptions), as well as questionnaire data. Studies on the automatic extraction of cues from AV-recorded face-to-face interactions in small groups use either rule-based methods or learning algorithms. Such algorithms can produce larger amounts of annotated data accurately and in less time than annotation by human observers or researchers. For instance, Sanchez-Cortes et al. (2012), who investigated nonverbal behaviors during collaborative problem solving for the early detection of emergent leading behaviors in small groups, used algorithms for tracking head activities and integrating them with other data. In a lab setting, two setups were installed: a static setup with six cameras (four close-ups, two side views, and one center view of the interaction under scrutiny) and a portable setup with two web cameras (Logitech® Webcam Pro 9000) including portable audio and video sensors. They recorded the social face-to-face interaction of the groups, and several nonverbal features were automatically extracted from the recordings. The results (from approximately 10 h of audio/video recordings) were then integrated with variables extracted from questionnaires filled in by each group member immediately after the recordings. The results of this study indicated that an early identification of emergent leaders was possible with an accuracy of up to 85%. Thus, analyses of complex data from intricate scenarios, such as face-to-face group problem-solving scenarios or collaborative learning in the field, become feasible.
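In the rule-based spirit of such work, simple nonverbal features can be derived from per-person voice-activity intervals. A minimal sketch with invented speakers and intervals, using longest total speaking time as a crude proxy for a candidate emergent leader (not the actual feature set or classifier of Sanchez-Cortes et al. 2012):

```python
"""Sketch: speaking-time and turn-count features from voice-activity intervals."""

def speaking_features(voice_activity):
    """Map person -> (total speaking seconds, number of turns)."""
    return {
        person: (sum(end - start for start, end in turns), len(turns))
        for person, turns in voice_activity.items()
    }

def candidate_leader(features):
    """Crude rule of thumb: the person with the longest total speaking time."""
    return max(features, key=lambda person: features[person][0])

# Invented voice-activity intervals (start, end) in seconds per person.
activity = {
    "p1": [(0, 12), (30, 50), (70, 95)],
    "p2": [(12, 30), (50, 55)],
    "p3": [(55, 70)],
}
features = speaking_features(activity)
print(features, "->", candidate_leader(features))
```

Real systems combine many such cues (interruptions, head activity, prosody) and learn weights from annotated data rather than applying a single-feature rule.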
Another approach to integrating and visualizing larger data sets, including multimodal data, is the interaction geography slicer (IGS) (Steier et al. 2019). This tool was used within a 3-year project in collaboration with a nationally renowned museum in the United States. One goal of the project was to understand how visitors
move across galleries within a complete museum visit (Shapiro 2019). To this end, the movements, interactions, and social media use of visitor groups were collected via video and audio recordings. The IGS addresses the challenge of making sense of diverse unstructured data by dynamically integrating and visualizing them, and it provides new visual analytic techniques. It supports representing and interpreting collaborative interaction as people move across physical environments (Shapiro and Hall 2018). In the future of CSCL research, the automatic tracking and analysis of video-recorded data will yield three noteworthy benefits for the investigation of complex data sets from both face-to-face and online settings. First, it will allow us to investigate realistic scenarios in more detail by adequately taking into account learners’ nonverbal behaviors in the physical space during computer-supported collaborative learning. Second, with data visualization tools, it will become possible to manage the complex data streams from body-, head-, or eye-tracking devices even when we investigate larger collaborative groups (more than two people) in the field (Rack et al. 2019). Moreover, integrative visualization tools such as the IGS will offer new ways to study collaborative learning from a learner perspective. Third, video-based research can be implemented in new CSCL models and simulations in future learning environments. For instance, knowing that joint visual attention has positive impacts on collaboration quality and that gaze is an important indicator of it, we can consider implementing gaze data in future virtual 3D learning environments in order to allow collaborative learners to follow the eye gaze of their learning partners.
More generally, on the research side, it might be useful to generate collections of mini video clips based on video-recorded quality indicators in order to discuss them on open-access platforms with other researchers for joint analyses and knowledge building in CSCL research. Through such developments, CSCL researchers would probably be able to meet two of the key challenges of using video when investigating social interactions in education described by Blikstad-Balas (2017): “getting close enough to the details without losing context” and “representing data so that an audience can assess whether inferences drawn from the data are plausible” (p. 511).
4.3 Virtual Reality (VR) and Augmented Reality (AR) in CSCL Research
In their content analysis, Zheng et al. (2014) found a significant upswing of virtual reality (VR) in CSCL between 2003–2007 and 2008–2012. This is in line with the fact that VR is becoming more popular and the number of active users worldwide is forecast to grow (Statista 2018). A reason for this trend is that VR has become more accessible and cheaper due to innovations in design and manufacturing. As a result, VR is affordable for institutions and could bring new possibilities
both into the classroom and into CSCL research. Likewise, augmented reality (AR)—where virtual objects are “added” to the physical world (Azuma et al. 2001)—provides multiple new options for educational settings and, in contrast to VR, is already widely used in schools (for an overview, see Akçayır and Akçayır 2017). Nevertheless, research on these topics is still rare—especially methodological research. Focusing on the potential of virtual reality in education, Greenwald et al. (2017) highlighted the importance of asking what problems can be solved with VR, so that it offers not merely adequate “add-ons” to traditional approaches in educational settings but new and needed possibilities. They identified two main advantages of virtual reality: interaction with other humans and adaptable environments. For interaction with other humans, one can distinguish between how one interacts (e.g., taking on another shape via an avatar) and with whom (e.g., remote people). Virtual environments, on the other hand, offer the possibility of visiting places that are difficult to reach (e.g., the moon) or of interacting in and with an inherently virtual environment (e.g., moving through the veins of the human body). An initial approach to how VR environments can influence learning comes from a team at Bauhaus University Weimar (Beck et al. 2013; Kulik et al. 2011; Salzmann et al. 2009). In their research, they focus on learning complex and ambiguous information. In this context, two factors seem to be of great importance: exchange with peers—whereby a certain level of individual autonomy should be ensured (Sawyer 2008, pp. 64–66)—and learning by doing. Virtual reality environments should therefore allow interactivity on the one hand and, on the other, enable communication between multiple learners, with fluid transitions between individual and collaborative activities. On this basis, Kulik et al.
(2011) and Salzmann et al. (2009) found that learners interact with virtual objects just as they do with real objects and that the understanding of all learners can be increased in collaborative visual search. Furthermore, Beck et al. (2013) found that such systems can support body language with 3D avatars, although the perceived copresence of these avatars is limited. Finally, Greenwald et al. (2017) described opportunities for children with autism to interact and learn in new environments where interactions with others might be easier. In summary, VR offers new possibilities both for collaboration processes—which in turn should be observed with VR-based methods—and for methods of analyzing collaborative learning environments. In particular, 360° cameras and spatial microphones provide new holistic and environmentally sensitive records of collaborative learning interactions (Steier et al. 2019). Such records enable a holistic immersion in a collaborative learning situation even after the capture of the event. Since tools supporting the transcription and analysis of learning interactions in 360° videos were lacking, McIlvenny (2018) and McIlvenny and Davidsen (2017) developed a tool that allows researchers, among other things, to annotate and deploy interactive transcript objects directly in the 360° video in virtual reality (AVA360VR: Annotate Visualize Analyse 360° video in VR). With such new methods, collaborative interactions can be investigated more holistically, in all their complexity, without reducing them by
transforming these embodied interactions into text transcripts (Davidsen and Ryberg 2017). With regard to augmented reality, learning environments can be created that facilitate the development of processing skills through independent collaborative exercises combining digital and physical objects (Dunleavy et al. 2009). In fact, there is evidence that digital augmentations can support the conceptual development of science knowledge in informal collaborative settings (Yoon et al. 2012).
4.4 Conclusion
In conclusion, from the work reported above, we see future perspectives for eye-tracking in CSCL research driven by technologies that allow not only the tracking of eye movements but also the recording of participants’ views and audio. Moreover, new tools that dynamically integrate and visualize large, multimodal data sets will bring more holistic insights into collaborative interactions and simultaneously facilitate the analysis and interpretation of these data. Furthermore, we can derive several future perspectives for AR- or VR-based CSCL research. First, AR and VR change the research object itself, because learning in augmented or inherently virtual environments offers new forms of collaboration through partially or fully virtual interaction (e.g., with avatars). These forms are important research topics to be observed with VR-based methods. For example, if interactions with others become easier for children with autism in VR environments, these interactions, and hence their collaborative learning, can be investigated empirically. Moreover, new forms of collaboration processes can be investigated through new forms of interactivity in virtual environments (e.g., adaptive environments), which offers important options for experimental research. In a VR simulation, collaboration and behaviors (such as head movements or gaze) can be easily tracked. Consequently, we have alternative methods for rich observation and reliable measurement of social interactions. For instance, collaborative processes could be recorded as 360° videos and transferred into VR simulations instead of simple videos, or video could be transformed into VR environments in order to investigate specific behaviors and interactions during collaborative learning. Appropriate tools will also provide new opportunities to transcribe and analyze collaborative settings, for example, by annotating directly in 360° videos.
Yet, such methods still need to be developed, and their potential to produce methodological artifacts must be critically reflected upon. If we allow such tools to serve as methods in collaborative learning research, we might not always be investigating what we think we are investigating. Future methodological research on the effects of VR or AR on collaborative learning is needed.
References Akçayır, M., & Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review, 20, 1–11. Alemdag, E., & Cagiltay, K. (2018). A systematic review of eye tracking research on multimedia learning. Computers & Education, 125, 413–428. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., & MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21(6), 34–47. Barron, B. (2003). When smart groups fail. Journal of the Learning Sciences, 12(3), 307–359. Beck, S., Kunert, A., Kulik, A., & Froehlich, B. (2013). Immersive group-to-group telepresence. IEEE Transactions on Visualization and Computer Graphics, 19(4), 616–625. Blikstad-Balas, M. (2017). Key challenges of using video when investigating social practices in education: Contextualization, magnification, and representation. International Journal of Research & Method in Education, 40(5), 511–523. Brauner, E., Boos, M., & Kolbe, M. (Eds.). (2018). The Cambridge handbook of group interaction analysis. Cambridge University Press. Chiu, M. M., & Reimann, P. (this volume). Statistical and stochastic analysis of sequence data. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computersupported collaborative learning. Cham: Springer. Cobb, P., & Whitenack, J. W. (1996). A method for conducting longitudinal analyses of classroom videorecordings and transcripts. Educational Studies in Mathematics, 30(3), 213–228. Davidsen, J., & Ryberg, T. (2017). “This is the size of one meter”: Children’s bodily-material collaboration. International Journal of Computer-Supported Collaborative Learning, 12(1), 65–90. Derry, S. J., Pea, R. D., Barron, B., Engle, R. A., Erickson, F., Goldman, R., Hall, R., Koschmann, T., Lemke, J. L., Sherin, M. G., & Sherin, B. L. (2010). 
Conducting video research in the learning sciences: Guidance on selection, analysis, technology, and ethics. Journal of the Learning Sciences, 19(1), 3–53. Dunleavy, M., Dede, C., & Mitchell, R. (2009). Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology, 18(1), 7–22. Erickson, F. (2011). Uses of video in social research: A brief history. International Journal of Social Research Methodology, 14(3), 179–189. Geertz, C. (1973). The interpretation of cultures. New York: Basic Books. Goldman, R. (2007a). Orion™, an online digital video analysis tool: Changing our perspectives as an interpretive community. In R. Goldman, R. Pea, B. Barron, & S. Derry (Eds.), (pp. 507–520). Mahwah, NJ: Erlbaum/Routledge. Goldman, R. (2007b). Video representations and the perspectivity framework: epistemology ethnography, evaluation and ethics. In R. Goldman, R. Pea, B. Barron, & S. Derry (Eds.), Video research in the learning sciences (pp. 3–38). Mahwah, NJ: Erlbaum/Routledge. Goldman, R., & Dong, C. (2007). Using Orion™ and the perspectivity framework for video analysis to interpret events: The CSCL’05 video exemplar. In The Computer Supported Collaborative Learning 2007 Conference (pp. 16–25). Rutgers. Goldman, R., Zahn, C., & Derry, S. (2014). Frontiers of digital video research in the learning sciences: Mapping the terrain. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed.). New York, NY: Cambridge University Press. Goldman-Segall, R. (1998). Points of viewing children’s thinking: A digital ethnographer’s journey. Mahwah, NJ: Erlbaum/Routledge. With accompanying online video vignettes and commentary tools. Greenwald, S., Kulik, A., Kunert, A., Beck, S., Frohlich, B., Cobb, S., Parsons, S., Newbutt, N., Gouveia, C., Cook, C., Snyder, A., Payne, S., Holland, J., Buessing, S., Fields, G., Corning, W., Lee, V., Xia, L., & Maes, P. (2017). 
Technology and applications for collaborative learning in virtual reality. In B. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference:
658
C. Zahn et al.
Prioritizing equity and access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL) 2017 (Vol. 2, pp. 719–726). International Society of the Learning Sciences. Koschmann, T., Stahl, G., & Zemel, A. (2007). The video analyst’s manifesto (or the implications of Garfinkel’s policies for studying instructional practice in design-based research). In R. Goldman, R. Pea, B. Barron, & S. Derry (Eds.), Video research in the learning sciences (pp. 133–143). Mahwah, NJ: LEA/Routledge. Kulik, A., Kunert, A., Beck, S., Reichel, R., Blach, R., Zink, A., & Froehlich, B. (2011). C1x6: A stereoscopic six-user display for co-located collaboration in shared virtual environments. ACM Transactions on Graphics, 30(6), 1–12. Lai, M.-L., Tsai, M.-J., Yang, F.-Y., Hsu, C.-Y., Liu, T.-C., Lee, S. W.-Y., Lee, M.-H., Chiou, G.L., Liang, J.-C., & Tsai, C.-C. (2013). A review of using eye-tracking technology in exploring learning from 2000 to 2012. Educational Research Review, 10, 90–115. Li, B., Zhang, Y., Zheng, X., Huang, X., Zhang, S., & He, J. (2019). A smart eye tracking system for virtual reality. In 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC) (Vol. 1, pp. 1–3). IEEE. Mayr, E., Knipfer, K., & Wessel, D. (2009). In-sights into mobile learning: An exploration of mobile eye tracking methodology for learning in museums. In G. Vavoula, N. Pachler, & A. Kukulska-Hulme (Eds.), Researching mobile learning: Frameworks, methods, and research designs (pp. 189–204). Peter Lang. McIlvenny, P. (2018). Inhabiting spatial video and audio data: Towards a scenographic turn in the analysis of social interaction. Social Interaction. Video-Based Studies of Human Sociality, 2(1). https://doi.org/10.7146/si.v2i1.110409. McIlvenny, P., & Davidsen, J. (2017). A big video manifesto: Re-sensing video and audio. Nordicom Information, 39(2), 15–21. Mead, M. (1973). Changing styles of anthropological work. Annual Review of Anthropology, 2, 1–27. 
Näykki, P., Isohätälä, J., Järvelä, S., Pöysä-Tarhonen, J., & Häkkinen, P. (2017). Facilitating sociocognitive and socio-emotional monitoring in collaborative learning with a regulation macro script–an exploratory study. International Journal of Computer-Supported Collaborative Learning, 12(3), 251–279. Pea, R. D., & Hoffert, E. (2007). Video workflow in the learning sciences: Prospects of emerging technologies for augmenting work practices. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 427–460). Mahwah, NJ: LEA/Routledge. Powell, A. B., Francisco, J. M., & Mager, C. A. (2003). An analytical model for studying the development of learners’ mathematical ideas and reasoning using videotape data. Journal of Mathematical Behavior, 22, 405–435. Rack, O., Zahn, C., & Bleisch, S. (2019). Do you see us?—Applied visual analytics for the investigation of group coordination. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 50, 53–60. [Group. Interaction. Organisation. Journal of Applied Organisational Psychology]. Salzmann, H., Moehring, M., & Froehlich, B. (2009). Virtual vs. real-world pointing in two-user scenarios. In Proceedings of the 2009 IEEE Virtual Reality Conference (pp. 127–130). IEEE Computer Society. Sanchez-Cortes, D., Aran, O., Mast, M. S., & Gatica-Perez, D. (2012). A nonverbal behavior approach to identify emergent leaders in small groups. IEEE Transactions on Multimedia, 14(3), 816–832. Sawyer, K. (2008). Group genius: The creative power of collaboration. New York: Basic Books. Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397. Schneider, B., & Pea, R. (2014). Toward collaboration sensing. International Journal of ComputerSupported Collaborative Learning, 9(4), 371–395.
Video Data Collection and Video Analyses in CSCL Research
659
Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2016). Using mobile eye-trackers to unpack the perceptual benefits of a tangible user interface for collaborative learning. ACM Transactions on Computer-Human Interaction, 23(6), 1–23.
Schneider, B., Worsley, M., & Martinez-Maldonado, R. (this volume). Gesture and gaze: Multimodal data in dyadic interactions. In U. Cress, C. Rosé, A. F. Wise, & J. Oshima (Eds.), International handbook of computer-supported collaborative learning. Cham: Springer.
Seidel, T., Prenzel, M., & Kobarg, M. (Eds.). (2005). How to run a video study. Technical report of the IPN video study. Münster: Waxmann.
Shapiro, B. R. (2019). Integrative visualization: Exploring data collected in collaborative learning contexts. In Proceedings of the 13th International Conference on Computer Supported Collaborative Learning (CSCL) (Vol. 1, pp. 184–191). International Society of the Learning Sciences.
Shapiro, B. R., & Hall, R. (2018). Personal curation in a museum. In Proceedings of the ACM on Human-Computer Interaction, CSCW (Vol. 2, Article 158). ACM.
Sharma, K., Jermann, P., Dillenbourg, P., Prieto, L. P., D’Angelo, S., Gergle, D., Schneider, B., Rau, M., Pardos, Z., & Rummel, N. (2017). CSCL and eye-tracking: Experiences, opportunities and challenges. In B. K. Smith, M. Borge, E. Mercier, & K. Y. Lim (Eds.), Making a difference: Prioritizing equity and access in CSCL, 12th International Conference on Computer Supported Collaborative Learning (CSCL) 2017 (Vol. 2). International Society of the Learning Sciences.
Sinha, S., Rogat, T. K., Adams-Wiggins, K. R., & Hmelo-Silver, C. E. (2015). Collaborative group engagement in a computer-supported inquiry learning environment. International Journal of Computer-Supported Collaborative Learning, 10(3), 273–307.
Stahl, G., Law, N., & Hesse, F. (2013). Reigniting CSCL flash themes. International Journal of Computer-Supported Collaborative Learning, 8(4), 369–374.
Statista. (2018). Number of active virtual reality users worldwide from 2014 to 2018 (in millions). Retrieved November 27, 2018, from https://www.statista.com/statistics/426469/active-virtual-reality-users-worldwide/
Steier, R., Shapiro, B. R., Christidou, D., Pierroux, P., Davidsen, J., & Hall, R. (2019). Tools and methods for ‘4E Analysis’: New lenses for analyzing interaction in CSCL. In Proceedings of the 13th International Conference on Computer Supported Collaborative Learning (CSCL) (Vol. 2, pp. 759–766). International Society of the Learning Sciences.
Stigler, J. W., Gonzales, P., Kawanaka, T., Knoll, S., & Serrano, A. (1999). The TIMSS videotape classroom study: Methods and findings from an exploratory research project on eighth-grade mathematics instruction in Germany, Japan, and the United States. A research and development report (National Center for Education Statistics Report No. NCES 99-0974). U.S. Government Printing Office.
Vogler, J. S., Schallert, D. L., Jordan, M. E., Song, K., Sander, A. J. Z., Chiang, Y. Y. T., Lee, J.-E., Park, J. H., & Yu, L.-T. (2017). Life history of a topic in an online discussion: A complex systems theory perspective on how one message attracts class members to create meaning collaboratively. International Journal of Computer-Supported Collaborative Learning, 12, 173–194. https://doi.org/10.1007/s11412-017-9255-9.
vom Lehn, D., & Heath, C. (2007). Social interaction in museums and galleries: A note on video-based field studies. In R. Goldman, R. Pea, B. Barron, & S. Derry (Eds.), Video research in the learning sciences (pp. 287–301). Mahwah, NJ: LEA/Routledge.
Wessel, D., Mayr, E., & Knipfer, K. (2007). Re-viewing the museum visitor’s view. In G. N. Vavoula, A. Kukulska-Hulme, & N. Pachler (Eds.), Research methods in informal and mobile learning (pp. 17–23). WLE Centre.
Yoon, S. A., Elinich, K., Wang, J., Steinmeier, C., & Tucker, S. (2012). Using augmented reality and knowledge-building scaffolds to improve learning in a science museum. International Journal of Computer-Supported Collaborative Learning, 7(4), 519–541.
Zahn, C. (2017). Digital design and learning: Cognitive-constructivist perspectives. In S. Schwan & U. Cress (Eds.), The psychology of digital learning: Constructing, exchanging, and acquiring knowledge with digital media. Springer.
C. Zahn et al.
Zahn, C., Pea, R., Hesse, F. W., & Rosen, J. (2010). Comparing simple and advanced video tools as supports for complex collaborative design processes. Journal of the Learning Sciences, 19(3), 403–440.
Zemel, A., & Koschmann, T. (2013). Recalibrating reference within a dual-space interaction environment. International Journal of Computer-Supported Collaborative Learning, 8, 65–87.
Zheng, L., Huang, R., & Yu, J. (2014). Identifying computer-supported collaborative learning (CSCL) research in selected journals published from 2003 to 2012: A content analysis of research topics and issues. Journal of Educational Technology & Society, 17(4), 335.
Further Readings

Blikstad-Balas, M. (2017). Key challenges of using video when investigating social practices in education: Contextualization, magnification, and representation. International Journal of Research & Method in Education, 40(5), 511–523.
Derry, S. J., Pea, R. D., Barron, B., Engle, R. A., Erickson, F., Goldman, R., Hall, R., Koschmann, T., Lemke, J. L., Sherin, M. G., & Sherin, B. L. (2010). Conducting video research in the learning sciences: Guidance on selection, analysis, technology, and ethics. Journal of the Learning Sciences, 19(1), 3–53.
Goldman, R., Zahn, C., & Derry, S. (2014). Frontiers of digital video research in the learning sciences: Mapping the terrain. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (2nd ed.). New York: Cambridge University Press.
Näykki, P., Isohätälä, J., Järvelä, S., Pöysä-Tarhonen, J., & Häkkinen, P. (2017). Facilitating socio-cognitive and socio-emotional monitoring in collaborative learning with a regulation macro script – an exploratory study. International Journal of Computer-Supported Collaborative Learning, 12(3), 251–279.
Schneider, B., & Pea, R. (2013). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397.
Index
A Academically Productive Talk (APT), 188, 416 Accountability, 471 ACT, Inc., 523, 525 Actional immersion, 390, 394, 396, 399, 401 Action analysis, 579, 580 Active Learning Classrooms, 152 Active Learning Forum (ALF), 452 Activity structures, 262, 267, 270–272 Activity theory, 243, 246, 263 Actor-network theory (ANT), 34, 38, 468 Actroids, 412 Adaptable CSCL scripts, 346–348 Adaptable CSCL systems, 435, 436 Adaptive CSCL systems, 434, 435 Adoption of group practices, 35 Affordances, 452 Agents and avatars, 361 ALF-based seminar, 455 Algorithmic systems, 176 Analysing CSCL sessions, 225 Analysis tools, 451, 455, 456 Analytics of collaborative learning (ACL), 15, 438 AR magnetic force field, 398 Architecture for learning, 129 Arguing-to-learn approach, 189 Argumentation coevolution model, 192, 193 collaborative/collective, 194 collaborative learning, 184 CSCL, 184 domain-general skill, 192 early precursors, 185, 186
everyday life and development, 186, 187 initiators, 185 knowledge construction, 184, 190, 191 knowledge-related processes, 185 methods and tools development, 187, 188 persuasive communication, 184 polyphonic model, 189, 190 rainbow framework, 189 reasoning, 184 reflective interactions, 189 research, 184 science classroom, 191, 192 Argumentation diagraming, 359 Argumentative grammar, 483, 485, 488 Argumentative knowledge construction, 428 Articulation, 448 Artifact mediation, 242 Artifacts, 241–243, 248, 448 AI, 554 classification, 554, 555 code and count approaches, 558 computational linguistics approaches, 556 computer-based analysis, 557 conceptual/knowledge, 553 ConcertChat environment, 552 content analysis, verbal and textual data, 559, 560 content-based approaches, 564 conversation analysis, 557 conversations logs, 557 CSCL, 552, 553, 562, 564 digital, 554 human activities, 551, 552 human beings, 552
© Springer Nature Switzerland AG 2021 U. Cress et al. (eds.), International Handbook of Computer-Supported Collaborative Learning, Computer-Supported Collaborative Learning Series 19, https://doi.org/10.1007/978-3-030-65291-3
Artifacts (cont.) information, 564 knowledge/conceptual, 554, 556 language and text-based artifacts, 553, 554 manifestations, 552 physical interactions, 554 polyphonic analysis, 560–562 signs, 552, 553, 556 social network analysis, 558, 559 Artificial intelligence (AI), 457, 554 expert systems, 546 Artificial pedagogical agents, 361 Assessment and Teaching of 21st-Century Skills (ATC21S), 519 ASSISTments, 418 Augmented reality (AR), 14, 252, 655, 656 games, 375 place-based immersion, 400 space-based, 397–400 Augmented Reality for Interpretive and Experiential Learning (ARIEL) project, 397–399 Authentic cultures, 148 Automatic analyses methods, 653, 654 Automatic collaborative process analysis, 455 Avatars, 413
B Backing, 186 Bakhtin’s analysis, 225 approach, 185 dialogism, 220 Band-Aid, 104 Bayesian knowledge tracing (BKT), 415, 417 Behavior modeling, 412 Blogging/microblogging tools, 227 Bloom’s Taxonomy, 408 Boundary objects, 246 Brainstorming, 221 Bricolage approach, 453 Broaden and deepen category, 189 Bug-Scope, 153 Building tools, 450, 451, 453, 454
C CA practitioners, 205 Case-based research ANT, 468 CHAT, 465–466, 470, 473 classroom ethnography, 464
computer-based ballistics simulator, 469 computer tools, 470 conceptual change, 469 conversation analysis, 468, 469 critical design ethnography, 471 critical pedagogy, 466 critical theory, 466 CSCL, 465 CSILE’s database, 470 dialogic approach, 473 dialogic theory, 467 EM, 468, 474 ethnographic inquiry, 464, 472 ethnography, 463, 464 experimental intervention, 464 interaction analysis, 471 semiotic/dialogic approach, 470 sequential development, 473 simulation-based activity, 472 Centrality, 559 Chiasm, 233, 234 Citizen science initiatives, 153, 165 Claim, 186 Classical 2D digital renderings, 152 Classroom communities, 152 Classroom ethnography, 464 Co-construction, 358–360, 363 Code and count approaches, 558 Coding language, 596 Coding schemes, 597 Coevolution model, 173, 174, 176 Cognitive conflicts, 305 Cognitive development and learning, 29 Cognitive diversity, 114 Cognitive group awareness tool, 300, 301, 303, 305 Cognitive load theory, 354 Cognitive revolution of Homo sapiens, 146 Cognitive science, 149 Cognitive skills, 518 Cognitive-systemic stance, 57 Cognitive theory of multimedia learning, 354 Cognitive tools, 200 Cognitive Tutors, 412, 415 Cognitivism, 30 Cohere, 188 Collaboration, 243, 356, 359, 360, 362, 364 classroom culture, 38 CSCL community, 168 definition, 168 digital fabrication technology, 28 essential features, 169
expansive learning, 34 external representations, 169 generic social media apps, 25 interactive processes, 169 inter-objective theories, 29 intersubjective aspects, 24 intersubjective theories, 29 knowledge-building framework, 33 levels of scale, 169 multimedia artifacts, 169 open-source community, 168 subjective theories, 29 Wikipedia, 174 ZpD, 30 Collaboration-as-learning, 575–578 Collaboration practices, 201 Collaboration scripts, 306, 318, 337, 498, 499 on cognitive theory, 336 computer-supported, 341 external, 336, 340–341 flexibility, CSCL scripts, 345–346 hierarchical relationship, 338 instructional technology, 341 interdisciplinary collaboration, 348 internal, 336, 339 knowledge components, 337 learners’ and teachers’ agency, 346–347 meta-analysis, 342 motivation measures, 343 scaffolding transactivity and learning regulation, 347–348 social interactions, 344 SToG, 337, 339 transactivity, 342, 344 Collaboration skills, 342 Collaboration support, 453 Collaborative designing, 251 Collaborative digital mapping activity, 154 Collaborative discourse, 191 Collaborative editing, 173, 175 Collaborative efforts, 51, 52, 55 Collaborative instruments, 244 Collaborative knowledge building, 250 Collaborative learning, 242, 407 cognitive development, 30 CSCL, 24, 26, 38, 39 discourse and interaction, 36 group meaning-making process, 37 intersubjective theories, 29 knowledge objects, 24 linguistic moves and embodied actions, 28
metacognition (see Metacognition, collaborative learning) research methods, 26 scripting strategies, 32 shared practices, 38 socio-cognitive theories, 32 technologies/groupware systems, 25 video games (see Video games) virtual worlds (see Virtual worlds) Collaborative learning analytics (CLA), 15 ACL, 432 adaptable CSCL systems, 435, 436 adaptive CSCL systems, 434, 435 argumentative knowledge construction, 428 challenges, 427, 438 collaborative contributions, 428 computer support, 438 core tensions, 433 CSCL environments, 432 digital traces, 428, 429 environmental/physiological sensors, 428 feedback loop, 431 group-interaction dynamics, 432 iterations, 436, 437 knowledge-building discourse, 428–431 qualitative and quantitative analysis, 427 research and practice, 433 responsiveness, 432 technology/human agency, 433, 434 unit-of-analysis, 436 Collaborative learning at scale, 10 advantages, 177 agents, 176 complex algorithms, 176 vs. cooperative learning, 168 CSCL community, 169, 170 designs, 171 fresh models, 169 human–human contact, 174 innovative approaches, 172 large communities, 170 large-scale environments, 168 machine learning models, 176 mass collaboration, 166, 169 nascent challenge, 175, 177 opportunity to CSCL, 166 quadrants, 172 theoretical models, 176 understanding and designing framework, 174 Collaborative patterns, 524 Collaborative practices, 28 Collaborative problem-solving (CPS) ACT, 523, 525
Collaborative problem-solving (CPS) (cont.) assessment solutions, 518 cognitive and social aspects, 518 cognitive dimensions, 518 collaborative activities design, 528 collaborative learning, 526, 527 competency, 519 computer-mediated communications, 528 CSCL, 520 developmental psychology, 527 ECD, 526 environments, 526 ETS, 525 face-to-face collaboration, 521 feedback, 527 H-A approach, 522 H-H approach, 521, 522 information, 519 interactions, 519 knowledge transfer, 527 learning companies, 519 OECD, 518 PISA, 518, 519, 522 professional assessment, 519 psychometric values, 523 roles, 519 scalable learning, 518 skill development and assessment, 520 skill proficiency, 526 social components, 518 social dimensions, 518 student performance, 521 task effectiveness, 520 team effectiveness, 519, 520 technology advances, 521 Collaborative processes, 39, 150, 448 awareness, 12 communicative processes, 10 computer-supported, 12 contextual factors, 9 conversational interventions, 12 knowledge construction process, 10 learning communities, 11 metacognitive awareness, 12 processes, 12 sociocultural perspective, 10 Collaborative technologies, 453 Collaboratories, 152 Colocated collaborative learning, 630, 631 Colorblind, 113 Communication, 192 Communities and participation, 9 CSCL, 146, 149, 155
current notions, 154 formal and informal, 154 learning sciences, 154 Neolithic revolution, 146 notions, 155 panoply, 147 problematic nature, 147 recognition and attention, 146 scholarship, 146, 151–153 situated learning, 148 social and cultural restrictions, 147 socio-material spaces, 154 sophisticated forms, 146 spatial turn, 153 synchronous and asynchronous collaboration, 154 theoretical ideas, 150 “Community of learners”, 147 Communities of practice, 466, 471 Community knowledge, 262 Community-wide knowledge-building endeavor, 151 Computational approaches, 177 Computational artifacts, 46–90 immersive environment, 58 interactive surfaces, 58 technology-enhanced embodied play, 58 Computational models, 412, 633 Computer-based spaces, 446 Computer Human Interaction (CHI), 151 Computer science (CS), 446 Computer science-based communities, 50 Computer Supported Collaborative Work (CSCW), 108 Computer technology, 223 Computer tutors, 412 Computerized peer, 413 Computer-mediated communication, 227, 613 Computer-supported analysis, 211 Computer-supported collaborative learning (CSCL), 122 adaptable collaboration scripts, 285 adoption, 8 AI application, 24 collaboration scripts (see Collaboration scripts) collaborative culture, 6 collaborative learning experiences, 8 collaborative processes, 4 communication, 6 community, 4 complex interrelationships, 4 computational artifacts, 46–48
computer-based learning systems, 4 content meta-analyses, 26 context, 7 (see also Context, CSCL) conventional educational settings, 24 cooperative, 24 cross-cutting issues, 5 design-based research approaches, 7 design for equity, 6 development, 7 dialogism (see Dialogism) digital devices, 24 diversity, 6, 7 educational ecosystems, 8 educational systems and practices, 8 educational technology, 24 educational use, 25 environments, 31 epistemological stance (see Epistemological stance) goal of a theory, 26 high school and education, 287 human cognition, 28 implementation, 27, 39–40 innovative computational supports, 24 integrated theory, 35–36 interaction analysis/design-based research, 27 international conference, 4 inter-objective, 26 knowledge-building activities, 282 and LA communities, 426 metacognition (see Metacognition, collaborative learning) methodological practices, 28 methods, 16–20 mixed methods, 7 multidisciplinary field, 5 multiplicities, 5 origin, 46 power relations, 8 practices, 25 processes, 4 regulation types, 286 representation roles (see Representational learning) research designs, 7 research methods, 25, 26 self-report data, 28 social interaction, 46 social processes, 4 STEM domains, 46 students, 4 subjective, 26
technologies, 4, 25 ACL, 15 CLA, 15 collective intelligence, 14 content knowledge, 13 contexts, 15 development, 24 environments, 14 framework, 16 information-based technologies, 16 learning analytics orientation, 14 learning scientists, 16 multi-user virtual environments, 14 robotics and learning analytics, 13 role of tools, 15 SToG, 13 support learning, 15 types of tools, 16 virtual learning spaces, 13 transformative character, 4 unit of analysis, 7 Computer-supported collaborative work (CSCW), 432 Computer-supported cooperative work (CSCW), 151, 165, 297–299, 307, 308 Computer-Supported Intentional Learning Environments (CSILE), 470, 574 Conceptual artifacts, 262–264, 272, 273 Conceptualization, 448, 456 Conceptual/knowledge artifacts, 553 ConcertChat environment, 552 Conditional probability, 536 Configuration scripts, 448 Constructing representations, 358–360 Constructive interaction, 574 Constructivism, 29, 263, 267, 268, 272, 274 Content analysis, 559, 560 Content and argumentative structures, 184 Content logs, 588 Content monitoring, 285 Context, CSCL cognitive perspectives, 89–91 cognitivist and socioculturalist traditions, 88 collaborative learning with technology, 86 focal context, 88 HCI, 87 immediate context, 88 intentional and explicit, 87 layers, 89 mobile application, 87 ontological and epistemological paradigms, 86
Context (cont.) peripheral context, 88, 92, 94 relationships, 86 researchers’ treatment, 87 sociocultural perspectives, 92–94 subject of study, 88 in technology, 95 understanding, students’ learning, 86 Context-aware computing, 95 Context-aware technologies, 95 Contingencies, 190, 205 analysis, 571 graphs, 190 Conversation analysis (CA), 204–206, 231, 468, 469, 472, 557, 559, 609 conversational structures, 610 disciplinary perspectives, 610 online interactions, 611 social interaction, 611 sociology, 610 stages, 611 study of talk, 609 talk-in-interaction, 610 telephone conversations, 611 transcription system, 610 Conversational agent, 524 Cooperation, 168 Cooperation–collaboration distinction, 169 Cooperative learning, 24 CORDTRA (chronologically-oriented representations of discourse and tool-related activity), 77 Co-regulation, 47, 55, 286 Cosmic Ray Observatory Project (CROP), 153 Course Builder, 452, 453 Critical design ethnography, 471 Critical discourse analysis (CDA), 609 accessibility, 614 corpus-based approach, 614 CSCL community, 613 discourse analysis, 613 face-to-face interaction, 618 human organization, 613 ideology and power, 613 language learning, 615 learning, 613, 618 social actor, 613 social inequality, 613 social position, 615 socio-cognitive approach, 613 text-based discourse, 618 triangulation, 614 validity, 614
Critical pedagogy, 466 Critical theory, 466, 469 Crowd-sourced Q&A platforms, 166, 171 CSCL chat sessions, 224, 232 CSCL communities, 150 CSCL community inception, 446 CSCL conference, 48, 49 CSCL intervention, 87, 97 CSCL methods challenges in research methods, 74, 75 earlier results, 66 education programs, 66 history and development, 67 methodological practices, 65, 66 mixing methods, 72–74 research methods, 65–67 research questions, 67–68 research settings, 66 and theory, 74 CSCL publication, 48, 49 CSCL researchers, 164 CSCL scientific work, 445 CSCL tools, 112, 188, 192 Cultural-historical activity theory (CHAT), 34, 465–467, 469, 470, 473 Cultural learning, 154 Cultural mediation, 465 Cultural turn, 153 Culture of learning continuum (CLC), 492 Curricular objective, 450 Cutting-edge innovations, 151 Cyberlearning, 168 Cyberspace, 224
D Data gloves, 365 Data traces, 446 Data visualization tools, 654 Democratic communities, 147 Design-based implementation research (DBIR), 8, 125, 480 Design-based research (DBR), 52, 53, 56, 69, 109, 125, 446, 470 argumentative grammar, 483, 485, 488 characteristics, 481 CLC, 492 conjecture map, 486, 487 CSCL, 481, 482 definition, 480 design, 482 design principles database, 486 design processes, 480
design research, 482 design studies, 480 DRTL, 489, 490, 492 dual epistemic game, 483, 487 educational theory, 484 epistemologies, 482, 490 human experience, 485 manuscript, 485 methodological alignment, 488, 489, 491, 492 multiple argumentative grammars, 487 ontologies, 490 PPK, 487, 493 research methodology, 480, 481 “science” and “design” modes of inquiry, 484, 488 Designed-for-emergent collaborations, 381 Design objects, 251 Design principles (DPs), 124, 126, 247–250 Design research, 482 Design Researchers’ Transformative Learning (DRTL), 489, 490 Deterministic process models, 546 Developmental psychology, 527 Dialogic education, 223, 225, 228, 233 Dialogic instruction, 419 Dialogic interaction, 32, 225 Dialogic relations, 226 Dialogic space, 222, 232 Dialogic theory, 220–222, 226, 233, 467 Dialogic voices, 222 Dialogism, 32, 33 Bakhtin’s analysis, 220, 225 in CSCL, 224, 225 definition, 219 designing, 232, 233 and dialogic theory, 221 dissonances, 221 history and development, 225–228 in education, 220 methods and methodologies, 220 and monologism, 222 polyphonic model, 228–231 spark, 221 unpacking, 222, 223 unsituated–situatedness, 226 voices, 220 Dialogues, 52, 220–228, 230–234 Dichotomies, 453 Differentiated learning, 112 MOOCs, 111, 112 VHS, 112 Digital artifacts, 48, 252, 554
Digital augmentation, 398 Digital ecologies of learning, 253 Digital fabrication technologies, 28, 251, 252 Digital infrastructures, 48, 59 Digitalization, 246 Digital manipulatives, 409 Digital materiality, 252 Digital technologies, 224, 228, 245 Digital tools, 524 Digital traces, 437 Digital transformation, 59 Directionality, 380, 381 Disciplinary learning, 607 Discourse analysis (DA), 570 CDA, 613–615 CSCL, 612 definition, 606 discursive psychology, 606 DP, 615–618 history, 606 language, 606, 607 talk, 606 texts, 606 Discourse-oriented systems, 570 Discourse practices, 201 Discursive psychology (DP), 606, 609 academic script, 617 cognition, 615 computer-mediated discourse, 618 conversation analysis, 616 discursively construct, 616 DP-type analytic questions, 617 face-to-face interaction, 618 interview data, 615 learning, 618 psychologized concepts, 615 qualitative discourse analysis, 615 reader evaluation, 617 self-conception, 616 talk, 616 text-based discourse, 618 validation, 616 Discussion bus, 171 Dissonances, 190, 221 Diversity, equity and inclusion (DEI) ability-based design framework, 107 attention, 105 Band-Aid, 104 centric design dialogs, 104 and CSCL, 104, 106 CSCL 2017 conference, 105 demands, 104 diversity, 106
Diversity (cont.) equality, 107 equity, 107 evolution, 105 gaming industry, 113 implementation process, 108 intersectionality theory, 108 language, 110 multilevel complex, 106 OER designers, 113 outcome measure, 107 Division of labor, 168 Domain-specific technology design, 202 Double dialogicality, 222 Double stimulation, 31 Dynamic Bayesian networks (DBNs), 18, 544, 545 Dynamic geometry study, 213 Dynamic philosophy, 29 Dynamic script-based support, 454 Dynamic scripts, 416 Dynamic social network analysis, 545
E Educational activity system, 149 Educational data mining (EDM), 419, 636 Educational design, 221 Educational games, 384 Educational models, 170 Educational projects, 233 Educational research and practice, 24 Educational technology, 225, 228, 233, 446 Educational Testing Service (ETS), 519, 520, 523, 525 Education ecosystem, 133 “Education into dialogue”, 467 Emancipatory immersion, 392, 397 Embodied virtual agents, 365 Emerging learning paradigms, 167 Emotion regulation, 288 Empirical analysis, 200 Empiricism, 29 Enabling technologies, 15 Engineering practices, 250 Envisioning Machine, 575 Epistemic agency, 262, 267, 270 Epistemic frame theory, 578 Epistemic network analysis (ENA), 579 Epistemic objects, 246, 247 Epistemic practices, 53 Epistemological stance diversity, 50
individualism, 51 learning and knowing, 50 pragmatic and computational stance, 53 relationism, 52 Ethnographic research, 177 Ethnography, 463, 464 Ethnomethodology (EM), 32, 56, 150, 204, 205, 468, 472, 474 EU-funded project LACE, 457 Evidence-centered design (ECD), 526 Evolution of Conceptualization of Processes, 9 Expert systems, 408 Explicit individual knowledge, 204 Explicit knowledge, 199 Exploratory sequential data analysis (ESDA), 208 External collaboration scripts, 336, 340–342, 344, 348 Externalization, 192 External scripts, 167 Eye-tracking, 653 Eye-tracking data, 77 Eye-tracking in CSCL collaborative learning processes, 653 collaborative problem-solving situations, 652 colocated and online-remote settings, 653 gesture sensing, 631 joint visual attention, 652 learning behaviors, 652 mobile eye-tracking, 652 TUI, 652
F Facebook, 155 Face-to-face collaborations, 360, 361, 450 Face-to-face conversations, 224 Face-to-face whole-class discussions, 417 Feedback, 304, 305 Fifth Dimension, 149 Finite-state machines, 546 First CSCL conference, 46 Formal learning environments, 426 Foundation tools, 450, 452, 453 FROG (open-source tool), 452, 454, 456 Future Learning Spaces, 152 FutureLearn (MOOC platform), 171
G Galaxy Zoo project, 153 Game-creation communities, 379
Gaming industry, 113 Gaze vs. motion sensing, 634, 635 Gaze sensing, CSCL constructive activities, 628 dual eye-tracking methodology, 628, 630 dual mobile eye-tracking, 630 eye-tracking data, 628, 629 JVA, 628, 629, 631 mobile eye-trackers, 630 shared gaze visualizations, 630 shared meaning making, 628 visual leadership, 629 Geminoids, 412 General linear model, 535 General-purpose statistical analysis tools, 451 Gestural data, 635 Gesture sensing, CSCL body position-based segmentation, 633 body postures, 632 body synchronization vs. group interaction, 633 collaborator, 633 computational methods, 633 eye-tracking, 631 fusion, 635–637 vs. gesture, 634, 635 Kinect Sensor V2, 631 mathematical inquiry trainer, 631 Github, 176 Cognitive group awareness tool, 303 Gold standard, 452 Good collaboration, 150 Goodwin’s studies, 206 Google Scholar documents, 264 Government-funded research efforts, 456 GPS-enabled AR, 400 Group awareness classification, 299, 307 communication technologies, 296 computer-mediated communication, 296 CSCL, 298, 299, 307 CSCW, 297, 307, 308 definition, 296 face-to-face scenarios, 298 function display, 303 external representations social comparison, 303 feedback, 304 framing, 303 group processes, 302 learning outcomes, 302 problematization, 305, 306
regulating collaboration, 302 scripted collaboration, 306 psychological functions, 299 social and organizational psychology, 308 social influence, 308 social interactions, 298 technical developments, 299 technological developments, 308 type of information, 299, 300 cognitive group awareness tool, 300, 301 face-to-face settings, 300 social group awareness tools, 301, 302 visual feedback, 297 visualizations, 297 Group cognition, 30, 31, 34, 38, 207 Group practices, 11 cultural resources, 200 design, 201, 213, 214 empirical analysis, 200 group-level construct, 200 history and development (see Individual- to group-level constructs, group practices) methodology, 202, 214 multiple sequential orders (see Multiple sequential orders analysis, group practices) pedagogy, 201, 212, 213 small groups, 200 social practices, 200 technology, 202, 214 theory, 212
H Hidden Markov models (HMMs), 18, 538–540, 544 Hierarchical linear modeling, 541 History of CSCL, 46, 48 Homophonic music, 224 Honda’s humanoid robot, 412 Human–agent collaboration, 523 Human behavioral models, 408 Human beings, 244, 245 Human-cognitive architecture, 245 Human–computer interaction (HCI), 87, 151, 165, 409 Human–human collaboration, 523 Human intelligence, 244 Human–machine interactions, 411 Human-modeled computer systems, 410 Human problem-solving, 408 Human-to-agent (H-A) approach, 522
670 Human-to-human (H-H) approach, 521, 522 Human tutors, 412 Hyperspace, 171
I Idea Thread Mapper, 152 ‘I-It’, 225 ILDE projects, 454 Immersive environments actional immersion, 390 AR (see Augmented reality (AR)) CSCL, 390 digital technologies, 390 emancipatory immersion, 392 narrative immersion, 391 SANSE, 392, 393 sensory immersion, 390, 391 social immersion, 391 social justice, 391 spatial immersion, 391 subjective experience, 390 types, 392, 393 VR, 391, 394 Indexicals, 206 Individual self-learning advantages, 414 disadvantages, 415 early automated instructions, 413 indirect and direct interactions, 414 LBT, 414 mechanisms, 413 ProJo, 414 self-comparison, 414 self-explanations, 413 sound puzzling, 413 Teachable Agent system, 413 visualization tools, 413 Individual- to group-level constructs, group practices EM, 204 group cognition, 207 individuals constructing understanding, 204 interaction, 205 pre-historic spirits, 203 rational minds, 203 representational practice, 206 sequential analysis, 206 sequential organization, 205 uptake interaction, 206, 207 Innovation network, 129 In-person human facilitation, 412 Inscriptional similarities, 190
Instagram, 155 Instructors, 437 Integrated theory, CSCL CSCL research, 38 cultural–historical unit of analysis, 38 discourse and interaction, 36 epistemic mediation, 37 framework, 36 interactional mediation, 37 intersubjectivity and shared understanding, 38 personal motivations and beliefs, 38 temporality and sequentiality, 37–38 Integrating external tools, 244 Integrative visualization tools, 654 Intelligent model-based agents, 414 Intelligent Support for Learning in Groups, 434 Intelligent tutoring systems (ITSs), 408, 412–414, 632 Intentional learning, 263 Interaction analysis, 55, 471 in CSCL, 611, 612 Interaction geography slicer (IGS), 653 Interaction graph, 558 Interaction management, 189 Interactive Whiteboards, 227 Interanimation, 190, 222, 224 Interdisciplinary collaboration, 348, 349 Intermediary objects, 246 Internal collaboration scripts, 336–340, 348 Internal scripts, 167 International Journal of Computer-Supported Collaborative Learning (IJCSCL), 227, 644 Internet-based projects, 152 Internet-mediated education projects, 232 Inter-objective, 24, 27, 29, 31, 35 Interoperability, 457 Interprofessional collaboration, 349 Intersectionality, 108 Intersubjectivity, 166, 212, 242 Intertextuality, 223 ‘Intra-mental’ capabilities, 226 Item Response Theory (IRT), 524 ‘I-Thou’, 225
J
Joint attention
  cross-recurrence, 430
  dyads, 431
  eye-tracking, 430, 431
  ground communication, 430
  intervention, 430
  tangible tabletop, 431
  transactive discourse, 430
  verbal coherence, 431
Joint construction, representations, 358, 359, 362
Joint visual attention (JVA), 626, 628, 629, 652
K
KCI model, 152
Knowledge, 148
Knowledge acquisition, 168
Knowledge artifacts, 171
Knowledge building, 33, 228, 456
  activities, 282
  activity structures, 270, 271
  analytics and analytical tools, 268–270
  classrooms, 263, 264
  communities, 265
  community knowledge, 262
  development, 263, 264
  effectiveness, 265
  epistemological issues, 270
  history, 263, 264
  Knowledge Building International, 262
  knowledge creation, 262, 263
  pedagogy, 267, 268
  principles, 264, 265
  producing tangible/material artifacts, 272, 273
  research projects, 264
  scripts and scripting, 271, 272
  self-descriptions, 262
  symmetric knowledge advancement, 264
  technology, 265–267
Knowledge Building Communities (KBCs), 150, 262
Knowledge-building discourse
  conventional contributions, 429
  CSCL software tool, 429
  early research analyses, 429
  Knowledge Forum software tool, 430
  productive and improvable threads, 429, 430
Knowledge Building Discourse Explorer (KBDeX), 53
Knowledge Building Gallery, 264
Knowledge Building International (KBIP), 129, 262
Knowledge Building Summer Institutes, 262
Knowledge-building theory, 247, 249
Knowledge/conceptual artifacts, 554, 556
Knowledge construction, 11
  argumentation, 193
  collective and collaborative, 184, 185, 188, 193
  frameworks, 193
  social media platforms, 192
  social system, 193
Knowledge Construction Dialogues, 416
Knowledge creation, 262, 263
Knowledge-creating activity, 250
Knowledge-creating learning, 33
Knowledge-creating organizations, 262, 263
Knowledge-creation metaphor, 242, 243, 249, 250
Knowledge-creation practices, 254
Knowledge-creation processes, 249, 253
Knowledge-embodying material artifacts, 262
Knowledge Forum (KF) software tool, 127, 151, 152, 263, 265–271, 430
Knowledge Gap theory, 111
Knowledge-intensive organizations, 247
Knowledge-laden artifacts, 242
Knowledge modeling, 419
Knowledge objects, 223
Knowledge organizations, 165
Knowledge practices, 244
Knowledge society, 439
Knowledge Space Visualizer, 188
L
Language, 110
  accessibility, 111
  availability, 110
  and text-based artifacts, 553, 554
Language quantification in CSCL
  abstraction of language, 586
  anthropology, 594
  articles, 590, 591
  challenges, 598
  characteristics, 589
  coding language, 596
  coding schemes, 596, 597
  cognitive biases, 588
  cognitive factors, 594
  collaborative learning, 586
  content-based cognitive activity, 595
  content logs, 588
  conversational patterns, 594, 595
  criteria, 589
  cultural tools, 594
  descriptive tool, 587
  design-based papers, 591
  efficiency, 599
Language quantification in CSCL (cont.)
  epistemological differences, 593
  frequency counts, contexts, 590
  IJCSCL, 598, 599
  individual level, 595
  informal learning, social media, 599
  inter-rater reliability, 588
  laboratory-based studies, 590
  learner activity, 586
  learning sciences, 587
  listening behaviors, 596
  machine learning, 598, 599
  measurement tool, 586
  pattern recognition tool, 586
  psychological tool, 595
  psychology, 593
  qualitative data, 588
  qualitative research, 587
  quantitative research, 587
  research papers, IJCSCL, 591
  social constructivism, 595
  social hierarchies, 600
  socialization tool, 594
  social learning component, 591
  social meaning, 598
  social signals, 598
  sociolinguistics, 598
  software tools, 587
  theoretical frameworks, 592, 594
  tool, 587
Language-based methodologies in CSCL
  conversation analysis, 609–611
  critical discourse analysis, 609
  disciplinary learning, 607
  discursive psychology, 609
  interaction analysis, 609, 611, 612
  positivist conceptions of rigor, 609
  qualitative coding-based approaches, 608
  qualitative methods, 608
  sociocultural perspective, 608
Language Technologies Community, 456
Large-scale learning environments, 164
Learner-centered interactions
  human peers, 413
  individual self-learning, 413–415
  individual self-other learning, 415–417
  other's learning, 417
  peer tutoring circumstances, 413
  social interaction, 412
  verbal dialogue, 412
Learners' knowledge-elaboration processes, 189
Learning
  CSCL research community, 167
  external scripts, 167
  interactional meaning-making, 167
  internal scripts, 167
  knowledge creation processes, 168
  multifaceted perspectives, 168
  negotiation, 167
  scaled environments, 168
Learning analytics (LA), 53, 95, 253, 359, 426, 438, 571, 576, 580, 636, 644
Learning by Teaching (LBT), 414
Learning communities
  collaborative knowledge building, 149
  CSCL, 149
  documented case, 147
  Future Learning Spaces, 152
  membership, 148
  notions, 147
  scientific, 146
  socioculturally minded thinking, 148
Learning design approach, 451, 453, 454
Learning environment, 227
Learning in a Networked Society (LINKS), 154
Learning innovation, 122
Learning Management Systems (LMS), 155
Learning mechanisms, 407
Learning outcomes, 150
Learning paradox, 148
Learning sciences, 109, 166, 587, 607, 618, 643, 644, 647
Learning Spaces, 14
Learning to Learn Together (L2L2), 233
Legitimate peripheral participation, 466
LightSIDE, 266, 455, 456
Linguistic Data Consortium, 457
Liquid modernity, 154
Local meaning-making, 426
Log-file data
  actions, 579, 580
  body language, 580
  collaboration-as-a-window group, 574
  collaboration-as-learning, 575–578
  collaboration-for-distal-outcomes group, 574
  collaboration-for-proximal-outcomes, 575, 577, 578
  computational analysis, 580
  computational methods, 570, 572, 573
  computer-based simulation, 575
  constructive interaction, 574
  contingency analysis, 571
  convergent conceptual change, 575
  CSCL, 570, 574, 579, 580
  CSILE, 574
  design-based research, CSCL, 576, 577
  design conjecture, 577
  discourse-oriented systems, 570
  discourse units, 578
  ENA, 579
  epistemic frame theory, 578
  epistemic stances, 573
  human learning, 580
  individual minds, 573
  input data, 571
  learning analytics, 571, 576, 579, 580
  modalities, 579
  multilayer artificial neural networks, 580
  multimodal learning analytics, 579, 580
  sequential data, 570
  technical tools, 570
  theoretical conjecture, 577
  user activities, 570
Log-file data mining, 439
Longitudinal assessment, 455
M
Machine learning, 176, 599
Management tools, 450, 451, 454
Mass collaboration, 57, 169
  bidirectional stimulation, 173
  definition, 169
  interpersonal processes, 168
  learning, 166
  scenario, 172
  web gathering, 165
Massive collaboration
  Dragon Swooping Cough, 376, 377
  preventative measures, 377
  virtual epidemic, 376
  virtual game, 377
  Whyville, 375
Massive data, 545
Massive multiplayer online games, 372
Massive open online courses (MOOCs), 155, 599
  collaboration, 171–173
  crowd-sourced Q&A, 166
  discussion forum, 176
  FutureLearn, 171
  graph modeling, 167
  learning at scale, 166
  levels of scale, 171
  teacher education, 172
Meaning unit, 222
Media artifacts, 206
Media dependency, 190, 428
Mediated cognition (Vygotsky's theory), 30–31
Mediating artifacts, 243, 246
Mediation, 30
Metacognition, collaborative learning, 12
  adaptable collaboration scripts, 285
  collaborative/transactive costs, 283
  content monitoring, 285
  co-regulation, 286
  CSCL, 282, 289
  definition, 284
  emotion regulation, 288
  group learning, 285
  instructional method, 283
  metacognitive awareness, 288
  regulation modes, 282
  sequential analysis, 285, 289
  shared mental model, 283
  shared regulation, 286
  shared regulatory activities, 285
  socially shared regulation, 285–287
  SRL, 284, 287
  student activities, 283
  technological supports, 288
  temporal analysis, 287, 289
  transactional activities, 288
  transactive activities, 283
  transactive discourse, 283
Meta-cognitive learning, 227
Metacognitive monitoring
  collaborative learning, 283, 286
  collaborative processes, 288
  co-regulation, 286
  individual learning, 284
  internal individual mental process, 284
  internal process, 288
  regulated learning phase, 284
  regulatory activities, 289
  temporal analysis, 287
Metagaming activities, 377
Meta-knowledge, 456
Methodological alignment, 488, 489, 491, 492
Methodological practices, 65, 66
Methodological stances in CSCL
  and analytic techniques, 54
  DBR, 56
  individualism, 54
  interaction analysis, 54–56
  mass collaboration, 57
  predefined dimensions, 54–56
Micro-creativity, 228
Microgenetic moment, 150
Minerva's assessment/analysis tools, 455
Minerva's tools, 452, 454, 456
Missing referents, 210
Mixing methods, CSCL, 72–74
Mobile application, 87, 227
Mobile City Science (MCS), 400, 401
Mobile eye-tracking, 652
Mod The Sims 2 (MTS2), 382
Modern robotic construction kits, 409, 411
Monologic utterance, 231
Monologic voices, 222
Monologism, 220–222
Monophonic music, 224
Motion sensing, CSCL
  collaborative experience, 633
  vs. gaze sensing, 634
  student learning experience, 632
  theoretical frameworks, 635
Multidisciplinarity, 75
Multifeature extraction, 636
'Multifocal' lenses, 224
Multilevel structural equation models (ML-SEM), 542
Multimodal analytics, 435
Multimodal learning analytics (MMLA), 439
  multimodal sensors, 626, 627
  sentograph, 627
Multimodal sensing, 627, 637
Multiplayer Online Battle Arena (MOBA), 381
Multiple representations, 354, 362, 364
Multiple sequential orders analysis, group practices
  adoption identification, 210, 211
  computer-supported analysis, 211
  content logging, 208
  ESDA, 208
  mutually compositional units, 208
  segment description, 209, 210
  segmentation, 209
  structural description, 208
Multivocal analysis, 177
Multivocality, 223
Music-related image, 230
N
Narrative immersion, 391, 394, 396, 401
NATO-funded workshop, 149
Natural language processing, 231, 554, 557, 559, 561, 562, 564
Neolithic revolution, 146
Network Society, 224
Nonaka's SECI model, 263
NovoEd approach, 173
O
Object-driven activities, 242
Object-oriented collaboration
  activity theory, 246
  boundary objects, 246
  CSCL environments, 244
  CSCL tools, 244
  design objects, 251
  design process, 251
  digital fabrication technologies, 251
  digitalization, 246
  digital technologies, 245
  educational institutions, 252
  educational practices, CSCL, 245
  epistemic objects, 246, 247
  human activities, 244
  human beings, 245
  human-cognitive architecture, 245
  human intelligence, 244
  intermediary objects, 246
  intersubjectivity, 242
  knowledge-building theory, 247
  knowledge creation, 243, 250, 252
  knowledge-laden artifacts, 242
  knowledge practices, 244, 247
  knowledge society, 252
  knowledge work, 247
  mediating artifacts, 246
  socially emergent and nonlinear process, 244
  socio-cultural approaches, 242, 252
  sociomateriality, 242
Object-oriented knowledge practices, 244
Object-orientedness, 242, 246, 248, 253
Objects, 241, 242, 246
Observable behaviors, 626
Online argumentation environment LASAD, 359
Online chats, 536
Online communities, 151
Online course system, 114
Online digital video analysis tool, 646
Online environment, 450
Online science simulation, 47
Online verbal communication, 648
Open Educational Resources (OER), 111
Orchestration graphs, 454
Organization for Economic Co-operation and Development (OECD), 518
Organizational learning, 245
"Others' learning"
  ASSISTments, 418
  BKT, 417
  heuristics, 417
  open-learner models, 417
  pedagogical agents, 417
  Purdue Course Signal System, 418
Out-of-school settings, 131
Outside activity, 189
Overarching methodologies, 16
P
Pedagogic adaptation, 436
Pedagogical agents
  and avatars, 414
  collaborative learning, 413, 415
  human-like appearances, 410
  programs, 410
  and robots, 415
  social metaphor, 411
Pedagogical innovations, 124
Pedagogical robots, 411
Pedagogic models, 249
Pedagogy of the oppressed, 225
Petri Nets, 546
Piaget's and Vygotsky's theoretical approaches, 147
Piazza, 155
PKA tool, 305
Place-based AR
  actional immersion, 401
  emancipatory immersion, 401
  environmental detectives, 400
  GPS, 400
  long-term actional immersion, 401
  MCS, 400
  narrative immersion, 401
  sensory immersion, 400, 401
  social immersion, 401
Political quietism, 474
PolyCAFe, 231, 562
Polyphony, 229
  analysis, 231, 560–562
  construction, 229
  model, 221, 224, 228–231, 553, 560, 561
  music, 221, 224, 560
  piece, 224
Popper–Lakatos–Thagard constellation, 263
Post hoc visualization, 435
Post-humanist approaches, 24
Power analysis, 507
Predefined dimensions, 54–56
Pressey's Testing Machine, 408, 411
Principled-based approaches, 150
Principled practical knowledge (PPK), 487
Problematization, 305, 306
Problem-solving assessment, 518–520, 524, 526, 527
Procedural consequentiality, 205, 210
Processes of Participation, 9, 10
Process mining, 573
Process-oriented analysis methods, 573
Process-oriented roles, 321
Productive convergence, 426
Product-oriented roles, 321
Programme for International Student Assessment (PISA), 518
Programming language, 409
Progressive research programs, 263
"ProJo", 414
Promisingness, 430
Provocation, 447
Pseudoevidence, 187
Psychometric values, 523
Pulitzer prize-winning Panama Papers investigation, 165
Purdue Course Signal System, 418
Q
Qualifiers, 186
Qualitative analytic techniques, 72
Qualitative methods, 72, 73, 75
"Quantitative Ethnography", 579
Quantitative methods, 69, 71–73
(Quasi-)experimental designs, CSCL research
  causal relationship, 498, 499
  cause-and-effect relationships, 499
  collaboration scripts, 498, 499
  concepts, 498
  dependent variables
    collaboration script, 498
    collaborative outcome level, 505
    collaborative process level, 504–505
    individual outcome level, 505
    individual process level, 504
  external validity, 500
  independent and dependent variables, 501
  independent variables
    collaboration vs. individual learning, 502
    collaboration script, 498
    computers, learning, 502, 503
    learning environments and scaffolds, 503, 504
  internal validity, 500
  learning analytics research, 510
  methodology, 510
  mixed methods, 508, 509
  multilevel analysis, 506
  open science, 510
  problem-solving tasks, 501
  psychological science, 509
(Quasi-)experimental designs (cont.)
  quantitative measures, 500
  random assignment, 499, 500
  registered reports, 510
  reproducibility, 509
  statistical power, 507, 508
Quick Helper, 171
R
Rainbow framework, 189
Random assignment, 499, 500
Rationalism, 29
Rebuttal, 186
reCaptcha, 165, 175
Reddit Place canvas, 163
Reflection, 306
Reflective design-based research methodology, 456
Reflective structuration, 128, 271
Regulation, 47, 52, 55
Relationism, 52
Representational learning
  cognitive and sociocultural perspectives, 355
  cognitive dyadic, 354
  cognitive perspectives, 354
  collaboration, 356
  collaborative activities, 362–363
  collaborative learning, 354, 362
  computational, 356
  distributed cognition, 355
  dyna-linked representations, 357
  face-to-face collaboration, 360
  flexibly assigned, 357
  functional roles, 355
  information-processing approaches, 354
  innovation, 355
  joint construction, 358, 359
  knowledge communication, 356
  methodological and context-related factors, 363
  MOOCs, 363
  multi-touch tables, 356
  qualitative, 357
  quantitative, 357
  sociocultural, 355
  technologies, 361
  verbalizations, 356
Representational practice, 206
Representations, 410
Research challenges, CSCL, 74, 75
Research methods, CSCL
  case studies, 69
  classroom settings, 70
  data sources and analysis, 70–72
  DBR methods, 69
  descriptive designs, 69
  experimental designs, 68
  mixed methods research designs, 69
  qualitative research designs, 69
  quantitative designs, 68
  randomized experimental/quasi-experimental designs, 69
  statistical tests, 68
  strategies for inquiry, 68
  traditional assessments, 70
Resource sharing, 456
Rich microworld, 438
Robot
  and agents, 409
  collaborative partner, 410
  education, 408–409
  learner-centered interactions, 412–418
  linking theory-practices, 418, 419
  similarities, 409
  social metaphors, 410, 412
Robotics technology, 410
Roles for Structuring Groups for Collaboration, 12
Roles, structuring CSCL
  a priori assigned vs. emergent roles, 321
  assignment, 325
  automated analysis, 327
  behaviors, 326
  collaborative processes, 324
  conversational functions, 321
  design science, 325
  face-to-face classroom discussions, 320
  group process, 320
  individual's responsibilities, 316, 322
  learning analytics, 326, 327
  macro level, 322
  meso level, 316, 322
  micro level, 316, 322
  moderator role, 325
  networking technologies, 316
  online/distance learning environments, 327
  pedagogical approach, 324
  process-oriented roles, 321
  product-oriented roles, 321
  role rotation, 325
  social participatory, 323
  starter–wrapper technique, 324
  student characteristics, 325
  students, 320, 322, 323
  technology, 326
  visualization, 327
  vocal characteristics, 327
Roomquake, 152
S
SAIL digital infrastructure, 152
Scaffolding, 317
Scaffolding tools, 188
Scalable CSCL innovations, 133
Scale-harnessing approach, 175
Scale-reduction approach, 173
School communities, 153
School–UNiversity–Government (SUNG), 131
Scientific practices, 250
Scientometric analysis, 48
SCOPUS database search, 48
Scratch online community, 152, 171, 173
Script, 317, 318
Script-based methods, 416
Scripted collaboration, 306
Scripting approaches, 150
Script Theory of Guidance (SToG), 13, 271, 337, 339–343, 348, 449
Segmentation, 209
Segment microanalysis, 209
Self-directed learning capacity, 112
Self-other learning
  cognitive implicature, 415
  dialogic discussion practices, 417
  dialogue-rich discussions, 416
  dialogue-rich instruction, 416
  early automated instructions, 415
  early pedagogical agents, 416
  face-to-face whole-class discussions, 417
  pedagogical agents, 415
  professional learning interventions, 417
  scaffold group discussions, 415
  script-based methods, 416
  sensemaking process, 416
  speech acts, 415
  tutorial dialogue agents, 416
Self-regulated learning (SRL), 284, 286–288, 573
Self-regulation, 47, 48
Self-representation, 361
Semantic relatedness, 190
Sensemaking, 47, 176
Sensor modalities, 626
Sensory immersion, 390, 391, 394, 396, 401
Sensory, Actional, Narrative, Social, Emancipatory (SANSE), 392, 400
Sentograph, 627
Sequential analysis, 211, 536
Sequential pattern mining, 573
Sequentiality, 206
Shared gaze visualizations, 630
Shared knowledge, 212, 356, 358, 364
Shared meaning making, 628
Shared regulation, 47, 286
Shared regulatory activities, 285
Shared representations, 358, 362–364
Shared spaces, 202
Shared tasks, 457
Sims-focused online communities, 382
Simulations, 353, 356
Single-minded artifacts, 408
Single-player video games, 372
Situated learning, 148
Skinner's Teaching Machine, 408, 411
Small-group practices, 200
Small-scale collaborations, 383
Social coding, 171
Social comparison orientation, 304
Social constructivism, 29, 223, 595
Social discourse, 230
Social game-play settings, 373
Social group awareness tools, 301–304
Social immersion, 391, 395, 396, 400, 401
Social inequality, 613
Social interaction, 146, 243
Social learning, 592
Socially explicit systems, 412
Socially implicit systems, 412
Socially indifferent machines, 411
Socially shared regulation, 285–287
Social media, 164, 599
Social metaphors, 410, 412, 413, 419
Social network analysis (SNA), 225, 231, 545, 558–561, 570, 572, 577, 578
Social position, 615
Social practices, 200
Social processes, 447
Social relation, 189
Social sciences, 245
Social–semantic networks, 264
Social space, 301
Social systems, 192
Society for Learning Analytics Research (SoLAR), 48
Socio-cognitive conflicts, 342, 343, 347
Socio-cognitive theories, CSCL, 32
Sociocultural approach, 148, 226
Sociocultural perspective, 148, 154
Sociocultural research approach, 456
Socio-cultural tradition, 226
Socioculturally minded thinking, 148
Socioculturally oriented CSCL research, 93, 94
Socio-digital technologies, 37
Sociograms, 558
Sociological accounts, 146
Sociomateriality, 242
Sociotechnical environment, 400
Socio-technical infrastructure, 175
Sociotechnical systems, 47, 174
Software system Belvedere, 188
Sophisticated artifacts, 418
Space-based AR
  actional immersion, 399
  ARIEL, 397–399
  digital augmentation, 398
  digital information, 397
  informal learning, 397
  magnetic force field, 398
  narrative time scale, 399
  smaller scale spaces, 397
  social immersion, 400
Space Physics and Aeronomy Research Collaboratory (SPARC), 153
Spatial organization, 190
Spatial turn, 153–155
Speaking, 185
S3 project, 418
StackOverflow, 167, 171, 174
Stakeholders, 450
Standard VLE/LMS, 450
State-of-the-art scholarship, 153
Statistical analysis
  analytic issues
    group differences, 541
    indirect effect and measurement error, 542
    latent process, 542
    later group/individual outcomes, 543
    multiple target events, 542
    parallel chats and trees, 541
    pivotal events, 541, 542
    task difficulty, 541
    time periods, 542
  conditional probability, 536
  online chats, 536
  regressions, 536, 537
  SDA, 543
  sequential analysis, 536
  sequential processes, 534, 535
  statistical strategies and analytic issues, 540
  VAR, 537–538
Statistical and Stochastic Analysis of Sequence Data, 18
Statistical discourse analysis (SDA), 76, 543
Statistical power, 507, 508
Stigmergic collaboration, 174, 176
Stigmergy, 174
Stochastic analysis
  DBNs, 544, 545
  HMM, 538–540, 544
  sequential processes, 535
Stratagem, 187
Strict scripts, 416
Structure discovery methods, 426
Structured interdependence, 317
Structuring
  cognitive psychology, 320
  collaboration scripts, 318
  CSCL, 319
  definition, 316, 317
  history, 319
  instructional intervention, 318
  regulating, 319
  roles, 316
  scaffolding, 317
  scripting, 317–319
  structured interdependence, 317, 319
Student-driven collaborative processes, 128
Subject-specific learning tools, 227
Subreddits, 164
Supportive architectures, learning
  DBIR efforts, 129
  design-based research, 130
  professional learning opportunities, 130
  scaling pedagogical innovations, 129
  teacher codesign, 130
Support learning processes, 448
Sustainability, 124
Sustainability and scalability
  adoption, 123
  conceptualizations, 122
  CSCL innovations, 8, 126
  CSCL practices, 124
  CSCL programs, 126
  design challenges, 123
  design perspective, 123
  educational efforts, 123
  implementation research, 124
  innovation and reform, 123
  pedagogical innovations, 124
  scaling educational innovations, 125
Sustainable CSCL models
  boundary-crossing designs, 127
  discipline-specific resources, 129
  environments, 128, 129
  knowledge-building processes, 127
  learning interactions, 127
  principle-based approach, 126
Sustainable out-of-school practices, 133
  online computer platform, 131
Symmetric knowledge advancement, 264
Synchronous/asynchronous collaborative learning interventions, 454
Systematic review, 78
T
Tablet computers, 227
Tacit knowledge, 204
Tacit learning, 204
Taking Citizen Science to Schools, 153
Tangible user interface (TUI), 652
Target learning environment, 451
Task management, 189
Teachable Agent system, 413
Teacher-centered presentations, 201
Team-based games, 374
Technical puzzle, 433
Technical spheres, 449
Technocentric, 446
Technological affordances, 201, 227, 448
Technological artifacts, 409, 413
Technological capacity, 95
Technology/tools/interventions relationships, 447
Technology-enhanced learning communities, 457
Technology-mediated interaction, 206
Technology-mediated learning, 242
Technology-rich collaborative learning environments, 408
"Telesphere Mask", 394
Temporal analysis, 287
Temporal proximity, 190, 428
Text-based interactions, 455
Text-mining technique, 578
The Sims Resource (TSR), 382
Theoretical baggage, 448
Theoretical frameworks, 449
Theory and design principles, 451
Theorycrafting, 381
Thick description, 150
Touch table technology, 227
Toulmin's approach, 191
Traditional data analysis methods, 155
Transactive activities, 283, 343, 348
Transactive discourse, 430
Transactivity
  collaborative learning, 339, 342
  instantiation, 340
  meta-analysis, 342, 344
  scaffolding generic collaboration skills, 347
Trialogical learning
  artifacts, 248
  Bakhtin's theory, 248
  collaboration, 243
  collaborative processes, 248
  cultural–historical activity theory, 248
  design objects, 251
  design process, 251
  digital collaborative technologies, 246
  digital fabrication technologies, 251
  DPs, 247–250
  educational institutions, 249, 250
  engineering practices, 250
  human activities, 244
  intermediary artifacts, 243
  knowledge-building approach, 247, 249
  knowledge-creating activity, 250
  knowledge-creating learning experiments, 250
  knowledge-creation metaphor, 242, 243, 249, 250
  knowledge-creation processes, 249, 252
  knowledge society, 252
  object-driven inquiry, 253
  object-orientedness, 242, 253
  pedagogic models, 249
  the practice turn, 244, 245
  scientific practices, 250
  social interaction, 243
  spatial transformation, 248
  Wikipedia, 250
Trialogical Learning and Object-Oriented Collaboration, 11
Triangulation, 614
Turn-taking system, 205
Tutorial dialogue agents, 416, 417
U
Understanding Interaction between Technology and Learning, 13
Unobservable processes, 202
Unpacking dialogism, 222, 223
Uptakes, 76, 189, 206
Upvote/downvote system, 174
V
Vector auto-regression (VAR), 537–538
Video analysis tools, 646
Videoconferencing, 453
Video data analysis
  advantages, 646, 647
  augmented reality, 654–656
  automatic analyses methods, 653, 654
  challenges, 648
  collaborative learning processes, 644
  collaborative theoretical approaches, 644
  CSCL, 644, 647, 649
  digital, 645
  eye-tracking, 644, 652, 653
  learning processes, 645, 647
  learning science, 643, 644, 647
  mixed-methods strategy, 651
  monitoring activities, 649
  nonverbal behaviors, 649
  online interactions, 648
  online verbal communication, 648
  qualitative approach, 651
  quantitative methods, 644
  social interactions, 648, 651, 653, 654, 656
  time-based video segmentation, 649
  tool, 645
  transcripts, learning episodes, 651
  verbal interactions, 651
  video workflow model, 646
  virtual reality, 644, 654–656
  web-based video tool, 651
  WebDIVER, 646
Video games
  collaborative learning, 372, 373, 375
  commercial games, 384
  designed-for-emergence, 381
  digitally based collaborations, 383
  directionality, 380, 381
  educational design, 383
  player–creator communities, the Sims, 382
  single social space, 372
Video recorded nonverbal interactions, 649, 653, 654
Video recorded social interactions, 648, 651, 653, 654, 656
Virtual epidemic, 376
Virtual high schools (VHS), 109
Visualizations, 76, 77, 297
Virtual learning environment (VLE), 450
Virtual materiality, 252
Virtual Math Teams (VMT), 122, 128
Virtual reality (VR), 14, 58, 252, 391, 654, 656
  actional immersion, 394
  CO2, 394
  emancipatory immersion, 397
  K12 education, 394
  narrative immersion, 394, 396
  sensory immersion, 394, 396
  social immersion, 395, 396
  Telesphere Mask, 394
Virtual retakers, 112
Virtual worlds
  collaborative learning, 372, 375
  CSCL, 372, 374
  designed-for-emergence, 381
  directionality, 380
  Dragon Swooping Cough, 376
  educational design, 383
  educational game, 372
  massive populations, 374
  massively multiplayer online games, 374
  multiple scales, 384
  player–creator communities, sims, 382
  players/participants, 380
  small "g" games, 377
  small-scale collaborations, 383
  social practices, 378
  Whyville, 375, 376, 378
Visual representations, 362
Voices, 220, 222
Vygotsky's educational theory, 185
Vygotsky's model of mediation, 225
W
Wallcology, 152
Warrant, 186
Web-based technologies, 570
WebCollage, 454, 456
WebDIVER, 646
Wikipedia, 146, 250
Wikipedia's WikiProjects, 171, 175
Wikis, 227
Y
YouTube, 379
YouTube videos, 167, 379
Z
Zone of proximal development (ZPD/ZpD), 30, 226