LNCS 14041
Panayiotis Zaphiris Andri Ioannou (Eds.)
Learning and Collaboration Technologies 10th International Conference, LCT 2023 Held as Part of the 25th HCI International Conference, HCII 2023 Copenhagen, Denmark, July 23–28, 2023 Proceedings, Part II
Lecture Notes in Computer Science

Founding Editors: Gerhard Goos, Juris Hartmanis
Editorial Board Members: Elisa Bertino, Purdue University, West Lafayette, IN, USA; Wen Gao, Peking University, Beijing, China; Bernhard Steffen, TU Dortmund University, Dortmund, Germany; Moti Yung, Columbia University, New York, NY, USA
The series Lecture Notes in Computer Science (LNCS), including its subseries Lecture Notes in Artificial Intelligence (LNAI) and Lecture Notes in Bioinformatics (LNBI), has established itself as a medium for the publication of new developments in computer science and information technology research, teaching, and education. LNCS enjoys close cooperation with the computer science R & D community; the series counts many renowned academics among its volume editors and paper authors, and collaborates with prestigious societies. Its mission is to serve this international community by providing an invaluable service, mainly focused on the publication of conference and workshop proceedings and postproceedings. LNCS commenced publication in 1973.
Panayiotis Zaphiris · Andri Ioannou Editors
Learning and Collaboration Technologies 10th International Conference, LCT 2023 Held as Part of the 25th HCI International Conference, HCII 2023 Copenhagen, Denmark, July 23–28, 2023 Proceedings, Part II
Editors Panayiotis Zaphiris Cyprus University of Technology Limassol, Cyprus
Andri Ioannou Cyprus University of Technology Limassol, Cyprus CYENS Nicosia, Cyprus
ISSN 0302-9743 ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-031-34549-4 ISBN 978-3-031-34550-0 (eBook)
https://doi.org/10.1007/978-3-031-34550-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
Human-computer interaction (HCI) is acquiring an ever-increasing scientific and industrial importance, as well as having more impact on people’s everyday lives, as an ever-growing number of human activities are progressively moving from the physical to the digital world. This process, which has been ongoing for some time now, was further accelerated during the acute period of the COVID-19 pandemic. The HCI International (HCII) conference series, held annually, aims to respond to the compelling need to advance the exchange of knowledge and research and development efforts on the human aspects of design and use of computing systems.

The 25th International Conference on Human-Computer Interaction, HCI International 2023 (HCII 2023), was held in the emerging post-pandemic era as a ‘hybrid’ event at the AC Bella Sky Hotel and Bella Center, Copenhagen, Denmark, during July 23–28, 2023. It incorporated the 21 thematic areas and affiliated conferences listed below.

A total of 7472 individuals from academia, research institutes, industry, and government agencies from 85 countries submitted contributions, and 1578 papers and 396 posters were included in the volumes of the proceedings that were published just before the start of the conference; these volumes are listed below. The contributions thoroughly cover the entire field of human-computer interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. These papers provide academics, researchers, engineers, scientists, practitioners, and students with state-of-the-art information on the most recent advances in HCI.

The HCI International (HCII) conference also offers the option of presenting ‘Late Breaking Work’, and this applies both for papers and posters, with corresponding volumes of proceedings that will be published after the conference. Full papers will be included in the ‘HCII 2023 - Late Breaking Work - Papers’ volumes of the proceedings to be published in the Springer LNCS series, while ‘Poster Extended Abstracts’ will be included as short research papers in the ‘HCII 2023 - Late Breaking Work - Posters’ volumes to be published in the Springer CCIS series.

I would like to thank the Program Board Chairs and the members of the Program Boards of all thematic areas and affiliated conferences for their contribution towards the high scientific quality and overall success of the HCI International 2023 conference. Their manifold support in terms of paper reviewing (single-blind review process, with a minimum of two reviews per submission), session organization, and their willingness to act as goodwill ambassadors for the conference is most highly appreciated.

This conference would not have been possible without the continuous and unwavering support and advice of Gavriel Salvendy, founder, General Chair Emeritus, and Scientific Advisor. For his outstanding efforts, I would like to express my sincere appreciation to Abbas Moallem, Communications Chair and Editor of HCI International News.

July 2023
Constantine Stephanidis
HCI International 2023 Thematic Areas and Affiliated Conferences
Thematic Areas
• HCI: Human-Computer Interaction
• HIMI: Human Interface and the Management of Information

Affiliated Conferences
• EPCE: 20th International Conference on Engineering Psychology and Cognitive Ergonomics
• AC: 17th International Conference on Augmented Cognition
• UAHCI: 17th International Conference on Universal Access in Human-Computer Interaction
• CCD: 15th International Conference on Cross-Cultural Design
• SCSM: 15th International Conference on Social Computing and Social Media
• VAMR: 15th International Conference on Virtual, Augmented and Mixed Reality
• DHM: 14th International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management
• DUXU: 12th International Conference on Design, User Experience and Usability
• C&C: 11th International Conference on Culture and Computing
• DAPI: 11th International Conference on Distributed, Ambient and Pervasive Interactions
• HCIBGO: 10th International Conference on HCI in Business, Government and Organizations
• LCT: 10th International Conference on Learning and Collaboration Technologies
• ITAP: 9th International Conference on Human Aspects of IT for the Aged Population
• AIS: 5th International Conference on Adaptive Instructional Systems
• HCI-CPT: 5th International Conference on HCI for Cybersecurity, Privacy and Trust
• HCI-Games: 5th International Conference on HCI in Games
• MobiTAS: 5th International Conference on HCI in Mobility, Transport and Automotive Systems
• AI-HCI: 4th International Conference on Artificial Intelligence in HCI
• MOBILE: 4th International Conference on Design, Operation and Evaluation of Mobile Communications
List of Conference Proceedings Volumes Appearing Before the Conference
1. LNCS 14011, Human-Computer Interaction: Part I, edited by Masaaki Kurosu and Ayako Hashizume
2. LNCS 14012, Human-Computer Interaction: Part II, edited by Masaaki Kurosu and Ayako Hashizume
3. LNCS 14013, Human-Computer Interaction: Part III, edited by Masaaki Kurosu and Ayako Hashizume
4. LNCS 14014, Human-Computer Interaction: Part IV, edited by Masaaki Kurosu and Ayako Hashizume
5. LNCS 14015, Human Interface and the Management of Information: Part I, edited by Hirohiko Mori and Yumi Asahi
6. LNCS 14016, Human Interface and the Management of Information: Part II, edited by Hirohiko Mori and Yumi Asahi
7. LNAI 14017, Engineering Psychology and Cognitive Ergonomics: Part I, edited by Don Harris and Wen-Chin Li
8. LNAI 14018, Engineering Psychology and Cognitive Ergonomics: Part II, edited by Don Harris and Wen-Chin Li
9. LNAI 14019, Augmented Cognition, edited by Dylan D. Schmorrow and Cali M. Fidopiastis
10. LNCS 14020, Universal Access in Human-Computer Interaction: Part I, edited by Margherita Antona and Constantine Stephanidis
11. LNCS 14021, Universal Access in Human-Computer Interaction: Part II, edited by Margherita Antona and Constantine Stephanidis
12. LNCS 14022, Cross-Cultural Design: Part I, edited by Pei-Luen Patrick Rau
13. LNCS 14023, Cross-Cultural Design: Part II, edited by Pei-Luen Patrick Rau
14. LNCS 14024, Cross-Cultural Design: Part III, edited by Pei-Luen Patrick Rau
15. LNCS 14025, Social Computing and Social Media: Part I, edited by Adela Coman and Simona Vasilache
16. LNCS 14026, Social Computing and Social Media: Part II, edited by Adela Coman and Simona Vasilache
17. LNCS 14027, Virtual, Augmented and Mixed Reality, edited by Jessie Y. C. Chen and Gino Fragomeni
18. LNCS 14028, Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management: Part I, edited by Vincent G. Duffy
19. LNCS 14029, Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management: Part II, edited by Vincent G. Duffy
20. LNCS 14030, Design, User Experience, and Usability: Part I, edited by Aaron Marcus, Elizabeth Rosenzweig and Marcelo Soares
21. LNCS 14031, Design, User Experience, and Usability: Part II, edited by Aaron Marcus, Elizabeth Rosenzweig and Marcelo Soares
22. LNCS 14032, Design, User Experience, and Usability: Part III, edited by Aaron Marcus, Elizabeth Rosenzweig and Marcelo Soares
23. LNCS 14033, Design, User Experience, and Usability: Part IV, edited by Aaron Marcus, Elizabeth Rosenzweig and Marcelo Soares
24. LNCS 14034, Design, User Experience, and Usability: Part V, edited by Aaron Marcus, Elizabeth Rosenzweig and Marcelo Soares
25. LNCS 14035, Culture and Computing, edited by Matthias Rauterberg
26. LNCS 14036, Distributed, Ambient and Pervasive Interactions: Part I, edited by Norbert Streitz and Shin’ichi Konomi
27. LNCS 14037, Distributed, Ambient and Pervasive Interactions: Part II, edited by Norbert Streitz and Shin’ichi Konomi
28. LNCS 14038, HCI in Business, Government and Organizations: Part I, edited by Fiona Fui-Hoon Nah and Keng Siau
29. LNCS 14039, HCI in Business, Government and Organizations: Part II, edited by Fiona Fui-Hoon Nah and Keng Siau
30. LNCS 14040, Learning and Collaboration Technologies: Part I, edited by Panayiotis Zaphiris and Andri Ioannou
31. LNCS 14041, Learning and Collaboration Technologies: Part II, edited by Panayiotis Zaphiris and Andri Ioannou
32. LNCS 14042, Human Aspects of IT for the Aged Population: Part I, edited by Qin Gao and Jia Zhou
33. LNCS 14043, Human Aspects of IT for the Aged Population: Part II, edited by Qin Gao and Jia Zhou
34. LNCS 14044, Adaptive Instructional Systems, edited by Robert A. Sottilare and Jessica Schwarz
35. LNCS 14045, HCI for Cybersecurity, Privacy and Trust, edited by Abbas Moallem
36. LNCS 14046, HCI in Games: Part I, edited by Xiaowen Fang
37. LNCS 14047, HCI in Games: Part II, edited by Xiaowen Fang
38. LNCS 14048, HCI in Mobility, Transport and Automotive Systems: Part I, edited by Heidi Krömker
39. LNCS 14049, HCI in Mobility, Transport and Automotive Systems: Part II, edited by Heidi Krömker
40. LNAI 14050, Artificial Intelligence in HCI: Part I, edited by Helmut Degen and Stavroula Ntoa
41. LNAI 14051, Artificial Intelligence in HCI: Part II, edited by Helmut Degen and Stavroula Ntoa
42. LNCS 14052, Design, Operation and Evaluation of Mobile Communications, edited by Gavriel Salvendy and June Wei
43. CCIS 1832, HCI International 2023 Posters - Part I, edited by Constantine Stephanidis, Margherita Antona, Stavroula Ntoa and Gavriel Salvendy
44. CCIS 1833, HCI International 2023 Posters - Part II, edited by Constantine Stephanidis, Margherita Antona, Stavroula Ntoa and Gavriel Salvendy
45. CCIS 1834, HCI International 2023 Posters - Part III, edited by Constantine Stephanidis, Margherita Antona, Stavroula Ntoa and Gavriel Salvendy
46. CCIS 1835, HCI International 2023 Posters - Part IV, edited by Constantine Stephanidis, Margherita Antona, Stavroula Ntoa and Gavriel Salvendy
47. CCIS 1836, HCI International 2023 Posters - Part V, edited by Constantine Stephanidis, Margherita Antona, Stavroula Ntoa and Gavriel Salvendy
https://2023.hci.international/proceedings
Preface
In today’s knowledge society, learning and collaboration are two fundamental and strictly interrelated aspects of knowledge acquisition and creation. Learning technology is the broad range of communication, information, and related technologies that can be used to support learning, teaching, and assessment, often in a collaborative way. Collaboration technology, on the other hand, is targeted to support individuals working in teams towards a common goal, which may be an educational one, by providing tools that aid communication and the management of activities as well as the process of problem solving. In this context, interactive technologies not only affect and improve the existing educational system but become a transformative force that can generate radically new ways of knowing, learning, and collaborating.

The 10th International Conference on Learning and Collaboration Technologies (LCT 2023), affiliated to HCI International 2023, addressed the theoretical foundations, design and implementation, and effectiveness and impact issues related to interactive technologies for learning and collaboration, including design methodologies, developments and tools, theoretical models, and learning design or learning experience (LX) design, as well as technology adoption and use in formal, non-formal, and informal educational contexts.

Learning and collaboration technologies are increasingly adopted in K-20 (kindergarten to higher education) classrooms and lifelong learning. Technology can support expansive forms of collaboration; deepened empathy; complex coordination of people, materials, and purposes; and development of skill sets that are increasingly important across workspaces in the 21st century. The general themes of the LCT conference aim to address challenges related to understanding how to design for better learning and collaboration with technology, support learners to develop relevant approaches and skills, and assess or evaluate gains and outcomes. To this end, topics such as extended reality (XR) learning, embodied and immersive learning, mobile learning and ubiquitous technologies, serious games and gamification, learning through design and making, educational robotics, educational chatbots, human-computer interfaces, and computer-supported collaborative learning, among others, are elaborated in the LCT conference proceedings. Learning (experience) design and user experience design remain a challenge in the arena of learning environments and collaboration technology. LCT aims to foster a continuous dialog while synthesizing current knowledge.

Two volumes of the HCII 2023 proceedings are dedicated to this year’s edition of the LCT 2023 conference. Part I focuses on topics related to the design of learning environments, the learning experience, technology-supported teaching, and supporting creativity, while Part II covers XR and robotic technologies in learning, as well as virtual, blended, and hybrid learning.

Papers of these volumes are included for publication after a minimum of two single-blind reviews from the members of the LCT Program Board or, in some cases, from
members of the Program Boards of other affiliated conferences. We would like to thank all of them for their invaluable contribution, support, and efforts.

July 2023
Panayiotis Zaphiris Andri Ioannou
10th International Conference on Learning and Collaboration Technologies (LCT 2023)
Program Board Chairs: Panayiotis Zaphiris, Cyprus University of Technology, Cyprus, and Andri Ioannou, Cyprus University of Technology and Research Center on Interactive Media, Smart Systems and Emerging Technologies (CYENS), Cyprus

Program Board:
• Fisnik Dalipi, Linnaeus University, Sweden
• Camille Dickson-Deane, University of Technology Sydney, Australia
• David Fonseca Escudero, La Salle Ramon Llull University, Spain
• Francisco José García-Peñalvo, University of Salamanca, Spain
• Aleksandar Jevremovic, Singidunum University, Serbia
• Elis Kakoulli Constantinou, Cyprus University of Technology, Cyprus
• Tomaž Klobučar, Jožef Stefan Institute, Slovenia
• Birgy Lorenz, Tallinn University of Technology, Estonia
• Nicholas H. Müller, University of Applied Sciences Würzburg-Schweinfurt, Germany
• Anna Nicolaou, Cyprus University of Technology, Cyprus
• Antigoni Parmaxi, Cyprus University of Technology, Cyprus
• Dijana Plantak Vukovac, University of Zagreb, Croatia
• Maria-Victoria Soulé, Cyprus University of Technology, Cyprus
• Sonia Sousa, Tallinn University, Estonia
• Sara Villagrá-Sobrino, Valladolid University, Spain
The full list with the Program Board Chairs and the members of the Program Boards of all thematic areas and affiliated conferences of HCII 2023 is available online at:
http://www.hci.international/board-members-2023.php
HCI International 2024 Conference
The 26th International Conference on Human-Computer Interaction, HCI International 2024, will be held jointly with the affiliated conferences at the Washington Hilton Hotel, Washington, DC, USA, June 29 – July 4, 2024. It will cover a broad spectrum of themes related to Human-Computer Interaction, including theoretical issues, methods, tools, processes, and case studies in HCI design, as well as novel interaction techniques, interfaces, and applications. The proceedings will be published by Springer. More information will be made available on the conference website: http://2024.hci.international/.

General Chair
Prof. Constantine Stephanidis
University of Crete and ICS-FORTH
Heraklion, Crete, Greece
Email: [email protected]
https://2024.hci.international/
Contents – Part II
XR for Learning and Education

Enhancing Usability in AR and Non-AR Educational Technology: An Embodied Approach to Geometric Transformations . . . 3
Samantha D. Aguilar, Heather Burte, Zohreh Shaghaghian, Philip Yasskin, Jeffrey Liew, and Wei Yan

Why the Educational Metaverse Is Not All About Virtual Reality Apps . . . 22
Mike Brayshaw, Neil Gordon, Francis Kambili-Mzembe, and Tareq Al Jaber

An Exploratory Case Study on Student Teachers’ Experiences of Using the AR App Seek by iNaturalist When Learning About Plants . . . 33
Anne-Marie Cederqvist and Alexina Thorén Williams

Introducing Dreams of Dali in a Tertiary Education ESP Course: Technological and Pedagogical Implementations . . . 53
Maria Christoforou and Fotini Efthimiou

Implementation of Augmented Reality Resources in the Teaching-Learning Process. Qualitative Analysis . . . 66
Omar Cóndor-Herrera and Carlos Ramos-Galarza

Teachers’ Educational Design Using Adaptive VR-Environments in Multilingual Study Guidance to Promote Students’ Conceptual Knowledge . . . 79
Emma Edstrand, Jeanette Sjöberg, and Sylvana Sofkova Hashem

Didactics and Technical Challenges of Virtual Learning Locations for Vocational Education and Training . . . 95
Thomas Keller, Martin Berger, Janick Michot, Elke Brucker-Kley, and Reto Knaack

WebAR-NFC to Gauge User Immersion in Education and Training . . . 115
Soundarya Korlapati and Cheryl D. Seals

Evaluation of WebRTC in the Cloud for Surgical Simulations: A Case Study on Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST) . . . 127
William Kwabla, Furkan Dinc, Khalil Oumimoun, Sinan Kockara, Tansel Halic, Doga Demirel, Sreekanth Arikatla, and Shahryar Ahmadi

Building VR Learning Material as Scaffolding for Design Students to Propose Home Appliances Shape Ideas . . . 144
Yu-Hsu Lee and Sheng-Wei Peng

Digital Interactive Learning Ecosystems in Metaverse: Educational Path Based on Immersive Learning . . . 161
Yuan Liu, Wei Gao, and Yao Song

3D Geography Course Using AR: The Case of the Map of Greece . . . 170
Ilias Logothetis, Iraklis Katsaris, Myron Sfyrakis, and Nikolas Vidakis

Educational Effect of Molecular Dynamics Simulation in a Smartphone Virtual Reality System . . . 183
Kenroh Matsuda, Nobuaki Kikkawa, Seiji Kajita, Sota Sato, and Tomohiro Tanikawa

The Effect of Changes in the Environment in a Virtual Space on Learning Effectiveness . . . 199
Ryoma Nakao and Tatsuo Nakajima

Perceived Effects of Mixed Reality in Distance Learning for the Mining Education Sector . . . 212
Stefan Thurner, Sandra Schön, Martin Ebner, Philipp Leitner, and Lea Daling

Developing an Augmented Reality-Based Interactive Learning System with Real-Time Location and Motion Tracking . . . 227
Ching-Yun Yu, Jung Hyup Kim, Sara Mostowfi, Fang Wang, Danielle Oprean, and Kangwon Seo

Study on VR Science Educational Games Based on Participatory Design and Flow Theory . . . 239
Zhang Zhang and Zhiyi Xu

Learning with Robots

Collaborative Learning with Social Robots – Reflections on the Novel Co-learning Concepts Robocamp and Robotour . . . 255
Aino Ahtinen, Aparajita Chowdhury, Nasim Beheshtian, Valentina Ramirez Millan, and Chia-Hsin Wu

Can a Humanoid Robot Motivate Children to Read More? Yes, It Can! . . . 271
Janine Breßler and Janett Mohnke

Learning Agile Estimation in Diverse Student Teams by Playing Planning Poker with the Humanoid Robot NAO. Results from Two Pilot Studies in Higher Education . . . 287
Ilona Buchem, Lewe Christiansen, Susanne Glissmann-Hochstein, and Stefano Sostak

Thinking Open and Ludic Educational Robotics: Considerations Based on the Interaction Design Principles . . . 300
Murilo de Oliveira, Larissa Paschoalin, Vitor Teixeira, Marilia Amaral, and Leonelo Almeida

Authoring Robot Presentation Behavior for Promoting Self-review . . . 316
Kenyu Ito and Akihiro Kashihara

Praise for the Robot Model Affects Children’s Sharing Behavior . . . 327
Qianxi Jia, Jiaxin Lee, and Yi Pang

Assessment of Divergent Thinking in a Social and Modular Robotics Task: An Edit Distance Approach at the Configuration Level . . . 336
Louis Kohler and Margarida Romero

Welcome to the University! Students’ Orientation Activity Mediated by a Social Robot . . . 350
Gila Kurtz and Dan Kohen-Vacs

Designing Pedagogical Models for Human-Robot-Interactions – A Systematic Literature Review (SLR) . . . 359
Moshe Leiba, Tomer Zulhian, Ido Barak, and Ziv Massad

Using Augmented Reality and a Social Robot to Teach Geography in Primary School . . . 371
Christina Pasalidou, Nikolaos Fachantidis, and Efthymia Koiou

Complementary Teaching Support with Robot in English Communication . . . 386
Takafumi Sato and Akihiro Kashihara

Analyzing Learners’ Emotion from an HRI Experiment Using Facial Expression Recognition Systems . . . 396
Hae Seon Yun, Heiko Hübert, Johann Chevalère, Niels Pinkwart, Verena V. Hafner, and Rebecca Lazarides

Virtual, Blended and Hybrid Learning

Requirements Analysis to Support Equal Participation in Hybrid Collaboration Settings in Higher Education . . . 411
Arlind Avdullahu, Thomas Herrmann, and Nikol Rummel

Learning Spaces in Higher Education: A Systematic Literature Review . . . 431
Eirini Christou, Antigoni Parmaxi, Anna Nicolaou, and Eleni Pashia

From Face-to-Face Learning to Virtual Learning: Students’ Perspective . . . 447
Omar Cóndor-Herrera, Mónica Bolaños-Pasquel, Carlos Ramos-Galarza, and Jorge Cruz-Cárdenas

Blended Learning and Higher Education: A Bibliometric Analysis . . . 456
Jorge Cruz-Cárdenas, Javier Parra-Domínguez, Ekaterina Zabelina, Olga Deyneka, and Carlos Ramos-Galarza

Preliminary Validation of ENGAME: Fostering Civic Participation and Social Inclusion Through an E-Learning Game . . . 466
Alicia García-Holgado, Andrea Vázquez-Ingelmo, Sonia Verdugo-Castro, Francisco José García-Peñalvo, Elisavet Kiourti, Daina Gudoniene, Maria Kyriakidou, Liliana Romaniuc, Katarzyna Rak, Leo Kraus, and Peter Fruhmann

Assessing the Impact of Using Python to Teach Computational Thinking for Remote Schools in a Blended Learning Environment . . . 482
Lakshmi Preethi Kamak and Vijay Mago

Analysis of Relationship Between Preparation and Classroom Activities of Flipped Classroom Using Worksheets . . . 501
Tatsuya Kawakami, Yasuyuki Sumi, Taku Yamaguchi, and Michiko Oba

Mobile Device-Based Interactions for Collocated Direct Voting in Collaborative Scenarios . . . 520
Romina Kühn, Karl Kegel, Felix Kallenbach, Mandy Korzetz, Uwe Aßmann, and Thomas Schlegel

Powergaming by Spamming in a Learning Game . . . 538
Nafisul Kiron, Mehnuma Tabassum Omar, and Julita Vassileva

Exploring the Factors Affecting Learning Satisfaction in MOOC: A Case Study of Higher Education in a Developing Country . . . 551
Kanitsorn Suriyapaiboonwattana and Kate Hone

The Analysis of Student Errors in ARIN-561 – An Educational Game for Learning Artificial Intelligence for High School Students . . . 570
Ning Wang, Eric Greenwald, Ryan Montgomery, and Maxyn Leitner

An Analytical Framework for Designing Future Hybrid Creative Learning Spaces: A Pattern Approach . . . 582
Dan Zhu and Yeqiu Yang

Author Index . . . 599
Contents – Part I
Designing Learning Experiences

Security and Privacy in Academic Data Management at Schools: SPADATAS Project . . . 3
Daniel Amo-Filva, David Fonseca Escudero, Mónica V. Sanchez-Sepulveda, Alicia García-Holgado, Lucía García-Holgado, Francisco José García-Peñalvo, Tihomir Orehovački, Marjan Krašna, Igor Pesek, Emanuela Marchetti, Andrea Valente, Claus Witfelt, Ivana Ružić, Karim Elia Fraoua, and Fernando Moreira

The Rallye Platform: Mobile Location-Based Serious Games for Digital Cultural Heritage . . . 17
Jean Botev, Sandra Camarda, and Claude Ohlhoff

An e-Learning Application for Children Suffering from Autism . . . 32
Shubham Choudhary, Supriya Kaur, Abhinav Sharma, and Swati Chandna

Two-Phases AI Model for a Smart Learning System . . . 42
Javier García-Sigüenza, Alberto Real-Fernández, Rafael Molina-Carmona, and Faraón Llorens-Largo

Learning System for Relational Algebra . . . 54
Erika Hernández-Rubio, Marco Antonio Rodríguez-Torres, Humberto Vázquez-Santiago, and Amilcar Meneses-Viveros

Design and Simulation of an IoT Intelligent University Campus for Academic Aim . . . 64
Mary Luz Mouronte-López, Ángel Lambertt Lobaina, Elizabeth Guevara-Martínez, and Jorge Alberto Rodríguez Rubio

Behavioral Coding for Predicting Perceptions of Interactions in Dyads . . . 79
Mohammadamin Sanaei, Marielle Machacek, Stephen B. Gilbert, Coleman Eubanks, Peggy Wu, and James Oliver

Towards Accessible, Sustainable and Healthy Mobility: The City of Barcelona as Case Study . . . 91
Mónica V. Sanchez-Sepulveda, David Fonseca Escudero, Joan Navarro, and Daniel Amo-Filva

Augmenting Online Classes with an Attention Tracking Tool May Improve Student Engagement . . . 105
Arnab Sen Sharma, Mohammad Ruhul Amin, and Muztaba Fuad

A Review on Modular Framework and Artificial Intelligence-Based Smart Education . . . 122
Sarthak Sengupta, Anurika Vaish, David Fonseca Escudero, Francisco José García-Peñalvo, Anindya Bose, and Fernando Moreira

Technology and Education as Drivers of the Fourth Industrial Revolution Through the Lens of the New Science of Learning . . . 133
Iulia Stefan, Nadia Barkoczi, Todor Todorov, Ivaylo Peev, Lia Pop, Claudia Marian, Cristina Campian, Sonia-Carmen Munteanu, Patrick Flynn, and Lucía Morales

‘How Do We Move Back?’ – A Case Study of Joint Problem-Solving at an Interactive Tabletop Mediated Activity . . . 149
Patrick Sunnen, Béatrice Arend, Svenja Heuser, Hoorieh Afkari, and Valérie Maquil

Designing a Pedagogical Strategy for the Implementation of Educational Technology in Collaborative Learning Environments . . . 163
Tord Talmo, Robin Støckert, Begona Gonzalez Ricart, Maria Sapountzi, George Dafoulas, George Bekiaridis, Alessia Valenti, Jelena Mazaj, and Ariadni Tsiakara

Discovering Best Practices for Educational Video Conferencing Systems . . . 182
Tord Talmo and Mikhail Fominykh

Experimental Design and Validation of i-Comments for Online Learning Support . . . 201
Jiaqi Wang, Jian Chen, and Qun Jin

Tailoring Persuasive, Personalised Mobile Learning Apps for University Students . . . 214
Xiaoran Zhao, Yongyan Guo, and Qin Yang

Understanding the Learning Experience

Investigating the Critical Nature of HE Emergency Remote Learning Networks During the COVID-19 Pandemic . . . 237
Allaa Barefah, Elspeth McKay, and Walaa Barefah

Decoding Student Error in Programming: An Iterative Approach to Understanding Mental Models . . . 256
Francisco J. Gallego-Durán, Patricia Compañ-Rosique, Carlos J. Villagrá-Arnedo, Gala M. García-Sánchez, Rosana Satorre-Cuerda, Rafael Molina-Carmona, Faraón Llorens-Largo, Sergio J. Viudes-Carbonell, Alberto Real-Fernández, and Jorge Valor-Lucena

What Do Students Think About Learning Supported by e-Schools Digital Educational Resources? . . . 274
Goran Hajdin, Dijana Plantak Vukovac, and Dijana Oreški

A Human or a Computer Agent: The Social and Cognitive Effects of an e-Learning Instructor’s Identity and Voice Cues . . . 292
Tze Wei Liew, Su-Mae Tan, Chin Lay Gan, and Si Na Kew

The Study on Usability and User Experience of Reading Assessment Systems: A Preliminary Research . . . 305
Peiying Lin and Guanze Liao

Learning with Videos and Quiz Attempts: Explorative Insights into Behavior and Patterns of MOOC Participants . . . 321
Bettina Mair, Sandra Schön, Martin Ebner, Sarah Edelsbrunner, and Philipp Leitner

Doctoral Education in Technology-Enhanced Learning: The Perspective of PhD Candidates and Researchers . . . 333
Anna Nicolaou, Maria Victoria Soule, Androulla Athanasiou, Elis Kakoulli Constantinou, Antigoni Parmaxi, Mikhail Fominykh, Maria Perifanou, Anastasios Economides, Luís Pedro, Laia Albó, Davinia Hernández-Leo, and Fridolin Wild

Analyzing Students’ Perspective for Using Computer-Mediated Communication Tools for Group Collaboration in Higher Education . . . 349
Eric Owusu, Adita Kulkarni, and Brittani S. Washington

Exploring Factors Affecting User Perception of Trustworthiness in Advanced Technology: Preliminary Results . . . 366
Iuliia Paramonova, Sonia Sousa, and David Lamas

Am I Like Me? Avatar Self-similarity and Satisfaction in a Professional Training Environment . . . 384
Monika Pröbster, Ronja-Debora Tomaske-Graff, Doris Herget, Martina Lucht, and Nicola Marsden

Mapping the Factors Affecting Online Education During the Pandemic in Greece: Understanding the Importance of Designing Experiences Through Different Cultural and Philosophical Approaches . . . 401
Angeliki Tevekeli and Vasiliki Mylonopoulou

Usability Study of a Pilot Database Interface for Consulting Open Educational Resources in the Context of the ENCORE Project . . . 420
Andrea Vázquez-Ingelmo, Alicia García-Holgado, Francisco José García-Peñalvo, and Filippo Chiarello

Technology-Supported Teaching

Digital Skills During Emergency Remote Teaching, for VUCA Environments . . . 433
Carmen Graciela Arbulú Pérez Vargas, Moreno Muro Juan Pablo, Lourdes Gisella Palacios Ladines, Cristian Augusto Jurado Fernández, and Pérez Delgado José Willams

Definition of a Learning Analytics Ecosystem for the ILEDA Project Piloting . . . 444
Miguel Á. Conde, Atanas Georgiev, Sonsoles López-Pernas, Jovana Jovic, Ignacio Crespo-Martínez, Miroslava Raspopovic Milic, Mohammed Saqr, and Katina Pancheva

Scenarios, Methods, and Didactics in Teaching Using Video-Conferencing Systems and Interactive Tools: Empirical Investigation on Problems and Good Practices . . . 454
Md. Saifuddin Khalid, Tobias Alexander Bang Tretow-Fish, and Mahmuda Parveen

Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection . . . 475
Mohammad Khalil and Erkan Er

Choosing a Modern Teaching Approach and Supporting Technology . . . 488
Renata Mekovec

Lesson-Planning Groupware for Teachers: Situated Participatory Design . . . 500
Leandro Queiros, Alex Sandro Gomes, Rosane Alencar, Aluísio Pereira, and Fernando Moreira

Main Gaps in the Training and Assessment of Teamwork Competency in the University Context . . . 517
María Luisa Sein-Echaluce, Ángel Fidalgo-Blanco, and Francisco José García-Peñalvo

Prototyping the Learning Analytics Dashboards of an Adaptive Learning Platform: Faculty Perceptions Versus Designers’ Intentions . . . 531
Tobias Alexander Bang Tretow-Fish, Md. Saifuddin Khalid, and Victor Anton Charles Leweke

Operationalising Transparency as an Integral Value of Learning Analytics Systems – From Ethical and Data Protection to Technical Design Requirements . . . 546
Hristina Veljanova, Carla Barreiros, Nicole Gosch, Elisabeth Staudegger, Martin Ebner, and Stefanie Lindstaedt

Towards Personalized Instruction: Co-designing a Teacher-Centered Dashboard for Learning Engagement Analysis in Blended Learning Environments . . . 563
Han Zhang, Xu Sun, Yanhui Zhang, Qingfeng Wang, and Cheng Yao

Supporting Creativity in Learning

Preliminary Study on Students’ Experiences in Design-Based Interdisciplinary Learning . . . 577
Wenzhi Chen, Dong Xu, and Ying-Shung Lee

Supporting Collaboration in Challenge-Based Learning by Integrating Digital Technologies: Insights from a Design-Based Research Study . . . 588
Caterina Hauser

Evaluating the Development of Soft Skills Through the Integration of Digital Making Activities in Undergraduate Computing Courses . . . 601
Dora Konstantinou, Antigoni Parmaxi, and Panayiotis Zaphiris

AgroEdu Through Co-crafting: Incorporating Minecraft into Co-design Activities for Agricultural Education . . . 619
Xinliu Li and Baosheng Wang

Design and Assessment of a Tool for Improving Creativity and Imagination in School Children . . . 631
Pawan Pagaria, Abhijeet Kujur, and Jyoti Kumar

Digital Fabrication in Arts and Crafts Education: A Critical Review . . . 642
Susanne Stigberg, Fahad Faisal Said, and Daniela Blauhut

Author Index . . . 659
XR for Learning and Education
Enhancing Usability in AR and Non-AR Educational Technology: An Embodied Approach to Geometric Transformations

Samantha D. Aguilar(B), Heather Burte, Zohreh Shaghaghian, Philip Yasskin, Jeffrey Liew, and Wei Yan

Texas A&M University, College Station, TX 77845, USA
{samdyanne,heather.burte,zohreh-sh,yasskin,jeffrey.liew,wyan}@tamu.edu

Abstract. The present study investigates user interaction with the augmented reality (AR) educational application BRICKxAR/T. BRICKxAR/T uses AR to facilitate an embodied approach for learning geometric transformations and their mathematics through a virtual and physical interactive environment, with the goal of supporting cognitive offloading and spatial thinking related to 3D transformations and matrix algebra. Two usability studies were conducted to examine users’ interaction and experience with two versions of the educational application BRICKxAR/T (AR and non-AR). Using both quantitative and qualitative methodology, the usability studies assessed users’ interactions and experience with specific in-app functions, along with a general evaluation of the app’s companion materials (i.e., introduction and app interaction videos). Findings were used to provide recommendations that could be implemented to enhance the application. Implications of the findings for a variety of other educational technologies will be discussed.

Keywords: Augmented Reality · Educational Technology · User Interaction · Spatial Transformations · Matrices · Embodied Learning
1 Introduction

The dynamic relationship between spatial reasoning and learning mathematics is documented as a critical component of innovative thinking (e.g., Lowrie et al. 2017; Mulligan and Woolcott 2015). Many longitudinal studies have shown that spatial reasoning and mathematics can predict creative and scholarly achievements over the lifespan and across multiple disciplines (Bruce et al. 2017; Davis and The Spatial Reasoning Study Group 2015; Uttal et al. 2013). More recent studies continue to highlight the crucial role of spatial reasoning for success in secondary education and further careers (Newcombe et al. 2013; Sinton 2014; Mulligan et al. 2018). Spatial reasoning involves locating, orienting, rotating, decomposing, recomposing, scaling, and recognizing symmetry (Buckley et al. 2018), which are key components for understanding geometric transformations.

Geometric transformations and their associated mathematical theories are a salient and significant concept for students within
Science, Technology, Engineering, and Mathematics (STEM) education. Although the significance of spatial abilities within the geometric transformations domain is considered essential for those within STEM education, the difficulty of learning the content and the associated mathematical logic is a well-known challenge. A potential intermediary that may soften the learning curve is augmented reality.

Augmented reality (AR) technology in education can augment physical reality with virtual information and enhance students’ learning within problem-solving and abstract thinking domains. The fundamental premise of AR is to enable the user to interact with virtual objects immediately and directly and to manipulate them just like a physical object. Thus, AR within education allows users to interact with and visually see abstract concepts (e.g., parameters of transformations), which provides the potential to improve learning outcomes through increased engagement and authentic tactile experiences.

1.1 Embodied Learning

Embodiment combines the physical and psychological processes that individuals experience throughout their authentic, everyday lives. The phenomenon of embodiment is marked by a heightened body-mind awareness of the self with respect to shifts within the environment (see Stolz 2015). Embodied learning thus combines the physical, emotional, and cognitive processes that naturally occur as an individual goes through life. The individual learns to actively enact changes and shifts to their environment through their movements or interactions with stimuli. These shifts are then experienced in, through, with, and because of the body (i.e., the individual engagement). The mindful attention to, and retention of, this process facilitates learning and cognition. As an embodied pedagogical approach, learning focuses on learners’ innate or autonomous competence to build learning processes.

Multiple studies have established the connection between mathematical thinking and learning that emerges through embodied experiences (Alibali and Nathan 2012; de Freitas and Sinclair 2013; Lakoff and Núñez 2000; Radford 2014). For learning complex abstract concepts, such as matrix algebra, embodied learning approaches may facilitate a dynamic physical experience that can provide deeper insight into mathematical theory. The use of interactive AR technology within the classroom can enhance traditional pedagogy focused on technical and theoretical knowledge related to mathematics (Bujak et al. 2013; Wang et al. 2018). AR can afford visual and tactile learning (e.g., immersive and embodied) experiences for students to improve their spatial perception. For undergraduate students, AR technology may allow students to better learn 3D geometric transformation concepts and their representations in higher education classes.

1.2 BRICKxAR/T

This research focuses on the educational AR application BRICKxAR/T for learning geometric transformations and their mathematics through a virtual and physical interactive environment. BRICKxAR/T was developed based on the progressive learning method for learning spatial transformations in three levels of motions, mappings, and functions (see Shaghaghian et al. 2022). This app aims to make the spatial transformations underlying 3D matrix algebra visible and interactive. Furthermore, BRICKxAR/T aims to assist
students in understanding the math concepts behind geometric transformations through visualization of the entries within transformation matrices. BRICKxAR/T consists of two versions – one utilizing AR and one without AR.

AR Version. The AR version of the app displays the dynamic relationship between physical motion and the corresponding mathematical model via the translation or rotation of a physical model and the corresponding transformation matrix, based on the translation vector and the rotation angle. As an intermediary, the AR displays a superimposed virtual model (i.e., green wireframe) on a physical LEGO model through an AR-enabled iOS mobile device (iPad) screen. The AR displays sets of matrices associated with the model and several buttons that can be clicked to provide detailed step-by-step instructions on utilizing the app’s x, y, z-axes sliders needed to perform transformations to the virtual model. The virtual model’s motions are either synched with the physical model or controlled by transformation parameter sliders (i.e., x, y, z values). Users can experiment with the motions of each model, affecting the parameters in the matrices, by moving the LEGO model or manipulating the sliders to transform the superimposed virtual model (i.e., green wireframe).

To use the AR version of the app, users first must register the LEGO model (aligning the virtual and physical models in the 3D space) by starting the app, pointing the camera towards the LEGO model, and then waiting for the registration to complete and the related message to disappear (Fig. 1). Note that compared to (Shaghaghian et al. 2022), the use of the patterned desk surface significantly improved the AR registration and tracking in the new studies.

Once the LEGO model is registered, the user can interact with the app in two ways. First, the user can interact with the LEGO model by physically moving it along or rotating it around the x, y, z-axes (Fig. 2). Second, the user can interact with the virtual model (i.e., green wireframe) by pressing an axis label in the green matrices to pull up the x, y, z-axes translation sliders (e.g., clicking the red x to pull up the x translation slider; Fig. 2) and moving them as desired, or by pressing the axis cycle button (Fig. 3) to bring up and cycle through the x, y, z-axes rotation sliders, marked by a color-coded ‘t’ (left side; Fig. 3), and rotating them as desired. While interacting with the LEGO and virtual model, users can use color codes to identify axes, click through the steps to access the instructions, and visualize how the matrices change with each transformation.

Non-AR Version. The non-AR version of the BRICKxAR/T app displays a virtual scene with two digital models; one is stagnant, representing the point of origin (i.e., the pre-image of a transformation), and the other is a wireframe model (i.e., the image of the transformation) that displays the geometric transformations of the model, which is controlled by the app’s translation and rotation sliders (Fig. 4). Users enact changes to the virtual scene by moving the app’s sliders to create desired geometric transformations. User interaction is focused on manipulating the app’s sliders and rotation features to transform the image (i.e., the green wireframe). As the user interacts with these functions, the matrices shown on the screen change to represent the mathematical logic.
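For reference, the translation and rotation matrices described above can be written in the standard 4 × 4 homogeneous-coordinate form; the sketch below shows a translation by (t_x, t_y, t_z) and a rotation by angle θ about the z-axis (this is the conventional representation, not a reproduction of the app’s exact on-screen layout):

```latex
T(t_x, t_y, t_z) =
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
R_z(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 & 0 \\
\sin\theta & \cos\theta  & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
```

In this form, dragging a translation slider changes only the last-column entries of T, while dragging a rotation slider changes the sine and cosine entries of R_z; this is the motion-to-matrix-entry correspondence the app makes visible.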
Fig. 1. BRICKxAR/T app: Registering the model
Fig. 2. BRICKxAR/T app: AR scene with a physical model and a virtual model.
To use the non-AR version of the app, users can open the app and interact with the virtual model (i.e., green wireframe) by pressing an axis label in the green matrices to pull up the x, y, z-axes translation sliders (e.g., clicking the blue z to pull up the z translation slider; Fig. 5) and moving them as desired, or by pressing the axis cycle button to bring up and cycle through the x, y, z-axes rotation sliders, marked by a color-coded ‘t’, and moving the slider as desired. Users can also navigate the virtual scene from different viewpoints by panning or moving the scene to a different spot on the screen. They can:

1. Zoom in or out on their view of the scene by spreading two fingers apart or pinching them together,
2. Move the camera’s placement in the scene by gently sliding it across the screen with two fingers, and
3. Change their perspective view by quickly sliding their finger to change the camera’s angle.

Like the AR version of the app, participants can use color codes to identify axes, access the instructions, and visualize how the matrices change with each transformation.
Fig. 3. BRICKxAR/T app: AR scene with translation and rotation sliders.
The embodiment of interacting with physical and virtual models and the visualization of the changes in the matrices allow for a deeper understanding of mathematical concepts (Núñez et al. 1999), thereby reducing cognitive load and the need for users to possess strong spatial thinking skills. Using educational technology, such as the BRICKxAR/T app, can make challenging mathematical concepts more accessible for all students.

However, learning to use a new app and technology, like AR, can add to cognitive load if not user-friendly. Thus, this study evaluates the usability of the BRICKxAR/T app and makes recommendations for related educational technologies.
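To make the slider-to-matrix correspondence concrete, the following minimal sketch (Python with NumPy; illustrative only, not the app’s actual implementation, and the function names are hypothetical) builds the same kind of 4 × 4 homogeneous matrices from slider-like values and applies their composition to a model vertex:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix; only the last column varies."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(theta_deg):
    """4x4 homogeneous rotation about the z-axis, angle in degrees."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Simulated slider state: rotate 90 degrees about z, then translate 2 units along x.
M = translation(2, 0, 0) @ rotation_z(90)

# Apply the composed transformation to a vertex in homogeneous coordinates.
vertex = np.array([1.0, 0.0, 0.0, 1.0])
print(M @ vertex)  # -> [2. 1. 0. 1.]: rotated to (0, 1, 0), then shifted to (2, 1, 0)
```

An app of this kind only needs to rebuild T or R_z from the new slider value on each drag and redraw the wireframe with the composed matrix, which is what allows the on-screen matrix entries to update in real time as the user moves.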
Fig. 4. BRICKxAR/T app: non-AR scene with two virtual models (solid and wireframe) and three translation sliders.
Fig. 5. BRICKxAR/T app: non-AR scene with one rotation and three translation sliders.
2 Overview of Usability Tests

Using quantitative and qualitative methods, two usability studies (i.e., benchmark and updated usability tests) examined user interactions with the two versions of the BRICKxAR/T app (AR and non-AR) that teach 3D matrix algebra. The overarching goal of these studies was to improve the user experience with the app and investigate how the level of interactivity (AR versus non-AR) helps in cognitive offloading and spatial thinking related to 3D transformations and matrix algebra.

The benchmark usability test was first conducted to gain insight into the BRICKxAR/T app’s discoverability and usability under its starting conditions. Twelve participants reported on their previous math experiences, completed a pre-test, watched two videos, interacted with either the AR or non-AR version of the app, and then provided feedback on their app experience. After each study, a thematic analysis was conducted because this methodology allows for systematically identifying, organizing, and offering insight into patterns of meaning (themes) across a dataset (Braun and Clarke 2006). The results of the benchmark test were then used to restructure the app’s introduction and demo videos.

The updated usability test was then conducted to assess discoverability and usability after the changes to the video instructions. Like the benchmark usability test, fifteen participants completed the same pre-test, interacted with the AR and non-AR versions of the app, and provided feedback on the app’s usability. Another thematic analysis was performed to identify themes and patterns in user app interactions. Findings from the benchmark and updated usability test thematic analyses were then compared to identify any changes in usability reported after restructuring the video instructions. Based on the results, several recommendations were formulated to maximize usability, enhance app capabilities, and facilitate users’ learning of geometric transformations.
3 Benchmark Usability Test

The benchmark study participants (N = 12) were undergraduate students at a large public university in the United States. All participants were enrolled in an introductory psychology course and signed up to participate through an online research scheduling
system to receive course credit. Participants were randomly assigned to use the AR (N = 6) or the non-AR version of BRICKxAR/T (N = 6). Ten were women and two were men; eight participants were freshmen, three sophomores, and one junior. The majority had previous matrix algebra experience. Participants’ pre-test matrix algebra test accuracy (i.e., correct test answer selected) was moderate, with participants rating low confidence in their answers (Fig. 6).

3.1 Method

Participants completed a pre-test that consisted of demographic information, previous experience with matrix algebra, and a multiple-choice math test on transformation matrices, designed based on learning materials of Khan Academy, with confidence ratings following each question (where “1” represented being “not at all confident” and “5” represented being “very confident” in their answer to each math question). Each experiment condition group followed the same procedure. After completing the pre-test, participants watched a brief introductory video on matrix algebra and the corresponding app demo video (AR or non-AR). After watching the videos, participants were given either the iPad setup with the BRICKxAR/T app and accompanying LEGO model (i.e., AR condition) or just an iPad setup with the BRICKxAR/T app (i.e., non-AR condition).

While interacting with the apps, participants were asked to complete seven tasks related to the app’s functionality (Table 1). As participants interacted with the app to complete each task, they were instructed to think aloud, explaining what they were trying to do and whether it was easy or challenging. After each task, participants were asked to rate how easy or difficult it was for them to complete that task (Table 1) on the Single Ease Question (SEQ), a 7-point rating scale that assesses how difficult users find a task. When a participant provided a rating of less than 5, they were asked to describe why they found the task difficult. Once finished interacting with either the AR or non-AR version of the app, participants completed a post-test with the same multiple-choice math test in randomized order, again with confidence ratings following each question. To assess their overall experience with the app, participants also completed the System Usability Scale (SUS), a measure of usability.

3.2 Results

AR Version. The benchmark AR participants’ post-test matrix algebra test accuracy was moderate, with moderate confidence in their answers, and participants generally reported low usability based on their SUS rating (Fig. 6). Participants’ SEQ scores for each task were also analyzed to gain insight into the app’s functionality (Fig. 7). The SEQ scores indicated that interacting with the app during ‘free play’ or unstructured user-app-interaction time was relatively easy. Four of the six participants rated discoverability above a 5. In regards to registering the LEGO model (i.e., aligning the virtual and physical models in the 3D space), a majority of the participants found this particularly easy to complete, as they simply needed to point the camera and allow the app to register the LEGO model. However, one participant did experience difficulty with the AR during this experiment, as the app had several issues causing the participant to restart the app to register the model, leading to an SEQ rating of one.
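For readers unfamiliar with the SUS, the 0–100 scores reported here follow the standard scoring rule: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5. A minimal sketch of that computation (Python; the response pattern shown is illustrative, not participant data):

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5, yielding a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items: rating - 1.
        # Even-numbered (negatively worded) items: 5 - rating.
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```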
Fig. 6. Benchmark usability: experience with matrix algebra for AR (A) and Non-AR (G), accuracy on the math test for AR (B) and Non-AR (H), and confidence on the math test for AR (C) and Non-AR (I), all measured before interacting with BRICKxAR/T. After interacting with BRICKxAR/T: SUS rating for AR (D) and Non-AR (J), accuracy on the math test for AR (E) and Non-AR (K), and confidence on the math test for AR (F) and Non-AR (L).
Table 1. AR and non-AR app task list

Task | AR Version                                                    | Non-AR Version
1st  | Free App Play (Discoverability)                               | Free App Play (Discoverability)
2nd  | Model Registration                                            | Move wireframe along x-axis
3rd  | Move LEGO along y-axis                                        | Rotate wireframe around z-axis
4th  | Rotate LEGO around z-axis                                     | Reset the wireframe
5th  | Move wireframe along x-axis                                   | Move wireframe along y-axis
6th  | Rotate wireframe around y-axis                                | Rotate wireframe around x-axis
7th  | Move wireframe along z-axis & rotate wireframe around x-axis | Move wireframe along z-axis & rotate wireframe around y-axis
Moreover, performing translations with the LEGO and virtual models was rated notably high in usability. For translating the LEGO model, over half of the participants (N = 4) rated it a six or greater, and all participants rated moving the wireframe a five or greater, with half of the participants rating it a seven. Rotations using the LEGO and virtual models were also rated favorably: most participants (N = 5) rated rotating the LEGO a six or greater, and all participants rated rotating the wireframe a seven. These ratings imply that the translation and rotation functions of the BRICKxAR/T app are intuitive and that performing these tasks was straightforward. However, performing multiple transformations at once appeared difficult, as participants varied widely in the ease of moving and then rotating the wireframe. Regarding the app's mechanics, users reported that registration could have been faster, that the AR did not consistently track the model correctly through the entire experiment, and that the app's translation and rotation sliders were overly sensitive, making it difficult to perform precise transformations. In addition, the lack of movement observed among participants while interacting with the app was of particular concern. The BRICKxAR/T app aims to facilitate the embodied learning of geometric transformations and their mathematical representations; however, we found that users did not spontaneously move around when using the app, even though such movement was implicitly shown in the demo videos. Participants were more likely to interact with the virtual model (i.e., the green wireframe) by using the sliders than by physically moving the LEGO model. Additionally, we documented that accessing the app's functions to their full extent was not intuitive: participants could only find the translation sliders, or complete specific tasks, if they remembered what the demo video had shown. Participants were expected to understand the transformations of the LEGO model about specific axes, yet most often relied on the instructions to complete tasks. They reported that tasks were simple once the instructions were read or a hint was given (e.g., try reading through the instructions more carefully).

Non-AR Version. Like the AR participants, the benchmark non-AR participants' post-test matrix algebra accuracy was moderate, with moderate confidence in their answers, and participants generally reported low usability based on their SUS ratings (Fig. 6). Participants' SEQ scores were also analyzed to gain insight into the functionality of the non-AR version (Fig. 7).
Of the six participants, over half rated discoverability a 5 or greater. Participants also found translating the wireframe easy the first time. However, participants reported lower ease after performing other tasks and then being asked to translate the wireframe again along a different axis: four participants rated the first translation task a 7, but on the second translation only three participants rated the task a 7, with the other half rating it as low as a 4. Conversely, performing rotations on the wireframe more than once appeared to increase ease. For the first rotation, participants' ratings were mixed, with over half (N = 4) rating it a 5 or less, but by the second rotation five of the six participants rated it a 7. Next, participants were asked to realign the wireframe with the stationary virtual model, which was generally rated easy, as five of the six participants rated it a 6 or greater. We note, however, that the majority of participants expressed frustration with slider sensitivity while completing this task, finding it difficult to get all sliders back to zero. Finally, most participants were able to perform multiple transformations together (i.e., translation and rotation), as five of the six participants rated this task a 6 or greater. Moreover, we found that participants in the non-AR condition referenced the instructions when needed but felt overwhelmed by their length and detail. Participants needed help finding the translation sliders, or could not complete specific tasks, if they had forgotten what the demo video had shown, often skimming through the instructions to see what they could find. Unique to the non-AR version, participants reported frustration when trying to change their point of view in the virtual scene and realign the green wireframe with the stationary model. Participants noted that the app's functions were overly sensitive: it was difficult to zoom in or out, pan the camera view, and get all sliders back precisely to zero. However, participants appeared to have an easier time understanding the matrix-wireframe relationship in the non-AR condition, as only one matrix multiplication changed (for the wireframe model), and the labeling of each axis and rotation was helpful. A thematic analysis was performed to code and identify recurring themes in users' experience and understanding while interacting with the app. The findings from the analysis were used to restructure the instructional and demo videos to address users' feedback by providing explicit instruction and examples on how to use the app's functions. Using the updated materials, the usability study was replicated with a new set of participants.
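Because both measures recur throughout this paper, it may help to recall how they are scored. The SEQ needs no computation (it is a single 7-point item per task), while the SUS combines ten 5-point items into a 0-100 score using Brooke's standard formula. The sketch below illustrates that formula in Python; the response values are hypothetical and are not data from this study.

```python
def sus_score(responses):
    """Score one participant's ten SUS items (each rated 1-5).

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Hypothetical participant with mixed ratings -> a middling score of 62.5,
# below the commonly cited benchmark of 68 for "average" usability.
print(sus_score([4, 2, 4, 3, 3, 3, 4, 2, 3, 3]))
```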
4 Redesign of Video Instructions

The results of the benchmark usability test were analyzed to formulate recommendations for restructuring the app's corresponding introduction and demo videos. The overarching aim of redesigning the video instructions was to provide a clear and effective way for users to understand the basic concepts of matrix algebra and a guide to the BRICKxAR/T app's functions, thus maximizing users' experience with the app. For the matrix algebra introduction video, we found that users preferred animations to explain concepts such as translations and rotations about specific axes. However, we also noted that users needed help understanding trigonometric terms, such as sine and cosine, in relation to transformations. Users also requested additional explanations of 3D transformations and
Table 2. BRICKxAR/T Benchmark AR and Non-AR Thematic Analysis Results

Theme                        | Sub Theme              | Recommendation
App Intuitive Functionality  | Slider Access          | Provide a more intuitive way for users to access translation sliders
                             | Axis Information       | Give rotation and scale sliders corresponding labels (x, y, z) in addition to color coding
                             | Functional Information | Provide clear labels/instructions related to each procedure/transformation
App features                 | Confusion of Purpose   | Provide a "zero out" function that allows users to restart
                             | Unclear functionality  | Reformat rotation and translation buttons for clear understanding of function
Table 3. BRICKxAR/T Updated AR and Non-AR Thematic Analysis Results and Recommendations

Theme                | Sub Theme                 | Recommendation
Sensitivity          | Overly responsive sliders | Reduce slider sensitivity
App Intuitive        | Ease of Use               | Improve intuition of opening/closing translation sliders
                     | Functional information    | Provide axis information for rotation and scale sliders
App features         |                           | Provide a "zero out" function that allows users to restart
                     | Functional information    | Provide clear/concise labels & instructions related to each procedure or transformation
Confusion of Purpose | Unclear functionality     | Reformat rotation and translation buttons for clear understanding of function
their related matrices, as many had little to no previous experience with these concepts. To address these issues, in redesigning the matrix algebra introduction video we used additional animations to further explain or emphasize concepts and terms, while intentionally minimizing the stimuli on each slide to reduce cognitive load. Additionally, we focused on building up concepts as the video progressed and used consistent, defined terminology matching that used in the AR and non-AR demo videos. For the original AR app demo video, we found that most users enjoyed a third-person perspective of interaction with the app but needed further explanation and examples of the app's functions. Thus, when redesigning the AR demo video, we provided more in-depth explanations of the geometric transformations being performed on screen and elaborated on how to complete functions. Finally, users of the original non-AR app demo reported uncertainty about how to use the app's functions adequately.
Fig. 7. AR Benchmark Usability SEQ scores for discoverability (A), registration (B), moving the LEGO (C), rotating the LEGO (D), moving the wireframe (E), rotating the wireframe (F), and both moving and rotating the wireframe (G). Non-AR Benchmark Usability SEQ scores for discoverability (H), moving the wireframe along one axis (I), rotating the wireframe along one axis (J), realigning the wireframe (K), moving along another axis (L), rotating along another axis (M), and both moving and rotating the wireframe (N).
Therefore, in redesigning the non-AR demo video, we highlighted each app function by giving examples of specific transformations performed for each axis and used a third-person POV to display person-app interactions.
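For readers unfamiliar with the content these videos cover, the transformations in question are the standard homogeneous-coordinate matrices. The NumPy sketch below is our illustration of that textbook material, not code from the BRICKxAR/T app.

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous matrix translating a point by (tx, ty, tz)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(theta):
    """4x4 homogeneous matrix rotating about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

# Rotate a point 90 degrees about z, then move it 2 units along x --
# the kind of combined transformation participants performed in Task 7.
p = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point (x, y, z, 1)
print(translation(2, 0, 0) @ rotation_z(np.pi / 2) @ p)  # ~ [2., 1., 0., 1.]
```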
5 Updated Usability Tests

As in the benchmark usability test, participants completed a pre-test that consisted of demographic information, previous experience with matrix algebra (2D and 3D), and a multiple-choice math test on transformation matrices designed from Khan Academy learning materials, with a confidence rating following each question. Each experimental condition group followed the same procedure. After completing the pre-test, participants watched a brief introductory video on matrix algebra and a corresponding app demo video (AR or non-AR). After watching the videos, participants were given either the iPad setup with the BRICKxAR/T app and the accompanying LEGO model (i.e., AR condition) or just an iPad setup with the BRICKxAR/T app (i.e., non-AR condition). Participants were then asked to complete seven tasks related to the app's functionality (Table 1). As participants interacted with the app to complete each task, they were instructed to think aloud, explaining what they were trying to do and whether it was easy or challenging. After each task, participants answered an SEQ, rating on the 7-point scale how easy or difficult it was to complete that task (Table 1). After the experiment, participants completed a post-test with the same multiple-choice math test, presented in randomized order, each question again followed by a confidence rating. To assess their overall experience with the app, participants also completed the System Usability Scale (SUS).

5.1 Methods

The updated study participants (N = 15) were mostly women, with twelve women and three men. Twelve participants were freshmen, two were sophomores, and one was a post-baccalaureate student. The majority had previous 2D matrix algebra experience (Fig. 8). Participants' pre-test matrix algebra accuracy was moderate, with participants rating low confidence in their answers. The updated test participants viewed the updated introduction video. They were randomly assigned to either the AR condition (N = 8) or the non-AR condition (N = 7), in which they watched the related updated demo video and then interacted with the app. Like the benchmark test participants, the updated test participants interacted with the apps, performed the same tasks related to the app's functions (Table 1), provided feedback, and rated the level of usability for each task.

5.2 Results

A thematic analysis was performed to code and identify recurring themes, to understand users' experience interacting with the app, and to assess improvements. In addition, pre- and post-test and SEQ scores were analyzed to explore trends in users' experiences; however, statistical significance cannot be inferred due to the small sample size and low statistical power. Based on the analysis, study findings were categorized and prioritized to respond directly to issues reported by users.
The difference in user interactions between the two studies revealed that the changes to the videos increased the use of in-app instructions and increased app feature usability (Table 2). Both usability studies revealed that changes to app feature interactivity could further bolster usability. Before the changes to the videos, participants skimmed instructions only if prompted, struggled to pull up the transformation sliders without instructions, and lacked confidence in performing transformations on the LEGO model. Simply by changing the video instructions, we found that users reported the app to be more intuitive and easier to use. With the video changes, participants referenced instructions before interacting with the app, utilized instructions more effectively, and reported an understanding of the wireframe-model-matrix relationship for both the AR and non-AR versions of the app. When trying to perform specific transformations, users quickly found the transformation sliders and used the wireframe model functions to reference or double-check transformations before performing them on the LEGO model. However, even with the updated videos, we noted several persisting concerns.

Updated AR Version. Updated AR participants' post-test matrix algebra accuracy was moderate, with moderate confidence in their answers (Fig. 8). Participants' SEQ scores were analyzed to identify any changes in reported usability (Fig. 9). As in the benchmark usability test, the updated AR participants rated discoverability and registration as relatively easy to complete. Of the eight participants, six rated discoverability a 5 or higher and half (N = 4) rated registration a 7. The updated AR participants found translations notably easier than the benchmark participants did: six participants rated translating the LEGO model a 7 and six rated translating the wireframe a 7. Participants were able to complete rotations with similar ease: the majority (N = 7) rated rotating the LEGO model a 5 or greater, and this was consistent for rotating the wireframe. We hypothesize that this may be due to the changes in the AR demo video, as the redesign included showing specific translations and rotations of the LEGO and virtual model (i.e., the green wireframe) along each axis. Most notably, compared to the benchmark test participants, the updated test participants found completing a translation and then a rotation on the wireframe relatively easy, as the majority rated the task a five or higher. According to SUS ratings, participants still reported generally low usability (Fig. 8). Though there were notable improvements in users' interaction with the app (e.g., using sliders effectively; referencing the wireframe), participants still appeared to favor moving the wireframe over moving the LEGO model and reported feeling overwhelmed by the amount of information on the screen and by the matrices constantly changing as they interacted with the app.

Updated Non-AR Version. Updated non-AR participants' post-test matrix algebra accuracy was moderate, with moderate confidence in their answers (Fig. 8). The updated non-AR participants appeared able to access the app's features with more ease than the benchmark test participants (Fig. 9). Over half (N = 5) rated discoverability a 6 or higher, versus four of the benchmark participants rating it only a 5 or greater. Ratings for translating the wireframe the first time were split; however, by the second translation the majority of users (N = 6) rated the task a 6 or higher.
Fig. 8. Updated usability: experience with 2D matrix algebra for AR (A) and Non-AR (G), accuracy on the math test for AR (B) and Non-AR (H), and confidence on the math test for AR (C) and Non-AR (I), all measured before interacting with BRICKxAR/T. After interacting with BRICKxAR/T: SUS rating for AR (D) and Non-AR (J), accuracy on the math test for AR (E) and Non-AR (K), and confidence on the math test for AR (F) and Non-AR (L).
Fig. 9. AR Updated Usability SEQ scores for discoverability (A), registration (B), moving the LEGO (C), rotating the LEGO (D), moving the wireframe (E), rotating the wireframe (F), and both moving and rotating the wireframe (G). Non-AR Updated Usability SEQ scores for discoverability (H), moving the wireframe along one axis (I), rotating the wireframe along one axis (J), realigning the wireframe (K), moving along another axis (L), rotating along another axis (M), and both moving and rotating the wireframe (N).
We observed a similar pattern with rotating the wireframe. Like the benchmark participants, the updated participants struggled with realigning the wireframe, as they experienced issues getting the transformation sliders back to zero. Completing the first rotation was reported as moderately easy, with six participants rating the task between a 4 and a 6; however, on the second rotation all participants rated the task a 5 or greater, with four participants rating it a 7. By continuing to interact with the app, users seemed to perform tasks more efficiently and with greater ease, as reflected in their SEQ scores. Finally, ratings for completing a translation and rotating the wireframe together varied: three participants rated it a 7, two rated it a 6, and the remaining two rated it a 4 and a 5, respectively. Based on SUS ratings, participants reported generally low usability (Fig. 8). Using the qualitative and quantitative data collected from the benchmark and updated usability studies, a thematic analysis was conducted to find recurring themes in user experience. Three general themes were found, concerning the app's sensitivity, its intuitiveness, and confusion about app functionality. Based on the themes and subthemes found, recommendations were formulated to specifically target each area of concern (Table 2). To enhance user-app interaction we recommended the following (one possible implementation of items 1 and 5 is sketched after this list):

1. Reducing slider sensitivity,
2. Reformatting the rotation and translation buttons so users can more clearly see their functionality,
3. Using clear labels and condensed instructions related to each procedure or transformation,
4. Adding axis labels for the rotation sliders instead of a color-coded 't' for all, and
5. Providing a 'zero out' function that allows the user to restart (Table 3).
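To make recommendations 1 and 5 concrete, the following minimal sketch shows one way a developer might damp slider sensitivity and support a 'zero out' reset. The function names, gain constant, and slider dictionary are our illustrative assumptions; this is not BRICKxAR/T source code.

```python
GAIN = 0.25          # hypothetical gain: scale raw slider motion down
SNAP_EPSILON = 0.05  # values this close to zero snap exactly to zero

def damped_slider_value(raw_value):
    """Map a raw slider value to a less sensitive transform parameter.

    Scaling by GAIN < 1 means a large finger movement produces a small,
    precise transformation; snapping near-zero values to zero addresses
    the difficulty, seen in both studies, of returning sliders to zero.
    """
    value = raw_value * GAIN
    return 0.0 if abs(value) < SNAP_EPSILON else value

def zero_out(sliders):
    """Recommendation 5: reset every transformation slider at once."""
    return {name: 0.0 for name in sliders}

print(damped_slider_value(0.8))          # 0.2: damped translation input
print(zero_out({"tx": 1.2, "ry": 0.4}))  # {'tx': 0.0, 'ry': 0.0}
```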
6 Conclusion

Usability tests are essential for identifying the app features that are most confusing and frustrating for users, and for troubleshooting and making impactful improvements to an app. In addition, usability testing can make us more aware of users' limitations, such as needing to read instructions repeatedly or needing straightforward visual cues related to functionality. The present study aimed to assess users' interactions and experience with BRICKxAR/T's specific in-app functions and to provide a general evaluation of the app's companion materials (i.e., the introduction and app-interaction videos). BRICKxAR/T has the potential to enable embodied learning through students' interaction with the physical manipulative in the immersive AR environment and to integrate visualization of geometric and algebraic concepts, synchronized with the movements. First, a benchmark usability study was conducted to investigate the usability of the BRICKxAR/T app's AR and non-AR versions in their standard condition. The benchmark study provided insight into how users interacted with the app and the accessibility of its functions. We then redesigned the app's instructional and demo videos with the aim of addressing the identified issues, such as the need for additional explanation of advanced concepts, clearer instructions on how to complete in-app transformations, and multiple person-app-interaction visual examples of how to access the app's functions. We then conducted an updated usability study with the BRICKxAR/T AR and non-AR versions of the app and the updated videos. The second usability study allowed us to examine changes in user experience and to target continuing issues in user-app interactions. We noted several areas of improvement in usability, such as an easier process for accessing app features, effective use of instructions,
and a better understanding of the mathematical logic related to transformations. Simple changes to instructions can be beneficial in improving usability; however, supporting changes in the app itself are also necessary. Based on the findings from the benchmark and updated usability studies, we were able to provide concrete recommendations that will be utilized in the redesign of the app and that can be generalized to the development of future educational technologies.

6.1 Limitations and Future Research

A noted limitation of our study is that BRICKxAR/T was initially developed to serve undergraduate students taking courses that involve matrix algebra, whereas our usability studies tested user-app interaction only with undergraduate students taking an introductory psychology course. Testing usability with participants currently enrolled in a course that teaches matrix algebra will be completed once the BRICKxAR/T app's efficacy has been established. Additionally, though we used two coders to ensure intercoder reliability, future usability studies should consider using video recordings of participants' interactions. Using video observations, user-app interactions can be examined more thoroughly and in greater detail. In conclusion, for the future development of educational technology, we propose the following:

1. Considering that users tend to scan instructions instead of spending in-depth time reading them, we recommend working towards concise instructions that optimize users' time spent interacting with the app.
2. Though complementary app resources may be appealing, we recommend moving away from reliance on instructional videos or other outside materials, as this would enhance the app's stand-alone usability and decrease the likelihood that users overly rely on such materials to use the app.
3. In conjunction with our second recommendation, creating intuitive and accessible app features can enhance discoverability. Examples of accessible educational applications vary, but generally we recommend creating buttons, sliders, instructions, etc. that users can quickly access and use as needed, minimizing the time spent searching for and figuring out how to perform a desired function.

Thus, developing educational technology interventions guided by these recommendations can enhance usability, aid in cognitive offloading, and drive a user-centered learning experience.

Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant No. 2119549. We appreciate the support from Dr. Dezhen Song and Shu-Hao Yeh in the Department of Computer Science & Engineering and Dr. Francis Quek in the Department of Teaching, Learning & Culture at Texas A&M University.
References

Alibali, M.W., Nathan, M.J.: Embodiment in mathematics teaching and learning: evidence from learners' and teachers' gestures. J. Learn. Sci. 21(2), 247–286 (2012)
Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006)
Bruce, C.D., et al.: Understanding gaps in research networks: using "spatial reasoning" as a window into the importance of networked educational research. Educ. Stud. Math. 95, 143–161 (2017)
Buckley, J., Seery, N., Canty, D.: Investigating the use of spatial reasoning strategies in geometric problem solving. Int. J. Technol. Des. Educ. 29(2), 341–362 (2018). https://doi.org/10.1007/s10798-018-9446-3
Bujak, K.R., Radu, I., Catrambone, R., MacIntyre, B., Zheng, R., Golubski, G.: A psychological perspective on augmented reality in the mathematics classroom. Comput. Educ. 68, 536–544 (2013)
Davis, B., Spatial Reasoning Study Group: Spatial Reasoning in the Early Years: Principles, Assertions, and Speculations. Routledge (2015)
de Freitas, E., Sinclair, N.: New materialist ontologies in mathematics education: the body in/of mathematics. Educ. Stud. Math. 83(3), 453–470 (2013)
Lakoff, G., Núñez, R.: Where Mathematics Comes From, vol. 6. Basic Books, New York (2000)
Lowrie, T., Logan, T., Ramful, A.: Visuospatial training improves elementary students' mathematics performance. Br. J. Educ. Psychol. 87(2), 170–186 (2017)
Mulligan, J., Woolcott, G.: What lies beneath? Conceptual connectivity underlying whole number arithmetic. In: Sun, X., Kaur, B., Novotná, J. (eds.), p. 220 (2015)
Mulligan, J., Woolcott, G., Mitchelmore, M., Davis, B.: Connecting mathematics learning through spatial reasoning. Math. Educ. Res. J. 30, 77–87 (2018)
Newcombe, N.S., Uttal, D.H., Sauter, M.: Spatial development (2013)
Núñez, R.E., Edwards, L.D., Filipe Matos, J.: Embodied cognition as grounding for situatedness and context in mathematics education. Educ. Stud. Math. 39(1), 45–65 (1999)
Radford, L.: The progressive development of early embodied algebraic thinking. Math. Educ. Res. J. 26(2), 257–277 (2014)
Shaghaghian, Z., Burte, H., Song, D., Yan, W.: Learning spatial transformations and their math representations through embodied learning in augmented reality. In: Zaphiris, P., Ioannou, A. (eds.) Learning and Collaboration Technologies. Novel Technological Environments. HCII 2022. LNCS, vol. 13329, pp. 112–128. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-05675-8_10
Sinton, D.S.: Spatial learning in higher education. In: Space in Mind: Concepts for Spatial Learning and Education, p. 219 (2014)
Stolz, S.A.: Embodied learning. Educ. Philos. Theory 47(5), 474–487 (2015)
Uttal, D.H., Miller, D.I., Newcombe, N.S.: Exploring and enhancing spatial thinking: links to achievement in science, technology, engineering, and mathematics? Curr. Dir. Psychol. Sci. 22(5), 367–373 (2013)
Wang, M., Callaghan, V., Bernhardt, J., White, K., Peña-Ríos, A.: Augmented reality in education and training: pedagogical approaches and illustrative case studies. J. Ambient. Intell. Humaniz. Comput. 9, 1391–1402 (2018)
Why the Educational Metaverse Is Not All About Virtual Reality Apps

Mike Brayshaw1, Neil Gordon1(B), Francis Kambili-Mzembe2, and Tareq Al Jaber1

1 School of Computer Science, University of Hull, Hull HU6 7RX, UK
[email protected]
2 University of Malawi, Zomba, Malawi
Abstract. This paper explores how the Metaverse can be used in the context of learning and collaboration. In it we seek to dispel the story that the Metaverse is just another synonym for Virtual Reality and future technology. Instead we argue that the Metaverse is another interface scaffolding story, akin to an interaction metaphor, and can be used in a manner analogous to the Desktop Metaphor. How it is rendered in reality is a very flexible implementation detail. Instead, it should be thought of as a conceptual space and as an information-sharing channel. As long as you are engaging in the Metaverse conceptually, you can be there practically and in reality. As such it can be a low-bandwidth and/or a high-bandwidth information superhighway, depending on the specific mode of interaction. The use of metaphor and sharing has been with us a long time in learning technology and shared/collaborative interaction. So it is wrong to dismiss the aims and objectives of the Metaverse for Education as just an exercise in expensive high-bandwidth Virtual Reality. It is not something that is based on technology for technology's sake. It clearly has a role for those with access to advanced technologies, bandwidth, and hardware. In this paper we argue, however, that this limited view fails to understand the possibilities it opens up for those who do not have such facilities available. Instead the Metaverse provides learning and collaboration possibilities for the many, not the privileged few.

Keywords: Collaborative Learning in Online Environments/CSCL · Human-Computer Interfaces and Technology Support for Collaboration and Learning · Wearable Technologies · Mobile Learning and Ubiquitous Technologies for Learning
1 Introduction

The Metaverse has been proposed as a new utopian future of User Interaction and User Experience (UX/UI). In Mark Zuckerberg's [1] vision of the future, it is a world where people would inhabit an entirely virtual space, parallel to natural reality, but one in which they could engage in all the activities of the real world. In this vision, the metaverse employs high-end virtual realities to realise this utopia. Therefore, the common technologies associated with the Metaverse in these visionary statements are high-specification hardware, software, and bandwidth. Clearly such technology has costs associated with it. It is
the purpose of this paper to challenge these assumptions and to show that, for educational purposes, many of the same goals and affordances can be achieved with less resource-heavy technology and at much lower cost. Such concerns for Learning and Collaboration are timely and have considerable consequences. The Metaverse for Learning and Collaboration can have direct impacts on embedded and immersed learning, ubiquitous learning, and the use of interactive AI and Chatbots, and can provide new physical, social, and cultural locations for interaction and collaborative learning. This paper aims to demonstrate how the new world that the concept of the Metaverse opens up can be made available in much more commonplace and cost-effective ways. This is not to take anything away from high-end, resource-rich technology solutions. For many they will be of great benefit, but we argue here that this benefit is not their sole preserve, and that other implementation routes can be equally rewarding and an important step forward in what we provide our students. VR has been with us for a long time, but it still plays a comparatively small role in UX/UI. Even in computer gaming it is a limited force, with many games opting for lower-technology solutions that give the impression of a larger virtual world. The location of the game does not need full-blown VR in order to realise the player's immersion in the imaginary world and/or collaboration. Here, in the educational context, we will argue that the same ideas can be used to immerse users in a version of the metaverse for learning.
2 The Metaverse Realised as a Metaphor

The metaverse is only a new way of presenting, handling, manipulating, and interacting with information. In HCI we have a long history of using techniques to scaffold the interaction between users and the computer. This might be via a shared imagined space, as in early all-text games, for example. LOGO [2] used the concept of a turtle moving in a space as the basis of interaction, as a way to teach maths and geometry to children, and did so with low-resolution media. Most famously, the Xerox Star system of the late 1970s [e.g. 3] presented us for the first time with a recognizable version of the desktop metaphor. It contained the crucial idea of a graphical user interface with Windows, Icons, Menus, and Pointing (WIMP), allowing direct manipulation of these objects and enabling a What You See Is What You Get (WYSIWYG) interface. From the Apple Lisa onwards these founding principles have underlain the interfaces we see on common computers, in the likes of modern Mac, Windows, and Linux desktop systems. They are representative of the desktop as a metaphor only. Hypertext systems (e.g. HyperCard [4]) and spreadsheets (e.g. VisiCalc [5]) use a visual model that can stand in for what they purport to represent. They do not need to be just like the real thing, and this is enough to enable the types of interaction and ease of use they were designed for. As long as the metaphor is capable of being shared for a given purpose, it fulfils the necessary requirements. How the metaphor is implemented in detail may not be critical as such. If we look at early desktop metaphors, the icons and graphics were crude in comparison with today's, and the world appeared literally black and white. However, they were good enough for the purpose. Many icon referents were not obvious, and their meanings had to be learnt or guessed because of limits on their representations (e.g. the 32×32 bitmaps implementing them), but once these referents had been discovered they could be used and the metaphor worked accordingly.
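Indeed, the turtle metaphor survives essentially unchanged in Python's standard library, and a few lines are enough to reproduce it; the sketch below is our illustration of how little machinery the metaphor needs, not code from any system cited here.

```python
import turtle  # Python's standard-library descendant of the LOGO turtle

# Draw a square: the learner reasons about the geometry ("four equal
# sides, four right-angle turns"), not about pixels or rendering.
t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # move 100 units in the current heading
    t.right(90)     # turn 90 degrees clockwise

turtle.done()  # keep the window open until it is closed
```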
The metaverse can thus be thought of as a virtual machine implementing a new metaphor. In the same way that we can use the desktop metaphor to structure interaction without needing a VR representation of a desktop, from an information perspective we can achieve the metaverse metaphor without requiring VR, thus preserving the interaction benefits of the metaverse metaphor without incurring unneeded implementation costs. As long as you are in a believable metaverse, and able to do the same types of things, the type of implementation that supports this is more open. It has to believably support the metaphor. Many different technologies can do this, and not all of them need be high-end in terms of cost and bandwidth.
3 Social Media and Social Learning

The background origin of the Metaverse lies in social media, and here we are looking to use it for collaborative learning. Thus it is timely to consider social media and how it can be used for effective social learning. Initial social media systems were often very low-bandwidth asynchronous messaging systems. These ranged from early email to early chat boards (e.g. Usenet, the Byte Information Exchange (BIX, [6]), Compuserve, AOL). Contemporary social media systems can be used in the learning context. For example, TikTok is often employed by teachers to present 15-second summaries of teaching session content; in the same vein, students can be encouraged to make short 15-second summaries of important things they have learned from specific lessons. Longer interactions can be supported by tools like Facebook and Twitter, where longer media clips can be easily shared. Groups can be set up and discussion groups created (open or closed), enabling rapid feedback, interaction using polls, and synchronous interactions using Facebook Live (which can be recorded for subsequent further study) or MS Teams. These allow social media tools to be used for social learning. The recent health crisis caused a rapid shotgun marriage between social media and learning, in that schools, universities, and other places of learning were shut down and all activity was moved online. Initially people had to adapt to what was immediately available, e.g. using systems traditionally favoured by gamers, like Discord, for asynchronous chat or synchronous voice or video communication. Over time institutions became used to employing conferencing tools like MS Teams or Zoom to deliver lectures, tutorials, meetings, and other learning activities. Indeed, even now the crisis is over, this has persisted as a mode of delivery to students. One of the lessons here is that the metaphor of a virtual university had to be assembled rapidly, using various off-the-shelf technologies glued together to provide the new virtual school or university. Users were able to adapt to this and accept that they now studied virtually at their school or university. All sorts of applications were rapidly assembled to create the metaphor, from familiar software (e.g. PowerPoint or Skype) to new video-conferencing software and even borrowed PCs. The metaphor could be made to stand up and work. This is the same for the metaverse: its story can be told from many different implementational angles. Whilst learning can be done individually, the development of learning has been as a social activity, from nursery through primary, secondary and further education, and
into higher education (tertiary education). Campus-based education typically depends on classes, peer groups, and social environments. For online learning, distance learning providers utilise social metaphors and approaches to emulate the classroom environment and to frame learning and support education. Social media provides a mechanism to enable the informal discussions and support networks that complement formal learning, akin to the extracurricular groupings and activities of traditional learning. Thus social media platforms can support the learner and their learning. The rise of viable and cost-effective VR technologies offers the opportunity to immerse the learner and provide a new environment in which the learner can develop. This offers the potential for a more flexible approach to learning, which can integrate with Virtual Learning Environments (VLEs), which are not immersive technologies but act as a metaphor for a learning institution. Next, if we break down collaborative learning by interaction types and modes (both human-human and human-AI), we can look at the use of social media and learning in the following collaborative contexts:

• embedded and immersed learning
• Ubilearning, including anytime, anywhere flexible learning
• human-AI collaboration.

To use the metaverse in each of these contexts, we can think of what types of information and media we would need. Each of these types of information and media handling can be satisfactorily done without the need for high-end VR; indeed, many of them are done with existing systems. Our GUIs can do much of this, as can our conferencing systems. In terms of basic facilities, we would need:
• a shared metaphor of interaction,
• shared facts and knowledge about the world,
• a shared set of virtual artefacts and locations,
• a shared virtual machine,
• a mode of working, sharing experiences and communication exchanges.
Whilst each of these types of information handling and manipulation could be done in a high-end virtual reality, they can be, and on an everyday basis are, done by other lower-technology means. Thus, from the perspective of a learning and collaboration metaverse pedagogy, there is no need to base this on high-end devices alone.
4 The Costs and Options for Implementing the Metaphor

Early examples of interactive virtual worlds, where users interacted with each other, employed a textual world (for example the game Dungeons and Dragons – a shared text-based virtual reality). In this section we present some historical examples of how complex metaphors can be implemented via simple systems. In doing so we present the following as stepping stones towards creating virtual worlds that can support the types of activity we need in our educational metaverse. Low-bandwidth VR has been with us a long time and has already been applied in the learning domain. In 1994 The Open University UK ran a Virtual Summer School
(VSS, [7]) that had to use an infrastructure of modems and copper twisted-pair phone technology to create a virtual model of the sort of residential summer school then typically held on the campuses of UK universities. Bandwidth was augmented by the use of mobile phones and turn-taking software. The campus was created as a veneer of software on top of the FirstClass conferencing system. It allowed students to receive lectures and tutorials, have one-to-one teaching sessions, work together in groups, write simple computer programs, and design and run experiments. Additional teaching material was presented via HyperCard tutorials. The whole project used a suite of available software packages, assembled together to tell the story of the Virtual Summer School. An example of a virtual world achieved with simplified graphics is Second Life [8], a 3D virtual world where multiple users can interact. They do so by creating avatars for themselves and interacting with other people's avatars in shared spaces with resources they have created. An internal market is monetized by the "Linden Dollar". It delivers these virtual worlds on a home computer using a web browser and OpenGL technology. It was originally launched in 2003 and at its peak claimed one million users [9]. The important concept here is that people bought into this "reality" and took it for real despite its lower-fidelity multimedia. People were able to immerse themselves in this shared world and interact accordingly. In an educational context, the Department of Information Systems of the University of Sheffield [10] set up an island, 'Infolit iSchool', within Second Life. They have used it to teach real students, to present at conferences, for research purposes as a method of contact and communication with others engaged in this research, and for holding events. In the context of distance education, one claimed benefit is that the co-location of people's avatars in the same virtual world creates a social presence of these other people. This in turn might be a factor in overcoming one of the key issues in distance education, namely loneliness and the feeling of isolation, which lead to poorer performance or, in the worst case, student drop-out. A further potential advantage is that it does not matter which country someone is from: it is the presence of their avatar that matters, meaning that physical location is not an issue. Several other universities have set up their own islands to teach a number of different subjects, including health studies and nursing, history, art and architecture, and Chinese [11]. Another off-the-shelf application is Minecraft, an example of an interactive virtual world with simplified graphics, in which the virtual world is largely made up of "blocks or cubes" [12], and which has been used for educational purposes in a range of subject areas [13]. Although Minecraft is considered a video game [14], there is an education version [13], known as Minecraft Education [15]. According to [16], the "shared virtual space in the metaverse" [ibid, p. 40] enables learners to "better participate in education". In this regard, [17] carried out a study in which Minecraft was used as an educational tool for facilitating "collaborative learning". According to [17], there is an interest in "computer-supported collaborative learning" [ibid, p. 1] within education, since collaboration is considered an essential ability in the modern day.
According to [18], to avoid situations in which educational Information Technology (IT) based content is accessible only to a few teachers and learners, "cross-platform technologies" [ibid, p. 18] should be considered. In this regard, Minecraft is available for
different platforms, including "Windows, Mac, Chromebook, and iPad" [19], and users on different platforms can interact within the same virtual Minecraft environment. This ensures that those who have challenges accessing or using a particular platform have alternative platforms that they could use. Furthermore, Minecraft is also supported on Head Mounted Display (HMD) Virtual Reality (VR) platforms, as indicated by [20], who carried out a study in which Minecraft was used in high school classroom settings via the Oculus Rift HMD VR device. In that study, one activity consisted of learners in a science class tasked to collaboratively "build a model of a plant" [20] within the Minecraft virtual environment, and another consisted of learners in an ICT class "building a virtual reality cafe" [ibid, p. 22]. HMD devices are a category of VR platform [21] in which a virtual environment is projected onto displays situated in front of the user, similar to wearable glasses [22], in which case the user's view of the physical world is occluded by the displays in question [21]. 3D virtual environments projected onto HMDs are considered to provide a more immersive experience than virtual environments projected onto conventional display devices such as computer monitors [21, 23]; within the context of VR, conventional computing devices used in this way are categorised as desktop VR [21]. According to [20] there was a "willingness of most students to collaborate" within the virtual environment simulated in Minecraft via VR, which suggests that an off-the-shelf application like Minecraft, particularly when used in conjunction with VR, is capable of engaging learners to work collaboratively in a shared environment. However, [20] also note that a small number of learners were not comfortable using the HMD VR device, which further supports the notion that cross-platform applications should be considered for educational purposes. In the previously discussed study, [20] state that one of the challenges they faced was accessing the internet in order to use Minecraft, because the school network had blocked access to the Minecraft application, resulting in the need to find an alternative means of accessing the internet. This highlights the need for low-bandwidth technology solutions in which reliance on a persistent internet connection is minimised. Such reliance on the internet would potentially result in accessibility and inclusion challenges in a developing country like Malawi, where, according to the Malawi Ministry of Education [24], only 2.5 percent of primary schools have access to an internet connection. Furthermore, according to the same report [24], 68% (2022, p. 27) of primary schools in Malawi have no access to electricity, while 19% of secondary schools have no access to electricity. Therefore, to account for accessibility and inclusion, low-bandwidth technologies can be considered for areas facing challenges like those in Malawi in relation to technology access and use. Each of the examples in this section shows that believable virtual worlds can be created in different ways.
In several respects, the Virtual Summer School (VSS), the experience in different countries, and the approaches we saw during the Covid-19 health crisis, discussed earlier, share the approach of implementing the story of a virtual school or university by exploiting existing software that can be glued together. Second Life and Minecraft show how believable VR does not need to involve high-end graphical solutions. The important point is that end users need to be able to buy into these stories and
interact and behave as if they believed they were in that virtual world. Thus believability is the key to the low-end educational metaverse.
5 Why Low Cost Bandwidth and Technology Solutions Are Desirable

Low-cost technology solutions can be desirable, particularly in relation to accessibility and inclusion. [25] describes accessibility as "the extent to which an interactive product is accessible to as many people as possible", and refers to inclusion as accommodating "the widest possible number of people". In this regard, high-end, high-cost solutions could create accessibility and inclusion challenges in some areas. Specifically:

• Many places do not currently have the bandwidth, and this may last well into the future. This may be down to a lack of the necessary infrastructure or a lack of service providers. In more remote or lightly populated areas (for example rural locations) there may be a lack of economic incentive to provide the necessary services.
• Many institutions, such as schools, are already challenged for resources and will not be able to afford the costs of high-end kit now or in the future. For instance, in an underdeveloped country like Malawi (Malawi Government, [26]), rural areas are "where poverty levels are high" (Malawi Government, [26]). According to the Malawi Ministry of Education [24], 87 percent (2022, p. 6) of primary schools in Malawi and 80 percent of secondary schools, as identified in the 2022 Malawi Education Statistics Report, are located in rural areas [24]. Therefore, considering the prevalence of poverty in rural areas in Malawi [26], the majority of primary and secondary schools in Malawi would potentially face challenges in accessing high-cost technologies, more so since the cost of ICT services and technologies is considered a challenge in relation to technology use in Malawi [27, 28].
• Many individuals or families will similarly not be able to afford the kit due to financial and other resource challenges. With the shift to flexible learning and working at home, it is important to make this technology available across the range of domestic circumstances. VR headsets and gloves are relatively expensive items, and to be used to their full potential they need higher-end home computers, so just having broadband to the house does not solve any of these problems. If we apply flexible learning to large cohorts then there needs to be adequate provision for all users in all circumstances.
• Low-resource solutions widen the range of potential contexts of MLearning and learning on the move. By not assuming that high-cost and high-bandwidth solutions are the only way of delivering your learning goals, you broaden the potential learner base. If we want learners to be mobile and learn on the move, this can be better realised if we have low-resource-intensive solutions available for our students.
• There is less to learn if you are coming from a known world into a new manifestation of it, rather than into some new, first-time, exploratory space. Many learners only have wide experience with low-technology solutions. Introducing new technology need not necessarily help the user; indeed, new technology can become a barrier, an extra layer that needs to be learnt, making the number of new skills to be mastered much larger than it might otherwise be. Furthermore, more work remains to be done to
show the benefits of all of this for learning. Just throwing technology at a problem because we can does not in itself make the learning experience better. We know this from recorded experiences with TICCIT and PLATO, where providing computer-based learning materials was only beneficial if those materials were pedagogically sound [29]. If we are to ask students to climb a new learning curve, there need to be demonstrable advantages. If low-bandwidth alternatives exist which try to do the same thing without the need to buy and learn new expensive technology, this is an alternative solution route for us to follow. The work presented here is the start of that journey of finding low-overhead analogous or isomorphic solution routes.
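To make the low-bandwidth argument concrete, consider how little data a shared conceptual space actually has to exchange. The sketch below is our illustration, not a protocol from any system cited here: it packs one avatar state update into 20 bytes, so ten users sending ten updates per second consume roughly 2 kB/s — orders of magnitude less than streaming video-grade VR imagery.

```python
import struct

# One avatar state update: user id, position (x, y, z), and heading.
# This wire format is a hypothetical example for sizing purposes only.
UPDATE_FORMAT = "<I3ff"  # uint32 id, three float32 coords, float32 heading

def pack_update(user_id, x, y, z, heading):
    """Serialise one update into a compact 20-byte binary message."""
    return struct.pack(UPDATE_FORMAT, user_id, x, y, z, heading)

def unpack_update(payload):
    """Recover the fields of a packed update."""
    return struct.unpack(UPDATE_FORMAT, payload)

msg = pack_update(7, 1.5, 0.0, -3.25, 90.0)
print(len(msg))            # 20 bytes per update
print(unpack_update(msg))  # (7, 1.5, 0.0, -3.25, 90.0)
```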
6 Case Studies for the Use of a Low-Cost Metaverse for Learning and Collaboration Technologies

The following case study demonstrates how the goals of a high-end metaverse, in the context of Learning and Collaboration Technologies, might be realised by other means. Kambili-Mzembe and Gordon [30] propose an approach in which they showcase how VR technologies can be used to simulate a cross-platform shared virtual environment, for the purpose of school teaching, without relying on the internet or an existing network infrastructure, thereby showcasing the use of low-cost and low-bandwidth technologies that could potentially be applied to the use of shared environments for learning. In that study, Kambili-Mzembe and Gordon present a VR prototype developed to showcase a shared, collaborative, interactive 3D environment, in which a user on a Windows computer, a user on an Android tablet, and a user on an Oculus Quest HMD VR device are able to interact synchronously within the same virtual 3D environment via a local network set up on a router. According to [30], supporting both desktop VR and HMD VR devices in their proposed setup, in a way that does not require any particular type of device when another supported device is available, allows readily available devices to be used, thereby potentially addressing cost-related challenges. Furthermore, they indicate that enabling multi-user functionality via a local network set up on a router allows for collaboration between users while addressing cost-related challenges, since an existing network infrastructure and the internet are not required. The approach proposed in [30] demonstrates how shared 3D virtual environments can potentially be developed and implemented using relatively cost-effective technologies in areas facing challenges like those in Malawi, where, as previously discussed, access to high-end, costly technologies and ICT services is a challenge.

6.1 Evaluation Methods

One of the critical things for an interactive learning environment to be effective is that the interaction metaphor be believable. For example, do users interact with their low-end systems as if they were embedded, just as in high-end systems? Are they able to do the same types of things and exhibit the same types of learning and collaboration? Determining this would be critical in assessing effectiveness. One obvious way of looking at this would be empirical, via gathering user experiences. The problem here
is that because the interaction is potentially new to many users, they may not have the necessary language or observations to answer such qualitative questions. Clearly we could use quantitative measures, like marks and drop-out rates, as measures of academic achievement and engagement. Another approach would be to take existing inspection methods and generalise them. We have looked at the metaverse as a way of interacting, sharing, and behaving, and have sought to achieve the same types of behaviour by lower-technology routes. One way of testing the success of these endeavours is to develop inspection methods that could probe these questions. As [31] generalised Heuristic Evaluation [32, 33] to look at the pedagogical aspects of applications, a possible route forward here would be to develop the necessary heuristics with which to inspect low-bandwidth metaverse solutions and evaluate whether they are fit for purpose.
7 Conclusions and Further Work

This paper started out by defining some of the core critical axes of the Metaverse, starting with its basic definition, its vision of future interaction design, and the key claims of the deliverables it will bring. It then explored how, in the current context, it can be applied to learning and collaborative educational technologies. We then focused on the basic underpinning information- and media-handling concepts. The Metaverse is thus analysed not from a purely technical perspective but from one of information handling, manipulation, and user experience. From this we looked at how it might be delivered, on an information management basis, by different interaction modes. In this way we have sought to demonstrate how the benefits of the Metaverse metaphor may be decoupled from the technology and thereby delivered by a mix of interaction techniques. These techniques may themselves be flexible in their cost, in terms of technology, availability, and bandwidth. This is a desirable property, both as a way of broadening the potential uptake and roll-out of the metaphor, for the flexibility it offers in terms of delivery, and for the potential of mobile and international learning. In this paper we have argued that it is the concept that is important in exploiting the metaverse, more than the implementing technology per se, and that there are different, viable alternatives for realising and implementing it. We are not against high-specification technologies, but the notion that there is only one way of implementing the Metaverse is not the case. The educational Metaverse is not defined by high-end VR technologies alone. Instead we have aimed to demonstrate the possibility of using a far wider range of implementation methods, and we have argued the role of lower-cost technologies that may be employed to achieve similar ends. Indeed, we have shown that it is both timely and appropriate to consider these solution routes in order to widen the reach of the metaverse concept and make it a learning and collaboration technology that all may benefit from in the not too distant future.
References

1. Zuckerberg, M.: Connect 2021 Keynote: Our Vision for the Metaverse. Facebook (2021). https://www.youtube.com/watch?v=Go9eRK8DOf8. Accessed 24 Feb 2023
2. Papert, S.: Mindstorms: Children, Computers, and Powerful Ideas. The Harvester Press, Hemel Hempstead (1980). ISBN 0-85527-163-9
3. Johnson, J., et al.: The Xerox Star: a retrospective. Computer 22(9), 11–26 (1989)
4. Atkinson, H.: Apple Computer (1988)
5. Bricklin, D.: VisiCalc, Apple II Computers (1979)
6. Byte Information Exchange (BIX): The Birth of BYTENet. BYTE, October 1984. https://archive.org/details/byte-magazine-1984-10/page/n7/mode/2up?view=theater. Accessed 18 Feb 2023
7. Eisenstadt, M., Brayshaw, M., Hasemer, T., Issroff, K.: Teaching, learning and collaborating at an open university virtual summer school. In: Dix, A., Beale, R. (eds.) Remote Cooperation: CSCW Issues for Mobile and Teleworkers. Springer, London (1986)
8. Linden Lab: Second Life (2023). https://www.lindenlab.com/about. Accessed 17 Feb 2023
9. Linden Lab: Infographic: 10 years of Second Life (2013). https://www.lindenlab.com/releases/infographic-10-years-of-second-life. Accessed 17 Feb 2023
10. University of Sheffield: iSchool launch in Second Life. https://www.youtube.com/watch?v=xNJSyZH175g. Accessed 23 Feb 2023
11. Second Life: Destination Guide > Education > Universities. https://secondlife.com/destinations/learning/universities. Accessed 18 Feb 2023
12. Carbonell-Carrera, C., Jaeger, A.J., Saorín, J.L., Melián, D., de la Torre-Cantero, J.: Minecraft as a block building approach for developing spatial skills. Entertainment Comput. 38, 100427 (2021). https://doi.org/10.1016/j.entcom.2021.100427
13. Bar-El, D., Ringland, K.E.: Crafting game-based learning: an analysis of lessons for Minecraft Education Edition. In: International Conference on the Foundations of Digital Games (FDG 2020), Bugibba, Malta, pp. 1–4. ACM (2020). https://doi.org/10.1145/3402942.3409788
14. Faas, T., Lin, C.: Self-directed learning in teacher-lead Minecraft classrooms. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 2569–2575 (2017). https://doi.org/10.1145/3027063.3053269
15. Microsoft: Minecraft Official Site | Minecraft Education (2023). https://education.minecraft.net/en-us. Accessed 05 Feb 2023
16. Yue, K.: Breaking down the barrier between teachers and students by using metaverse technology in education: based on a survey and analysis of Shenzhen city, China. In: 2022 13th International Conference on E-Education, E-Business, E-Management, and E-Learning (IC4E), Tokyo, Japan, pp. 40–44. ACM (2022). https://doi.org/10.1145/3514262.3514345
17. Andersen, R., Rustad, M.: Using Minecraft as an educational tool for supporting collaboration as a 21st century skill. Comput. Educ. Open 3, 100094 (2022). https://doi.org/10.1016/j.caeo.2022.100094
18. Garcia-Bonete, M.-J., Jensen, M., Katona, G.: A practical guide to developing virtual and augmented reality exercises for teaching structural biology. Biochem. Mol. Biol. Educ. 47, 16–24 (2019). https://doi.org/10.1002/bmb.21188
19. Microsoft: What Is Minecraft Education? https://education.minecraft.net/en-us/discover/what-is-minecraft/. Accessed 05 Feb 2023
20. Southgate, E., et al.: Embedding immersive virtual reality in classrooms: ethical, organisational and educational lessons in bridging research and practice. Int. J. Child-Comput. Interact. 19, 19–29 (2019). https://doi.org/10.1016/j.ijcci.2018.10.002
21.
Makransky, G., Lilleholt, L.: A structural equation modeling investigation of the emotional value of immersive virtual reality in education. Educ. Tech. Res. Dev. 66(5), 1141–1164 (2018). https://doi.org/10.1007/s11423-018-9581-2
22. Sousa Santos, B., et al.: Head-mounted display versus desktop for 3D navigation in virtual reality: a user study. Multimed. Tools Appl. 41, 161–181 (2009). https://doi.org/10.1007/s11042-008-0223-2
23. Hickman, L., Akdere, M.: Exploring virtual reality for developing soft-skills in STEM education. In: 2017 7th World Engineering Education Forum (WEEF), pp. 461–465 (2017). https://doi.org/10.1109/WEEF.2017.8467037
24. Malawi Ministry of Education: 2022 Malawi Education Statistics Report. https://www.education.gov.mw/index.php/edu-resources/category/10-reports. Accessed 06 Feb 2023
25. Sharp, H., Preece, J., Rogers, Y.: Interaction Design: Beyond Human-Computer Interaction, 5th edn. Wiley, Indianapolis (2019)
26. Malawi Government: The Malawi Growth and Development Strategy III (2018). https://www.undp.org/malawi/publications/malawi-growth-and-development-strategy-iii. Accessed 07 Feb 2023
27. Hettinger, P.S., Mboob, I.S., Robinson, D.S., Nyirenda, Y.L., Galal, R.M.A., Chilima, E.Z.: Malawi Economic Monitor: Investing in Digital Transformation. World Bank Group, Washington (2021)
28. National Planning Commission: Malawi Vision 2063: An Inclusively Wealthy and Self-reliant Nation. United Nations (2021). https://malawi.un.org/en/108390-malawi-vision-2063-inclusively-wealthy-and-self-reliant-nation. Accessed 26 Apr 2022
29. O'Shea, T., Self, J.: Learning and Teaching with Computers. Harvester Press, Hemel Hempstead (1983). ISBN 0-7108-0665-5
30. Kambili-Mzembe, F., Gordon, N.A.: Synchronous multi-user cross-platform virtual reality for school teachers. In: 2022 8th International Conference of the Immersive Learning Research Network (iLRN), pp. 1–5. IEEE (2022)
31. Squires, D., Preece, J.: Predicting quality in educational software: evaluating for learning, usability and the synergy between them. Interact. Comput. 11, 467–483 (1999). ISSN 0953-5438. https://www.sciencedirect.com/science/article/abs/pii/S0953543898000630. Accessed 19 Feb 2023
32. Nielsen, J., Molich, R.: Heuristic evaluation of user interfaces. In: Proceedings of ACM CHI'90 Conference (Seattle, WA, 1–5 April), pp. 249–256 (1990)
33. Nielsen, J.: Heuristic evaluation. In: Nielsen, J., Mack, R.L. (eds.) Usability Inspection Methods, pp. 25–62. Wiley, New York (1994)
An Exploratory Case Study on Student Teachers’ Experiences of Using the AR App Seek by iNaturalist When Learning About Plants Anne-Marie Cederqvist1(B)
and Alexina Thorén Williams2
1 Halmstad University, 30118 Halmstad, Sweden
[email protected]
2 University of Gothenburg, 41117 Gothenburg, Sweden
[email protected]
Abstract. In this case study we explore the use of Augmented Reality (AR) as a pedagogical tool to promote student teachers' learning about plants in a biology course module within teacher education. Traditionally, when studying plants in science education, flora books are used. However, digital technology has contributed to new ways of exploring plants, and AR is suggested as an important tool. This requires teacher educators who are familiar with AR and the pedagogical implications of using AR when teaching about plants. Therefore, the aim of this study is to investigate student teachers' experiences of using the AR app Seek by iNaturalist during an excursion to identify and learn about plants, as well as to discuss the pedagogical implications of integrating the app into a biology course module. The student teachers took part in the excursion using the app to practice plant identification. Afterwards, semi-structured interviews were conducted with three student teachers. A thematic analysis approach was used to explore the student teachers' experiences when using Seek. The findings indicate that Seek increases student teachers' awareness of plants, which promotes their interest and engagement in plants. The easy accessibility of the app on their phones leads them to use Seek in their spare time as well. The increased interest and easy access support learning. By knowing the names of plants, the student teachers establish relationships with plants, which increases awareness of the importance of caring for nature. Hence, Seek could be seen as a pedagogical tool that promotes student teachers' learning about, and interest in, plants. Keywords: Augmented Reality (AR) · Teacher Education · Biology · Plants · Seek by iNaturalist
1 Introduction

In recent decades, Swedish children have spent less time in nature than in previous decades [1]. A concern is that the reduced direct contact with natural environments could lead to so-called plant blindness, i.e., the inability to notice plants in our environment and their importance for life on our planet [2]. Balas and Momsen [3] suggest that to handle plant blindness, there is a need for better learning opportunities that help students direct their attention to plants.
Traditionally, when studying plants in science education, tools such as flora books with dichotomous keys are used. However, the rapid development of digital technology has contributed to new ways of exploring plants. An example of this is Augmented Reality (AR), which provides students access to educational experiences that have not been possible before [4]. AR technology combines virtual objects with the real world interactively in real-time, where 3D virtual objects are created [5]. By using an AR app on a mobile phone or tablet, the device's camera can identify reference points in the environment and generate digital information that is added to the screen; hence, the environment is presented in a new way [6]. Previous research highlights that the use of AR in educational settings promotes students' interest and engagement in learning situations, since AR helps teachers explain complex and abstract subject content better [7, 8]. Thus, AR can be a valuable tool in science teachers' repertoire for providing better learning opportunities that direct attention to plants and their importance in ecosystems. Hence, this requires teacher educators who are familiar with AR and with how it can be used in teaching. Simultaneously, AR can also be seen as a pedagogical tool that increases student teachers' content knowledge. However, few studies have investigated the impact of using AR in teacher education in relation to plants. Therefore, this exploratory case study aims to investigate student teachers' experiences of using the AR app Seek by iNaturalist when taking part in an excursion to identify and learn about plants. The research questions that guide the study are the following:

• In what ways do student teachers experience the use of Seek by iNaturalist?
• What are the pedagogical implications of integrating Seek by iNaturalist in a biology course module?
2 Background Balas and Momsen [3] suggest that to handle plant blindness, there is a need for better learning opportunities that help students direct their attention to plants. To provide this, teacher education plays a crucial role in preparing student teachers with pedagogical knowledge, as well as content knowledge about plants and their characteristics, which will strengthen their ability to identify plants and increase their awareness of plants and their role in a larger ecological context [9]. 2.1 Student Teachers’ Learning About Plants In teaching about plants and their characteristics, one important aspect is to help student teachers to discern individual plants and different features of the plants to see beyond the green mass [10]. Magntorn and Helldén [9] examined biology student teachers’ ability to ‘read’ nature and use an ecological lens on individual species to discover how they are situated in habitats and ecosystems. The results indicate that teacher educators need to provide experiences where student teachers learn to ‘read’ nature, for example, learn to identify “structures of nature, where one example is to structure the forest into different layers” (p. 1251). Ecological literacy means having the ability to see and discover plants in the environment, to understand what plants are and understand the interaction between
plants and the environment. It also means having the ability to use one's knowledge and to critically understand the human impact on plants and the environment [11]. This requires both content knowledge and experience in field practice. Magntorn and Helldén [9] suggest that the ability to 'read' nature is an essential part of a teacher's ecological literacy.

2.2 The Linnean Taxonomy

As part of learning about plants in teacher education, student teachers are expected to develop the ability to sort and group plants. This is usually done by providing them with opportunities to look more closely at different species and develop their knowledge of variations in plants, as well as to practise sorting and classifying. When sorting and classifying, plants are usually compared with a key, which provides characteristics useful for the identification of plants based on Linnaeus' division of nature and how species were named. Carl von Linné developed a system where plants are divided into 24 classes depending on how many stamens the flowers have and how these are positioned around the pistil. The sexual system was presented for the first time in Linnaeus' famous book Systema Naturae, which was published in 1735. Below (Fig. 1) is the hierarchical structure of the Linnean classification system, in which organisms are classified depending on their characteristics.
Fig. 1. The Linnean classification system of species
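To illustrate the hierarchy in Fig. 1, the short sketch below (our own illustration in Python; the example species and the helper function are not drawn from any cited source) encodes the seven main ranks as an ordered path from the broadest group down to a single species.

```python
# The seven main Linnean ranks, ordered from broadest to narrowest.
RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

# Example classification path for the pedunculate oak (Quercus robur).
oak = dict(zip(RANKS, ["Plantae", "Tracheophyta", "Magnoliopsida",
                       "Fagales", "Fagaceae", "Quercus", "Quercus robur"]))

def deepest_shared_rank(a: dict, b: dict) -> str | None:
    """Return the narrowest rank at which two classifications still agree.

    Because the hierarchy is nested, the comparison stops at the first
    rank where the two paths diverge.
    """
    shared = None
    for rank in RANKS:
        if a.get(rank) != b.get(rank):
            break
        shared = rank
    return shared
```

Two plants of the same family but different genera would share the path down to 'family', which mirrors how identification narrows step by step from kingdom towards species.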
2.3 Augmented Reality Augmented Reality (AR) is suggested to provide students access to educational experiences that have not previously been possible [4]. AR technology can be described as combining virtual objects with the real world and operating interactively in real-time
where 3D virtual objects are created [5]. Thus, it combines our experiences of the environment with digital information in the form of video, sound, or text. By using the camera and an AR app on a mobile phone or tablet, the camera can identify references in the environment that generate digital information, which is added to what the app displays. In this way, the reality becomes augmented by the digital information, which can be described as AR presenting the world around us in a new way [6]. Previous research points out that AR promotes students' interest and engagement in learning situations and helps teachers to better explain and visualize subject content [7, 8, 12]. Further, the use of AR in teaching is also suggested to promote collaboration between learners as well as to stimulate communication skills, critical thinking, and the ability to solve problems [7]. Thus, AR can be seen as an essential tool in teachers' repertoire for providing better learning opportunities. However, there are some downsides to be aware of regarding AR technology in teaching and learning situations. For instance, sometimes the technology does not work due to limited access to the internet, or the content presented may not be relevant or may fail to meet the pedagogical needs in relation to the subject matter [13]. Saidin et al. [13] suggest that if AR is to present experiences that lead to learning, the AR technology must be easily accessible and functional on the devices on which it is used, and must provide representations that triumph over traditional representations.

2.4 AR in Science Teacher Education

Previous research shows that AR technology makes otherwise difficult-to-access science phenomena visible and thus science content more comprehensible [14–16]. Wyss et al. [15] suggest that AR provides better opportunities for studying the details of a phenomenon, which helps student teachers see aspects they may previously not have been able to discern, leading to a deeper understanding of the phenomenon. Another important issue to highlight is the student teachers' increased interest and motivation in learning science when using AR. It is suggested that AR makes the content more accessible and captivating, which in turn makes student teachers more interested and engaged in learning the topic at hand [14, 16–18]. Furthermore, AR allows the teaching situation to become more student-centered, since the student teachers become active in the search for new information, which they analyze on their own [19, 20]. AR technology also has the potential to allow student teachers to experience natural environments more in-depth. For example, Hurtado et al. [21] allowed student teachers to take in contextualized sensory experiences through a 3D-modelled garden in AR. The results of the study indicate increased interest and changing attitudes toward the environment among the student teachers, which can accordingly stimulate their interest in sustainability issues. Several studies show that the use of AR in science teacher education contributes to a deeper understanding of the science content taught [15, 16, 19, 22–24]. However, it seems insufficient to only put the AR technology in the hands of the student teachers; there is a need for pedagogical reflection on how AR can be implemented in a manner that will support student teachers' learning process. One way to accomplish this, suggested by Wyss et al. [15], is to combine the AR technology with group work.
They describe this as promoting and enhancing student teachers’ discussions on the content at hand. If they experience the same phenomena in a more in-depth way, there are better
opportunities for them to share and discuss their experiences with others. However, there is great variation in what AR environments can present regarding the content to be learned. In a study by Aivelo and Uitto [23], the student teachers had difficulties in interpreting the AR environment in the app as a model for a scientific phenomenon, in this case the life cycle of a parasite. This resulted in the phenomenon being difficult to understand due to the lack of representational features. Using representations in science education is traditionally essential for communicating science phenomena [25]. This implies that, when developing representations such as AR environments, the designer needs to be aware of what aspects of a phenomenon are necessary to put in the foreground, where a pedagogical perspective is crucial [26]. Previous research indicates that since the content in AR apps may not be comprehensive enough, there is still a need for complementary teaching materials and activities in order for student teachers to learn [16, 23, 26]. However, more research is needed on what different AR apps may contribute to the representation of science phenomena, as well as on how the apps can be implemented in teaching and learning situations. In science teaching, it is common to provide different representations to visualize important aspects of science phenomena, and hence facilitate understanding. Thus, it is expected that the representations are able to bring the important aspects to the fore. Further, Lo [27] suggests that contextual factors such as representations affect the learners' experience of the learning situation and the way they respond to learning in that specific scenario. According to Marton [28], learning implies developing a capability to see a phenomenon in a new way, different from the way it has been seen before, i.e., to discern important aspects of the phenomenon not previously discerned. Consequently, attention should be directed towards teaching and learning activities where representations of science phenomena are used.
3 Method

In this case study, we aim to investigate student teachers' experiences of using the AR app Seek by iNaturalist to identify and learn about plants in a biology course module. Specifically, we aim to gain an empirically-based understanding of the structure, dynamics, and context of situations in which student teachers use the AR app Seek, and to determine whether it enables a broadened experience and learning of plants. This exploratory study is based on the analysis of a single case, namely student teachers' experience of using the AR app Seek by iNaturalist when taking part in an excursion. The research design in a case study often includes qualitative methods such as semi-structured interviews or observations for conducting a detailed examination of the case [29], as this provides a wealth of details regarding real-life situations, which is important for the development of a nuanced view of reality [30]. The case has been chosen based on the nature of the research questions and is expected to provide us with in-depth knowledge of the pedagogical implications of integrating Seek by iNaturalist into a biology course module. The group of student teachers was selected because of their participation in the biology course module within a science and technology course for primary teachers. Thus, the participants were chosen in a strategic way, i.e., through purposive sampling relevant to the research questions.
The study includes data gathered from semi-structured interviews with three student teachers after they had taken part in the excursion. The sample size is insufficient for drawing any general conclusions about the findings. However, the qualitative data may be seen as rich and provides in-depth insight into how student teachers experience the use of the app Seek. Further, by providing detailed descriptions of the contexts in which the study took place, as well as excerpts of what the student teachers expressed during the interviews, a highly detailed account of the phenomenon being studied is presented. This is expected to enable a deeper understanding of what took place, how it took place, and the broader context of the findings, based on this group of student teachers. That is, to go beyond the surface, to portray "people, events and actions within their locally meaningful contexts" (Yin, 2011, p. 213). As such, the case may be seen as an exemplifying case for the larger group of student teachers who participated in the same course. The aim of this kind of case study is to capture the conditions of a common situation [30], in this case, an excursion in a biology course where student teachers are expected to identify and learn about plants. A common criticism of case studies is that no general conclusions can be drawn from the findings of a single case. However, the findings can be seen as a contribution to the collective process of knowledge accumulation in a given research field [29, 30]. The contributed in-depth knowledge is expected to point out the direction for further investigations on the same theme, namely to understand the effects of using AR apps such as Seek in teacher education courses.

3.1 The Educational Context of the Case Study

The educational context on which this study is based consists of a biology course module within a science and technology course (of 15 credits) for student teachers preparing for teaching in preschool class and primary school, grades 1–3. The student teachers take the course during their third semester (out of eight in total); the course includes the subjects biology, chemistry, physics, and technology, covering both theory and pedagogy. The student teachers have varying levels of biology knowledge and experience in observing and identifying plants and animals. Generally, most student teachers have knowledge of biology from upper secondary school. In the specific biology course module, the second author of this case study was involved as a teacher educator, and the first author was, at the time of data collection, a teacher educator focusing on the teaching and learning of technology.

3.2 Description of the Biology Course Module

The content of the biology course module is concentrated on teaching and learning about plants and animals in the immediate natural environment. Hence, it includes recognizing and naming different plants and animals and learning to work with species pedagogically, primarily outdoors but also in combination with classroom work. In the course module, it is emphasized that to be able to teach about plants and animals, one needs, as a teacher, to have acquired both content knowledge and pedagogical knowledge. Through excursions and outdoor work, the student teachers experience different species and environments with their senses. The plants and animals of interest in the course module are those that are focused upon during teacher-led excursions, lectures,
group work, and seminars and workshops in ecology. The specific knowledge of species is examined in a written exam. In contrast, pedagogical knowledge is examined through a group assignment where the student teachers plan and carry out an outdoor lesson at Härlanda tjärn, a small lake with surrounding forest and meadowland located in the eastern parts of Gothenburg.

3.3 The AR App Seek by iNaturalist

The app Seek by iNaturalist is an AR-based app that provides automated species identification in real-time on a mobile screen or other device when observing organisms.1 To identify an organism, the Seek Camera (i.e., the mobile camera if using a mobile phone) is opened. The user points the camera at an organism and the app starts to work. On the screen, there is an ID meter that represents the seven main taxonomic ranks of the Linnean taxonomy, by which organisms are grouped from kingdom, phylum, class, order, family, and genus down to species (see Fig. 2).
Fig. 2. An identified species and the seven dots that represent the seven main taxonomic ranks.
1 Seek by iNaturalist https://www.inaturalist.org/pages/seek_app.
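Purely as our own reading of this behaviour (not Seek's actual algorithm; the confidence scores and threshold below are invented for illustration), the ID meter can be pictured as filling one dot per rank, from kingdom downwards, for as long as the identification remains confident:

```python
# Hypothetical model of the ID meter: one dot per taxonomic rank, filled
# from the broadest rank down, stopping at the first uncertain rank.
RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

def filled_dots(confidence: dict, threshold: float = 0.8) -> int:
    """Count consecutive ranks, from kingdom down, identified confidently."""
    dots = 0
    for rank in RANKS:
        if confidence.get(rank, 0.0) < threshold:
            break  # a narrower rank cannot be trusted once a broader one fails
        dots += 1
    return dots

# Confident down to family only: five of the seven dots are filled, so an
# app behaving like this would report the family rather than a species.
example = {"kingdom": 0.99, "phylum": 0.97, "class": 0.95,
           "order": 0.91, "family": 0.86, "genus": 0.52, "species": 0.20}
assert filled_dots(example) == 5
```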
When all seven dots are filled, the app can identify the organism at the species level, and the user can take a picture and store the finding in a personal bank of findings, "my observations". To find out more about the identified species, the user can click on the button "view species" and go further to "the Species Page". This presents more detailed information about the species, such as common and scientific names, the taxonomy of the species by Linnean taxonomic rank, when the species was observed, and a short informational text on the species. The user of the Seek app can also register with and connect to the online community iNaturalist, where observations can be shared and discussed with other users, and where more detailed information about the species is available, such as a range map of findings and, based on the observations made within the iNaturalist community, how many observations there are and at what time of the year the species may be observed. However, in this study the student teachers only used the app Seek.

3.4 Data Collection

Multiple data sources were collected to develop a rich and detailed picture of the structure, dynamics, and contexts in which the AR app Seek is used by the participating student teachers. This exploratory case study relies on several sources of evidence, including the transcriptions of semi-structured interviews with three student teachers (1 h each), documents such as course guides and course assignments relevant to the biology course module of study, and on-site direct observation of the student teachers when using the Seek app on a teacher-led excursion. The activities (lectures, workshops, and group tasks), as well as the content and resources included in the biology course module species and outdoor pedagogy, are presented in Table 1. It is essential to note that the Seek app had never before been used in the science and technology course, nor had the student teachers any experience with it. The AR app was introduced to the whole class of student teachers at the end of the class seminar and workshop (activity number 2 in Table 1). In connection with the introduction of the app, we informed the student teachers about our study and asked whether some of them might consider participating. A handful of student teachers gave their consent, though most of the class downloaded the app and tested it during the biology course module. In the presentation of data, the identity of the participants has been anonymized.

3.5 Data Analysis

The analysis process in this case study involved comparing information from the semi-structured interviews to identify patterns and common themes. The analytical process followed thematic analysis with an inductive approach [32], which involved describing and arranging the data in detail. The first step of the analysis aimed at becoming familiar with the body of data. Initially, this involved transcribing the semi-structured interviews and then reading the transcripts several times. In the next step, the transcripts were coded and divided into units in a systematic way. The beginning and end of a unit were delimited by the content the student teachers focused on. The coding was accompanied by detecting patterns in the student teachers' experiences of using the Seek app during the excursion. The following step included examining similarities and differences between coded units and organizing them into tentative themes, as well as
gathering excerpts relevant to each theme. The next step was reviewing the themes in relation to the gathered excerpts and establishing whether they corresponded to the whole set of data. The penultimate step was to define the characteristics of each theme and to name and organize the themes in a logical way. In the last step, excerpts were selected to represent the themes and the analysis in answering the research questions of this study.

Table 1. The activities, content and resources included in the biology course module species and outdoor pedagogy

1. Lecture on outdoor pedagogy and a group task
   Content: The group task involves working and gaining insight into how to observe and identify birds and trees in outdoor lessons and simultaneously gain holistic knowledge of species.
   Resources: The groups report their work in writing and with pictures of trees and birds.

2. Whole class seminar and workshop: introduction of species
   Content: Natural artefacts are collected before the seminar. Introduction of the AR app Seek, to be used as a complementary tool to the traditional flora book to identify plants and animals.
   Resources: The lecture and workshop include practical activities in small groups. The student teachers bring ten different leaves and five different natural objects to the seminar, and preferably a commonly occurring flower (herb) with all its parts.

3. Excursion to Lake Härlanda (the lake and the surrounding natural area)
   Content: The student teachers identify species during the excursion using outdoor educational methods. The excursion serves as an opportunity for the student teachers to learn about different species and their characteristics, and to learn how they can do pedagogical work with a class of primary school students.
   Resources: Flora and fauna books, rakes, magnifiers, and the Seek app.

4. Group assignment: didactic species examination – to plan and implement an outdoor lesson
   Content: In a group of four, the student teachers plan and implement a 30-min outdoor lesson at Lake Härlanda. Each group implements their lesson with the course mates acting as their students.
   Resources: Available resources for the student teachers are the Seek app (only for preparation), flora and fauna books, rakes, magnifiers, and buckets.
4 Findings and Discussion

In this section, we present the findings and discuss them in relation to the research questions and previous research on the topic. The section is structured in two parts: in the first, we present the student teachers' experiences of using the Seek app, and in the second, we discuss the pedagogical implications of integrating it into a biology course module.

4.1 The Student Teachers' Experiences of Using the Seek App

The aim of this exploratory case study is to investigate science student teachers' experiences of using the AR app Seek in an excursion to identify and learn about plants. The findings are presented as themes that emerged across the data from the semi-structured interviews held after the excursion. The emergent themes can be seen as a chain of events that leads to increased interest and engagement in plants.

Doors are Opened to a New World
Several times during the interviews, the student teachers expressed that the Seek app has opened doors to a new world. The student teachers experienced that it helps them see species that earlier failed to catch their attention. In the quotes below, the student teachers describe this experience.

“[…] previously I didn’t have much interest in plants because I didn’t have much knowledge about different species.... But I’ve probably always felt a longing to know more and to be able to tell the difference between different trees and plants and animals. And this app kind of opened up a whole new world for me, not just the biology course, but actually the app. [...] So I have partly used it with the aim of studying in the course ... But also for identifying species that I have been curious about myself that were not part of the course content but that I still wanted to know about out of curiosity.”

A similar opinion is expressed by one of the other student teachers.

“[...] it has piqued my interest in looking up flowers... I didn’t think about it in the same way before. Now I walk around my garden and, “what is this” and “what is this?” […] the app has piqued my interest. I think it’s very exciting.”

Learning to Distinguish the Undifferentiated Greenness
The findings indicate that the Seek app frames the undifferentiated greenness of nature and helps student teachers to distinguish the specific plants within this greenness. That is, the app brings the plants to the foreground out of an undifferentiated whole of greenness and helps student teachers to direct attention to the details and characteristics of the plants. This, in turn, helps student teachers to see the variation in plants based on their similarities and differences. In the excerpts below, the student teacher describes this experience.
“Uh...so what I thought about when I was walking around with that app was how it helped me focus on different things and discover different plants...and now I want to check what kind of tree this is and then I realize with the help of the app: oh, there are so many different herbs and flowers here... And grass... it becomes easier to start thinking in different focuses and that there are so many details and not just all that greenness.”

An important feature of the app seems to be the visibility of the Linnean taxonomy, i.e., the classification system for the naming of species and their order in relation to their characteristics and relationships with one another. Since the app visualizes the process of identifying the plants in terms of their order in the taxonomy, the student teachers become aware of specific details and characteristics that point either to similarities or differences in relation to other plants, which helps them see the variety in plants. That is, they are going from only seeing the greenness of plants to seeing the variety of specific species.

“Something that I thought about was when the app identifies that these different dots... it is starting quite generally and then it works narrower and narrower with this classification and I know there are different levels when classifying, but it became clearer because it always starts on almost the same dot when it is for example plants and herbs. And I also learned that certain species belong to a larger family, so it happened when I looked at a tree at home, I thought, this is probably some kind of apple ..., and then it stopped somewhere at...the rose order...the rose family or something and then I remembered what it belongs to and then even though I didn’t know exactly if it was wild apple, you can know which family it belongs to.”

The excerpt indicates that the app provides insight into the Linnean taxonomy, which helped the student teacher become aware of specific families of plants as well as what their characteristics are. This knowledge makes the student aware of similarities and differences between plants, and of the specific characteristics to search for when identifying them.

“...it’s the same with all ferns, the app had difficulties of identifying them, is it wood fern, holly fern, eagle fern? So, I’ve been a little more critical when it comes to the specificity of species and which it is... so I’ve tried to check at home, and I borrowed a big flora book from the library which had great pictures.”

As can be seen, there are some constraints with the app when it comes to identifying certain species within the same family. When she is to identify a plant that looks like a fern or bracken, the app has difficulties in identifying the correct species. However, the student understands that she must be cautious when determining the species and decides to take it one step further. She goes to the library and borrows a flora book. It is an interesting question whether she would have taken this step had she not used the app.
An important consequence of distinguishing the greenness and becoming aware of the variation in plants, in terms of similarities and differences between plants in different classes, orders, families, genera, and species, is that students see the diversity of plants.

“[…] there is something about this variety of species, there is so much…but I see that they are not the same. When I’ve been trying to find species and when I’ve found a new one, I try to remember that yes, but this one was yellow, but it had leaves like this, unlike this herb which was also yellow but had completely different leaves. And then I can think that they are similar but still not the same, you see these small differences.”

One of the student teachers brings the question of plant blindness to the table and suggests the importance of technology to engage students in activities where they want to learn more about nature and plants, which is important for preventing plant blindness.

“There’s a lot of talk about this...art illiteracy, no it’s called....art...plant blindness....and this app can kind of... Increase engagement and I think we’re gaining a lot on it. Because the more you learn, the more you want to learn and the more interesting it becomes. [...] and there is no teacher who can measure up to... am I allowed to say so?... With the app.”

A Driving Force to Learn More
The opening of a door to a new world comes with consequences that are important for the learning process. When the student teachers begin to distinguish the undifferentiated greenness of plants, they understand that there is more to be explored. This seems to challenge them to search for more information, i.e., they want to know more about what they have discovered. This implication can be described as the important driving force that makes student teachers want to learn more about plants. In the previous excerpts, we could see situations where this occurs. The student teachers begin to discuss with each other and search for further information in flora books at the library. In the excerpt below, there is another situation that indicates that Seek could be seen as a driving force to willingly learn more.

“[…] I’ve become more and more thirsty for knowledge, you could say, it’s probably because you get this satisfaction that you can find out quite quickly what something is. […] and after the excursion on the way back to the bus … I found new plants and was in the bushes and in someone’s garden to investigate different plants.”

What the student teacher describes are informal learning situations that occur after she has been on the course excursion. The use of the app has triggered her to search for more plants to identify when she is walking to the bus stop or when she is at home. This indicates that introducing the app may result in student teachers' engagement going beyond formal learning situations and the pressure to perform against specific learning goals, towards a genuine interest in plants. In the excerpt below, the student teacher describes how she has introduced the app to her family, who also developed an
increased interest in plants. Together they have searched for and identified new species in their surroundings, which has generated discussions on plants.

“[…] in my family, there are several who have started to use the app because I introduced it, so I think we have very exciting conversations. […]”

As the student teacher below expresses, she would have given up the search for the specific species if she had not used the app as a support for finding it. The app encourages the exploration and observation of plants, which she describes as important, not only for herself, but also for children.

“[...] I don’t think I would have run so far if I hadn’t had the app because then I would have given up. I had no idea, it doesn’t matter how much I ran around there looking for the moss, I wouldn’t find it...so it [the app] gave me this ability to observe which is important also for the children to practice or for everyone… but what is important is that we get involved in the exploring and discovering, but also this ability to observe the details... It’s kind of... it was awakened in me [...] it gave me a desire to learn, and I learned what cat tail moss looks like and Common Haircap Moss too.”

The Easy Access to Learning
An appreciated aspect when using the app is the easy access to learning. As the student teacher expresses in the excerpt below, most people carry their phone with them, and hence, they can easily identify plants wherever they are in just a couple of minutes by using the app. Compared to going home and consulting a traditional flora book or searching for information on the internet, the Seek app provides easier access to appropriate information “just in time” when needed.

“[…] there are many species that I have seen during my childhood, and I did not know their names. But I still haven’t wondered so much that I’ve chosen to find out. But it somehow became so easy with the app, it was so easily accessible. Ok I’m waiting for the bus and it’s two minutes until it comes and this little flower, what is this? Then I have time to pick up the phone and the app, and check and get an answer before the bus arrives. […] you almost always carry your phone with you... If it had been before, I might have thought that I must remember to check it up when I get home and that I must have a flora book or search on the internet, and it is a little more circumstantial...”

What also seems to be an important aspect is that the app has a bank of images uploaded by others, which provides a huge variety of images of species as well as a variety of images of the same species. This variety cannot be found in a flora book.

“We were going to find Birch and Aspen and then we found very young aspens, so they were little ones, and then we needed to be able to compare them with more adult aspens, they are still the same species. On the way home, I also found Asp, but I was a bit unsure because it looked a bit different and then I used the app and
checked, to confirm that this is the same although they look different, just like us humans and all other species.”

Furthermore, the bank of observations, both the student teachers' own and those uploaded by others, forms a backpack of knowledge of identified species. This backpack of knowledge can be compared to the role of the traditional collection of pressed plants, i.e., an herbarium, which provides access to memories, experiences and situations that enable repetition and consolidation of knowledge. In the excerpt below, the student teacher expresses the feeling of confirmation when she has learned something that she can bring with her to new situations. She feels the satisfaction of having collected plants into her bank of knowledge.

“...I like that you have a small bank of what you have found. You have found something...you get this confirmation or the feeling that here and now I know, I have found out what this is quite immediately and then that you can go back and look. Then there is the species name and the picture I took and the picture that Seek has...so the satisfaction that you have this bank and that you get....I like to learn things, so then I feel that my backpack is filled with knowledge...thanks to the app.”

The Satisfaction of Knowing
The satisfaction of knowing the names and characteristics of plants promotes interest and motivation for the student teachers to learn more. Further, the satisfaction of knowing something seems closely connected to the feeling of contentment that comes from understanding or learning something new.

“...I think it feels good in the heart and body to know what is in nature. […] I have had a hard time connecting a species when I look at it, with the species name, I have known several species names but on the other hand ...what is a ground-ivy? … and now that I’ve seen it, I can connect them. So it’s that relationship, and that I’m more detail-oriented when I’m out now, that I see more details. It’s because I’ve learned the names of the species, so it’s a big deal. […] I’ve become even more thirsty for knowledge, you could say, and as I said, it’s probably also because you get this satisfaction that you can find out quite quickly what something is.”

The satisfaction of knowing names can also be related to the sense of accomplishment or pride that comes from being knowledgeable about a particular subject. In the excerpt below, the student teacher describes the sense of shame that she has felt when not knowing the names of commonly occurring species, which she describes as expected common knowledge.

“I think it’s this kind of desire to know something and I also think that I have...now that I’ve had children, I feel a little ashamed that I don’t know anything, like it’s an Oak and it’s an Aspen and it’s...I want so badly to be able to...I’ve wanted it for so long that it’s a longing to gain knowledge. […] especially now when I... Have children and I think it belongs to common knowledge..., this basic knowledge of the most famous trees and birds, flowers and so on.”
The satisfaction of knowing seems to be a powerful motivator for the student teacher and leads to a desire to continue learning and expanding one's knowledge. The student teacher also experiences a sense of independence in her learning, which can be understood as the ability to take charge of her own learning process, without relying on external guidance and support. This independence in learning seems to be an important aspect of becoming self-motivated and continuing to learn.

“It was so very convenient not to have to ask... I felt independent in my studies. It’s an important thing for me to... to manage on my own and it felt wonderful to be able to walk in the forest...or in nature and feel that I can gain knowledge on my own. [...] when you become curious, you want to find out...and I was able to do that. [...] the feeling of being independent in my knowledge development, it was like a great feeling.”

The Establishing of a Relationship with Nature
Knowing the names of plants refers to being able to identify different plants by their scientific names as well as being familiar with their characteristics and features, and thereby being able to distinguish one plant from another. In the excerpt below, the student teacher describes how this knowledge facilitates discussing and describing plants with others.

“[…] Because that’s about knowing the names of things so that you can then talk about them instead of just saying that green thing that grew there with tall... it’s good to be able to describe too but if I can say it’s a Lady’s mantle, will make it easier.”

However, the student teachers also express another dimension of knowing the names of plants. The student teachers establish a relationship with the plants by knowing their names, which also leads to a new relationship with nature that makes them value nature in a new way. In the excerpt below, the student teacher expresses this growing relationship and how she is making new friends with the plants she finds, and how this makes her proud.

“... The more you know about nature, the more you care for it. […] it’s like...not that you’ve made new friends...but like there you are, that species and I found it then. So it’s like creating a relationship with nature and not just as a whole […] I notice them [the plants] more now than I did before because I somehow, but I have created this relationship, I know what they are called […] and before it was just a yellow flower…”

Further, the student teacher describes the value of establishing this kind of relationship for her future students in relation to sustainability issues in the long run. Being familiar with the names and characteristics of plants may help students establish a relationship with nature and shape how they see and care for our planet and environment.

“I can imagine that it could be a bit the same as it was for me, knowledge of species but also how you look at nature, and what relationship you have which can spill
over into how you look at life and other people and the relationships you have to other people and our earth, and how we treat our planet and our environment.”

The Limitations of Seek
In the interviews, the student teachers expressed some limitations in using Seek that may be important to consider if using it in a teacher education context. Most of the expressed issues are related to technological limitations in the interface, such as the app's inability to deliver the correct information about the plant to be identified, even when it was a very common species. When the technology does not meet the student teachers' expectations, they become frustrated.

“[...] we were given two different species to find, and it was birch and aspen, and then it was the birch. I think the birch is one of Sweden’s most common trees. But I know that the app is not Swedish from the beginning, but...I was working on this silver birch […] I already knew from the beginning what it was, but I still want to check with the app if it could identify, but it couldn’t.”

“[...] it [the app] can sometimes cause some frustration if it doesn’t find it. You get a little spoiled to have everything served [laughs]. […] if it doesn’t show it right away, maybe you don’t have the patience to wait.”

Sometimes the technology seems to limit the discernment of details that could be signifiers for the identification of plants. The student teacher suggests that the user not only needs to have knowledge of how to use the app but also a critical approach to the information the app provides, referred to as digital literacy, and the understanding that the technology is sometimes unable to fulfil its function and the user's need. Hence, it is important to be able to use a traditional flora book as a complement when technology fails.

“[…] you need to have a critical approach, not only swallow everything the app says, but to have an open mind because it might be wrong. […] sometimes when I took a picture it focused a lot on the flower itself, the leaves ended up diffusely in the background and I wanted to see the leaves too because they belong and can help me see similarities and differences with other species. […] because it could be a species that Seek has found that you are a little unsure about, and then the flora book can be a great complement to check it up. For example... The app says it’s the Autumn hawkbit and then let’s look at it in the flora book and see... if they look the same.”

Another interesting aspect that came up during the interviews was that the app frames and delimits what we can visually perceive, and excludes other senses such as smell, touch, and taste. This implies a great focus on our visual senses at the expense of the other senses. The student teacher below expresses this as she is watching the plants through her mobile phone before looking at them directly. This makes her forget to smell and touch the plants, which she probably would have done if she had used the traditional flora book.
“[…] it will be like you looking at the plants through the phone. You are so quick with the phone and must look through it instead of looking directly at the plant [...] you lose the smelling and touching, the senses that you might otherwise have used...if you had a flora book with you…”

4.2 Pedagogical Implications of Integrating Seek in a Biology Course Module

The findings of this study indicate that the use of the AR app Seek by iNaturalist promotes student teachers' interest and engagement in learning about plants, which we suggest is a driving force for learning. The findings can be described as follows: when student teachers use the Seek app, this seems to trigger a chain of events that leads to increased interest and engagement in plants. The student teachers explain how the app initially opens doors to a new world where they learn to discern things they have not previously discerned. Regarding plants, this implies learning to distinguish the undifferentiated greenness of plants, to see specific plants and their characteristics. The app facilitates the student teachers' ability to see plants in a new way, i.e., they become able to discern important aspects and characteristics of the plants that they had not previously discerned, which leads to learning [28]. An important feature of the app seems to be the provision of insight into the Linnean taxonomy, which helps the student teachers become aware of the hierarchical structure of the taxonomy and of specific families of plants as well as what their characteristics are. This directs their attention to the similarities and differences between plants. According to Magntorn and Helldén [9], knowledge about plants and their characteristics strengthens students' ability to identify plants and increases their awareness of plants as part of a larger ecological context. Further, as previous research suggests [15, 16, 19, 22–24], the use of AR contributes to a deeper understanding of the studied content since the technology provides better opportunities to study details. The deeper understanding of plants' characteristics seems to promote the student teachers' interest in plants. Distinguishing the undifferentiated greenness of plants encourages them to learn more. Furthermore, working together in small groups and exploring the environment together using the app promotes discussions on their findings. Similar findings have been described by Wyss et al. [15], who suggest that the combination of AR technology and group work enhances student teachers' discussions on the content explored. In this sense, we suggest the app is a driving force to learn more, based on its ability to promote deeper understanding and interest, but also because it provides easy access to learning. The curiosity of exploring the new world of plants, combined with easy access to new information "just in time" in situ, seems to be fruitful when it comes to promoting interest and learning about plants. By always carrying the phone with them, the student teachers not only use the app in formal situations within the course module, but they also use the app in informal situations at home and together with friends and family. This implies that the app can stretch their interest and learning about plants beyond the formal learning situations.
From the presented excerpts we can see that by using the app in this broad context, the student teachers are developing the ability to ‘read’ nature [9], which is an essential part of a teacher’s ecological literacy.
The AR technology of the Seek app combines an increased ability to see plants in a new way, an increased interest in plants, and easy access to learning, which leads to student teachers experiencing the satisfaction of knowing something about plants. The app helps them learn the names of plants, which makes them establish relationships with plants and see their value. These findings are strengthened by previous research [14, 16–18], which suggests that AR makes the content more accessible and captivating, and this in turn makes student teachers more interested and engaged in learning the topic at hand. Hurtado et al. [21] suggest that the use of AR in natural environments may increase student teachers' interest in sustainability issues. Furthermore, the student teachers experience an independence in their learning process, since the use of the app makes them want to seek out new knowledge of plants on their own without guidance from a teacher, which seems to be an important aspect of becoming self-motivated in their learning process. That is, the teaching situation becomes more student-centered, with the student teachers active in searching for new information and conducting analyses on their own [19, 20]. Although there are many pedagogical advantages of using Seek as part of a biology course module, some limitations were identified that need to be considered in relation to traditional teaching about plants. The use of the app brings forward details and characteristics of plants which the student teachers never would have experienced without the app. However, as they express, the use of the app limits them to using only their eyesight when experiencing the plants. When encountering a plant, the focus is to frame the plant with the camera of the app and find the right position and angle in order to provide the best circumstances for the app to identify the plant. What seems to get lost in this process is the involvement of other senses such as smelling and touching, which they express as the traditional way of approaching an unidentified plant. The student teachers describe that this may have implications for the way their future students will approach plants, i.e., through the lens of a mobile phone. This is an important issue that needs further attention in research. On the one hand, we must consider that young people use and process information and knowledge in new ways, through digital technologies. On the other hand, we want them to spend more time in nature [1] and develop their ability to notice plants and their importance for life on our planet [2]. What can be established from this study is that the AR app Seek can blur the borders between the natural world and the digital world and provide new and better educational experiences that promote learning about plants and may help learners to overcome plant blindness.

4.3 Summary

The findings of this study indicate that the use of the app Seek by iNaturalist promotes student teachers' interest and engagement in learning about plants. The easy accessibility of the app on their phones means they also use it in their spare time, and by building their own library of identified species, they can return to their findings whenever they want. This seems to support the learning process, and knowing the names of plants also makes the student teachers establish relationships with plants, which increases awareness of the importance of caring for nature.
However, the student teachers experience some limitations in using Seek. The app is sometimes unable to identify a plant, or produces a doubtful result. Further, the student teachers point out the risk of only approaching plants
with the phone, which may exclude other senses, such as smelling and touching. In sum, Seek by iNaturalist can be seen as a useful tool for promoting student teachers’ learning and awareness of plants. However, more research is needed, and the findings of this study can point out a direction.
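The library-building described above rests on iNaturalist’s openly accessible species data. Purely as a rough illustration (not part of the study), the short Python sketch below looks up plant names against what the authors understand to be iNaturalist’s publicly documented /v1/taxa endpoint; the endpoint parameters, response fields, and species names are assumptions chosen for the example.

```python
# Illustrative sketch only: query iNaturalist's public taxa API for a plant
# name. Endpoint and response fields are assumed from the public API docs.
import requests

def lookup_taxon(query: str) -> list[dict]:
    """Return candidate species records for a free-text plant name."""
    resp = requests.get(
        "https://api.inaturalist.org/v1/taxa",
        params={"q": query, "rank": "species", "per_page": 3},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"scientific": t.get("name"),
         "common": t.get("preferred_common_name")}
        for t in resp.json().get("results", [])
    ]

# A student's personal "library" of identified plants could be as simple as:
library = {name: lookup_taxon(name)
           for name in ["Anemone nemorosa", "Oxalis acetosella"]}
```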
References

1. Sandberg, M.: Barn och natur i storstaden: en studie av barns förhållande till naturområden i hemmets närhet – med exempel från Stockholm och Göteborg. Choros, Göteborg (2009). http://hdl.handle.net/2077/20095
2. Wandersee, J.H., Schussler, E.E.: Toward a theory of plant blindness. Plant Sci. Bull. 47(1), 2–9 (2001). https://cms.botany.org/userdata/IssueArchive/issues/originalfile/PSB_2001_47_1.pdf
3. Balas, B., Momsen, J.L.: Attention ‘blinks’ differently for plants and animals. CBE Life Sci. Educ. 13, 437–443 (2014). https://doi.org/10.1187/cbe.14-05-0080
4. Bull, G., Groves, J.: The democratization of production. Learn. Lead. Technol. 37(3), 36–37 (2009). https://files.eric.ed.gov/fulltext/EJ863943.pdf
5. Azuma, R.: A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6(4), 355–385 (1997). https://doi.org/10.1162/pres.1997.6.4.355
6. Klopfer, E., Sheldon, J.: Augmenting your own reality: student authoring of science-based augmented reality games. New Direct. Youth Dev. 128, 85–94 (2010). https://doi.org/10.1002/yd.378
7. Akçayır, M., Akçayır, G.: Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11 (2017). https://doi.org/10.1016/j.edurev.2016.11.002
8. Dunleavy, M., Dede, C., Mitchell, R.: Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. J. Sci. Educ. Technol. 18(1), 7–22 (2009). https://doi.org/10.1007/s10956-008-9119-1
9. Magntorn, O., Helldén, G.: Reading nature – experienced teachers’ reflections on a teaching sequence in ecology: implications for future teacher training. NorDiNa 5, 67–81 (2006). https://doi.org/10.5617/nordina.415
10. Sanders, D., Eriksen, B., MacHale Gunnarsson, C., Emanuelsson, J.: Seeing the green cucumber: reflections on variation theory and teaching plant identification. Plants People Planet 4(3), 258–268 (2022). https://doi.org/10.1002/ppp3.10248
11. Häggström, M.: Estetiska erfarenheter i naturmöten. En fenomenologisk studie av upplevelser av skog, växtlighet och undervisning. Gothenburg Studies in Educational Sciences, 442 (2020)
12. Garzón, J., Kinshuk, Baldiris, S., Gutiérrez, J., Pavón, J.: How do pedagogical approaches affect the impact of augmented reality on education? A meta-analysis and research synthesis. Educ. Res. Rev. 31, 100334 (2020). https://doi.org/10.1016/j.edurev.2020.100334
13. Saidin, N.F., Halim, N.D.A., Yahaya, N.: A review of research on augmented reality in education: advantages and applications. Int. Educ. Stud. 8(13), 1–8 (2015). https://doi.org/10.5539/ies.v8n13p1
14. Celik, C., Guven, G., Kozcu Cakir, N.: Integration of mobile augmented reality (MAR) applications into biology laboratory: anatomic structure of the heart. Res. Learn. Technol. 28, 1–11 (2020). https://doi.org/10.25304/rlt.v28.2355
15. Wyss, C., Bührer, W., Furrer, F., Degonda, A., Hiss, J.A.: Innovative teacher education with the augmented reality device Microsoft HoloLens – results of an exploratory study and pedagogical considerations. Multimodal Technol. Interact. 5(8), 45 (2021). https://doi.org/10.3390/mti5080045
16. Yapici, I.Ü., Karakoyun, F.: Using augmented reality in biology teaching. Malaysian Online J. Educ. Technol. 9(3), 40–51 (2021). https://doi.org/10.52380/mojet.2021.9.3.286
17. Aydin, M.: Investigating pre-service science teachers’ mobile augmented reality integration into worksheets. J. Biol. Educ. 55(3), 276–292 (2021). https://doi.org/10.1080/00219266.2019.1682639
18. Burton, E.P., Frazier, W., Annetta, L., Lamb, R., Cheng, R., Chmiel, M.: Modeling augmented reality games with preservice elementary and secondary science teachers. J. Technol. Teacher Educ. 19(3), 303–329 (2011). https://www.learntechlib.org/primary/p/37136/
19. Fuchsova, M., Korenova, L.: Visualisation in basic science and engineering education of future primary school teachers in human biology education using augmented reality. Eur. J. Contemp. Educ. 8(1), 92–102 (2019). https://doi.org/10.13187/ejced.2019.1.92
20. Syawaludin, A., Rintayati, P.: Development of augmented reality-based interactive multimedia to improve critical thinking skills in science learning. Int. J. Instr. 12(4), 331–344 (2019). https://www.e-iji.net/dosyalar/iji_2019_4_21.pdf
21. Hurtado Soler, A., Botella Nicolás, A.M., Martínez Gallego, S.: Virtual and augmented reality applied to the perception of the sound and visual garden. Educ. Sci. 12(6), 377 (2022). https://doi.org/10.3390/educsci12060377
22. Pombo, L., Marques, M.M.: Guidelines for teacher training in mobile augmented reality games: hearing the teachers’ voices. Educ. Sci. 11(10), 597 (2021). https://doi.org/10.3390/educsci11100597
23. Aivelo, T., Uitto, A.: Digital gaming for evolutionary biology learning: the case study of Parasite Race, an augmented reality location-based game. LUMAT: Int. J. Math Sci. Technol. Educ. 4(1), 1–26 (2016). https://doi.org/10.31129/LUMAT.4.1.3
24. Kozcu Cakir, N., Guven, G., Celik, C.: Integration of mobile augmented reality (MAR) applications into the 5E learning model in biology teaching. Int. J. Technol. Educ. (IJTE) 4(1), 93–112 (2021). https://doi.org/10.46328/ijte.82
25. Cook, M.P.: Visual representations in science education: the influence of prior knowledge and cognitive load theory on instructional design principles. Sci. Educ. 90(6), 1073–1091 (2006). https://doi.org/10.1002/sce.20164
26. Yang, S., Mei, B., Yue, X.: Mobile augmented reality assisted chemical education: insights from Elements 4D. J. Chem. Educ. 95(6), 1060–1062 (2018). https://doi.org/10.1021/acs.jchemed.8b00017
27. Lo, M.L.: Variation Theory and the Improvement of Teaching and Learning. Acta Universitatis Gothoburgensis, Göteborg (2012)
28. Marton, F.: Necessary Conditions of Learning. Routledge, New York (2015)
29. Bryman, A.: Social Research Methods, 5th edn. Oxford University Press, Oxford (2016)
30. Flyvbjerg, B.: Five misunderstandings about case-study research. Qual. Inq. 12(2), 219–245 (2006)
31. Yin, R.: Qualitative Research from Start to Finish. The Guilford Press, New York (2011)
32. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006)
Introducing Dreams of Dali in a Tertiary Education ESP Course: Technological and Pedagogical Implementations

Maria Christoforou(B) and Fotini Efthimiou

Cyprus University of Technology, Limassol, Cyprus
{maria.christoforou,fotini.efthimiou}@cut.ac.cy
Abstract. Technology-enhanced teaching and learning environments transform the learning experience and increase student interest and engagement in the lesson. Virtual Reality (VR) has been identified as a multimodal medium that offers highly interactive and fully immersive experiences through which students can access a variety of meanings and enter a new social learning space which transcends classroom boundaries. VR also constitutes an innovative digital tool for foreign language (FL) learning in tertiary education through which teachers can promote a more situated learning context. Based on the multimodal affordances of the VR application Dreams of Dali, this paper proposes meaningful ways in which the application can be embedded in an English-for-Specific-Purposes (ESP) course in tertiary education, foregrounding the delivery of pedagogical content, creating immersive literacy practices for students, and leading to alternative ways of conceptualising meanings. The immersive environment in Dreams of Dali can serve as an instructional design tool that increases familiarisation with course-related content in English for Fine Arts and helps simulate an authentic surrealist environment, moving beyond static art-related images and passive surrealist representations of the art movement in the classroom.

Keywords: Virtual Reality-Enhanced Language Learning · Dreams of Dali · ESP
1 Introduction

New and more sophisticated technologies have entered the world of education, providing students with limitless opportunities for engaging learning experiences [1] and drastically changing the way meanings are communicated and represented through the “new media” [2]. Virtual Reality (VR), also known as immersive VR, refers to a technological system built around a computer capable of real-time animation, controlled through wired gloves and a position tracker, with a head-mounted display device (HMD) providing the visual output [3]. As an innovative instructional tool, VR has been implemented in engineering and multidisciplinary courses, in the automotive and aerospace industries [4], in entertainment, and in language learning education [5], rendering it one of the most popular mainstream consumer educational tools. So far, there have been several affordable and easy-to-use
VR hardware and software systems, like the Samsung Gear VR, a wearable device supported by a Samsung Galaxy smartphone [6], Google Expeditions [7], Oculus Go [8], a standalone, portable and wireless VR headset, PlayStation VR, Microsoft HoloLens [9], AltspaceVR with Oculus Quest 2 [10], and many more. Despite some technical pitfalls of embedding VR in the lesson, such as regular updates and training requirements, and non-technical issues such as the need to become familiar with the equipment [11], VR is predicted to be adopted exponentially for educational purposes, since it can represent a real or imaginative environment and enhance the situated experience for learners [12, 13]. VR promotes an experimental and experiential view of learning, with learners actively reconstructing their knowledge based on their own existing internal model of the world [14]. Concerned with bridging the gap between the technology and pedagogy of 3D virtual learning environments (VLEs), Fowler [15] draws attention to the need to assign VR a more pedagogical scope and to focus on the learning outcomes arising from the interaction between the technical property of VR, i.e. “immersion”, and the psychological state of “presence”. Based on the aforementioned affordances, the present paper proposes the implementation of the VR application Dreams of Dali in the ESP course English for Fine Arts. More specifically, the aim is to increase student familiarisation with course-related content through authentic interactions with the cultural element of Surrealism, and to help students construct meaningful, personalised understandings through immersion and presence in a highly immersive VR system. In the highly immersive environment of Dreams of Dali, Salvador Dali’s artistic techniques and Surrealism as a taught art movement enable learners simultaneously to conceptualise parallel meanings and to experience the content physically, not just learn about it. Through this experience, one may witness a shift in the role of learners, who are no longer readers of static surrealist images and texts, nor viewers of informative videos; they become user participants in the authentic surrealist environment, where simulation is more than a multimodal experience: it is a subjective experience in terms of “presence” (the feeling of “being” in the authentic virtual Dali cultural context) [16] and an objective experience (the feeling of being immersed in the virtual space). Lee and Wong [17] stated that the application of VR in the arts and humanities had been disregarded. They also pointed out the opportunities for entering highly immersive contexts, since students can visit historical or fictional events and participate in abstract spaces or processes to enhance their learning experience. Dreams of Dali is a suitable virtual environment that can give Fine Arts students the opportunity to enter a mythical space simulating a painting by Salvador Dali and enhance their cultural knowledge. Moreover, this paper addresses the gap in the literature on fully immersive VR pedagogical applications and on the relationship between immersion and presence, since there is a dire need to align those affordances with pedagogy [18]. Simulations in highly immersive VR foreground two profound affordances: a) immersion and b) presence.
Scavarelli, Arya, and Teather [19] refer to immersion as an objective experience, delivered by the VR technology which covers various sensory modalities that reflect human real-world senses, and presence as the subjective experience of the user in accepting the artificial reality as reality. They also define embodiment as the mental representations of the body within the virtual space. The students’ navigation in the surrealistic context becomes more effective as an embodied experience since,
according to Johnson-Glenberg [20], embodiment, i.e., body movements with gestures, can facilitate cognitive activity and help students reflect on the embodied representations of their ideas. The multisensory experience of fully immersive VR can exploit all five senses: vision, audition, touch, taste, and olfaction [21].
2 Description

2.1 General Description of Dreams of Dali

Dreams of Dali is an artistic application which offers a three-dimensional experience of Salvador Dali’s surrealist painting Archaeological Reminiscence of Millet’s ‘Angelus’. It has been running as a digital experience in the Exhibits section of the Dali Museum in St. Petersburg, Florida, since 2016. The production company Goodby Silverstein and Partners developed this Surrealist application, which offers a single-user experience of the world of the visionary artist, Salvador Dali. Praised internationally by museum visitors and users, it has also gained many industry awards. Two additional advantages of the application are that, first, it is free and, second, users can immerse themselves in the surrealist environment from either a seated or a standing position.
Fig. 1. The original painting, The Angelus, by Jean-François Millet. (Image credit: Wikipedia)
Dali painted Archaeological Reminiscence of Millet’s ‘Angelus’ from 1933 to 1935. The painting was a revolutionary adaptation and a more provocative version of Jean-François Millet’s painting The Angelus (see Fig. 1). The original painting shows a pair of peasants bowing over a basket of potatoes in an open field, in what appears to be a praying ritual. Dali, on the other hand, in the vanguard of constant innovation and transformation, gave the painting a more spiritual interpretation based on his subconscious influences. This led to the creation of his surrealist masterpiece, with two figures resembling praying mantises instead of the real people of Millet’s version. In Dali’s representation, the male
mantis bows in defeat in front of the female one, symbolizing the artist’s suppressed fear of female sexuality. The two towering mantises are placed centrally in an empty landscape, with a shaft of light protruding from the dark sky, creating a mystical, eerie atmosphere.

2.2 Navigation Within the 3D Environment

The artistic liberty of interpretation Dali was aiming for with his surrealist work is reflected in this application as an immersive experience of constant exploration. For starters, the various spheres, or orbs, positioned at key spots in the 3D painting allow for user movement within Dali’s 3D context.
Fig. 2. Students point at the sphere to “enter” the painting.
Figure 2 shows the initial sphere students need to point at so that they can immerse themselves in the painting and navigate within the surrealist environment. Students are directed to hold the pointer on the sphere for three seconds in order to move to another element of their choice. Figure 3 depicts the two mystical praying mantises that one faces the moment one is transported into the virtual context. Students can view the mantises from all angles, since they are placed centrally within the vast landscape. Navigation is not sequential, so students are free to move within the landscape in any order they like by pointing at the spheres. Wandering freely and exploring what comes next adds to a feeling of perpetual mystery, since characters vanish and reappear from one minute to the next. In fact, the whole experience feels like venturing out into the unknown, which is a staple of surrealist art interpretation.
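The three-second pointing mechanic is a classic dwell-based selection technique. As an engine-agnostic sketch only (the app’s actual implementation is not public, and the names below are hypothetical), the logic amounts to a per-frame timer that resets whenever the pointer leaves the sphere:

```python
# Minimal sketch of dwell-based selection: pointing at a sphere for three
# seconds triggers the transport. Class and constant names are hypothetical.
DWELL_SECONDS = 3.0

class DwellSelector:
    def __init__(self, dwell: float = DWELL_SECONDS):
        self.dwell = dwell      # seconds the pointer must stay on a target
        self.target = None      # sphere currently under the pointer
        self.elapsed = 0.0      # time accumulated on the current target

    def update(self, pointed, dt: float):
        """Call once per frame; returns the sphere to teleport to, or None."""
        if pointed is not self.target:      # pointer moved to a new target
            self.target, self.elapsed = pointed, 0.0
            return None
        if self.target is None:             # pointing at empty space
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell:      # dwell complete: fire the move
            self.elapsed = 0.0
            return self.target
        return None
```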
Fig. 3. Facing the praying mantises while being immersed in the surrealist environment.
Fig. 4. Pointing at the sphere can transfer students to the opposite mantis.
Figure 4 illustrates one of the two gigantic mantises with a sphere in its mouth, which students can point at. It is important to stress that this sudden transfer might cause slight dizziness, especially when trying to look down at the ground. Students also have a chance to marvel at more surreal elements from Dali’s work on their immersive voyage, since the application incorporates some classic elements from other paintings, e.g., the marching elephants on beanstalk legs from his painting The Elephants and the bewildering synthesis of a lobster and a telephone from his work Lobster Telephone.
2.3 Technological Features of Dreams of Dali

According to the Dali Museum website [22], interested users may experience a linear 360° video through Google Cardboard, Samsung Gear VR, or Daydream (lower-cost devices), all of which have now been discontinued and are no longer commercially available. A conventional 2D version of the video can be viewed on a smartphone, tablet, or PC, or with a simple click on YouTube. A fully immersive experience of the application, as featured on tethered headsets like the HTC VIVE, Oculus Rift, and Valve Index [23], can now also be had on standalone headsets like the Oculus Quest 2, using a Link cable or a similar high-quality USB cable. Dreams of Dali became available for download from platforms such as Steam and Viveport in 2018, offering a fully immersive VR environment and using special hardware to interact with the world of Dali: a) an HMD, a wearable display that projects the images and, generally, everything the student is able to see and experience; b) sensors for 360° tracking; and c) controllers (or hand-tracking) for interaction and haptic feedback [19], for an embodied experience.
3 Analysis

3.1 Pedagogical Implementations of Dreams of Dali in the Language Course

In the era of electronic technologies, there has been prolific use of multimedia resources in language courses, allowing content material to be represented in various ways. Dreams of Dali offers a situated surrealist experience for students who study art. The authors, however, propose pedagogical ways of implementing the application in a language course in tertiary education, based mainly on the students’ reflective practice, even though the application is not a language application as such. The reflective practices were implemented in a pilot intervention program in the course English for Fine Arts. The authors consider it necessary to find pedagogical uses for immersive technologies so that they do not end up as mere entertainment for students. In the present ESP course, author 1 employs YouTube videos, multimodal texts, audio tracks, website articles, 360° videos, excerpts from books, online quizzes, and digital artefact-sharing platforms. All these resources enable more flexible delivery of course content, enriching the lesson and providing language learners with more resources to accommodate their learning needs. Teachers can test the instructional effectiveness of immersive VR and Dreams of Dali as an additional multimedia resource through the pedagogical process called transmediation. Transmediation involves the translation of one semiotic mode into another [24]. Put simply, it is a process that shows how learners perceive meaning across different media. In an ESP course, studying the thematic area of the most influential art movements in the world is an integral aspect of the curriculum. Learners are exposed to the characteristics of each movement through information drawn from the various multimedia resources mentioned above. Learners also use the target language to communicate what they understand, since the lesson favours a more communicative orientation.
Table 1 shows the progression of multimodal teaching of the thematic area of Surrealism, culminating in Dreams of Dali. The purpose is not to focus on strict grammar-based tasks and activities, but to choose the most appropriate resources to promote a more contextualised treatment of the subject.

Table 1. The process of transmediation for the study of Surrealism

Surrealism | Resource
Lesson 1: Conventional painting | Lists and image
Lesson 2: 2D video | YouTube
Lesson 3: VR experience | Dreams of Dali
In lesson 1, author 1 taught Surrealism in English as the target language through some indicative artistic examples, one of which was Archaeological Reminiscence of Millet’s ‘Angelus’, and introduced the surreal characteristics and concept-related vocabulary using a conventional image of the painting as a learning resource. This process involved the elements of art, for example, lines, shapes, space, and color, along with the principles of art, which included movement, unity, balance, and proportion. In lesson 2, author 1 proceeded with a multimedia resource, a YouTube video, displaying more elaborate forms of input from artists specialising in surrealist art or informative videos about this art movement. In lesson 3, interaction was replaced with immersion in the virtual environment of Dreams of Dali. This resource was considered an effective semiotic domain for the conceptual representation of Surrealism for students. It was also ideal for the students because they experienced the elements and principles of art they had learnt. Not only did the application offer a more dynamic multimodal representation of concepts, but learners also had more control over what they were studying. In fact, presenting content in multiple representations maximised learners’ understanding, which is why the authors were interested in seeing how meaning was negotiated between these various resources. The authors also found it useful to see which resource helped students comprehend the content better, manipulate it, and combine it with the target language [25]. After each lesson finished, students reflected on their experience with the resource they had interacted with. However, it was important to keep in mind that the affordances of each resource were different, so meanings were materialised differently. Language teachers interested in enhancing the thematic area of Surrealism with this application should consider that it pays no attention to linguistic accuracy; rather, it serves as input for extracting cultural information and familiarisation with the subject matter. The authors replaced traditional grammar as a concept of meaning making with the process of learning as Design. The concept of Design stems from a social semiotic approach to education, referring to the constant transformation one undergoes when making use of the available resources for engaging with meanings culturally, towards the implementation of new designs and the reshaping of new meanings [26]. Therefore, learning becomes a dynamic, transformative process of designing meaning within a range of semiotic modes, apart from speech and writing. In Dreams of Dali, learners underwent a transformative experience because they explored how parallel modes of
representation worked together, for example, image, color, music, gesture, space, etc., to produce a meaningful outcome. They deviated from the linguistic mode as the most prevalent mode in learning, since language alone could not give access to the meanings of multimodal messages. The multimodal affordances of Dreams of Dali, e.g., presence and immersion, as well as the use of sensory inputs (realistic graphics, eerie sounds, surreal sights, mystical creatures, etc.), first-person experience, navigation in the learning context, and student autonomy over the learning process [27], enabled learners to conceptualise parallel meanings and physically participate in the content, not just learn about it.

Table 2. Modes of meaning in the immersive environment

Meanings | Realisation of meanings in Dreams of Dali
audio | narrative voice of Dali echoing from a distance, eerie sounds, mystical music
written | directions for pointer use at the beginning
visual | the man and the boy holding hands, the praying man, the towering praying mantises
spatial | immersion, a sense of presence
gestural | user movement, marching elephants
Table 2 shows the multimodal modes of meaning [28] that came together in Dreams of Dali. As already mentioned, learners drew on distinctly different sets of resources for meaning making, motivated by their own interests; they therefore actively designed meaning, which is why learning became a very dynamic experience. They were no longer passive recipients of information about Surrealism. On the contrary, they were exposed to a cultural enactment of Surrealism, and they had full control over the content.
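One way to make this kind of multimodal analysis operational, purely as an illustrative sketch (the mode labels come from Table 2, but the excerpts and tagging below are invented, not the authors’ data or procedure), is to tally which modes of meaning students mention in their reflections:

```python
# Toy sketch: tally mentions of Table 2's modes of meaning in hypothetical
# student reflections. Not the authors' analysis procedure.
from collections import Counter

MODES = ("audio", "written", "visual", "spatial", "gestural")

reflections = [  # invented examples for illustration
    ("I heard Dali's voice from far away", ["audio"]),
    ("The mantises towered over me", ["visual", "spatial"]),
    ("I pointed at the sphere to move", ["gestural"]),
]

tally = Counter(mode for _, modes in reflections for mode in modes)
for mode in MODES:
    print(f"{mode}: {tally.get(mode, 0)} mention(s)")
```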
4 Reflective Insights Gained from Dreams of Dali

Plastina [29] talked about the need to view the ESP classroom as a multimodal environment where students can negotiate many meaning-making practices apart from language. Because the use of VR, AR and MR technologies tends to be decontextualised when it comes to the teaching of a foreign language [30], the instructors considered reflections an indispensable part of the lesson, engaging students in reflective practice. After each lesson, author 1 collected the students’ reflections on the lesson experience and, based on them, organised the subsequent lesson, which revolved around helping the students with their VR experience. At the end of the third lesson, students were given an anonymous questionnaire through which they expressed their experiences and opinions; the authors opted for an anonymous questionnaire to enable the students to express themselves freely. The questionnaire was paper-based and consisted of four close-ended and three open-ended questions. Questions 1–3 focused on the students’ experience of VR, whereas question 4 aimed to gauge the extent to which students internalised the steps
of the VR learning experience. In question 5, the students reflected on and evaluated their experience of the application Dreams of Dali, whereas in question 6 they were asked to suggest ways in which VR could be implemented in the Fine Arts field of study. It should be noted that these suggestions emerged from the students’ immersive experience in Dreams of Dali. The final question asked students to describe and reflect on their overall experience.

4.1 Reflective Examples

Some indicative examples from the students’ reflective questionnaires are presented in this section. The examples contain various types of mistakes; however, the authors refrained from making any corrections in order to preserve the authentic version of the reflections. It should be noted that author 1 provided feedback to the students based on the errors they had made.

Student 1. “I enjoy it a lot. It was good experience to be inside of the painting of Dali. But the second time was more close to the things im studing now. I liked more and I understand the application”.

Student 2. “The first idea is the blend of different forms of arts together, mixing the boundaries and creating a unique VR experience. Maybe while you walk through the landscape (created by paintings of famous artists) and you listen to music, the landscape will change as the music changes and forms into a new artstyle or a new artwork”.

Student 3. “Very nice experience. I felt myself immersed in the realm of virtual reality but because I have acrophobia I felt a little uncomfortable”.

Student 4. “I felt so immersed that everything else fades away, definitely I would like to repeat it again, for sure I would like to try another application and spending more time with the equipment, it is necessary”.

Student 5. “I was very excited with VR. I felt like I was in a different dimension. I left reality and I felt like I was free of my body”.

Student 6. “Based on my experience the VR puts you in an immersion situation. Where you can be creative and see things differently. I would love to repeat it. I would like to try another application like tilt brush, google earth and others. VR is something that demands time in order to get to know the equipment better and if you like to develop and take good
advantage of this new technology you need time not only to get to know it but to learn the theory of the equipment”. Student 7. “A new source of inspiration and represent of an artwork for 21st century artists”.
5 Classroom Activity Using Dreams of Dali
Fig. 5. Image of Dreams of Dali. (https://store.steampowered.com/app/591360/Dreams_of_Dali/)
The present article shows how the immersive environment in Dreams of Dali can serve as an instructional design tool which can increase familiarisation with course-related content in English for Fine Arts and help simulate an authentic surrealist environment that moves beyond static art-related images and passive surrealist representations of the art movement in the classroom. The authors propose the following activity based on the technological and pedagogical implementations mentioned above, as well as on students’ reflections. The authors follow the activity format used by Frazier, Lege and Bonner [31] (Fig. 5).

Activity: Maximising Content Learning
• Target level: B2
• Time: 90 min
• Aims: Students immerse themselves in Dreams of Dali to experience situated representations of course content and different parallel modes of meaning representation, and to practise their English language skills (writing a descriptive and argumentative text about their experience in the multimodal simulated environment using linguistic elements such as spatial adverbs, adjectives, words/phrases for expressing opinion, active and passive verbs, past tenses, comparisons, etc.)
• Resources/materials: VR headset, Dreams of Dali app, lists of spatial adverbs and adjectives and regular and irregular verbs, guidelines for writing a descriptive and argumentative text, list of active and passive verbs, comparative/superlative guidelines
• Possible problems: Dreams of Dali offers an individual experience, but a limited number of VR headsets (whether tethered or standalone) may slow down the process
• Procedure: This activity helps students explore parallel modes of meaning representation in Salvador Dali’s surrealist painting. Subsequently, it provides students with the opportunity to demonstrate their perception of the painting through written and oral activities.
• Stages:
1. The students are given lists of spatial adverbs and adjectives, regular and irregular verbs, active and passive verbs, and comparative/superlative guidelines.
2. The instructor shows students how to become familiar with the VR hardware and software.
3. Students are immersed in Dreams of Dali individually, for approximately 10 min each.
4. Once the simulated experience in the surrealist environment is over, students orally compare the various modes of meaning making they have been exposed to (see Table 1).
5. The instructor gives the students the guidelines for writing a descriptive and argumentative text, followed by a discussion with the students in class.
6. The students design [26] their texts and justify their linguistic choices. The instructor may intervene at the writing stage if errors are detected.
7. Based on the instructor’s comments, the students write the final version of their text.
8. The students reflect on their whole experience (stages 1–7) in their diaries. The reflections may serve as a new source for language learning.
9. The students orally present their VR experience and the knowledge gained in Dreams of Dali to the first-year students of the following academic semester.
6 Conclusions

Through the present intervention program, university Fine Arts students studying ESP were taught the surrealist techniques of Salvador Dali through an immersive experience in the VR application Dreams of Dali. The students’ reflections indicated that the integration of Dreams of Dali into the ESP lesson offered a fruitful experience and a positive environment for understanding the surrealist properties of the artist’s work. Moreover, the students’ written texts (e.g., descriptions of their experience in the VR environment) functioned as sources for language learning practice. Finally, the authors suggest that more in-depth results can be obtained from future research.
References

1. Kessler, G.: Technology and the future of language teaching. Foreign Lang. Ann. 51(1), 205–218 (2018). https://doi.org/10.1111/flan.12318
2. Jewitt, C.: The Routledge Handbook of Multimodal Analysis, 1st edn. Routledge, London (2011)
3. Steuer, J.: Defining virtual reality: dimensions determining telepresence. J. Commun. 42(4), 73–93 (1992). https://doi.org/10.1111/j.1460-2466.1992.tb00812.x
4. Häfner, P., Häfner, V., Ovtcharova, J.: Teaching methodology for virtual reality practical course in engineering education. Procedia Comput. Sci. 25, 251–260 (2013). https://doi.org/10.1016/j.procs.2013.11.031
5. Blyth, C.: Immersive technologies and language learning. Foreign Lang. Ann. 51(1), 225–232 (2018). https://doi.org/10.1111/flan.12327
6. Hillmann, C.: Unreal for Mobile and Standalone VR, pp. 141–167 (2019). https://doi.org/10.1007/978-1-4842-4360-2
7. Auer, M.E., Tsiatsos, T. (eds.): IMCL 2017. AISC, vol. 725. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-75175-7
8. Ogdon, D.C., Crumpton, S.: Bring the past to the future: adapting stereoscope images for use in the Oculus Go. J. Med. Libr. Assoc. 108(4), 639–642 (2020). https://doi.org/10.5195/jmla.2020.1039
9. Hu Au, E., Lee, J.J.: Virtual reality in education: a tool for learning in the experience age. Int. J. Innov. Educ. 4(4), 215 (2017). https://doi.org/10.1504/IJIIE.2017.10012691
10. Jauregi-Ondarra, K., Christoforou, M., Boglou, D.: Initiating meaningful social interactions in a high-immersion self-access language learning space. JASAL J. 3(2), 86–102 (2022)
11. Boglou, D.: A simple blueprint for using Oculus Rift in the language learning classroom. In: 12th International Proceedings on Innovation in Language Learning, pp. 106–111. Filodiritto Editore (2019)
12. Lan, Y.J.: Immersion, interaction, and experience-oriented learning: bringing virtual reality into FL learning. Lang. Learn. Technol. 24(1), 1–15 (2020). http://hdl.handle.net/10125/44704
13. Christoforou, M., Xerou, E., Papadima-Sophocleous, S.: Integrating a virtual reality application to simulate situated learning experiences in a foreign language course. In: Meunier, M., Van de Vyver, F., Bradley, L., Thouësny, S. (eds.) CALL and Complexity – Short Papers from EUROCALL 2019, pp. 82–87. Research-publishing.net (2019). https://doi.org/10.14705/rpnet.2019.38.9782490057542
14. Schwienhorst, K.: Why virtual, why environments? Implementing VR concepts in CALL. Simul. Gaming 33(2), 196–209 (2002)
15. Fowler, C.: Virtual reality and learning: where is the pedagogy? Br. J. Edu. Technol. 46(2), 412–422 (2015). https://doi.org/10.1111/bjet.12135
16. Berti, M.: The unexplored potential of virtual reality for cultural learning. The EuroCALL Rev. 29(1), 60–67 (2021). https://doi.org/10.4995/eurocall.2021.12809
17. Lee, E.A.-L., Wong, K.W.: A review of using virtual reality for learning. In: Pan, Z., Cheok, A.D., Müller, W., El Rhalibi, A. (eds.) 3rd International Conference on E-Learning and Games, Transactions on Edutainment I, pp. 231–241. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-69744-2_18
18. Parmaxi, A.: Virtual reality in language learning: a systematic review and implications for research and practice. Interact. Learn. Environ. 31(1), 172–184 (2020). https://doi.org/10.1080/10494820.2020.1765392
19. Scavarelli, A., Arya, A., Teather, R.J.: Virtual reality and augmented reality in social learning spaces: a literature review. Virtual Reality 25(1), 257–277 (2020). https://doi.org/10.1007/s10055-020-00444-8
20. Johnson-Glenberg, M.: Immersive VR and education: embodied design principles that include gesture and hand controls. Front. Robot. AI 5, 1–19 (2018). https://doi.org/10.3389/frobt.2018.00081
21. Christou, C.: Virtual reality in education. In: Affective, Interactive and Cognitive Methods for E-Learning Design: Creating an Optimal Education Experience, pp. 228–243 (2010). https://doi.org/10.4018/978-1-60566-940-3.ch012
22. The Dalí Museum Homepage. thedali.org. Accessed 06 Feb 2023
23. Sadler, R., Thrasher, T.: Teaching languages with virtual reality: things you may need to know. CALICO Infobytes, December 2021. http://calico.org/infobytes
24. Mills, K.A., Brown, A.: Immersive virtual reality (VR) for digital media making: transmediation is key. Learn. Media Technol. 47(2), 179–200 (2021). https://doi.org/10.1080/17439884.2021.1952428
25. Lege, R., Bonner, E., Frazier, E., Pascucci, L.: Pedagogical considerations for successful implementation of virtual reality in the language classroom. In: New Technological Applications for Foreign and Second Language Learning and Teaching, pp. 24–46. IGI Global, Hershey, PA (2020). https://doi.org/10.4018/978-1-7998-2591-3.ch002
26. Kress, G.: Design and transformation: new theories of meaning. In: Cope, B., Kalantzis, M. (eds.) Multiliteracies: Literacy Learning and the Design of Social Futures, 1st edn. Routledge, London (2000)
27. Dawley, L., Dede, C.: Situated learning in virtual worlds and immersive simulations. In: Handbook of Research on Educational Communications and Technology, 4th edn., pp. 723–734 (2014). https://doi.org/10.1007/978-1-4614-3185-5
28. Kalantzis, M., Cope, B., Chan, E., Dalley-Trim, L.: Literacies, 2nd edn. Cambridge University Press, Cambridge (2016)
29. Plastina, A.: Multimodality in English for specific purposes: reconceptualizing meaning-making practices. LFE Revista de Lenguas para Fines Específicos 19, 372–396 (2013)
30. Mills, K.A.: Potentials and challenges of extended reality technologies for language learning. Anglistik 33(1), 147–163 (2022)
31. Frazier, E., Lege, R., Bonner, E.: Making virtual reality accessible for language learning: applying the VR application analysis framework. Teach. English Technol. 21(1), 131–143 (2021)
Implementation of Augmented Reality Resources in the Teaching-Learning Process. Qualitative Analysis

Omar Cóndor-Herrera1(B) and Carlos Ramos-Galarza2

1 Centro de Investigación en Mecatrónica y Sistemas Interactivos MIST / Carrera de Psicología / Maestría en Educación Mención Innovación y Liderazgo Educativo, Universidad Tecnológica Indoamérica, Av. Machala y Sabanilla, Quito, Ecuador
{omarcondor,carlosramos}@uti.edu.ec
2 Facultad de Psicología, Pontificia Universidad Católica del Ecuador, Av. 12 de Octubre y Roca, Quito, Ecuador
[email protected]
Abstract. This paper presents the qualitative results of an educational intervention that consisted in implementing augmented reality (AR) resources in the teaching process. The objective of the research was to identify the categories that emerged in the participants’ discourse in relation to their experience of applying augmented reality elements in the learning process, for which the open coding technique was used. The study, which lasted 8 weeks, was conducted with a population of 81 students, 45 women and 36 men, aged between 13 and 16 years. Once the intervention was over, an in-depth interview was conducted to capture the participants’ experiences; the information obtained was then analyzed, identifying the following categories of description: a) learning experience with elements of augmented reality, b) adaptation to a new learning methodology, c) motivation, and d) benefits and disadvantages of applying AR elements. The results show that many of the students interviewed responded in favor of the use of augmented reality technology. The participants pointed out that the learning experience with elements of augmented reality was innovative and interesting. In relation to adapting to a new learning methodology, they indicated that they did not have major difficulties once they became familiar with the operation of the different applications. The vast majority indicated that they felt motivated during the intervention, this being the greatest benefit, along with being able to access study topics in an effortless, interactive way, which significantly improves their willingness to carry out learning activities. Finally, as a disadvantage of applying AR elements, they pointed out that the applications used require a lot of memory on mobile devices.

Keywords: Augmented reality · education · innovation · technology
1 Introduction

Augmented reality (AR) is a type of technology that enables the combination of virtual objects and real objects in real time, displayed through technological devices [1, 2]. Other authors define AR as audio, graphics, text, and other virtual elements superimposed on reality [3], unlike virtual reality (VR), where the individual accesses information through an immersive, simulated environment [4]. In the modern educational field it is imperative to take the role of AR into account: more and more resources and activities are developed with the help of this technology, and education has gradually incorporated different resources, methodologies and tools linked largely to educational technology, a process that was accelerated during the COVID-19 pandemic [5]. Likewise, educational research has reported findings favorable to the application of AR in education: various authors point out that the use of different technological tools improves students’ learning experience as well as their motivation and predisposition to learn [6, 7], with similar results reported in the teaching of anatomy [8], literature and reading comprehension through immersive learning [9], preschool education, the creation of educational video games [10], and emotional intelligence [11], to name a few. Consequently, the use of this technology (AR) could facilitate the understanding of scientific concepts, since it complements the student’s sensory perception of reality by incorporating computer-generated content into the environment, offering a new form of interactivity between real and virtual worlds [12, 13]. On the other hand, AR can be used as a resource that accompanies learning methodologies [14] such as gamification [15] or PBL, or educational approaches such as STEAM education; since the rise of digital technologies, technological tools have been instruments at the service of education [16]. In this scenario, the current research is proposed, which collects the narratives of the students’ perceptions when applying augmented reality resources in the teaching-learning process.

1.1 Benefits

Studies carried out in recent years in the field of augmented reality have become more relevant, and many important benefits related to the application of AR in the educational field have been found. For example, the study entitled Mobile augmented reality adapted to the ARCS model of motivation: a case study during the COVID-19 pandemic evaluated the relationship between motivation and meaningful learning for university students through AR; an increase was found in the percentage of students who achieved the expected learning objectives, compared to previous versions of the course without AR [17]. In another investigation, carried out at the Pablo de Olavide University of Seville (Spain) and the Catholic University of Santiago de Guayaquil (Ecuador), in which different AR applications were evaluated, the students indicated that they perceived the development of cognitive skills and of digital skills as benefits of the application of AR in education [18]. Other studies indicate that students consider that AR resources arouse their motivation due to their ease of
use and the interaction they experience between content and virtual objects, generating knowledge with entertainment [8].

1.2 Limitations for AR Implementation

Just as there are benefits to the application of AR, the limitations that stand in the way of successfully incorporating AR resources in education should also be analyzed. Teachers’ digital skills in the use of the tools can be considered one such limitation. Continuous teacher training is therefore essential: the better teachers understand the application of technological resources and the management of teaching methodologies linked to technology, the better the learning results that can be expected when using these resources in education [8, 10, 19].

1.3 Application of Augmented Reality in Education

In Table 1 the reader will find the main findings of recent studies on the application of AR in the educational field.

Table 1. Studies on the application of AR.

Research title | Authors | Research | Findings
Mobile augmented reality adapted to the ARCS model of motivation: a case study during the COVID-19 pandemic | (Laurens, L., 2022) [17] | Evaluate the relationship between motivation and meaningful learning for university students through AR, as well as the effects and implications of its use as support for teaching activities in an Industrial Design and Technical Drawing course | The implementation of AR was positively valued by many of the students surveyed. An increase was found in the percentage of students who achieved the expected learning objectives, compared to previous versions of the course (without AR)
The use of augmented reality to improve the development of activities of daily living in students with ASD | (Lledó, G.; Lledó, A.; Gilabert, A.; Lledó, A., 2022) [20] | The research objective was to analyze the effect of AR as a tool for learning the process of hand washing. The research used AR within a quantitative approach, a quasi-experimental methodology and a pretest-posttest design | The results show improvements in the post-test, brought about by working with augmented reality, in several areas
Virtual reality and augmented reality at the service of increasing interactivity in MOOCs | (Hamada, E.; Mohamed, E.; Sadgal, M.; Mourdi, Y., 2020) [21] | Two MOOCs were applied: the first was a traditional MOOC, while the second offered participants virtual simulations and practical activities to prevent dropout | Comparing the two MOOCs, the dropout rate decreased by more than half (from 36.62% in the first group to 15.79% in the second). In addition, the rate of comprehension problems decreased from 26.67% in the first group to 11.94% in the second group
Faculty at Saudi Electronic University attitudes toward using augmented reality in education | (Hamadah, A.; Thamer, A., 2019) [22] | The study examined the possibility of implementing augmented reality (AR) applications in higher education by answering three questions: whether Saudi Electronic University professors are familiar with such applications, what perceptions they have about their use in education, and what barriers they believe could make it difficult to implement this technology | The results showed that teachers have a positive outlook towards the use of AR and are confident in its potential to enrich the learning environment
Effects of augmented reality application integration with computational thinking in geometry topics | (Mohd, A.; Mohd, H.; Noraffandy, Y.; Zaleha, A., 2022) [23] | Three variables were measured: computational thinking, visualization skills, and geometry subject achievement. The study was implemented with 124 students using a quasi-experimental design | The results show a positive effect of teaching methods using augmented reality applications with computational thinking in improving students’ computational thinking, visualization skills, and achievement in the geometry topic
Augmented reality system for teaching mathematics during COVID-19 times | (Naranjo, J.; Robalino-López, A.; Alarcon-Ortiz, A.; Peralvo, A.; Garcia, M.) [24] | The article develops an augmented reality (AR) system, based on the Singapore method, for teaching the exact sciences | As a result, the students improved their academic average when using this type of tool
2 Methodology

The present investigation is qualitative in nature; it analyzes the discourses that emerged from the experiences and perspectives of students taking part in an innovation project implementing augmented reality (AR) resources. It is worth mentioning that the qualitative results presented in this paper are part of a larger project.

2.1 Participants

The study population consisted of 45 women and 36 men, aged between 13 and 16 years, who took part in an innovation program implementing augmented reality resources that lasted 8 weeks. In total, 81 students consented to voluntary participation in the intervention. All the participants belonged to the public (fiscal) educational system of the city of Quito, Ecuador.

2.2 Information Collection Techniques

An in-depth interview was applied, covering the students’ experience of developing the projects with augmented reality, the difficulties and facilities of working with these technological resources, the benefits and disadvantages of implementing augmented reality resources in education, and other aspects that made it possible to explore the student’s experience of the learning process proposed in the research.

2.3 Data Analysis Plan

For the analysis of the linguistic content, the open coding technique was used, which made it possible to identify the distinct categories that arose in the participants’ discourse in relation to their experience of applying augmented reality elements in the learning process [25].

2.4 Procedure

The research was approved by the Ethics Committee of the Universidad Tecnológica Indoamérica and by the authorities of the institution where it was applied. The learning project implementing augmented reality resources was carried out over eight weeks, during which the students developed their projects. Initially, the concepts of augmented reality were explained to them; the students were then taught to work on platforms such as Unite AR [26] and MyWebAR [27], as well as different applications, such as Animal 4D+, which shows cards with AR animations of animals, Humanoid 4D+, which offers AR resources on the human body [28], Raap Chemistry, which allows visualizing the atomic structure of all the elements of the periodic table, and Merge Cube, which provides different AR animations.
Based on this knowledge, the students selected assorted topics and developed their projects by implementing AR resources for their learning, ending with the presentation of the products obtained in each project. Once this process was completed, in-depth interviews were conducted with the participants, from which the qualitative results presented below emerge. It is important to highlight that the work complied with the ethical standards for research with human beings declared in Helsinki in 1964 and in Nuremberg [29]. Informed consent for voluntary participation was signed by the students’ legal representatives before starting the intervention; all data obtained were managed with absolute confidentiality, and students and parents were informed of the purpose of the research as well as of the right to withdraw their child from the research at any time [30].
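Before turning to the results, a rough illustration of the bookkeeping behind the open coding described in Sect. 2.3 may help; this is a sketch only, with invented excerpts and codes, not the authors’ actual coding procedure or data:

```python
# Illustrative sketch of open-coding bookkeeping: interview excerpts are
# tagged with emergent codes, and codes are grouped into the description
# categories reported below. Excerpts and codes here are invented.
from collections import defaultdict

excerpts = [
    ("It was innovative and interesting", ["novelty"]),
    ("Once familiar with the apps, it was easy", ["adaptation"]),
    ("I felt motivated during the lessons", ["motivation"]),
    ("The apps take a lot of memory on my phone", ["device_limits"]),
]

CATEGORY_OF = {  # emergent code -> description category
    "novelty": "learning experience",
    "adaptation": "adaptation to a new methodology",
    "motivation": "motivation",
    "device_limits": "benefits and disadvantages",
}

by_category = defaultdict(list)
for text, codes in excerpts:
    for code in codes:
        by_category[CATEGORY_OF[code]].append(text)

for category, quotes in by_category.items():
    print(f"{category}: {len(quotes)} excerpt(s)")
```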
3 Results

The open coding procedure, applying constant comparison and the respective association between the participants’ narratives, made it possible to identify the categories that build a substantive description of the phenomenon resulting from the application of augmented reality resources in the teaching-learning process [31]. The categories of description identified after the interviews are the following: a) learning experience with elements of augmented reality, b) adaptation to a new learning methodology, and c) motivation, benefits, and disadvantages, on which the applied linguistic analysis focused. Next, the narratives extracted for each category are presented.

3.1 Learning Experience

Once the intervention was completed, the students were interviewed and asked to share the experience they had had working with the different AR elements and applications during the work weeks, to which the participants stated the following:

– Student 1.- It was an interesting and unique experience because I did not know that this type of thing existed, having an application that makes something look as if it were real (…).
– Student 2.- My experience was exceptionally good since I did not know augmented reality applications. At first it was complicated but genuinely nice (…).
– Student 3.- It was incredibly fun as well as exciting and my classmates collaborated a lot with their work (…).
– Student 4.- It was a valuable experience working with augmented reality since it helps us learn topics with the help of technology, it made it much easier for us since everything is virtual and it is more understandable in these times (…).
– Student 5.- This helped us to expand the information on something as simple as scanning a code, we were surprised, and it was something innovative and to be able to know that for a time the perspective surpassed reality (…).

As can be seen from the extracts obtained from the interview, the students state that their experience working with elements of augmented reality was positive. From
the answers, it can be seen that these types of resources are attractive and capture the students’ interest [31], making them excited about and interested in the topics of study; especially, as student 4 mentions, the use of technology facilitates the understanding of some topics, since these resources suit the characteristics that students currently require. In the same vein, students were asked to share their reactions when they visualized AR elements for the first time, from which the following narratives are extracted.

– Student 1.- It was surprising since it is something we did not know could be done; it was a pleasant experience (…) (woman, 15 years old).
– Student 2.- It was something exciting and new, seeing animals, for example, the human body in augmented reality. I was surprised to see that the applications are easy to use (…) (man, 15 years old).
– Student 3.- It was a reaction of great astonishment since it was the first time we worked with something of this type (…) (man, 16 years old).

All the answers provided by the students agreed that their first reaction on seeing the augmented reality elements was one of amazement and excitement, and that the applications were not difficult to manage, nor even to use to create their own augmented reality elements, as the following excerpt mentions.

– Student 4.- I was surprised to see that augmented reality animations can be made with our cell phones and in an uncomplicated way (…) (man, 15 years old).

Figure 1 shows an example of the augmented reality elements that the students worked with in the Animal 4D+ application [28].

3.2 Adaptation to a New Learning Methodology

The second category analyzed and extracted from the narratives provided by the participants was the adaptation they underwent when working with these new elements within their learning methodology. In the interviews, the diverse groups of students were asked about the difficulty of using the AR apps or of creating the AR items via QR codes. The extracted narratives are presented next.

– I consider that it was not a complicated process, since there are currently various applications that facilitate the process of creating AR resources, most applications list the steps that must be followed, and there are also other options such as video tutorials on the Internet (…) (woman, 15 years old).
– At first it was difficult because I was just a beginner in this type of application, but as time went by using the application it became easier (…) (man, 16 years old).
– It was quite easy, we never saw it as something difficult thanks to the instructions and guides provided by the teacher (…) (man, 14 years old).

Figure 2 shows a work session conducted with the students.
Fig. 1. Elements of augmented reality in the Animal 4D+ application.
As can be understood from the excerpts presented, the students consider that adapting to working with the elements of augmented reality was not difficult: at first it had its difficulty, since it was something they had not worked with before, but once they became familiar with the elements it was simpler. In the same way, they mentioned that they relied on video tutorials and the instructions provided by the teacher.
3.3 Motivation
Below are some narratives extracted when students were asked whether they consider that the application of augmented reality resources was motivating.
– Yes, it is motivating since it can be learned in a quite easy and fun way (…) (female student, 14 years old).
– I consider that, yes, it is motivating, because it is more entertaining to learn with augmented reality applications and it is very interesting, and I think that you learn much better (…) (female student, 15 years old).
– Yes, it is motivating since, when opening the augmented reality elements, it is incredibly fun and makes it easier to learn, and it is also remarkably interesting (…) (male student, 14 years old).
Fig. 2. Work session conducted with the participants.
– Yes, it motivates because students could be inspired to investigate this topic and see things from other perspectives, see things with other eyes, and they like it (…) (male student, 14 years old).
Once the answers of the students have been analyzed, it is evident that their perception of motivation is positive, since they consider that the use of AR resources is not only motivating but also facilitates an interesting and fun learning process, which, in their words, makes learning easier.
3.4 Benefits and Disadvantages
Finally, the students were asked to share the benefits and disadvantages they perceived when working with the elements of augmented reality during the development of the present investigation; from the extracted narratives, the following stand out.
– As a benefit I can point out education in an interactive and modern way, faster, more efficient, and extraordinary learning, and the teaching of the use of technological resources and how to implement them (…) (male student, 15 years old).
– The benefits may be that these elements make education more didactic, and learning is easier; as a disadvantage, the applications use a lot of space on the cell phone (…) (male student, 14 years old).
– The benefit is that it illustrates the topics in an easy and didactic way; it entertains while teaching and is motivating. The disadvantage is that heavy applications take time to download to the cell phone (…) (female student, 13 years old).
– The benefit is that it is an interactive and attractive way of learning (…) (female student, 14 years old).
From what was stated by the students, learning in a didactic way is identified as the main benefit: it is attractive for students to work interactively, and, in turn, these elements motivate students to work in a better way, which makes learning appealing. As disadvantages, in general, students agree that the applications, being heavy, occupy a lot of space on their mobile devices; however, that does not prevent them from working with them. In Fig. 3 the students can be seen visualizing the elements of augmented reality with their cell phones.
Fig. 3. Visualization of AR elements through the cell phone.
4 Conclusions
In this article we have reported the qualitative results of an investigation focused on analyzing students’ narratives about their perception of the use of AR resources in their learning process.
The results show that most of the students interviewed gave answers in favor of the use of augmented reality technology. The participants pointed out that the learning experience with elements of augmented reality was innovative and interesting. In relation to the adaptation to a new learning methodology, they indicated that they did not have major difficulties once they became familiar with the operation of the different applications. The vast majority indicated that they felt motivated during the intervention, this being the greatest benefit, together with being able to access study topics in an effortless and interactive way, which significantly improves their willingness to engage in learning activities. Finally, as a disadvantage of the application of AR elements, they pointed out that the applications used require a lot of memory on mobile devices. As future research, it is proposed to extend this type of intervention to other areas of knowledge and to a larger population; likewise, it is proposed to have students evaluate the usefulness of AR mobile applications.
References
1. Martínez Pérez, S., Fernández, B., Barroso, J.: La realidad aumentada como recurso para la formación en la educación superior. Campus Virtuales 10(1), 9–19 (2021)
2. Azuma, R.: A survey of augmented reality. Presence: Teleoperators Virtual Environ. 6(4), 355–385 (1997)
3. Dunleavy, M., Dede, C.: Augmented reality teaching and learning. In: Spector, J., Merrill, M., Elen, J., Bishop, M. (eds.) Handbook of Research on Educational Communications and Technology, pp. 735–745. Springer, New York, NY (2014). https://doi.org/10.1007/978-1-4614-3185-5_59
4. Liberati, N.: Augmented reality and ubiquitous computing: the hidden potentialities of augmented reality. AI & Soc. 31(1), 17–28 (2014). https://doi.org/10.1007/s00146-014-0543-x
5. Cóndor-Herrera, O.: Educar en tiempos de covid-19. CienciAmérica 9(2), 31–37 (2020)
6. Cóndor-Herrera, O., Ramos-Galarza, C.: The impact of a technological intervention program on learning mathematical skills. Educ. Inf. Technol. 26, 1423–1433 (2021). https://doi.org/10.1007/s10639-020-10308-y
7. Danaei, D., Jamali, H., Mansourian, Y., Rastegarpour, H.: Comparing reading comprehension between children reading augmented reality and print storybooks. Comput. Educ. 153, 1–24 (2020)
8. Hidalgo-Cajo, B., Hidalgo-Cajo, D., Montenegro-Chanalata, M., Hidalgo-Cajo, I.: Augmented reality as a support resource in the teaching-learning process. Revista Electrónica Interuniversitaria de Formación del Profesorado 24(3), 43–55 (2021)
9. del Rosario-Neira, M., del-Moral, E.: Literary education and reading promotion supported in immersive literary environments with augmented reality. Ocnos 20(3) (2021)
10. Méndez-Porras, A., Alfaro-Velasco, J., Rojas-Guzmán, R.: Educational video games for girls and boys in preschool education using robotics and augmented reality. RISTI - Revista Ibérica de Sistemas e Tecnologias de Informação 2021(42), 472–485 (2021)
11. López, L., Jaen, J.: EmoFindAR: evaluation of a mobile multiplayer augmented reality game for primary school children. Comput. Educ. 149, 1–42 (2020)
12. Kaur, N., Pathan, R., Khwaja, U., Sarkar, P., Rathod, B., Murthy, S.: GeoSolvAR: augmented reality based application for mental rotation. In: 2018 IEEE Tenth International Conference on Technology for Education (T4E), pp. 45–52 (2018)
13. Ibáñez, M., Delgado, C.: Augmented reality for STEM learning: a systematic review. Comput. Educ. 123, 109–123 (2018)
14. Cóndor-Herrera, O., Acosta-Rodas, P., Ramos-Galarza, C.: Augmented reality teaching resources and its implementation in the teaching-learning process. In: Nazir, S., Ahram, T.Z., Karwowski, W. (eds.) AHFE 2021. LNNS, vol. 269, pp. 149–154. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80000-0_18
15. Cóndor-Herrera, O., Acosta-Rodas, P., Ramos-Galarza, C.: Gamification teaching for an active learning. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., Taiar, R. (eds.) Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing, vol. 1322, pp. 247–252. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-68017-6_37
16. García-Valcárcel, A., Gómez-Pablos, V.: Aprendizaje Basado en Proyectos (ABP): evaluación desde la perspectiva de alumnos de Educación Primaria. Revista de Investigación Educativa 35(1), 113–131 (2017)
17. Laurens-Arredondo, L.: Mobile augmented reality adapted to the ARCS model of motivation: a case study during the COVID-19 pandemic. Educ. Inf. Technol. 27, 7927–7946 (2022)
18. Cabero-Almenara, J., Vásquez-Cano, E., Villota-Oyarvide, W., López-Meneses, E.: Innovation in the university classroom through augmented reality. Analysis from the perspective of the Spanish and Latin American student. Revista Electrónica Educare 25(3), 1–17 (2021)
19. Cóndor-Herrera, O., Ramos-Galarza, C.: E-learning and m-learning technological intervention in favor of mathematics. In: Zaphiris, P., Ioannou, A. (eds.) HCII 2021. LNCS, vol. 12784, pp. 401–408. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77889-7_28
20. Lledó, G., Lledó, A., Gilabert, A., Lorenzo, A.: The use of augmented reality to improve the development of activities of daily living in students with ASD. Educ. Inf. Technol. 27, 4865–4885 (2022)
21. Hamada, E.K., Mohamed, E.A., Mohamed, S., Youssef, M.: Virtual reality and augmented reality at the service of increasing interactivity in MOOCs. Educ. Inf. Technol. 25, 2871–2897 (2020)
22. Hamadah, A., Thamer, A.: Faculty at Saudi Electronic University attitudes toward using augmented reality in education. Educ. Inf. Technol. 24, 1961–1972 (2016)
23. Mohd, A., Mohd, N., Noraffandy, Y., Zaleha, A.: Effects of augmented reality application integration with computational thinking in geometry topics. Educ. Inf. Technol. 27, 9485–9521 (2022). https://doi.org/10.1007/s10639-022-10994-w
24. Naranjo, J., Robalino-López, A., Alarcon-Ortiz, A., Peralvo, A., Garcia, M.: Augmented reality system for teaching mathematics during COVID-19’s times. RISTI - Revista Ibérica de Sistemas e Tecnologias de Informação 2021(42), 510–521 (2021)
25. Ramos-Galarza, C.: El abandono de la estadística en la psicología de Ecuador. Revista Chilena de Neuro-psiquiatría 55(2), 135–137 (2017)
26. MyWebAR: mywebar.com (2021)
27. UniteAR: unitear.com (2022)
28. Octagon Studio: 4D+ Flashcards. https://octagon.studio/products-and-services/4d-flashcards/ (2022)
29. Manzini, J.: Declaración de Helsinki: principios éticos para la investigación médica sobre sujetos humanos. Acta Bioethica 6(2), 321–334 (2000)
30. Nathanson, V.: Revising the Declaration of Helsinki. BMJ 346, 1–2 (2013)
31. Cóndor-Herrera, O., Ramos-Galarza, C., Acosta-Rodas, P.: Implementation of virtual learning objects in the development of mathematical skills: a qualitative analysis from the student experience. In: Stephanidis, C., Antona, M., Ntoa, S. (eds.) HCI International 2021 - Posters. HCII 2021. Communications in Computer and Information Science, vol. 1421, pp. 17–30. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78645-8_3
32. Conley, Q., Atkinson, R., Nguyen, F., Nelson, B.: MantarayAR: leveraging augmented reality to teach probability and sampling. Comput. Educ. 153, 1–22 (2020) 33. Rusiñol, M., Chazalon, J., Diaz, K.: Augmented songbook: an augmented reality educational application for raising music awareness. Multimedia Tools Appl. 77, 13773–13798 (2018)
Teachers’ Educational Design Using Adaptive VR-Environments in Multilingual Study Guidance to Promote Students’ Conceptual Knowledge
Emma Edstrand1(B), Jeanette Sjöberg1, and Sylvana Sofkova Hashem2
1 Halmstad University, Kristian IV:S Väg 3, 301 18 Halmstad, Sweden
{emma.edstrand,jeanette.sjoberg}@hh.se
2 University of Gothenburg, Läroverksgatan 11, 411 20 Gothenburg, Sweden
[email protected]
Abstract. Virtual Reality (VR) is an example of a technology offering interesting potentials for learning in schools. Through VR, information and knowledge are made accessible in new ways, since the technology allows experiencing destinations and content in 3D format beyond the classroom. In addition, VR invites activities where teachers and students can engage in the creation of content. The study has a particular interest in teachers’ educational design using adaptive VR-environments in multilingual study guidance. Multilingual study guidance is a support in Swedish schools to enhance the development of subject content learning in the native language. Based on a co-design approach combining methods of action research and design-based research, the aim of the study is to explore how to didactically design multilingual study guidance to promote the development of students’ conceptual knowledge with adaptive VR-environments. The research questions posed are: (1) How do study guidance (SG) teachers plan and organize for multilingual study guidance with adaptive VR-environments? and (2) In what way is the subject content enacted in adaptive VR-environments? The data concern interviews and workshop discussions with two SG-teachers reflecting on opportunities and challenges with multilingual study guidance carried out in VR-environments. The results show in what ways adaptive VR-environments function as a bridge between students’ first language and the subject area content and that the SG-teachers act as pillars of support. Furthermore, the study contributes to increasing the quality of multilingual study guidance, which is a field where research is limited.
Keywords: Didactic Design · Design-based Research · Multilingual Study Guidance · Teachers · Teaching · Virtual Reality
1 Introduction
The integration of digital technologies in school settings has gained increasing prominence, and this, naturally, has consequences for how teachers plan and organize teaching [1]. To implement digital technologies in instructional settings in meaningful ways
requires a systematic and critical understanding of the teaching situation [2]. Virtual Reality (VR) is a technology increasingly adopted by schools, allowing experiencing destinations and content in 3D format beyond the classroom and, recently, also allowing teachers and students to engage in the creation of VR content [3]. The use of VR in education provides students with new and innovative ways to learn, helping them to engage with the subject content in a more interactive and immersive way. This is particularly beneficial for subjects that require practical experience, such as science and technology [4]. VR offers virtual experiences which users can perceive as real [5], which in turn can help students gain a deeper understanding of the content matter and make the learning experience more meaningful [3]. By using VR, teachers can tailor the educational experience to meet the individual needs and learning styles of their students. This means that students who may struggle with traditional teaching methods can benefit from a more interactive and engaging learning experience [6]. Furthermore, the combination of three-dimensional (3D) systems and interface devices enables users to become immersed in a virtual environment [7]. Another positive aspect of VR in education is that it enables students to experience different environments and cultures without having to leave the classroom, which is especially useful for subjects such as language, history, geography, and social studies [8]. VR as a teaching and learning tool in language learning in general, and in non-native language learning in particular, is swiftly emerging [9], even though the research in this area is still relatively scarce [10]. This paper has a particular interest in teachers’ educational design using adaptive VR-environments in multilingual study guidance. Multilingual study guidance is a support in Swedish schools to enhance the development of subject content learning in the native language [11]. The Swedish School Inspectorate [12] stresses that multilingual study guidance does not always provide the expected support that students need to be able to meet the knowledge requirements of the curriculum. Knowledge and research on multilingual study guidance are limited, and therefore there is a need for new arenas and resources that can be adapted and developed to meet students’ needs. VR can be used as a resource to meet individualized study guidance designs. The present study is part of an ongoing research project, The use of adaptive VR-environments to support students’ learning in multilingual study guidance (VRiS), involving researchers, study-guidance (SG) teachers, their students (aged 9–12), an ICT-teacher and a VR-designer. The aim here is to explore how to didactically design multilingual study guidance to promote the development of students’ conceptual knowledge with adaptive VR-environments. The research questions posed in the study are: (1) How do SG-teachers plan and organize for multilingual study guidance with adaptive VR-environments? and (2) In what way is the subject content enacted in adaptive VR-environments?
2 Review of Research
2.1 Multilingual Study Guidance
In order for multilingual study guidance to be a successful arena for students to meet subject-specific knowledge and knowledge goals in the curriculum, research highlights the importance of collaboration between SG-teachers and subject teachers [12, 14].
However, the Swedish School Inspectorate [12] identified a lack of collaboration between SG-teachers and subject teachers. Ernst-Slavit and Wenger [15] and Kenner et al. [16] argue that a reason why there are shortcomings in this kind of collaboration is the unbalance of power between the two professions. The way the power is balanced determines how involved the SG-teachers are when it comes to decisions about, for instance, teaching approaches and the teaching materials being used. Another important aspect that might hinder successful multilingual study guidance concerns the amount of time subject teachers set aside to share information and materials with the SG-teacher [12]. At the same time, it also matters how much time SG-teachers have for planning teaching activities and competence development [17]. An all-encompassing goal for students in Swedish primary school is to develop conceptual knowledge in all school subjects [18]. This implies that SG-teachers need to focus on the development of students’ subject-specific knowledge first hand rather than having a primary focus on teaching language [19]. The point here is to underline the importance of making use of the student’s entire linguistic repertoire and not limiting it to only the language of instruction. Research has shown that subject knowledge and language develop in an integrated process, which means that subject teaching, not only in the language of instruction, becomes significant for the students’ linguistic development [20]. In this sense, and as Dávila and Bunar [17: 109] put it, SG-teachers should act “as a bridge between children’s first language and the subject area content”. In the context of multilingual study guidance in science education, the results of Karlsson et al. [21] illustrate that encouraging students to use their entire linguistic repertoire in learning activities promotes their learning, as the students can see relationships between different languages and science subject matters. Hajer and Meestringa [22] point to the challenges involved when SG-teachers simplify the language and concepts, as this limits the opportunities for students to learn science content and subject-specific concepts. This in turn has implications for the continuity of science learning [21]. A way of dealing with the challenges of teaching science in multilingual study guidance is to implement multimodal resources that can visualize abstract science content and concepts [21]. VR is an example of such a visualizing technology, which can support student learning. This will be further elaborated in the next section.
2.2 Using VR in Instruction
VR is an example of a technology that can open new ways of designing learning activities. The way we used VR in the present paper was by creating immersive VR where SG-teachers and students used head-mounted displays (HMDs) [23]. Immersive VR through HMDs offers possibilities for a user to engage in a highly interactive learning environment where they, for instance, can manipulate 3D-objects [24] and explore concepts and processes in effective ways [25, 26]. From an instructional point of view, VR has interesting qualities, as it can increase students’ motivation to learn [27, 28] and improve their memory of the learning content [29]. In addition, the technology enables students to be present and active in the virtual environment, which could enhance the quality of learning [28]. All these factors make VR an interesting complement for teaching and learning.
The technological development has led to VR equipment being more accessible to schools and a growing group of teachers implementing VR in their teaching
activities [30]. However, for VR to function as a resource for learning in schools, Luo et al. [31: 897] argue that “in VR-based instruction, student engagement with the VR intervention can only form part of the learning process as other key learning activities might be necessary outside the VR-environment.” Activities such as reflection and discussion between teachers and students related to VR-based instruction are examples of effective scaffolding strategies to strengthen students’ learning. In a study targeting the educational influence of using VR in science classrooms, Liu et al. [24] sought to illustrate 90 sixth-grade students’ academic achievement, engagement and technology acceptance. The participants were divided into an experimental group using HMDs and a control group, where traditional classroom teaching took place. The analyses of the data showed that the students in the experimental group performed better in terms of academic achievement and engagement than the students in the control group. Furthermore, Liu et al. [24] argued that the students in the experimental group found VR easy to use and that it contributed to an improved understanding of the science content. In another study, Yildirim et al. [32] explored teachers’ opinions about using VR in the classroom. The teachers took part in a three-week intensive training in using VR, which was followed by applying VR in their classrooms during a period of two months. The results demonstrated that the teachers identified signs of increased attention and motivation among the students. Furthermore, VR was considered a resource that facilitated learning of concepts (e.g., molecules), as the students could visualize them in the virtual environment. In this way, Yildirim et al. [32] point to the role of VR in making abstract topics and concepts visual for the students, which in turn can lead to enriched instruction.
3 Materials and Methods
3.1 Study Design and Participants
As mentioned previously, the study is part of a larger practice-developing research project (VRiS), which combines methods of action research [33, 34] and design-based research [35, 36] to systematically and over time study and improve educational practice. This involves interventions of educational activities in real classroom situations [e.g., 37] planned in iterative and reflective collaboration [38] between 2 SG-teachers, 3 students, 3 researchers, 1 VR-designer and 1 ICT-teacher, involving 2 primary schools in a smaller municipality in the south of Sweden. This entails initial problem identification and contribution of new design ideas that are jointly discussed and reflected upon during workshops, informing the development of subsequent didactical designs (i.e., re-designs) that are then implemented and tested in classroom settings. This study is based on the first cycle (of three) and the initial exploration phase, understanding the educational problem, context and needs of the SG-teachers and students [36]. The study has been guided by ethical research practice principles [39], including personal privacy compliance. Informed consent was sought before data was collected. The informed consent forms included information about the study, data collection, use and storage of data, principles of anonymity (all the collected data has been anonymized), and respondents’ right to withdraw at any time without questions asked [39].
3.2 Design for Learning: The Design Dice Framework
The new forms of educational environments in which learners engage with different digital technologies are acknowledged as a paradigmatic change in education [40]. Teaching is recognized as moving away from delivering content to students towards a creative process of designing for learning in activity-centered, emergent learning situations [41], where students have the opportunity to create their own learning paths [42]. The three fundamental components of the ‘didactic triangle’ linking student, teacher and content in a concrete teaching and learning situation [43, 44] are here expanded with new didactical relationships that arise between technology, student, and content [45]. In the student-technology relationship, the focus shifts to the use of digital resources and interactivity in the physical and virtual spaces: when to teach and where. The content-technology relationship is a question of design and layout of teaching situations and learning activities [46, 47, 48]. Moving from a content-centric to an activity-based approach requires the teacher to delimit activities and design for learning in a context-sensitive way, responding to unexpected circumstances, students’ immediate initiatives and serendipitous events [45]. In the teachers’ design for learning, the Design Dice framework has been introduced, a previously developed model of didactic design for planning, implementing and evaluating teaching in blended learning environments [49]. The framework addresses the student-content-technology relationships in six specific categories in the design of teaching sequences and learning activities: 1) what specific (subject) content (knowledge), 2) why these learning objectives (competence), 3) when will activities take place (time), 4) where will activities take place (space), 5) which learning materials (resources), and finally 6) does this design support learning (added value) [49]. The last dimension has proven to be a useful facilitator for reflecting upon design for learning from the perspective of an activity-based approach utilizing affordances of digital environments, rather than engaging in reproducing content-driven teaching designs.
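To make the six categories concrete, the following sketch records a teaching sequence as a simple data structure. This is an illustration only, not part of the framework in [49]: the field names paraphrase the six design questions, and the example values are hypothetical, loosely based on the lessons described in Sect. 4.

# Illustrative sketch: the six Design Dice categories as a planning record.
# Field names paraphrase the design questions in [49]; values are hypothetical.
from dataclasses import dataclass

@dataclass
class DesignDicePlan:
    content: str      # 1) what specific (subject) content (knowledge)
    competence: str   # 2) why these learning objectives (competence)
    time: str         # 3) when will activities take place (time)
    space: str        # 4) where will activities take place (space)
    resources: str    # 5) which learning materials (resources)
    added_value: str  # 6) does this design support learning (added value)

plan = DesignDicePlan(
    content="The human body: heart and lungs",
    competence="Use subject-specific concepts in both of the student's languages",
    time="One 60-minute study-guidance session",
    space="Distance: short Zoom introduction, then a shared VR-environment",
    resources="HMD (Meta Quest 2), Spatial room with 3D organ models",
    added_value="3D interaction and bilingual boards beyond what 2D video offers",
)
print(plan)  # prints the complete design record for review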
3.3 Analysis
In this first cycle, the SG-teachers participated in workshops creating and reflecting upon teaching plans, created a subject-specific VR-environment, and wrote short logs before and after their teaching sequences. The data in this study concern interviews and workshop discussions with the 2 SG-teachers reflecting on opportunities and challenges with multilingual study guidance carried out in VR-environments. The head-mounted display (HMD) used by SG-teachers and students in this study is the Meta Quest 2. The software used to create the VR-environments is Spatial. The audio-recorded interviews and workshop discussions were transcribed verbatim and anonymized before analysis. We then applied a qualitative thematic analysis with an inductive approach to identify patterns in the data material without imposing theoretical perspectives or preconceived categories [50]. The first step of the thematic analysis [50] was to get familiar with the data through transcribing, reading, and rereading the interviews. Then a systematic coding of statements took place, highlighting in color interesting and representative features in the data in relation to the research questions. This initial open coding was conducted by the authors independently; in the next step, they compared the coding and arranged the data systematically into potential themes. The identified codes and emergent themes were then discussed to achieve a mutual understanding and enhance inter-rater reliability [51]. The emerging themes were reviewed and refined to reflect recurring and significant content of the interviews. The analysis resulted in four themes named to reflect the content.
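The coding comparison described above was resolved through discussion; the study reports no reliability coefficient. Purely as an illustration, a common quantitative complement to such a comparison is Cohen’s kappa over the two coders’ labels, sketched below with made-up codes.

# Illustrative sketch: agreement between two independent coders via Cohen's kappa.
# The labels are made up; the study itself reports consensus through discussion.
from sklearn.metrics import cohen_kappa_score

# One code per interview statement, assigned independently by each coder.
coder_1 = ["design", "enactment", "subject_teachers", "limitations", "design"]
coder_2 = ["design", "enactment", "subject_teachers", "design", "design"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level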
4 Results
The learning activities the SG-teachers planned for in their designs for learning concern the content areas of chemistry in nature in the subject of chemistry and the human body in the subject of biology. The multilingual study guidance is organized in both onsite and distance learning environments. The thematic analysis yielded four themes (Table 1).
Table 1. Overview of themes and coding categories
1. Design for learning: a. Concept understanding through diverse modes approach; b. Time-consuming and explorative approach
2. Subject content enactment: a. Visualizing processes and 3D-object interaction; b. Bilingual visualization in realistic scenarios
3. Relation to subject teachers: a. Planning with subject teachers a prerequisite; b. Requires dedicated time and foresight
4. Limitations in the tool: a. Good-quality 3D-models cost; b. Interactivity limitations of free software; c. Access to student’s non-verbal communication
4.1 Design for Learning
The SG-teachers demonstrate different ways of using VR in study guidance. Teacher 1 (T1) has previous experience of using VR and creating adaptive VR-environments for students in multilingual study guidance. Her teaching plan includes three main parts: reading, watching a film, and using VR. In the reading part the SG-teacher and the students read together, and select and highlight theoretical concepts central to the current subject area:
We read. The student reads and I read, and we mark concepts that are important and then I go deeper and explain these concepts. In the case of my student, the mother tongue doesn’t matter that much because I have to go deep and explain in a simpler Swedish because my student is neither strong in the mother tongue nor Swedish so you have to find some strategies. (Interview T1)
After the reading part, the SG-teacher shows a film with the purpose of visualizing what the student previously read. To further visualize concepts and content, the student enters the VR-environment which the teacher has created:
In the case of my student it’s much concepts and explanations because he isn’t that strong in his mother tongue compared to Swedish so we visualize concepts through VR. It was so fun! When they had a test, I got to see his answers, and there were a lot of these specific words that we had talked about in VR that he wrote in his test even though he didn’t take notes during the lesson. So it was this that he remembered and this that he wrote so I recognized this in his answers. (Workshop T1)
Teacher 2 (T2) describes a somewhat different teaching plan, focusing more on the experience of being in the VR-environment itself. She spends a lot of time building advanced VR-environments with as authentic imagery as possible and a lot of room for exploration. In this case, the SG-teacher and the student are not in the same physical location. They start their lesson in a video meeting using videoconferencing software (Zoom) and, after a short introduction, they enter the VR-environment, where they spend most of the class before they reconvene in Zoom for a short debriefing at the end:
From the beginning I think I thought twenty minutes [was enough time for VR]. We have an hour when we see each other and twenty minutes would probably be enough for them because I didn’t know if they would get dizzy or if it would be too heavy for their head because it could also be that you get a headache or an eye ache and so on, but they have fixed it quite well. So, the moments in VR have been much longer, perhaps forty, forty-five minutes, because they have thought that it has been fun and it has perhaps taken a little longer than I thought and that they have fixed it. So really, the lessons have only included VR, so maybe it was five minutes before and ten minutes after that we just summarized or talked about what we have done. (Interview T2)
In this case, it is the experience itself in the VR-environment that becomes essential. This is further elaborated on in this quote:
The teaching plan is not that different from if I were to plan for teaching in 2D and only in Zoom or in a classroom for that matter. But the experience is the great reinforcement. To really see a heart and get to hold it instead of only seeing it in a picture or a film and to see them [the organs] in relation to other organs. (Interview T2)
The disadvantage, as most things revolve around VR, is that there is really no plan B if something with the technology goes wrong:
Then, unfortunately, the last moment that we were going to do then, we were going to move on to the nose and the sense of smell and that, then the student couldn’t get in, then they were apparently logged out of VR and there was no one who could help them, so then we had to do it normally, that is via Zoom only. (Interview T2)
A difference between the two SG-teachers is that for Teacher 2 the use of VR in teaching was something new, which meant that she had to spend a lot of time learning the technology herself:
It has taken time for me to learn both Spatial [VR software] and also learn the other programs that I need, or find on which pages I can find free material that is good. And I wanted to do this really, really well, so of course it took time to think about which environment would suit this. (Interview T2)
4.2 Subject Content Enactment
In this section we illustrate the ways in which the SG-teachers approach subject content enactment. For instance, Teacher 1 meets her student in a gallery where the teacher has prepared frames representing the subject content (e.g., the carbon cycle) and various 3D-models relevant for interaction with the content knowledge (Fig. 1):
I have 3D-models of an atom where you can clearly see what the atomic nucleus consists of and the entire process and a 3D-model of a molecule and I explain the differences. I also have tiny golf balls which the student can interact with, and he can build a water molecule using the golf balls. The golf balls have different colors. They represent hydrogen molecules and oxygen molecules. (Interview T1)
Fig. 1. Screenshots of the VR-environment designed by Teacher 1.
Teacher 2 has the ambition to create authentic VR-rooms for her students. In the example below, she takes as a starting point what the student is currently reading about in biology (the human body) and creates an operating room with doctors operating on different human organs, for the student to feel immersed:
We’ve worked with the body, so I’ve had a collaboration with the biology teacher and they started with the heart, or lungs, these organs first and that meant that I had to first read about what a fifth grader should be able to do regarding these parts because I’m not a biology teacher and have not received their book, but I myself have to sort of look for what might fit. Where are we going, how deep are we going and so on. Based on that, I have made a room that is an operating room, you could say, so that there was a bit of this feeling of a hospital and operations and you take out the organ like that. (Interview T2)
Fig. 2. Screenshots of the VR-environment designed by Teacher 2.
Teacher 2 also applies bilingual visual support, creating boards with words and pictures in combination with realistic objects (e.g., the heart) that the student can examine in detail and that they can talk about together (Fig. 2):
We’ve had different boards, both in German and in Swedish, where the student /…/ had to match the Swedish and the German then with the help of the board pictures and so on. Then we’ve had three-dimensional organs, like the heart, that I’ve found in uh, like very complete ones then, which are incredibly well made, so you can see all the veins and so on. So she could really imagine what it looks like, and she’s also been able to grab them and turn and twist, enlarge and reduce like that. (Interview T2)
As the quotes reveal, Teacher 1 and Teacher 2 use different strategies for concept training, immersion, interaction and visualization of the subject content. Real scenarios and 3D-models are used with the purpose of deepening the student’s understanding of concepts through interaction with objects.
4.3 Relation to Subject Teachers
The teachers’ study guidance is dependent on the content the student is working with in class at that moment, relying on contact and planning with the class/subject teacher. The SG-teachers experience it as a major shortcoming in schools in general that study supervisors rarely have reserved time with subject teachers:
You try to get hold of the [subject] teacher when you find the chance and the opportunity to ask a little: What are you working with? Can I get a copy of what you are working with? and then you search for more material yourself in that case because I know my student well, I know what he copes and cannot cope. (Interview T1)
he should know and so on. But I’ve never been given that, so I’ve had to find out for myself what a fifth grader should be able to do in this area. (Interview T2) As the quote reveals, the SG-teachers needed to search for materials on their own and learn about subject-specific things in order to create the learning environments. To overcome this they wish for involvement in class planning and to have a little more foresight: I feel that all the time we are a little too close on the “yes but what are you doing this week.” I would like to know a couple of three weeks in advance so that I have time in my own planning. (Interview T1) 4.4 VR Technology and Its Limitations In general, the SG-teachers have good experiences of using VR in their multilingual study guidance. However, in the interviews they express some limitations concerning 3D-models: I can see what’s out there. There are really good things [3D-models] uhm like for example for medical students. But they cost too much money. A lung costs two hundred dollars so that’s the big challenge right now. To find good material that doesn’t cost any money. (Interview T2) The quotation demonstrates that T2 has knowledge about where to find good 3Dmodels, which in her case, indicate that the models are well made and look realistic. The fact that such models are expensive is perceived as a challenge by T2 since she is limited to use models, pictures, software etc. that are free of charge. This is further elaborated in this quote: If I want a table, it can be one that has a lot of cobwebs on it because that’s what’s free. A lot is made for horror movies and horror games in VR. So that’s probably the biggest challenge. I haven’t tried to scan objects and upload them myself yet but that’s because we talk about organs now so that would be difficult hehe. (Interview T2) In her line of reasoning about scanning objects and uploading them on her own, T2 refers to a creative workshop that the VR-designer held for the SG-teachers and researchers in this research project. The VR-designer showed how objects shaped in clay or in Lego can be scanned with the mobile phone and uploaded as a 3D-model in the VR-environment. However, this, T2 believes, is not relevant for her particular environment due to the subject content. In addition, T1 also points to challenges related to 3D-models. T1 uses the software Gravity Sketch, which is a design and collaboration tool that enables the user to create their own objects in a VR-environment. T1 sees the potential of this software, however, since she is limited to the free version the owndesigned objects she created are not available for her students to interact with. The challenges, thus, concern limitations connected to the use of free version software:
My student can’t take or hold the objects that I use [objects made in Gravity Sketch] or make them bigger or smaller. He can’t interact with them because I’m the one who has created them. (Interview T1)
Another limitation linked to the implementation of VR in multilingual study guidance is that it is not possible for the SG-teachers to see the students’ facial expressions or body language:
I want to know if the student follows so that he is looking at what I’m looking at or is he looking around at something else. I can’t see if that’s the case. But you try to adapt your teaching in VR to for example discussions just like we do in the reading part: “do you follow?” “can you see this?”. (Interview T1)
Both SG-teachers and students meet as avatars in the VR-environment. The challenge that T1 experiences when interacting in this way concerns an uncertainty about whether the students keep up with the content in the VR-environment. However, the strategies to ensure that the students understand the content are based on repeated dialogue, including questions and reasoning.
5 Discussion and Conclusion
The present study has identified a variation of approaches when it comes to how SG-teachers didactically design multilingual study guidance to promote students’ conceptual knowledge development when interacting with adaptive VR-environments (RQ1). Their design strategy is also reflected in the enactment of subject content (RQ2). The thematic analysis yielded four themes that reveal their teaching strategies and readiness, as well as challenges: Design for learning, Subject content enactment, Relation to subject teachers, and Limitations in the tool. In line with previously published research, this study shows that the SG-teachers simplify concepts in their teaching for students to learn. In the literature, it is argued that such simplification of the language entails a risk that students miss out on important subject-specific concepts, which in turn has implications for the continuity of learning [21, 22]. However, the findings from this study extend the related literature by demonstrating the ways in which the simplification of concepts constitutes a deliberately planned part of the teaching process. One of the SG-teachers combines initial reading together with the students, where she often simplifies the language, with encounters with the subject content and central concepts through various resources, for example film clips and VR. Contrary to the limitations involved when SG-teachers simplify concepts, the SG-teacher in this study acknowledged that her design for learning contributed to a deeper understanding of the subject content, as the student re-used formulations and reasoning from the VR-environment in an upcoming test. This demonstrates the potential of VR to become a resource for transfer of content into other contexts and formats to improve the quality of learning [28, 29]. Thus, including different resources that offer a variation of modalities promotes ways to unfold concepts from being simplified towards a deeper understanding. Additionally, the results illustrate different approaches used by the SG-teachers concerning the didactic questions when and where [49]. SG-teacher 1
plans for VR activities for approximately 10–15 min, whereas SG-teacher 2 uses VR for approximately 40 min, which constitutes almost the entire lesson. The approach of using VR during the entire lesson indicates that SG-teacher 2 is aware of some of the benefits of using VR in instruction, such as manipulating 3D-objects [24] or exploring concepts and processes [25, 26]. She argues that the VR-environment, through offering the student the possibility to engage in a 3D environment, deepens students’ learning in a way that Zoom (being 2D) does not. In this way, VR is used as a technical resource during the lesson (when) and to engage in learning activities which are designed in VR (where). However, different subject content requires different designs and different learning activities in the virtual environment. This is an aspect that teachers need to be aware of, that is, what subject content is more appropriate to teach in VR and what content is not. Whether all activities need to be carried out in VR is a relevant question to consider and, if so, what kind of added value can VR offer within a lecture or a module? The interviews and workshop material offered further insight into what kind of considerations SG-teachers make when choosing among didactic alternatives and deciding how to design learning environments in VR. The SG-teachers point to the benefits of using 3D-models for learning, as the students get to explore concepts and processes by interacting with and manipulating them, for example building a water molecule using golf balls or discovering body organs through holding, enlarging and turning them [24, 25, 26]. One of the SG-teachers emphasized the importance of an authentic VR-environment when enacting specific content, that is, a virtual environment that is realistic and represents the real world (e.g., a hospital environment). In doing so, she uses 3D-models of organs which are close to what the real ones look like. The SG-teachers describe how the subject content is enacted through an interactive virtual environment where 3D-models are seen as central for students’ learning. The students are expected to show and demonstrate their knowledge while discussing, using acquired concepts in a specific knowledge domain. In this way, 3D-models are regarded as significant for students’ learning, where the tactile aspects play a central role for educational benefits. The SG-teachers in this study experience a lack of collaboration with subject teachers [cf. 13, 14]. Both SG-teachers mention that they wish for better collaboration, which according to the Swedish School Inspectorate [12] is essential for successful multilingual study guidance. In order to achieve this, a short conversation in the corridor or in the staff room is not enough; rather, an organized collaboration between subject teachers and SG-teachers is required. The SG-teachers in this study build the VR-environments themselves and thus have the opportunity to create a learning environment and activities which are adapted to an individual student. Consequently, how well the environment is adapted to the individual student depends on the collaboration with the subject teacher, since the subject teacher has insights and knowledge about the subject content the student needs to develop knowledge about. If we return to the discussion about creating authentic environments in VR, this was an approach that one of the SG-teachers in this study found limiting due to the challenge of finding good (authentic) 3D-models.
The search for authentic materials is expressed as time-consuming. However, a question to ask is how authentic 3D-models need to be in order to do their job of promoting learning. The SG-teachers mention activities where students could create 3D-models themselves, for instance, through using
materials such as clay or through using software where they can draw 3D-objects in VR. Using such self-made 3D-models implies that the authenticity gets lost, which is one reason why one of the SG-teachers does not contemplate such an approach as relevant when creating a learning environment in biology. However, the very activity of creating the objects could have a pedagogical value, which is an interesting aspect that needs further research. Another limitation that is illustrated in the empirical material is the difficulty of seeing whether the students keep up their engagement and interest in the VR-environment. In this regard, a challenge could be when the SG-teacher and student meet as avatars, which implies that gaze and facial expressions are invisible. However, according to one of the SG-teachers, this challenge is solved through constant maintenance of a dialogue with the students about the content. The outcomes of the study demonstrate that when designing for learning in a VR-environment, planning for student-active practice is pivotal. To create such conditions, the design should encourage students to engage with specific knowledge content in the virtual space, through actions such as moving around, showing knowledge by pointing and/or selecting, becoming familiar with objects by holding them, and having opportunities to study certain processes (e.g., how food passes through the body organs). In addition, one of the teachers focuses on the interaction between student and teacher in the VR-environment, that is, that they are active together (‘we watch’, ‘we move around’). Thus, the ways in which SG-teachers organize teaching in the VR-environment have implications for what is possible for the students to learn. To conclude, the results indicate that the SG-teachers use adaptive VR-environments as a resource to promote students’ conceptual knowledge. Thus, they plan and organize their teaching in VR with a focus on visualizing and materializing subject-specific concepts and content for the students by creating environments with 3D-objects and models that the students can interact with. The degree of readiness related to working with VR is an aspect relevant for planning and organizing study guidance. Another aspect concerns collaboration between SG-teachers and subject teachers. This study shows that SG-teachers, in their planning, spend a lot of time searching for relevant materials and finding out at what level the learning activities should be. This procedure could improve through better collaboration with subject teachers, as the SG-teachers could then instead use this time to plan and organize a more effective and focused study guidance. Even if adaptive VR-environments make it possible for students to engage in subject content in ways that otherwise would not be possible, the use of free software and free 3D-objects and models was shown to be a limitation for the SG-teachers. Also, another limitation of the VR-environments found in this study was the SG-teachers’ experience of not knowing whether the students were keeping up with the content when engaged in the VR-environment. However, adaptive VR-environments and their benefits of visualizing learning content may be seen as triggers that provide support for a learning trajectory. Thus, subject content can be enacted through visualizing processes and 3D-object interactions as well as through bilingual visualizations in realistic scenarios.
This study contributes knowledge about how multilingual study guidance can be didactically designed to promote the development of students’ conceptual knowledge with adaptive VR-environments. The empirical material is a clear illustration of how adaptive VR-environments metaphorically function as a “bridge” between students’ first language
and the subject area content [17] and that the teachers act as pillars of support. Furthermore, the study contributes to increasing the quality of multilingual study guidance, which is a field where research is limited. Acknowledgments. This research has received financing within the ULF agreement: Utveckling (Development), Lärande (Learning) and Forskning (Research), a national initiative to develop sustainable collaboration models between academia and education. The authors wish to particularly thank the participating teachers and students in this research study.
References
1. Säljö, R.: Medier i skola och undervisning – om våra kunskapers medieberoende. In: Godhe, A.-L., Hashemi, S.S. (eds.) Digital kompetens för lärare, pp. 17–35. Gleerups, Malmö (2019)
2. Sofkova Hashemi, S.: Didaktisk design i teknikmedierad undervisning. In: Godhe, A.-L., Sofkova Hashemi, S. (eds.) Digital kompetens för lärare, pp. 17–35. Gleerups, Malmö (2019)
3. Eisenlauer, V., Sosa, D.: Pedagogic meaning making in spherical video based virtual reality – a case study from the EFL classroom. Des. Learn. 14(1), 129–136 (2022)
4. Soliman, M., Pesyridis, A., Dalaymani-Zad, D., Gronfula, M., Kourmpetis, M.: The application of virtual reality in engineering education. Appl. Sci. 11, 2879 (2021)
5. Blascovich, J., Bailenson, J.: Infinite Reality. HarperCollins, New York (2011)
6. Drigas, A., Mitsea, E., Skianis, C.: Virtual reality and metacognition training techniques for learning disabilities. Sustainability 14, 10170 (2022)
7. Huang, H.M., Rauch, U., Liaw, S.S.: Investigating learners’ attitudes toward virtual reality learning environments: based on a constructivist approach. Comput. Educ. 55(3), 1171–1182 (2010)
8. Chen, B., Wang, Y., Wang, L.: The effects of virtual reality-assisted language learning: a meta-analysis. Sustainability 14, 3147 (2022)
9. Klimova, B.: Use of virtual reality in non-native language learning and teaching. Procedia Comput. Sci. 192, 1385–1392 (2021)
10. Parmaxi, A.: Virtual reality in language learning: a systematic review and implications for research and practice. Interact. Learn. Environ. (2020)
11. The Swedish National Agency for Education (Skolverket): Studiehandledning på modersmålet: att stödja kunskapsutvecklingen hos flerspråkiga elever. Skolverket, Stockholm (2019)
12. The Swedish School Inspectorate (Skolinspektionen): Studiehandledning på modersmålet i årskurs 7–9 (2017)
13. Engblom, C., Fallberg, K.: Nyanländas lärande och språkutvecklande arbetssätt. Rapport från en forskningscirkel. Uppsala universitet, Uppsala (2018)
14. Duek, S.: Med andra ord. Samspel och villkor för litteracitet bland nyanlända barn. Karlstads universitet, Karlstad (2017)
15. Ernst-Slavit, G., Wenger, K.J.: Teaching in the margins: the multifaceted work and struggles of bilingual paraeducators. Anthropol. Educ. Quart. 37(1), 62–81 (2006)
16. Kenner, C., Gregory, E., Ruby, M., Al Azami, S.: Bilingual learning for second and third generation children. Lang. Cult. Curr. 21(2), 120–137 (2008)
17. Dávila, L., Bunar, N.: Translanguaging through an advocacy lens: the roles of multilingual classroom assistants in Sweden. Eur. J. Appl. Linguist. 8(1), 107–126 (2020)
18. Schleppegrell, M.J.: Academic language in teaching and learning. Elem. Sch. J. 112(3), 409–418 (2012)
19. Vogel, D., Stock, E.: Opportunities and hope through education: how German schools include refugees. Education International Research, Brussels (2017)
20. Rubin, M.: Språkliga redskap - Språklig beredskap: en praktiknära studie om elevers ämnesspråkliga deltagande i ljuset av inkluderande undervisning. Malmö universitet, Fakulteten för lärande och samhälle (2019)
21. Karlsson, A., Nygård Larsson, P., Jakobsson, A.: The continuity of learning in a translanguaging science classroom. Cult. Sci. Edu. 15(1), 1–25 (2019). https://doi.org/10.1007/s11422-019-09933-y
22. Hajer, M., Meestringa, T.: Språkinriktad undervisning: En handbok [Language-based education: A handbook], 2nd edn. Hallgren & Fallgren, Stockholm (2014)
23. Wu, B., Yu, X., Gu, X.: Effectiveness of immersive virtual reality using head-mounted displays on learning performance: a meta-analysis. Br. J. Edu. Technol. 51(6), 1991–2005 (2020)
24. Liu, R., Wang, L., Lei, J., Wang, Q., Ren, Y.: Effects of an immersive virtual reality-based classroom on students’ learning performance in science lessons. Br. J. Edu. Technol. 51(6), 2034–2049 (2020)
25. Jarvelainen, M., Cooper, S., Jones, J.: Nursing students’ educational experience in regional Australia: reflections on acute events. A qualitative review of clinical incidents. Nurse Educ. Pract. 31, 188–193 (2018)
26. Mills, K., Jass Ketelhut, D., Gong, X.: Change of teacher beliefs, but not practices, following integration of immersive virtual environment in the classroom. J. Educ. Comput. Res. 57(7), 1786–1811 (2019)
27. Bourgeois-Bougrine, S., Richard, P., Burkhardt, J., Frantz, B., Lubart, T.: The expression of users’ creative potential in virtual and real environments: an exploratory study. Creat. Res. J. 32(1), 55–65 (2020)
28. Yu, Z.: A meta-analysis of the effect of virtual reality technology use in education. Interact. Learn. Environ. (2021)
29. Krokos, E., Plaisant, C., Varshney, A.: Virtual memory palaces: immersion aids recall. Virtual Reality 23(1), 1–15 (2018). https://doi.org/10.1007/s10055-018-0346-3
30. Maas, M.J., Hughes, J.M.: Virtual, augmented and mixed reality in K-12 education: a review of the literature. Technol. Pedagog. Educ. 29(2), 231–249 (2020)
31. Luo, H., Li, G., Feng, Q., Yang, Y., Zuo, M.: Virtual reality in K-12 and higher education: a systematic review of the literature from 2000 to 2019. J. Comput. Assist. Learn. 37(3), 887–901 (2021)
32. Yildirim, B., Sahin-Topalcengiz, E., Arikan, G., Timur, S.: Using virtual reality in the classroom: reflections of STEM teachers on the use of teaching and learning tools. J. Educ. Sci. Environ. Health (JESEH) 6(3), 231–245 (2020)
33. Elliott, J.: Principles and methods for the conduct of case studies in school-based educational action research. In: Anderberg, E. (ed.) Skolnära forskningsmetoder, pp. 111–141. Studentlitteratur, Lund (2020)
34. Adelman, C.: Kurt Lewin and the origins of action research. Educ. Action Res. 1(1), 7–24 (1993)
35. Design-Based Research Collective: Design-based research: an emerging paradigm for educational inquiry. Educ. Res. 32(1), 5–8 (2003)
36. McKenney, S.E., Reeves, T.C.: Conducting Educational Design Research. Routledge, New York, NY (2019)
37. Cviko, A., McKenney, S., Voogt, J.: Teachers as co-designers of technology-rich learning activities for early literacy. Technol. Pedagog. Educ. 24(4), 443–459 (2015)
38. Schön, D.A.: Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. Jossey-Bass, San Francisco (1987)
39. Swedish Research Council: Good research practice (codex for research) (2018). https://www.vr.se/
94
E. Edstrand et al.
40. Beetham, H., Sharpe, R.: Rethinking Pedagogy for a Digital Age: Designing for 21st Century Learning. Routledge, London (2013) 41. Sun, Y.H.: Design for CALL – possible synergies between CALL and design for learning. Comput. Assist. Lang. Learn. 30(6), 575–599 (2017) 42. Lindstrand, F., Åkerfeldt, A.: En förändring av lärandets kontext – aspekter på lärande i gestaltningsarbete med digitala resurser. Linderoth, I J. (ed.) Individ, teknik och lärande, pp. 200–217. Carlssons, Stockholm (2009) 43. Künzli, R.: German Didaktik: models of representation, of intercourse, and of experience. In: Westbury, I., Hopmann, S., Riquarts, K. (eds.) Teaching as a reflective practice: the German Didaktik tradition, pp. 41–54. Lawrence Erlbaum, Mahwah (2000) 44. Krogh, E., Qvortrup, A., Ting Graf, S. (eds.): Didaktik and Curriculum in Ongoing Dialogue. Routledge, New York (2021) 45. Lund, A., Hauge, T.E.: Designs for teaching and learning in technology-rich learning environments. Nordic J. Digit. Lit. 6(4), 258–271 (2011) 46. Hudson, B.: Didactical Design for Technology Enhanced Learning. In: Hudson, B., Meyer, M. (eds.) Beyond Fragmentation: Didactics, Learning and teaching in Europe, pp. 223–238. Barbara Budrich, Leverkusen Opladen (2011) 47. Selander, S., Kress, G.: Design för Lärande: Ett Multimodalt Perspektiv. Norstedts, Stockholm (2010) 48. Boistrup, L.B., Selander, S.: Designs for Research, Teaching and Learning: A Framework for Future Education. Routledge, New York (2022) 49. Sofkova Hashemi, S., Spante, M.: Den didaktiska designens betydelse: IT-didaktiska modeller och ramvillkor. I Kollaborativ undervisning i digital skolmiljö, pp. 125–135. Gleerups, Malmö (2016) 50. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006) 51. Cohen, L., Manion, L., Morrison, K.: Research Methods in Education, 8th edn. Routledge, Oxfordshire (2018)
Didactics and Technical Challenges of Virtual Learning Locations for Vocational Education and Training

Thomas Keller1(B), Martin Berger2, Janick Michot1, Elke Brucker-Kley1, and Reto Knaack1

1 ZHAW, Winterthur, Switzerland
[email protected]
2 PHZH, Zürich, Switzerland
Abstract. This paper presents an immersive Virtual Reality (VR) app for the vocational training of assembly electricians and installation electricians. The app was designed on a didactics-first principle and with a strong human-centric approach. The motivation for the app is the heterogeneous situation of apprentices within their companies, which prevents a fair and equal chance at the final exam. The app gives apprentices a tool with which they can prepare for the final exam independently of their individual situation in the local company. An evaluation based on A/B testing at several sites was performed and showed a significant impact on learning success. Nevertheless, VR is today hardly used systematically in vocational training, which can be attributed to a lack of experience and research, among other factors. This example demonstrates the potential of VR and a promising development approach, and it is intended to motivate the introduction of VR for other use cases.

Keywords: Vocational Training · Virtual Reality · Field Study
1 Introduction

The technological possibilities of virtual reality (VR) for simulating work- and occupation-related activities in a way that is effective for learning have increased significantly in recent years (K. Kim et al., 2018; K. G. Kim et al., 2020a; Zobel et al., 2018). For in-company training in the context of dual basic vocational training, enrichment with virtual learning environments appears interesting, as it is hardly possible for many companies to offer their learners sufficient practical learning situations for all training-relevant action competencies, for cost and safety reasons. Nevertheless, VR has not yet been used systematically in company training, which can be attributed in part to a lack of experience and research. There are also doubts about the extent to which complex professional action competencies can be adequately learned in a VR environment.

This is where this research contribution comes in. Using the example of a prototypical virtual learning environment, the project "Virtual Reality as a Learning Venue for Basic Vocational Training - Competence in Action through Immersive Learning Experiences" examines the didactic, design, and organizational factors for an effective use of the technology in in-company training. This article presents the results from this project and focuses on the didactic aspects of learning effectiveness and on the technical challenges of the VR application. For the occupations of assembly electrician and electrical installer, a suitable occupational action competence was identified in the "initial testing of electric installations" (EIT.swiss, 2015b, 2020b), and a corresponding VR learning space was designed and programmed in a multi-stage iterative procedure. In a field study, the realized VR learning space was applied in training companies in the cantons of Bern and Zurich over several weeks. The required virtual reality headsets (Oculus Quest 2) were provided to the companies. The encouraging results of the field study motivated the companies to integrate the VR learning app into their standard learning procedures.

The experiences with the virtual learning room and its effect on learning in the company were quantitatively investigated in an intervention group (n = 32 learners) and a control group (n = 43 learners) via pre- and post-tests. The pre- and post-tests consist of approximately one-hour practical exams, which are administered and assessed by experts. For the post-test, the results from the final apprenticeship examination (qualification procedure) are included. The research results thus provide a first comparison between learners who were able to practice action competencies in a VR environment in addition to the apprenticeship and learners who did not have access to a VR environment but were able to use the corresponding learning time in the company. The results of the field study can thus provide empirical evidence on the effects of a virtual learning environment for learning complex vocational skills.

The paper is structured as follows: Sect. 2 describes the situation of the application domain as well as the objectives of the project. Sect. 3 summarizes the conception of the VR app. Sect. 4 takes a deep dive into the technical realization. Sects. 5, 6, and 7 introduce the field experiment, its results, and the findings. The paper concludes with some lessons learned. The following sections follow the narrative of (Berger et al., 2022), but with an additional emphasis on the technical realization.
2 Initial Situation and Objective

At the heart of basic vocational education and training is the development of vocational action competencies, which are understood as a "holistic repertoire of actions and a person's disposition to act in a self-organized manner in different situations" (SBFI, 2017, p. 7). In dually organized basic education, the provision of action situations relevant to training takes place predominantly in the training companies. There, however, situational vocational action is not only made possible, but must also be supported in the context of vocational learning (Dehnbostel, 2006). The combination of conceptual-systematic learning (predominantly on the part of the vocational school) with action-based experiential learning pursued in vocational education and training makes a significant contribution to the development of vocational action competence. For structural, managerial, and also safety reasons, it is not easy for many training companies to sufficiently enable their learners to act in an accompanied manner in the entire spectrum of situations relevant to training (Leemann, 2019; Stalder & Carigiet, 2013). Supplementing in-company training with "technology-supported experiential worlds" therefore seems interesting (Zinn, 2020).

Such worlds of experience can be created with immersive VR. Immersive VR refers to experiences in a technically generated 3D world in which learners, audio-visually shielded by a VR headset, are immersed so that they can look around, move, and interact with objects in 360-degree space and thus feel a real sense of presence (Lanier, 2017; Slater & Sanchez-Vives, 2014). With this technology, complex content can be presented and communicated in a realistic way, thus promoting professional skills and competencies (Cattaneo, 2022; Goertz et al., 2021; K. G. Kim et al., 2020b), for example for work-integrated competence development in the context of the operational environment (Heinlein et al., 2021; Zenisek et al., 2021). Nevertheless, the potential of VR learning environments is hardly used systematically in dual basic vocational education, and the relevant research is still insufficient, even in the industrial sector, where the technology has gained the most traction in recent years (Wolfartsberger et al., 2022).

Existing research in this area mainly relates to laboratory studies that investigate the added value of VR learning environments compared to traditional methods for acquiring vocational skills, for example compared to classroom training for industrial robot operators (Pratticò & Lamberti, 2021) or compared to 2D learning programs for acquiring the names of motorcycle parts (Babu et al., 2018). Winther et al. (2020) investigate the added value of a VR training simulation for the maintenance of a metering pump versus video training, and Wolfartsberger et al. (2022) investigate the added value of a VR simulation versus traditional on-the-job training with tutor support for learning an assembly process. The results of the aforementioned studies show that VR learning environments are in principle suitable for vocational learning, depending on the area of application, but the potential of the technology for the acquisition of vocational competencies is still unclear (Wolfartsberger et al., 2022, p. 298). In particular, there is a lack of field studies that examine the effect of VR learning environments that are used unaccompanied and over a longer period of time in the context of regular in-company training.

In order to close this gap in experience and research regarding the use of VR technology in the context of vocational education and training, Zender et al. (2018) suggest the transfer of didactic concepts to VR applications. Such an approach is followed by the applied research project "Virtual Reality as a Learning Venue for Basic Vocational Education - Action Competence through Immersive Learning Experiences" (DIZH, 2022) on which this paper is based. In this project, a prototype of a VR learning environment for in-company training was developed and applied. The project results are intended to generate concrete evidence for the implementation of VR learning environments in corporate training and to support innovation in practice through technology-based learning environments.
3 Analysis and Conception of the Prototype

The fact that VR learning environments are not systematically used in company training, despite the generally increasing digitization of company learning processes and the potential of technology-based worlds of experience, can be attributed to various reasons.
For example, stakeholders express concerns about health and comfort as well as about a loss of reality, and social concerns are also raised (Zender et al., 2022; Zinn, 2020). In addition, there are doubts about the efficacy of VR in the context of vocational training, which impede technological innovation through VR (Pletz & Zinn, 2018). Further, it is evident that despite its rapidly advancing development, the technology still has limitations in teaching skills in vocational education, for example due to its limited ability to represent complexity, physicality, and materiality (Huchler, Wittal & Heinlein, 2022, p. 30). Based on an online brainstorming session with 15 experts, Zender et al. (2018) formulate not only technology- and media-specific challenges (costs, technological maturity, lack of technical standards) but also challenges specific to the learning process and to education.

3.1 VR Implementation as a Collaborative, Multidisciplinary Process

Guidelines such as the 'Methodological-didactic kit for the design of virtual learning environments' (Zinn, 2020) and the 'Guidelines of the Community of Practice for Learning with AR and VR', in short 'COPLAR guidelines' (Goertz et al., 2021), show how diverse the criteria and aspects are that have to be taken into account when using VR in vocational training. They make clear that the quality of VR learning environments essentially depends on a fit between learning objective and learning technology, which is generally not a given when incorporating new technologies (Kerres, 2003). Implementing VR technology therefore requires the inclusion of a wide variety of professional perspectives (Zender et al., 2022). The TPACK model (Koehler & Mishra, 2009) was used in the project as an orientation framework in this regard, as shown in Fig. 1. This is an established "guiding model of media pedagogical competence" (Schmid & Petko, 2020), which is used here as the basis for the multidisciplinary approach. The TPACK model describes the intertwined knowledge, Technological Pedagogical Content Knowledge (TPACK for short), that is necessary for the meaningful implementation of technology. It is composed of content knowledge about the subject content to be learned and its structure, pedagogical knowledge about teaching and learning processes, and technological knowledge about the possibilities (and limitations) of the technology.

To ensure a multidisciplinary knowledge base, the project involved close inter-institutional cooperation based on the TPACK model. Experts from EIT.swiss, the leading Swiss association for the electrical professions, covered the content knowledge; experts from the Zurich University of Teacher Education (PH Zurich) covered the pedagogical knowledge; and experts from the Zurich University of Applied Sciences (ZHAW) covered the technological knowledge. The challenges in focus were primarily dealt with from the perspective of subject didactics; accordingly, the content knowledge and the pedagogical knowledge, with their intersections with the technological knowledge, guided the design process. In a collaborative process, the pedagogical and subject-didactic needs had to be analyzed and brought together with the current technological possibilities.
Fig. 1. The model of "Technological Pedagogical Content Knowledge" (TPACK) (Koehler & Mishra, 2009) as an orientation framework for the multidisciplinary approach in the project.
3.2 Pedagogical Framing as the Basis of Learning in VR Simulation

Acting in complex situations in in-company training is particularly useful if it is accompanied by pedagogical support (Dehnbostel, 2006). In order to enable learning in complex situations, a pedagogical framing of the situations is required. In real situations of in-company training, the trainers are responsible for providing such a framework. In virtual situations, this can be done with the help of computers, which is interesting from the company's point of view, since it eliminates the need for personnel resources for learning support.

Learning theories and pedagogical approaches form the basis of a pedagogical framing. In the context of VR learning environments, several conceptual approaches lend themselves, based on the relevant state of research. For example, Loke (2015) identifies eleven different approaches among 80 relevant VR studies; experiential learning, situated learning, social constructivism, constructivism, presence learning, and flow theory are cited as the six most common (Loke, 2015). In the context of applying VR in vocational education, embodied cognition, experiential learning, situated learning, constructivism, social constructivism, presence theory, flow theory, and cognitive load theory, for example, are considered particularly central (Schwendimann et al., 2015; Zinn, 2019; Zinn et al., 2020). The approaches enumerated above demonstrate the diversity of pedagogical references in the use of VR. For the design of the exemplary VR learning environment in the project, this discourse was taken up and made concrete via three approaches to the pedagogical design of the learning-teaching situations: the scaffolding approach to modulate the learning process within the virtual space, competence orientation to promote the central connection of systematic and action-based learning in vocational education, and the gamification approach to support this in the motivational area.

Scaffolding: The scaffolding approach can be used as a basis for designing a pedagogical framework (Seidel & Shavelson, 2007). Scaffolding is a process of facilitation that enables learners to perform a task even if it is beyond their current abilities. Scaffolding is situational, meaning that minimal support is provided only when learning difficulties arise, and is then increased until those learning difficulties are overcome (Wood et al., 1976). This is intended to keep learners within their "zone of proximal development", where learning growth can occur optimally (Vygotsky & Cole, 1978). Computer-assisted scaffolding has been described as more effective than teacher-assisted scaffolding (Doo et al., 2020) and is also appropriate in the context of VR learning environments (Wang & Cheng, 2011). In the project, it is implemented by adding support elements.

Competence Orientation: Professional action competence is based not only on corresponding skills and readiness but also on situation- and occupation-specific knowledge that can be activated in the corresponding requirement situation (Dietzen, 2015). The combination of conceptual-systematic learning and action-based experiential learning is therefore also described as a central challenge of vocational training (Tramm, 2011). At the same time, Schwendimann et al. (2015) state that dually organized apprenticeships often do not sufficiently succeed in integrating concrete experience and theoretical knowledge, and in this context they also emphasize the potential of digital technology to combine theoretical and practical learning content. The consolidation of job-specific knowledge in connection with the selected action situations was therefore included in the VR simulation. To this end, question and reflection elements on the respective background and theoretical principles were added to individual action steps. The VR learning environment can thus contribute to linking experience and knowledge in the sense of competence orientation.

Gamification: The pedagogical framing can additionally be extended on the basis of the gamification approach by using game design elements that can positively influence the motivation of learners (Deterding et al., 2011). Common elements include awarding points for a completed task and providing rankings or progress indicators for tasks to be completed (Hamari et al., 2014). The use of gamification in VR training simulations appears to be particularly effective for participants who have no prior experience with VR (Palmas et al., 2019). This was implemented in the prototype using a points system with a competitive character as well as audiovisual effects; a sketch of such a points system is given below.
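The following minimal sketch shows what such a points system might look like in Unity. It is illustrative only: the class and member names (ScoreManager, AddPoints, rewardSound, rewardEffect) are our own assumptions and not taken from the project's code base.

using UnityEngine;

// Illustrative gamification sketch: award points for a completed task
// and give audiovisual feedback. All names are hypothetical.
public class ScoreManager : MonoBehaviour
{
    [SerializeField] private AudioSource rewardSound;     // audio effect on reward
    [SerializeField] private ParticleSystem rewardEffect; // visual effect on reward

    public int Score { get; private set; }

    // Would be called by the rule system when a task is completed correctly.
    public void AddPoints(int points)
    {
        Score += points;
        if (rewardSound != null) rewardSound.Play();
        if (rewardEffect != null) rewardEffect.Play();
    }
}

A ranking with a competitive character can then be realized by comparing the Score values of different users, e.g., via the session management mentioned in Sect. 4.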
3.3 Selection of Use Case

In a collaborative multidisciplinary process, an operational action situation for the prototype of the VR learning environment was determined from the training plans for the three-year training of assembly electricians and the four-year training of electricians. From a didactic perspective and on the basis of pedagogical content knowledge, the focus was on situations that are central to the training on the one hand and challenging for learners to develop on the other, especially due to the lack of practical opportunities in in-company training. This was done, among other things, on the basis of the current training plans for the two training courses, in which the action situations and competences are listed (EIT.swiss, 2015b, 2015a), on the basis of the examination guidelines, in which the practical examination situations to be assessed in the qualification procedure (QV) are defined (EIT.swiss, 2020b, 2020a), and on the basis of the evaluation reports on former QV results (SDBB, 2020a, 2020b).

From a technological perspective, i.e., on the basis of technological knowledge, the situations identified from the didactic perspective were assessed for their suitability for VR simulation. Primarily, those action situations are suitable which exploit the diverse interaction possibilities of virtual reality while also respecting the limits of the technical possibilities. In concrete terms, this means that learners have to interact with 3D models, by touching a button in the simplest case or by connecting components of a measuring device in a more complex case. The technical limits here are given by the characteristics of the VR hardware used, such as the resolution or the accuracy of the tracking by the VR headset.

The required Technological Pedagogical Content Knowledge (TPACK) was generated by interweaving the didactic and technological perspectives, on the basis of which the so-called initial test of an electrical installation was selected as the relevant professional action situation. As a central situation in the everyday professional life of electricians, the initial test is an important part of the practical part of the qualification procedure (QV) for electricians and, since 2018, also for assembly electricians. However, experience shows that apprentices are given too few opportunities to carry out the initial test during their in-company training (Bertinelli, 2020). This lack of experience cannot be sufficiently compensated for by simulations in the mandatory inter-company courses, which are organized in Switzerland by the professional organization at the cantonal level; this may be one of the reasons for the poor results in the practical examination areas "Troubleshooting & Measurement" (electricians) and "Measurement & Testing" (assembly electricians) in the QV (Bertinelli, 2020).

The initial test describes a complex action process comprising a total of six partial measurements, with which every electrical installation is checked with regard to its function and safety-related requirements. According to Article 24 of the Swiss Ordinance on Low-Voltage Electrical Installations (NIV), "Before commissioning an electrical installation or parts thereof […] an initial test during construction must be carried out" (§24 Para. 1 NIV). The required competence is based on various knowledge and skill elements that are built up over the entire training period at the three learning locations of basic vocational training (company, vocational school, and inter-company courses) and are listed in the training plan in the form of performance targets.

From a technical point of view, the simulation of the initial test is suitable for VR because the central action competence involves compliance with well-defined sequences and precisely specified interactions on electrical installations and on the measuring device. The strengths of an immersive virtual reality system come into play because a wide variety of interactions with 3D models are required to successfully complete the tasks. In addition, the subject's interactions in VR can be monitored with a rule-based system. Based on this, the help system can directly communicate hints to the test person about missing or wrong actions, as well as reward correct actions accordingly. The initial test is additionally suitable from a safety perspective, because there is a definite risk of life-threatening electric shock for the test person if it is carried out improperly on a real installation.
4 Technical Design of the Prototype

The implementation of the prototype is based on Unity (https://unity.com/) version 2020.3.17 (LTS) and optimized for the Oculus Quest 2 (https://www.meta.com/ch/en/quest/products/quest-2/). Scripting is done with C#. Additionally, various assets from the Unity Asset Store (https://assetstore.unity.com/) are used for 3D models. The main components are the rule and help system, the electric circuit, and of course the measurement device "Fluke". In addition, a session management is implemented for collecting user data and to enable some basic gamification functionality. The rig for the player is based on the asset "VR Interaction Framework" (https://assetstore.unity.com/packages/templates/systems/vr-interaction-framework-161066). The player has no visualization except for the hands. Visual tracking of the hands and fingers had to be discarded because of unsatisfactory accuracy and resolution. The following sections introduce the above-mentioned components in detail.

4.1 The Measurement Device "Fluke"

The measurement device "Fluke" is the most complex visualized game object in this project. The Fluke is located in front of the player and follows him. It can be grabbed and rotated to get a better view of the display or to connect the cables more easily. The visual design of the device, including its display, follows the real device closely (Fig. 2). It consists of active elements such as the buttons, the slots, and the rotary switch. The display is dynamic and shows the current state's measurement information.

Fig. 2. The measurement device "Fluke" as a 3D model

The slots on the top of the device are designed to be connected with different sets of cables, as shown in Fig. 3. Whereas the three-wired cable at the top has fixed plugs, the one-wired cables have a set of plugs to choose from, depending on the measurement task. Using the virtual hands, cables can be plugged together or attached to the measurement device. The cables are implemented with an asset called "Obi Rope" (https://assetstore.unity.com/packages/tools/physics/obi-rope-55579), which gives the cables a realistic physical behavior. All the tools and materials needed for a measurement can be found in a toolbox.

Fig. 3. Measurement cables for the "Fluke"

The rotary switch at the front right of the Fluke (Fig. 2) is fully functional and must be put into the appropriate state depending on the measurement task. The state change is initiated by tapping the switch with a hand or a finger. The display is adjusted accordingly, and information about the selected measuring mode is shown. Depending on the measurement, additional settings must be configured, for which the function keys (the blue buttons to the left of the display) are used.

Some measurements are initiated by pressing the "Test" button on the left side of the Fluke (Fig. 2). These measurements have a one-time characteristic, whereas other measurements, such as voltage measurement, are performed continuously. For these continuous measurements, the Update method of the MonoBehaviour class (https://docs.unity3d.com/ScriptReference/MonoBehaviour.html) is used. In both cases, a measurement requires that the device is connected to the electrical system via the cables and measuring ends. Accordingly, during a measurement, the state of the connected object is retrieved, and the corresponding measurement results are derived and shown on the display.
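To illustrate the continuous measurements described above, the following is a minimal sketch of an Update-driven voltage measurement. The project's actual code is not published; the types Pin, MeasurementProbe, and ContinuousVoltageMeasurement are therefore hypothetical stand-ins.

using UnityEngine;

// Hypothetical minimal stand-ins for the project's own classes.
public class Pin : MonoBehaviour
{
    public float potential; // volts against earth, maintained by the circuit model
    public float VoltageAgainst(Pin other) => Mathf.Abs(potential - other.potential);
}

public class MeasurementProbe : MonoBehaviour
{
    public Pin ConnectedPin; // set by the grab/attach interaction when a measuring end is plugged in
}

// Continuous measurement driven by MonoBehaviour.Update(), as described
// in the text for voltage measurements.
public class ContinuousVoltageMeasurement : MonoBehaviour
{
    [SerializeField] private MeasurementProbe probeL;
    [SerializeField] private MeasurementProbe probeN;

    private void Update()
    {
        // A reading is only derived while both measuring ends are attached.
        if (probeL == null || probeN == null ||
            probeL.ConnectedPin == null || probeN.ConnectedPin == null)
            return;

        float volts = probeL.ConnectedPin.VoltageAgainst(probeN.ConnectedPin);
        Debug.Log($"U = {volts:F1} V"); // in the prototype, rendered on the Fluke display instead
    }
}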
4.2 The Electric Circuit

The app does not contain a real circuit simulation and therefore does not implement the laws of an electric circuit. Instead, the circuit is implemented as a chained list of circuit elements such as cables, fuses, switches, and lamps. Figure 4 shows a failure current switch (residual current device, RCD) as an example of a component of the electric circuit. As for the Fluke, the 3D model is designed as realistically as possible, including the labeling; however, the text size had to be adapted to stay readable for the player. The white circles represent the interactive components of such a device. These include the pins to which the measuring ends can be attached, or the hinge with which the state of the fuse can be switched.

Fig. 4. Failure current switch

The chained list is used to pass information about the connection state forward and backward along the chain. When the chain of connected components starts with a source, ends with a neutral or earth connector, and all connected items are conducting, the chain is considered closed and, e.g., a lamp shines or a motor spins. The chained list is updated whenever the state of a connected component changes. A minimal sketch of this connectivity model is shown below.
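The connectivity logic just described can be sketched as follows; the type and member names are illustrative, not the project's actual code.

// Sketch of the chained-list circuit model: no physical simulation,
// only connectivity along a chain of elements.
public enum TerminalKind { Source, Neutral, Earth, None }

public class CircuitElement
{
    public string Name;
    public TerminalKind Kind = TerminalKind.None;
    public bool IsConducting = true; // false e.g. for an open switch or a removed fuse
    public CircuitElement Next;      // link to the next element in the chain

    // The chain is "closed" when it starts at a source, every element
    // conducts, and it ends at a neutral or earth connector.
    public static bool IsClosed(CircuitElement head)
    {
        if (head == null || head.Kind != TerminalKind.Source) return false;
        for (var e = head; e != null; e = e.Next)
        {
            if (!e.IsConducting) return false;
            if (e.Next == null)
                return e.Kind == TerminalKind.Neutral || e.Kind == TerminalKind.Earth;
        }
        return false; // unreachable: the loop always returns at the last element
    }
}

Whenever the state of a component changes (a switch is flipped, a fuse is taken out), IsClosed would be re-evaluated for the affected chain, and consumers such as lamps or motors are switched on or off accordingly.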
4.3 The Rule System

We consider the rule system and, subsequently, the help system introduced in the next section as the core components of this app. The rule system is responsible for monitoring the progress of users within a measurement and, ultimately, for checking whether a measurement has been performed successfully. For this purpose, a configuration is required for each measurement, which defines in particular the steps to be performed in that measurement. The configuration of the rule system is based on a JSON file, as shown below for the example of an RCD measurement.
{ "name": "RCD measurement", "task": "Führe den FI-Test (auch RCD-Messung genannt) durch, ...", "id": "rcd", "ruleSets": [ { "name": "Test Button auslösen", "helpMessages": [ "Am Anfang einer korrekten RCD-Messung steht ..." ], "rules": [ { "name": "TestButton", "delegateName": "OnRcdTestEvent", "helpMessages": [ "Suche und Finde die RCDSchutzeinrichtung ...", "Die RCD-Schutzeinrichtung befindet sich im Nebenraum." ], "ruleType": "Milestone", "onSuccessMessage": "Super, der FI hat ausgelöst und..." }, { "name": "ToggleSwitch", "delegateName": "OnRcdToggleEvent", "helpMessages": [ "Schalte die RCD mit dem Kippschalter wieder an." ], "ruleType": "Milestone" } ], "onSuccessMessage": "Gut gemacht. Gehe zurück zum Hilfe-Screen und ...", "onSuccess": "AskQuestion(rcd1);" }, As can be seen from the example above, a rule has the following parameters: name, delegateName, helpMessages, ruleType, and onSuccessMessage. The configuration of a measurement is divided into three hierarchical levels: At the bottom are individual rules, such as taking out a fuse, several such rules are combined into rule sets or tasks, and ultimately these tasks together represent a measurement.
106
T. Keller et al.
The rule system works event-based. At the start of a measurement, all rules are read from the JSON file and event callbacks are registered dynamically. The rule system is then informed about every event (e.g., OnRcdToggleEvent) and can evaluate whether a rule was executed correctly. The strict separation between configuration and code makes it possible to use the same environment/scene for all measurements. Consequently, there is only one scene for all measurements; the distinction takes place exclusively on the level of the configuration. For example, the user follows different tasks for the insulation measurement than for the RCD measurement, while being in the same scene.

The ruleType defines how the ruleSets are evaluated. If the ruleType is a "Milestone", then the simple occurrence of the corresponding event defined in "delegateName" is sufficient to make the rule true. Other values can be "Expressions", "All", and "Any". In these cases, an expression given as a string is evaluated. The evaluation is realized using NCalc (https://github.com/ncalc/ncalc), a mathematical expression evaluator for .NET that can parse any expression and evaluate the result, including static or dynamic parameters and custom functions. In the case of a successful ruleSet, a concluding action as well as a message can be defined via onSuccess and onSuccessMessage; the respective method must be available on the same game object. A sketch of this event-based rule evaluation is given below.
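The following sketch shows, under the assumption of simplified types, how such a dynamic wiring of configuration and events can look. The names (Rule, RuleSystem, RaiseEvent) are illustrative; only the JSON fields (delegateName, ruleType) are taken from the configuration above.

using System.Collections.Generic;

// Sketch of the event-based rule evaluation: rules deserialized from the
// JSON configuration are indexed by their delegateName and fulfilled when
// the corresponding interaction event is raised.
public class Rule
{
    public string Name;
    public string DelegateName; // e.g. "OnRcdToggleEvent" from the JSON
    public string RuleType;     // "Milestone", "Expressions", ...
    public bool Fulfilled;
}

public class RuleSystem
{
    private readonly Dictionary<string, List<Rule>> rulesByEvent =
        new Dictionary<string, List<Rule>>();

    public void Load(IEnumerable<Rule> rulesFromJson)
    {
        foreach (var rule in rulesFromJson)
        {
            if (!rulesByEvent.TryGetValue(rule.DelegateName, out var list))
                rulesByEvent[rule.DelegateName] = list = new List<Rule>();
            list.Add(rule);
        }
    }

    // Called by interactive objects, e.g. RaiseEvent("OnRcdToggleEvent")
    // when the learner switches the RCD back on.
    public void RaiseEvent(string delegateName)
    {
        if (!rulesByEvent.TryGetValue(delegateName, out var rules)) return;
        foreach (var rule in rules)
        {
            // For "Milestone" rules the mere occurrence of the event is
            // sufficient; expression-based rules would additionally be
            // evaluated with NCalc here.
            if (rule.RuleType == "Milestone")
                rule.Fulfilled = true;
        }
    }
}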
4.4 The Help System

The help system is strongly linked to the rule system, as can be seen from the example above. Each ruleSet contains one or more help messages, just as each rule contains help messages. Help messages are defined in an order from general help to detailed help. This corresponds to the scaffolding principle that the user is given as little help as possible. While the rule set does not specify an order in which tasks are to be completed, the help system only provides help on the current step. The help system itself is a state machine, as depicted in Fig. 5. Since the figure is self-explanatory, no further explanation is given here.

Fig. 5. Help system state machine
Help messages are visualized on a dedicated help canvas (Fig. 6) that has a fixed position in the virtual world. In addition to the textual output, the help message is dynamically synthesized to speech and presented to the user as an audio clip. This has the advantage that defining the help message in the JSON file is sufficient: the audio is created dynamically and does not need to be pre-recorded. In the current version, no embodiment of the help system is implemented; an option would be an avatar that follows the player and speaks the help messages.
Fig. 6. Help canvas showing on the left the working steps and on the right the corresponding help messages.
The help system is also used to ask theoretical questions whenever the player has completed a working step, in line with the didactic concept. A sketch of the help escalation is shown below.
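The following minimal sketch illustrates the scaffolding-style help escalation for a single working step; the class is hypothetical and merely illustrates the principle of going from general to detailed help.

using System;
using System.Collections.Generic;

// Each request for help on the current step reveals the next, more
// detailed message from the ordered list defined in the JSON file.
public class HelpProvider
{
    private readonly IReadOnlyList<string> helpMessages; // ordered: general -> detailed
    private int level = -1;

    public HelpProvider(IReadOnlyList<string> helpMessagesFromJson)
    {
        helpMessages = helpMessagesFromJson
            ?? throw new ArgumentNullException(nameof(helpMessagesFromJson));
    }

    // Called when the learner asks for help; escalates at most to the
    // most detailed message (minimal support first, per scaffolding).
    public string NextHint()
    {
        if (helpMessages.Count == 0) return null;
        level = Math.Min(level + 1, helpMessages.Count - 1);
        return helpMessages[level]; // shown on the canvas and synthesized to speech
    }

    // A new working step gets a fresh escalation level, so help always
    // refers to the current step only.
    public void Reset() => level = -1;
}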
5 The Field Experiment

The effect of the VR learning environment on learning the professional competence of carrying out an initial test was investigated in the spring semester of 2022 by means of a field study with 78 learners, 35 of whom were electricians and 43 of whom were assembly electricians in their last semester of training. The apprentices came from a total of eleven independent training companies from the cantons of Zurich, Bern, and Solothurn, all belonging to the same company conglomerate. The allocation to the intervention group (n = 37) and control group (n = 41) could not be completely randomized, since for operational reasons it was not possible for all learners to participate in a necessary introduction to the use of the VR learning environment. Table 1 shows the distribution of learners in the intervention and control groups (minus the ten dropouts) across the eleven training companies and three cantons.

Table 1. Distribution of learners in the intervention and control groups. 2 dropouts in the intervention group and 8 dropouts in the control group of electricians have been deducted.

By means of a pre-post design, the learners were tested in January 2022 within the framework of QV preparation courses with regard to their action competence in the initial test (pre-test). Then, in consultation with the trainers, the apprentices were instructed to work on their competence in carrying out the initial test over a period of approximately two months (March/April - May/June 2022) during their in-company training time (paid working time). A total of 120 min, distributed over several time slots, was specified as a time benchmark. The learners in the intervention group were instructed to use the VR learning environment to which they had previously been introduced together with their instructors. The required VR headsets of the type Oculus Quest 2 were provided to the companies by the ZHAW. The frequency and length of use of the VR prototype were recorded via a user-specific login and a database in the background. Learners in the control group were asked to use traditional learning and teaching materials (Boxler, 2020; Bryner et al., 2015; Bumillier et al., 2020) for preparation within the same time frame. In each case, the local instructors were informed that the appropriate learning time must be made available to both groups.

The intervention phase ended with the practical part of the qualification procedure, in which, among other things, the learners' action competence was tested (post-test). The pre- and post-tests were carried out by means of practical individual tests of around 60 min in accordance with federal guidelines. In these tests, experts gave the learners tasks to perform on practice installations and assessed them on the basis of uniform criteria. All information collected in the tests was gathered with the permission of the participating learners, trainers, chief experts, and the vocational training offices of the cantons of Bern and Solothurn, and was treated in accordance with the applicable data protection guidelines.

Comparisons between the intervention and control groups were made using a repeated-measures t-test. A single-factor, univariate analysis of variance with repeated measures was used to analyze whether there were differences between the two training occupations studied, namely electrician and assembly electrician.
6 Results of the Field Experiment

Of the 78 learners initially selected for the pre-test, 68 were still tested in the post-test (2 dropouts in the intervention group, 8 in the control group; all dropouts are electricians). The response rate of the pre-survey was 87%, that of the post-survey 41%. (The results of the surveys were used primarily for questions regarding the application and application process of the VR learning environment; they are discussed elsewhere.) The learning duration of the intervention group (use of the VR learning environment) was on average M = 67.84 min, which was below the specified time frame of 120 min; however, the time values scatter very strongly (SD = 42.26). The learning time of the control group was not recorded.

Table 2 shows the test results in the pre-test and post-test and their different development. Both groups showed an insufficient score in the pre-test and a sufficient score in the post-test, whereby the score of the intervention group was .62 lower in the pre-test and .14 higher in the post-test. Both groups showed a positive development of the test results, as shown in Table 2.

Table 2. Mean (M) of the scores and standard deviation (SD) of the learners (number N) of the intervention and control groups in the pre- and post-test, respectively the development of the scores (post-pre); difference in the development of grade scores between groups (delta) and Cohen's d (p < .01). The increase of 1.35 in the M of the intervention group (from 3.18 to 4.54) is .64 higher than the increase of 0.72 of the control group (from 3.70 to 4.41).
The delta of the test scores was on average 0.64 grade points higher for learners in the intervention group than for learners in the control group. This difference is statistically significant (95% CI [0.12, 1.15], t(66) = 2.98, p = .008). There was a medium-size effect of the VR learning environment on test score development (Cohen's d = 0.60).

The sample included two different training occupations with different training durations: 25 electricians (four-year training) and 43 assembly electricians (three-year training). A single-factor, univariate analysis of variance with repeated measures shows that the development of the test results is related to the training occupation (F(1,33) = 6.919, p = .013, η2 = .173, n = 35) and was more positive in the case of the assembly electricians, whereby the electricians achieved a significantly higher value in the pre-test (see Table 3). In the case of the assembly electricians, the different results in the pre-tests are striking; they cannot be explained, because the grades were not yet known when the students were divided into the groups. In the case of the electricians, there was a small difference of .12 grade points in the development of the test results between the 11 learners in the intervention group and the 14 learners in the control group, but its significance cannot be demonstrated using a t-test (95% CI [-.66, .09], t(23) = 0.38, p = .377). In the case of the assembly electricians, on the other hand, there is a significant difference (95% CI [.18, 1.52], t(41) = 2.55, p = .007), which is large: the development of the test scores of the intervention group (n = 24) compared to the control group (n = 19) was higher by an average of .84 grade points. A medium to strong effect (Cohen's d = 0.97) of the VR learning environment on the development of the test results was manifested.
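For readers who want to relate the reported group difference to the effect size: the paper does not state which variant of Cohen's d was used, but assuming the standard formulation for the difference of mean gain scores with a pooled standard deviation,

d = \frac{\bar{\Delta}_{\mathrm{int}} - \bar{\Delta}_{\mathrm{ctrl}}}{SD_{\mathrm{pooled}}},
\qquad
SD_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{int}}-1)\,SD_{\mathrm{int}}^{2} + (n_{\mathrm{ctrl}}-1)\,SD_{\mathrm{ctrl}}^{2}}{n_{\mathrm{int}} + n_{\mathrm{ctrl}} - 2}}

where \bar{\Delta} denotes a group's mean post-pre gain, the reported values (delta = 0.64, d = 0.60) would imply a pooled gain-score standard deviation of roughly 0.64 / 0.60 ≈ 1.07 grade points.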
Table 3. Grade values of the electricians of the intervention and control groups in the pre- and post-test, respectively the development of the grade values (post-pre); difference in the development of the grade values between the groups (delta) and Cohen's d (p < .01).
7 Discussion

The significant, medium-size effect of the VR learning environment on the development of test results (Cohen's d = .60), for initial grades in the unsatisfactory or just satisfactory range, indicates that supplementing on-the-job training with VR learning environments is fundamentally effective for learning. Differentiated by occupation, a significant, large effect (Cohen's d = .97) was found for assembly electricians, whereas no significant effect was found for electricians. Compared to electricians, assembly electricians have a shorter training period, and their profession is positioned vertically lower (lower competence level). This means that they tend to be less involved in the initial testing of electrical systems on the construction site (Bertinelli, 2020), which could explain the large effect.

When interpreting the study results, it must be noted that combined analyses were not possible due to the sample size. Thus, the influence of central variables of the structure and process dimensions (training company, duration of use, etc.) could not be examined in the quantitative analysis, owing also to the chosen setting (a field study instead of a laboratory study). In order to clarify whether the relatively large differences in the pre-test results between the intervention and control groups are due to a systematic bias or to chance, further studies with a completely randomized sample are necessary.
8 Conclusions

The findings of the project and the empirical results of the summative evaluation indicate that virtual experience and learning spaces can support corporate learning and be effective in building professional action competencies. The significantly higher effectiveness of the project's exemplary VR learning environment for assembly electricians (three-year training with a stronger focus on practical activities) compared to electricians (four-year training with higher requirements in the field of knowledge) could indicate that especially people with learning difficulties can benefit particularly strongly from a closely guided didactic learning setting (Knöll, 2007; Nickolaus et al., 2006), as is also realized in the technological support of individual activities in a VR simulation. These possible interpretations would need to be empirically tested in follow-up work.

For further research cycles, a targeted variation of the inclusion of the three approaches (scaffolding, competence orientation, and gamification) could produce more differentiated results on their effectiveness. Last but not least, further cycles would have to focus on the organizational challenges that became apparent in the project in the context of multidisciplinary collaboration, in order to create sustainable inter-institutional innovation structures, which proved effective in the project, for developing VR environments for vocational education.
References

Babu, S.K., Krishna, S., Unnikrishnan, R., Bhavani, R.R.: Virtual reality learning environments for vocational education: a comparison study with conventional instructional media on knowledge retention. In: 2018 IEEE 18th International Conference on Advanced Learning Technologies (ICALT), pp. 385–389 (2018). https://doi.org/10.1109/ICALT.2018.00094
Bertinelli, E.: Kann mit Hilfe von Videosequenzen, als unterstützende Massnahme zum Messkurs, die Handlungsfähigkeit des Montagepersonals bei der Erstprüfung gesteigert werden? Fallbeispiel zur Entwicklung der Weiterbildung in einem Elektrogrossbetrieb in der Schweiz. Unveröffentlicht (2020)
Boxler, H.: Prüfen & Kontrolle. In: Boxler, H. (ed.) Niederspannungs-Installationsnorm NIN 2020, pp. 191–195. Bildungsservice Schweiz (2020)
Bryner, P., Hofmann, D., et al.: NIN COMPACT NIBT. Maximales Know-how – minimales Volumen. Electrosuisse (2015)
Bumillier, H., et al.: Prüfen der Schutzmassnahmen. In: Fachkunde Elektrotechnik, pp. 368–380. Verlag Europa-Lehrmittel (2020)
Cattaneo, A.: Digitales Lernen: Nutzen wir wirklich alle Möglichkeiten? Überlegungen zur Integration von Technologien in die Berufsbildung 51(2), 8–12 (2022)
Dehnbostel, P.: Lernen im Prozess der Arbeit. Waxmann, Münster (2006)
Deterding, S., Dixon, D., Khaled, R., Nacke, L.: From game design elements to gamefulness: defining "gamification". In: Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, pp. 9–15 (2011)
Dietzen, A.: Die Rolle von Wissen in Kompetenzerklärungen und im Erwerb beruflicher Handlungskompetenz. In: Stock, M., Schlögl, P., Moser, D. (eds.) Kompetent—Wofür? Life skills—Beruflichkeit—Persönlichkeitsbildung: Beiträge zur Berufsbildungsforschung, pp. 39–53. Studienverlag (2015)
DIZH: Virtual Reality als Lernort für die Berufliche Grundbildung. DIZH Digitalisierungsinitiative des Kantons Zürich, 3 Jan 2022. https://dizh.ch/2022/01/03/virtual-reality-als-lernort-fuer-die-berufliche-grundbildung/
Doo, M.Y., Bonk, C., Heo, H.: A meta-analysis of scaffolding effects in online learning in higher education. Int. Rev. Res. Open Distrib. Learn. 21(3), 60–80 (2020)
EIT.swiss: Bildungsplan Elektroinstallateurin/Elektroinstallateur EFZ (2015a). https://www.eit.swiss/fileadmin/user_upload/documents/Berufsbildung/Grundbildung/Elektroinstallateurin_EFZ/_de/2015_EI_Bildungsplan.pdf
EIT.swiss: Bildungsplan Montage-Elektrikerin/Montage-Elektriker EFZ (2015b). https://www.eit.swiss/fileadmin/user_upload/documents/Berufsbildung/Grundbildung/Montage-Elektrikerin_EFZ/_de/2015_ME_Bildungsplan.pdf
EIT.swiss: Wegleitung zum Qualifikationsverfahren Elektroinstallateurin/Elektroinstallateur EFZ (2020a). https://www.eit.swiss/fileadmin/user_upload/documents/Berufsbildung/Grundbildung/Elektroinstallateurin_EFZ/_de/2015_EI_Wegleitung_QV.pdf
EIT.swiss: Wegleitung zum Qualifikationsverfahren Montage-Elektrikerin/Montage-Elektriker EFZ (2020b). https://www.eit.swiss/fileadmin/user_upload/documents/Berufsbildung/Grundbildung/Montage-Elektrikerin_EFZ/_de/2015_ME_Wegleitung_QV.pdf
Goertz, L., Fehling, C.D., Hagenhofer, T.: COPLAR-Leitfaden: Didaktische Konzepte identifizieren—Community of Practice zum Lernen mit AR und VR (2021). https://www.social-augmented-learning.de/wp-content/downloads/210225-Coplar-Leitfaden_final.pdf
Hamari, J., Koivisto, J., Sarsa, H.: Does gamification work?—A literature review of empirical studies on gamification. In: 47th Hawaii International Conference on System Sciences, pp. 3025–3034 (2014)
Heinlein, M., Huchler, N., Wittal, R., Weigel, A., Baumgart, T., Niehaves, B.: Erfahrungsgeleitete Gestaltung von VR-Umgebungen zur arbeitsintegrierten Kompetenzentwicklung: Ein Umsetzungsbeispiel bei Montage- und Wartungstätigkeiten. Zeitschrift für Arbeitswissenschaft 75(4), 388–404 (2021)
Kerres, M.: Wirkungen und Wirksamkeit neuer Medien in der Bildung. In: Keill-Slawik, R. (ed.) Education Quality Forum. Wirkungen und Wirksamkeit neuer Medien, pp. 31–44. Waxmann (2003)
Kim, K., Boelling, L., Haesler, S., Bailenson, J., Bruder, G., Welch, G.F.: Does a digital assistant need a body? The influence of visual embodiment and social behavior on the perception of intelligent virtual agents in AR. In: 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 105–114 (2018)
Kim, K.G., et al.: Using immersive virtual reality to support designing skills in vocational education. Br. J. Edu. Technol. 51(6), 2199–2213 (2020). https://doi.org/10.1111/bjet.13026
Knöll, B.: Differenzielle Effekte von methodischen Entscheidungen und Organisationsformen beruflicher Grundbildung auf die Kompetenz- und Motivationsentwicklung in der gewerblich-technischen Erstausbildung. Eine empirische Untersuchung in der Grundausbildung von Elektroinstallateuren. Shaker (2007)
Koehler, M., Mishra, P.: What is technological pedagogical content knowledge (TPACK)? Contemp. Issues Technol. Teacher Educ. 9(1), 60–70 (2009)
Krommer, A.: Warum der Grundsatz „Pädagogik vor Technik" bestenfalls trivial ist. Bildung unter Bedingungen der Digitalität (2018). https://axelkrommer.com/2018/04/16/warum-der-grundsatz-paedagogik-vor-technik-bestenfalls-trivial-ist/
Lanier, J.: Dawn of the New Everything: Encounters with Reality and Virtual Reality. Henry Holt (2017). https://us.macmillan.com/dawnoftheneweverything/jaronlanier/9781627794091/
Leemann, R.J.: Educational Governance von Ausbildungsverbünden in der Berufsbildung – die Macht der Konventionen. In: Langer, R., Brüsemeister, T. (eds.) Handbuch Educational Governance Theorien. Educational Governance, vol. 43, pp. 265–287. Springer VS, Wiesbaden (2019). https://doi.org/10.1007/978-3-658-22237-6_13
Loke, S.-K.: How do virtual world experiences bring about learning? A critical review of theories. Australas. J. Educ. Technol. 31(1) (2015)
Nickolaus, R., Knöll, B., Gschwendtner, T.: Methodische Präferenzen und ihre Effekte auf die Kompetenz- und Motivationsentwicklung – Ergebnisse aus Studien in anforderungsdifferenten elektrotechnischen Ausbildungsberufen in der Grundbildung. ZBW 102, 552–577 (2006)
Palmas, F., Labode, D., Plecher, D.A., Klinker, G.: Comparison of a gamified and non-gamified virtual reality training assembly task. In: IEEE 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1–8 (2019)
Pratticò, F.G., Lamberti, F.: Towards the adoption of virtual reality training systems for the self-tuition of industrial robot operators: a case study at KUKA. Comput. Ind. 129, 103446 (2021)
SBFI: Handbuch Prozess der Berufsentwicklung in der beruflichen Grundbildung. Staatssekretariat für Bildung, Forschung und Innovation SBFI (2017). https://www.sbfi.admin.ch/sbfi/de/home/bildung/bwb/bgb/berufsentwicklung.html.html
Schmid, M., Petko, D.: ‹Technological Pedagogical Content Knowledge› als Leitmodell medienpädagogischer Kompetenz. MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung, pp. 121–140 (2020)
Schwendimann, B.A., Cattaneo, A.A.P., Zufferey, J.D., Gurtner, J.-L., Bétrancourt, M., Dillenbourg, P.: The 'Erfahrraum': a pedagogical model for designing educational technologies in dual vocational systems. J. Vocat. Educ. Train. 67(3), 367–396 (2015). https://doi.org/10.1080/13636820.2015.1061041
SDBB: Evaluation Qualifikationsverfahren 2021; Praktische Arbeit Elektro-Installateurin/Elektro-Installateur EFZ. Schweizerisches Dienstleistungszentrum Berufsbildung | Berufs-, Studien- und Laufbahnberatung (SDBB) (2020a). https://www.eit.swiss/fileadmin/user_upload/documents/Berufsbildung/Grundbildung/Elektroinstallateurin_EFZ/_de/2015_EI_Wegleitung_QV.pdf
Seidel, T., Shavelson, R.J.: Teaching effectiveness research in the past decade: the role of theory and research design in disentangling meta-analysis results. Rev. Educ. Res. 77(4), 454–499 (2007)
Slater, M., Sanchez-Vives, M.V.: Transcending the self in immersive virtual reality. Computer 47(7), 24–30 (2014). https://doi.org/10.1109/MC.2014.198
Stalder, B., Carigiet, T.: Ausbildungsqualität aus Sicht von Lernenden und Betrieben. In: Qualität in der Berufsbildung. Anspruch und Wirklichkeit. Berichte zur beruflichen Bildung. W. Bertelsmann (2013)
Tramm, T.: Theorie-Praxis-Verknüpfung in der beruflichen Ausbildung (2011). https://www.ew.uni-hamburg.de/ueber-die-fakultaet/personen/tramm/files/theorie-praxis-verknuepfunginderberuflichenausbildung.pdf
Vygotsky, L.S., Cole, M.: Mind in Society: Development of Higher Psychological Processes. Harvard University Press, Cambridge (1978)
Wang, S., Cheng, Y.: Designing a situational 3D virtual learning environment to bridge business theory and practice. In: Proceedings of the 13th International Conference on Enterprise Information Systems - Volume 3: ICEIS, p. 313 (2011)
Winther, F., Ravindran, L., Svendsen, K.P., Feuchtner, T.: Design and evaluation of a VR training simulation for pump maintenance based on a use case at Grundfos. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 738–746 (2020). https://doi.org/10.1109/VR46266.2020.00097
Wolfartsberger, J., et al.: Virtual Reality als Trainingsmethode: Eine Laborstudie aus dem Industriebereich. HMD Praxis der Wirtschaftsinformatik 59(1), 295–308 (2022)
Wood, D., Bruner, J.S., Ross, G.: The role of tutoring in problem solving. J. Child Psychol. Psychiatry 17(2), 89–100 (1976)
Zender, R., Buchner, J., Schäfer, C., Wiesche, D., Kelly, K., Tüshaus, L.: Virtual Reality für Schüler:innen: Ein «Beipackzettel» für die Durchführung immersiver Lernszenarien im schulischen Kontext. MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung 47, 26–52 (2022)
Zender, R., Weise, M., von der Heyde, M., Söbke, H.: Lehren und Lernen mit VR und AR – Was wird erwartet? Was funktioniert? In: Proceedings der Pre-Conference-Workshops der 16. E-Learning Fachtagung Informatik (DeLFI 2018). CEUR-WS.org (2018)
Zenisek, J., Wild, N., Wolfartsberger, J.: Investigating the potential of smart manufacturing technologies. Procedia Comput. Sci. 180, 507–516 (2021)
Zinn, B.: Lehren und Lernen zwischen Virtualität und Realität. J. Tech. Educ. 7(1), 16–31 (2019)
Zinn, B. (ed.): Virtual, Augmented und Cross Reality in Praxis und Forschung: Technologiebasierte Erfahrungswelten in der beruflichen Aus- und Weiterbildung: Theorie und Anwendung. Franz Steiner Verlag (2020)
Zinn, B., Peltz, C., Guo, Q., Ariali, S.: Konzeptionalisierung virtueller Lehr- und Lernarrangements im Kontext des industriellen Dienstleistungsbereichs des Maschinen- und Anlagebaus. In: Zinn, B. (ed.) Virtual, Augmented und Cross Reality in Praxis und Forschung—Technologiebasierte Erfahrungswelten in der beruflichen Aus- und Weiterbildung—Theorie und Anwendung, pp. 141–168. Franz Steiner Verlag (2020)
Zobel, B., Werning, S., Berkemeier, L., Thomas, O.: Augmented- und Virtual-Reality-Technologien zur Digitalisierung der Aus- und Weiterbildung – Überblick, Klassifikation und Vergleich. In: Thomas, O., Metzger, D., Niegemann, H. (eds.) Digitalisierung in der Aus- und Weiterbildung: Virtual und Augmented Reality für Industrie 4.0, pp. 20–34. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-662-56551-3_2
WebAR-NFC to Gauge User Immersion in Education and Training

Soundarya Korlapati and Cheryl D. Seals(B)

Auburn University, Auburn, AL 36830, USA
[email protected]
Abstract. Augmented Reality (AR) is a promising technology for enhancing engagement and understanding in education and training. However, the challenge of measuring user immersion and engagement in AR remains a significant barrier to its widespread adoption. This research proposes a new WebAR-NFC system with hand tracking to measure user immersion and engagement in educational settings. The system integrates WebAR content with NFC tags and hand-tracking technologies to enable the measurement of user interactions with educational content. The proposed system addresses the limitations of previous methods for measuring user immersion in AR, such as self-report or behavioral measures, by providing a more objective and accurate measurement of user engagement. Hand tracking in the system offers a unique measure of user interactions with educational content that is not available with previous methods. The system's potential to improve learning outcomes by enhancing user engagement and immersion suggests it may have significant practical applications in education and training. The system's accessibility through a web browser on mobile devices or computers makes it widely available to learners and suitable for various educational settings. The research proposes a novel methodology for measuring user immersion and engagement in AR, which can significantly contribute to the development and adoption of AR in education and training. The results of this research can advance the understanding of user engagement and immersion in AR, providing insights into how to design effective AR systems for education and training.

Keywords: Augmented reality (AR) · Near Field Communication (NFC) · Education · User engagement · Hand tracking · User immersion
1 Introduction

Immersive technologies such as augmented reality (AR) and virtual reality (VR) in education and training have gained significant attention in recent years. These technologies can enhance the learning experience by providing a more engaging and interactive environment. However, the challenge with these technologies has been to accurately measure user immersion and engagement, which is critical for determining the effectiveness of the learning experience. One potential solution to this challenge is using WebAR-NFC systems with hand tracking [17]. WebAR, or Web-based AR, allows users to access AR content through a web browser without downloading a dedicated app. NFC, or Near
Field Communication, tags can provide additional information or interactive elements [11], while hand tracking can objectively measure user engagement and immersion [12, 15]. Previous research has shown the potential benefits of using AR and VR in education and training. For example, in a study by Yoon [1], AR technology was found to improve students’ understanding of complex scientific concepts. Similarly, in a study by Parmaxi [2], VR technology was found helpful in enhancing students’ motivation and engagement in a language learning environment. However, measuring user engagement and immersion in these environments remains a challenge. Traditional methods such as surveys and self-reports may not accurately measure user engagement, as users may overestimate their level of engagement or may not accurately recall their experience [14]. Therefore, we need to develop more objective user engagement and immersion measures. The proposed WebAR-NFC system with hand tracking has the potential to provide a more accurate and objective measure of user engagement and immersion. By tracking hand gestures and movements, the system can provide a more objective measure of user engagement than self-reports or surveys. Additionally, by combining WebAR with NFC tags, the system can provide a more interactive and engaging learning experience. This study proposes a new methodology and system design for the WebAR-NFC system with hand tracking to gauge user immersion in education and training. Specifically, the research will focus on developing a new methodology for measuring user engagement and immersion and designing a new WebAR-NFC system with hand tracking using the Three.js WebAR library. The research will also explore the potential benefits of the proposed approach for enhancing the learning experience and improving learning outcomes.
2 Literature Review

Augmented Reality (AR) has the potential to revolutionize the way we learn and train. AR provides a unique, interactive learning experience that enhances user engagement, motivation, and retention. One of the challenges of using AR in education and training is measuring user immersion and engagement. Measuring user immersion in AR is critical to the success of the learning experience, as it provides a metric to evaluate the effectiveness of the AR system and its impact on learning outcomes. Several studies have proposed methods for measuring user immersion in AR. Using self-report methods, such as questionnaires or surveys, to gather information on user involvement and presence is one of the most popular strategies. However, various variables, including social desirability bias, can impact self-report assessment [3]. Another strategy is to measure the degree of user engagement using physiological indicators like heart rate or skin conductance; however, physiological measures can be intrusive and impractical in educational environments [4]. Recent advances in AR technology have enabled hand tracking to measure user immersion and engagement. With hand-tracking technology, user hand motions and movements can be recognized and followed in the AR world. Because it can assess the effectiveness of the user's interactions with the augmented reality content, such as the precision of hand gestures or the smoothness of movements, hand tracking offers a
more objective and precise measure of user engagement. Moreover, hand tracking can give users immediate feedback to modify their behavior and increase their engagement with the information [5, 6]. The proposed WebAR-NFC system with hand tracking to gauge user immersion in education and training is a new and innovative approach to measuring user engagement in AR. The system integrates WebAR content with NFC tags and hand-tracking technologies to enable the measurement of user interactions with educational content. WebAR enables the creation of AR content that can be accessed through a web browser on a mobile device or a computer, making it widely accessible to learners. NFC tags can provide additional information and interactive elements in the educational environment [11, 18]. Hand tracking can measure user gestures and movements to provide a more objective measure of user engagement [13]. The suggested method builds on earlier work in augmented reality and teaching. A significant piece of literature is a study by Jääskä [7] investigating the effects of AR-based interactive learning on students' motivation and involvement in learning. The study used a self-report questionnaire to measure user engagement and motivation and found that AR-based interactive learning was associated with increased engagement and motivation among students. In many fields, augmented reality and gamification have shown positive effects on user experience and performance. Combining the two methodologies to create an innovative teaching tool, particularly for assembly training, is relatively new, so the effect of gamification on users and on user engagement is not yet well understood, and the overall picture is still developing. Gamified designs have been shown to increase user performance and engagement in training compared to non-gamified augmented reality systems [10]. Another method is to infer user engagement and immersion from behavioral indicators, such as the time spent on a task or the percentage of correct answers on a quiz. These metrics do not account for the quality of the user's interactions with the AR system; hence they might not give a complete picture of user engagement [8, 9].
3 Methodology

The study will use a quasi-experimental design with a pre-test and post-test. Participants will be randomly assigned to either an experimental group that uses the WebAR-NFC system with hand tracking or a control group that uses traditional learning methods. The study will be conducted at Auburn University. Data will be collected using a combination of qualitative and quantitative methods. Participants will be given open-ended survey questions to obtain qualitative data on their opinions of the learning experience. Quantitative information will be gathered from interaction data from the WebAR content, NFC tags, hand-tracking data, and quantitative survey questions.
Fig. 1. Experiment design
3.1 The Research Questions Are

1. Does using the WebAR-NFC system with hand tracking improve user immersion in education and training?
2. Does using the WebAR-NFC system with hand tracking improve learning outcomes compared to traditional learning methods?
3. What is the impact of the WebAR-NFC system with hand tracking on user engagement in education and training?

3.2 The Hypotheses Are

1. The experimental group using the WebAR-NFC system with hand tracking will have a significantly higher level of user immersion in education and training than the control group.
2. The experimental group using the WebAR-NFC system with hand tracking will have a significantly better learning outcome than the control group.
3. The experimental group using the WebAR-NFC system with hand tracking will have a significantly higher engagement level than the control group.

Structural Equation Modeling (SEM) is used in this study to test the theoretical model of relationships between four variables: the WebAR-NFC system, user immersion, learning outcomes, and engagement. SEM is a powerful statistical technique used to test complex relationships among multiple variables and evaluate the goodness of fit of a theoretical model to the data. The theoretical model proposes that the WebAR-NFC system will have a positive effect on user immersion, as the system is designed to create a more immersive learning experience. The immersive experience will encourage learners to become more involved in learning, resulting in better learning outcomes. The model also suggests that participation will improve learning outcomes. This is consistent with
prior research showing that learners who are more engaged in the learning process are more likely to achieve better learning outcomes. The WebAR-NFC system is the independent variable in this model, while user immersion, engagement, and learning outcomes are the dependent variables. The model can be used to test hypotheses and see how the WebAR-NFC system affects these outcomes. The study's data can then be analyzed using structural equation modeling to assess the model's fit to the data and estimate the model's parameters. A t-test is also performed to compare the means of the experimental and control groups to determine whether a statistically significant difference exists between them. The experimental group would use the proposed WebAR-NFC system with hand tracking, while the control group would use a traditional AR system with screen touch for interaction. This test will help answer the research questions by showing whether there is a statistically significant difference between the mean scores of the experimental group using the WebAR-NFC system and the control group using the traditional AR system. To build the WebAR-NFC system, we use the AR.js framework along with the NFC.js library. For hand tracking, we will use the ManoMotion SDK. The backend is built using the Express and Node.js JavaScript libraries, and the collected data will be stored in MongoDB. Web technologies such as HTML5, CSS (Cascading Style Sheets), and JavaScript will be used to build the platform. Unity and Blender will be used to build the 3D models.
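As a concrete illustration of the NFC entry point, the following is a minimal sketch using the standard Web NFC API (NDEFReader, available in Chrome on Android) rather than the NFC.js library named above; the tag payloads and the loadARScene() helper are hypothetical.

```javascript
// Minimal sketch: launch a WebAR scene when the user taps an NFC tag.
// Assumptions: Web NFC API (Chrome on Android); text records on the tags;
// loadARScene() is a hypothetical helper that swaps the AR content.
async function startOnNfcTap() {
  const reader = new NDEFReader();
  await reader.scan(); // prompts the user for NFC permission

  reader.onreading = (event) => {
    for (const record of event.message.records) {
      if (record.recordType === "text") {
        const decoder = new TextDecoder(record.encoding || "utf-8");
        const sceneId = decoder.decode(record.data); // e.g. "learning-experience"
        loadARScene(sceneId);
      }
    }
  };
}
```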
4 System Design

The proposed system will be a WebAR-NFC platform that utilizes hand-tracking technology. The WebAR content will be designed to provide an immersive and interactive learning experience. It will include 3D models and quizzes with which the user can interact using hand gestures. The hand-tracking technology will track the user's hand gestures and movements as they interact with the WebAR content using natural hand gestures such as pointing, grabbing, and dragging, as shown in Fig. 1. The user interface will be designed to provide a seamless and intuitive experience for the user. It will include a dashboard that displays the user's progress and performance and a menu that allows the user to access different WebAR content. Blockchain technology has gained popularity in recent years because of its potential to revolutionize industries such as finance, healthcare, and supply chain management. The technology is built on a decentralized, distributed ledger, which allows secure and transparent transactions without intermediaries. There is a growing demand for blockchain experts, and many educational institutions have begun to offer blockchain-related courses and training programs [16] (Fig. 2).
Fig. 2. Hand gesture actions to perform tasks.
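To make the tap interaction concrete, the sketch below shows how a tracked hand point could select a 3D object in Three.js; the hand coordinates are assumed to come from a tracker such as the ManoMotion SDK mentioned above, and the highlight behavior mirrors the block-component tasks described later.

```javascript
// Illustrative sketch: map a tracked hand point to a tap on AR content.
// Assumptions: a Three.js scene; (x, y) is the tracked fingertip position
// in normalized device coordinates [-1, 1], supplied by the hand tracker.
import * as THREE from "three";

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

function onHandTap(x, y, camera, interactiveObjects) {
  pointer.set(x, y);
  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(interactiveObjects);
  if (hits.length > 0) {
    // Highlight the tapped object, e.g. a correctly chosen block component.
    hits[0].object.material.color.set(0x00aa00);
  }
}
```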
The proposed system is designed to enhance the learning experience of blockchain concepts through a WebAR-NFC application that provides an interactive and immersive experience. The system involves two sections: the learning experience and a gamified blockchain challenge. The learning experience is designed to provide a deeper understanding of basic blockchain concepts through 3D digital augmented reality (AR) content overlaid on the real world. The gamified blockchain challenge is designed to reinforce the learned concepts through four tasks based on the concepts learned. The tasks involve testing the user's understanding of the components of a block, joining a new block to the appropriate chain, understanding distributed peer-to-peer networks, and forming a blockchain network starting with a single block. The data points collected during these tasks include the time taken to complete each task, the number of attempts made, the number of lives left, and hand-tracking data indicating whether the hand movement was smooth or fast, whether the hand gestures were easy to follow, and whether the user appeared frustrated.

4.1 Learning Experience

The first section of the proposed system is the learning experience. The user will tap the first NFC tag to start the WebAR-NFC application. The learning experience is designed to provide a deeper understanding of basic blockchain concepts through 3D digital AR content overlaid on the real world. The learning experience involves three screens, each with 3D digital AR content overlaid on the real world (Figs. 3, 4, and 5). The screens will cover basic blockchain concepts such as blocks, chains, hashes, validation, and distributed peer-to-peer networks. The hand interactions will involve tapping the 3D content in the real world while experiencing it through an iPad screen.
Fig. 3. Screen 1 starts with the left image. The user can interact with the Block with a tap to learn what it holds.
Fig. 4. Screen 2 consists of how blocks of chains are formed with hashes and validation.
4.2 Gamified Blockchain Challenge

The second section of the proposed system is a gamified blockchain challenge. The user will tap the second NFC tag to start the blockchain challenge. The gamified blockchain challenge is designed to reinforce the learned concepts through four tasks based on the concepts learned. The four tasks are designed to test the user's understanding of the components of a block, joining a new block to the appropriate chain, understanding distributed peer-to-peer networks, and forming a blockchain network starting with a single block. The four tasks are as follows:
Fig. 5. Screen 3 shows a Blockchain network.
Task 1: Components of a Block
The first task tests the user's understanding of the components of a block (Fig. 6). It presents a block and five components on the screen, and the user drags three of the five into the block by interacting with them as though they were in the real world. When selected, each component is highlighted in orange (the block's color), which lets the user know which component they are interacting with. This is performed by a grab-drag-release action. When a correct component is added to the block, it turns green. This task is designed to reinforce the user's understanding of the components of a block.
Fig. 6. Task-1
Task 2: Adding a New Block The second task involves testing the user’s understanding of adding a new block to the appropriate chain (Fig. 7). It prompts the user to join a new block to the appropriate chain. This task involves clicking the “Proof of Work” (POW) button and dragging the new block to the appropriate chain. By tapping on the 3D lock model, the user verifies
the block, and once it is approved, the block hash appears below the block and is used to add it to the appropriate chain. The block is added using a grab-drag-release action. After the block is added to the appropriate chain, the entire chain is highlighted in green, indicating correctness. This task tests the user's understanding of hashes and verification by POW in the blockchain.
Fig. 7. Task-2
Task 3: Distributed Peer-to-Peer Network
The third task tests the user's understanding of a distributed peer-to-peer network (Fig. 8). An incomplete network is given, with nodes only partially connected to each other. The user needs to complete the connections to form a valid peer-to-peer network.
Fig. 8. Task-3
Task 4: Adding a New Transaction to the Blockchain Network
The fourth and final task tests the user's understanding of the steps in validating a new transaction and adding it to the blockchain network, starting with a single
block (Fig. 9). The screen is filled with five options (3D and text) randomly, and the user will need to rearrange them to form a valid blockchain network. This task is designed to reinforce the user’s understanding of forming a blockchain network starting with a single block.
Fig. 9. Task-4
During the gamified blockchain challenge, data points are collected to provide insights into the user's performance. The time taken to complete each task will be used to measure the user's efficiency in completing the task. The number of attempts made will be used to measure the user's persistence in completing the task. The number of lives left will be used to measure the user's ability to learn from mistakes. Hand tracking will be used to measure the user's dexterity and accuracy while performing the tasks. User hand gestures will be used to improve the user interface and experience of the application. A combination of objective and subjective data on the WebAR-NFC system, user immersion, learning outcomes, and engagement variables will allow us to gain a comprehensive understanding of the effectiveness of the WebAR-NFC system for training beginners in blockchain concepts and to identify areas for improvement. This data can then be analyzed using SEM to investigate the relationships between these variables and identify any factors that may be contributing to or hindering the effectiveness of the training program. Follow-up t-tests will help in understanding the proposed system's performance compared to traditional AR systems.
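A hypothetical logger for the task data points listed above might look like the following; the endpoint, field names, and sampling hooks are illustrative assumptions, not the authors' code (the paper only states that an Express backend stores data in MongoDB).

```javascript
// Illustrative per-task metrics logger matching the data points above.
const metrics = {
  taskId: "task-1",
  startedAt: 0,
  attempts: 0,
  livesLeft: 3,
  handPath: [], // sampled hand positions, used to judge smoothness of movement
};

function onTaskStart() {
  metrics.startedAt = performance.now();
}

function onAttemptFailed() {
  metrics.attempts += 1;
  metrics.livesLeft -= 1;
}

function onHandSample(x, y) {
  metrics.handPath.push({ t: performance.now(), x, y });
}

function onTaskComplete() {
  const payload = { ...metrics, durationMs: performance.now() - metrics.startedAt };
  // "/api/metrics" is a hypothetical endpoint on the Express/MongoDB backend.
  fetch("/api/metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```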
5 Conclusion and Future Work

Based on the expected outcomes of the data analysis, the use of the WebAR-NFC system with hand-tracking technology could be an effective way to train beginners in blockchain concepts. The SEM analysis is expected to show a significant positive relationship between the WebAR-NFC system and user immersion, as well as between user immersion and learning outcomes. A significant positive relationship between user engagement and learning outcomes is also expected.
These expected findings suggest that the WebAR-NFC system with hand-tracking technology can enhance user immersion and engagement, which in turn leads to better learning outcomes. Therefore, this system may have the potential to be a valuable tool for education and training in blockchain concepts. However, it should be noted that further research is needed to investigate the long-term effects of the training program and to test its effectiveness with a larger and more diverse sample population. Additionally, more research is needed to explore the potential of the WebAR-NFC system with hand-tracking technology in other educational and training contexts beyond blockchain concepts.
References

1. Yoon, S., Anderson, E., Lin, J., Elinich, K.: How augmented reality enables conceptual understanding of challenging science content. https://eric.ed.gov/?id=EJ1125896
2. Parmaxi, A.: Virtual reality in language learning: a systematic review and implications for research and practice. https://www.semanticscholar.org/paper/Virtual-reality-in-language-learning%3A-a-systematic-Parmaxi/11e1c2c806d1d559171dee5339751bff4d4a38bb
3. Miller, M.R., Jun, H., Herrera, F., Yu Villa, J., Welch, G., Bailenson, J.N.: Social interaction in augmented reality. PLOS ONE 14 (2019)
4. Thammasan, N., Stuldreher, I.V., Schreuders, E., Giletta, M., Brouwer, A.-M.: A usability study of physiological measurement in school using wearable sensors. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7570846/
5. Konstantoudakis, K., et al.: Drone control in AR: an intuitive system for single-handed gesture control, drone tracking, and contextualized camera feed visualization in augmented reality. https://www.mdpi.com/2504-446X/6/2/43
6. Khundam, C., Vorachart, V., Preeyawongsakul, P., Hosap, W., Noël, F.: A comparative study of interaction time and usability of using controllers and hand tracking in virtual reality training. https://www.mdpi.com/2227-9709/8/3/60
7. Jääskä, E., Lehtinen, J., Kujala, J., Kauppila, O.: Game-based learning and students' motivation in project management education. https://www.researchgate.net/publication/362737092_Game-based_learning_and_students_motivation_in_project_management_education
8. Dağ, K., Çavuşoğlu, S., Durmaz, Y.: The effect of immersive experience, user engagement, and perceived authenticity on place satisfaction in the context of augmented reality. https://doi.org/10.1108/LHT-10-2022-0498
9. Wen, Y.: Augmented reality enhanced cognitive engagement: designing classroom-based collaborative learning activities for young language learners. Educational Technology Research and Development. https://doi.org/10.1007/s11423-020-09893-z
10. Nguyen, D., Meixner, G.: Gamified augmented reality training for an assembly task: a study about user engagement. https://ieeexplore.ieee.org/document/8860007
11. Chiang, T.-W., et al.: Development and evaluation of an attendance tracking system using smartphones with GPS and NFC. Appl. Artif. Intell. 36 (2022)
12. Buchmann, V., et al.: FingARtips. In: Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia. https://doi.org/10.1145/988834.988871
13. Nivedha, S., Hemalatha, S.: Enhancing user experience through physical interaction in handheld augmented reality. https://ieeexplore.ieee.org/document/7218127/
14. Arifin, Y., Sastria, T.G., Barlian, E.: User experience metric for augmented reality application: a review. https://doi.org/10.1016/j.procs.2018.08.221
15. Seo, D.W., Lee, J.Y.: Direct hand touchable interactions in augmented reality environments for natural and intuitive user experiences. https://doi.org/10.1016/j.eswa.2012.12.091
16. Francisco, J.: Increasing demand for blockchain experts: higher institutions introduce educational courses. Business Blockchain HQ (2020). https://businessblockchainhq.com/business-blockchain-news/increasing-demand-blockchain-technology-education/
17. Wang, T., Qian, X., He, F., Hu, X., Cao, Y., Ramani, K.: GesturAR: an authoring system for creating freehand interactive augmented reality applications. In: The 34th Annual ACM Symposium on User Interface Software and Technology. https://doi.org/10.1145/3472749.3474769
Evaluation of WebRTC in the Cloud for Surgical Simulations: A Case Study on Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST)

William Kwabla1, Furkan Dinc1, Khalil Oumimoun1, Sinan Kockara2, Tansel Halic3, Doga Demirel4(B), Sreekanth Arikatla5, and Shahryar Ahmadi6

1 University of Central Arkansas, Conway, AR, USA
2 Lamar University, Beaumont, TX, USA
[email protected]
3 Intuitive Surgical, Peachtree Corners, GA, USA
[email protected]
4 Florida Polytechnic University, Lakeland, FL, USA
[email protected] 5 Kitware Inc, Carrboro, NC, USA 6 Memorial Orthopaedic Surgical Group, Long Beach, CA, USA
Abstract. Web Real-Time Communication (WebRTC) is an open-source technology which enables remote peer-to-peer video and audio connection. It has quickly become the new standard for real-time communications over the web and is commonly used as a video conferencing platform. In this study, we present a different application domain which may greatly benefit from WebRTC technology, that is, virtual reality (VR) based surgical simulations. Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST) is the testing platform on which we completed preliminary feasibility studies for WebRTC. Since the elasticity of cloud computing provides the ability to meet possible future hardware/software requirements and demand growth, ViRCAST is deployed in a cloud environment. Additionally, in order to have plausible simulations and interactions, any VR-based surgery simulator must have haptic feedback. Therefore, we implemented an interface to WebRTC for integrating haptic devices. We tested ViRCAST on Google Cloud through haptic-integrated WebRTC at various client configurations. Our experiments showed that WebRTC with cloud and haptic integrations is a feasible solution for VR-based surgery simulators. From our experiments, the WebRTC-integrated simulation produced an average frame rate of 33 fps, and the hardware integration produced an average lag of 0.7 ms in real-time.

Keywords: WebRTC · cloud computing · surgical education · surgical simulation · remote collaboration
1 Introduction

Web Real-Time Communication (WebRTC) is a web-based technology which provides audio/video calls, chats, peer-to-peer (P2P) file-sharing functionalities, and everything in between to web and mobile applications without additional third-party plugins. WebRTC
can be used in many different domains. In this study, we investigate the usability of WebRTC for surgical simulations. Surgery simulation is a specialty where students and professionals train and practice modern surgical procedures. Recent developments in virtual/augmented reality (VR/AR) introduce new possibilities and dimensions to surgery simulations. Due to the high-fidelity, real-time 3D animations and the ability to manipulate hardware instruments attached or integrated into haptic devices, surgical communities have adopted VR/AR-based medical simulations [1]. Surgical simulations can range from simple suturing exercises for an individual student to advanced robotic surgery simulations for expert surgeons. Medical simulations have been shown to reduce costs, medical errors, and mortality rates while improving providers' performance [2, 3]. Current surgical simulations require the physical presence of an experienced surgeon with a trainee, which can be difficult due to the busy schedule of expert surgeons. When the COVID-19 pandemic hit the world in 2020, it forced people to work remotely, including medical professionals and trainees. This shift brought many changes to many industries across the world. Medical education was one of the most critical fields affected by this [4, 5]. The disruption has curtailed an essential part of surgical education, which is the acquisition of surgical skills through continuous practice [6, 7]. With the current shift towards working and schooling from home, there is a need to explore new ways of using surgical simulations to foster remote collaboration and continuous practice to gain surgical skills. Cloud computing delivers computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the internet to offer faster innovation, flexible resources, and economies of scale [8]. Recent advancements in cloud computing have the potential to open doors to different ways of carrying out surgical simulations. Running surgical simulations on the cloud can provide many benefits compared to the traditional way of running surgical simulation applications on-site on bulky and costly equipment, including but not limited to increased collaboration, cost savings, independence from platform dependency issues, and remote access [9]. One of the key components of surgical simulations is user interactivity and force feedback through surgical tool interactions. However, cloud computing currently lacks support for dedicated or attachable specialized hardware such as haptic devices. Therefore, this work aims to present our solutions for running surgical simulation applications in the cloud environment with integrated haptic devices. These solutions consist of three parts: I) integrating WebRTC [10] for surgical simulations, II) running the WebRTC-based surgery simulation on Google Cloud, and III) integrating dedicated surgical hardware tools with haptic integration for high-fidelity interactions. All these components are accessible through web browsers from anywhere, anytime, and with any device with an internet connection. To the best of our knowledge, no prior work runs surgical simulations remotely in the cloud with hardware integration. Surgeons' schedules are already busy with cases.
For every fellow/resident surgeon under training, the attending surgeon’s schedule becomes even busier because the attending surgeon must supervise the critical portions of the surgeries. Attending surgeons usually work on multiple cases in parallel and ensure no critical parts overlap. Adding
extra load on surgeons by having them physically supervise a fellow/resident surgeon operating on a case in a VR-based surgery simulator is very challenging. This has adversely affected the acquisition of clinical and surgical skills, which is a critical component in training surgical residents. Current surgical simulations require the physical presence of an expert surgeon to supervise a fellow/resident. Using current surgical simulations is a challenge in situations like the COVID-19 pandemic, where physical gatherings were banned. Most surgical simulations require high-performance computers, which are costly to run due to the heavy computation and intensive realistic 3D rendering associated with realistic simulators [8, 14]. Aside from that, most surgical simulation applications are platform-dependent, meaning that they can only be executed on a specific platform. Cloud computing provides high-end computers with high specifications, enabling surgical simulations to run compute-intensive realistic 3D rendering with high-fidelity interactions [5]. Using cloud environments with WebRTC for surgical simulations first eradicates the burden of acquiring high-end computers. Second, it helps solve the platform dependency issues associated with the execution of surgical simulations. With the world shifting towards remote work and collaboration, cloud computing and WebRTC provide an avenue for remote collaboration from anywhere in the world without requiring physical presence or special high-performance equipment to run a simulation. This gives medical students and surgical fellows/residents an avenue to practice and gain more clinical and surgical skills remotely. With the surgical simulation applications running on the cloud, all surgical residents and surgeons can use surgical simulations through a low-cost computer, tablet, or smartphone with an internet connection and a web browser. This is illustrated in Fig. 1 with an arthroscopic view scene from our virtual rotator cuff arthroscopic skill trainer (ViRCAST) [19]. We use ViRCAST as our testbed platform to investigate possibilities of developing surgical simulations over WebRTC and cloud technologies with haptic integration. Furthermore, these technologies may help researchers and developers of surgical simulations to focus on improving the realism of surgical simulations and developing new features without hardware and resource limitations [9]. There are several arthroscopic surgery-related simulators. A tabletop arthroscopic simulator, the Sawbones "FAST" system [12], is similar to a basic arthroscopic surgery trainer, except that FAST focuses only on arthroscopic skills. FAST has a series of interchangeable boards to practice different scenarios such as navigating, triangulating, and merging. It is validated with an opaque dome and can reliably distinguish between experienced and novice surgeons [13]. The knee arthroscopy simulator developed by Tuijthof et al. [11] is another; it aims to provide a complete knee arthroscopy experience. The prototype design allows surgeons to practice meniscus examination, repair, irrigation, and limb extension. They evaluated their simulator for face and content validity and found it to be an effective simulation of arthroscopic knee surgery, including the realism of arthroscopic movement and mobility. ViRCAST is our shoulder arthroscopy simulation platform to virtually simulate arthroscopic rotator cuff repair surgeries.
ViRCAST platform’s simulation components are illustrated in Fig. 2.
Fig. 1. Depicts cloud use for a surgery simulation using different devices at geographically different locations. In the same scene, multiple residents can collaborate with an expert surgeon’s guidance and supervision.
All these works provide methods of training surgical residents to perform various arthroscopic surgical procedures, but they face limitations such as the lack of remote collaboration and continuous practice in times of a global pandemic, hardware limitations, and platform dependency issues. This study aims to investigate the possibilities of using the cloud and WebRTC with haptic integration to alleviate the limitations associated with current surgical simulations.
Fig. 2. Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST) simulation platform.
2 Method

We implemented a cloud-based solution with WebRTC running in the cloud by integrating the solution with our surgical simulation platform, ViRCAST. The architecture overview is illustrated in Fig. 3. The solution involves integrating the surgical simulation with WebRTC and deploying the WebRTC-integrated simulation on Google Cloud with hardware support for interaction and haptic feedback. This section details the design and development methodologies utilized for our cloud-based surgical simulation environment. Our implementation consisted of three core components: a) integration of ViRCAST with WebRTC, b) real-time interaction with ViRCAST through specialized hardware, and c) deploying WebRTC-integrated ViRCAST on the cloud. All these components are architecturally illustrated in Fig. 3.
Fig. 3. Architectural overview of our implementation.
2.1 Integrating WebRTC with a Surgical Simulation

Our surgical simulation application was developed with the Unity Game Engine. In our ViRCAST simulation, we added two cameras to broadcast the scenes to the browser for multiple peers. This allowed each peer to have their own copy of the scene to interact with, along with a rendering stream that allows the surgical simulation to communicate with the signaling servers. With a C# render streaming script, we specified the signaling type, the signaling URL, the interactive connectivity establishment (ICE) servers, and an option to use hardware encoding or the default software encoding of the browser. As part of the simulator, we implemented a broadcast stream script in C# to stream media and data via multiple peer connections. This broadcast stream allowed us to attach components that needed to be streamed to multiple peers, such as audio in the browser, through the signaling server. This is illustrated in the Unity portion of Fig. 3.
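The signaling server itself is described in the next paragraph; its core job, relaying session descriptions and ICE candidates between peers, is small enough to sketch here. The following is an illustrative stand-in using the ws package, not the authors' implementation, and the message shapes follow common WebRTC practice.

```javascript
// Minimal Node.js signaling relay sketch (npm install ws).
const { WebSocketServer } = require("ws");

const wss = new WebSocketServer({ port: 8080 });
const peers = new Set();

wss.on("connection", (socket) => {
  peers.add(socket);

  socket.on("message", (data) => {
    // Relay SDP offers/answers and ICE candidates to the other peers.
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === peer.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on("close", () => peers.delete(socket));
});
```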
We implemented a WebSocket signaling web server in Node.js, which establishes a peer-to-peer network (with media and data flowing over UDP) between Unity and the web browsers and broadcasts the surgical simulation scenes from Unity to the peers connected through their web browsers (clients). We then implemented a data channel as part of the Node.js signaling server, built on top of the stream control transmission protocol (SCTP), to transmit data between the peers and to get the data from the specialized surgical tools for interactions with the simulation. This is illustrated in the web server portion of Fig. 3. Once we integrated WebRTC with our surgical simulation, the next step was solving the integration of the specialized surgical tools for interaction.

2.2 Getting Data from the Specialized Hardware to the Surgical Simulation

One of the most essential parts of surgical simulations is the surgical tools used to interact with the simulations, and most of these tools are specialized hardware (arthroscopes and other surgical instruments), as illustrated in Fig. 4. Haptic devices provide force feedback and ways to enable real-time interactions in the simulations [14–17]. ViRCAST connects real arthroscopic surgery instruments to haptic devices through 3D-printed custom connectors.
Fig. 4. Specialized hardware used in surgical simulations (a) a haptics device, (b) an arthroscope, and (c) a grasper.
In order to provide interactions with our surgical simulation, we had to tackle an important question: "How do you get the data from specialized surgical tools to the simulation in the cloud?", since cloud computing lacks support for specialized surgical hardware tools. Most of these surgical tools are USB-enabled, and the web browser cannot communicate with USB devices on computers. For security reasons, browser manufacturers do not allow browsers to access USB ports, so the challenge was to find a way to enable the reading of data from the surgical tools. To solve this issue, we implemented a C++ interface as a middleware between the hardware and the web browser, as illustrated in Fig. 5. The interface uses dynamic link libraries to read float data from the hardware devices, converts it to bytes, and uses the Win32 API to send the data to the serial communication (COM) port. The COM port is selected by the user at the first execution of the C++ application. Most of the data from these surgical tools are floats, and the COM ports do not understand float types, so we converted the floats to bytes before they are parsed from the COM ports. Each set of data was terminated with a newline character to indicate the end of each set.
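The browser-side counterpart to this C++ interface is described in the next paragraphs; the following minimal sketch shows the idea using the Web Serial API. The LineBreakTransform mirrors the line-break chunking discussed below, while the JSON shape and the data-channel forwarding are illustrative assumptions, not the authors' code.

```javascript
// Illustrative browser-side reader: COM port -> parsed lines -> WebRTC data channel.
class LineBreakTransform {
  constructor() {
    this.buffer = "";
  }
  transform(chunk, controller) {
    this.buffer += chunk;
    const lines = this.buffer.split("\n");
    this.buffer = lines.pop(); // keep the trailing partial line
    for (const line of lines) controller.enqueue(line);
  }
  flush(controller) {
    if (this.buffer) controller.enqueue(this.buffer);
  }
}

async function readToolData(dataChannel) {
  // The user explicitly picks the COM port, as described in the text.
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 115200 });

  const reader = port.readable
    .pipeThrough(new TextDecoderStream())
    .pipeThrough(new TransformStream(new LineBreakTransform()))
    .getReader();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // Forward one parsed sample to Unity over the WebRTC data channel.
    dataChannel.send(JSON.stringify({ sample: value, t: Date.now() }));
  }
}
```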
Web browsers cannot access COM ports on users' computers due to security reasons. In order to read the hardware data sent to the COM port by the C++ application, we implemented a script using the Web Serial API, which reads the bytes of data and parses them. To avoid buffer overflow in the data transfer between the C++ interface and the web browser, both applications use the same baud rate of 115,200. The C++ interface sends data to the serial port through a stream. Streams are beneficial but challenging because the consumer does not necessarily get all of the data at once; the data may be arbitrarily chunked, with each character potentially arriving on its own. Ideally, the stream should be parsed into individual lines, with each message shown as its own line. Therefore, we implemented a transform stream, which made it possible to parse the incoming stream of bytes and return the parsed data. A transform stream sits between the stream source (in this case, the hardware) and whatever is consuming the stream (in this case, the browser), and transforms the data read from the COM port from bytes to strings before it is finally consumed. This is similar to a vehicle assembly line: as a vehicle comes down the line, each step in the line modifies the vehicle, so that by the time it gets to its final destination, it is a fully functioning vehicle. Multiple transforms can be chained in the streams, taking a stream in and chunking it on a chosen delimiter [18]. In our case, we implemented a Line Break Transform, which takes the stream in and chunks it based on the line breaks that we inserted in the data in the C++ interface. Using the data channel in WebRTC, we sent the device data as JSON to the surgical simulation application. In the surgical simulation application, we implemented a C# script to read the data from the data channel and process the JSON data into floats. The transformed data was then applied to the surgical simulation, enabling real-time interactions through the haptic device(s). To ensure users are safe, before users can start using the simulation, the web application allows them to pick and connect to the surgical tools to be used in the simulation. This ensures that users grant access to the hardware devices themselves instead of the hardware devices connecting automatically. The architectural overview is illustrated in Fig. 5.

2.3 Deploying the WebRTC-Integrated Surgical Simulation in the Cloud

To test our haptic-integrated, WebRTC-based ViRCAST simulation on the cloud, we chose the Google Cloud Platform (GCP). With the main aim of running surgical simulations in the cloud to foster remote collaboration, using only a Session Traversal Utilities for NAT (STUN) server prevents some users from using the application because of firewall issues associated with their networks. To solve this, we deployed a COTURN server (an open-source implementation of Traversal Using Relays around Network Address Translation (TURN)) on a Google Compute Engine instance. TURN servers help bypass network firewalls. The architecture is illustrated in Fig. 6.
The GCP Compute Engine instance was located at Council Bluffs, Iowa, North America (us-central1-f) with a machine type of e2-micro (2 vCPUs, 1 GB memory,
Fig. 5. Architectural overview for getting data from the specialized hardware to the surgical simulation.
Fig. 6. Architecture of how the TURN server allows peers to connect through the firewall.
Shared-core VM), 10 GB SSD storage, and a bandwidth of 1 Gbps, running Ubuntu 20. The Unity application and the web server application communicate with the TURN server through Interactive Connectivity Establishment (ICE), so we configured the ICE servers of both the Unity application and the web application to point to the external Internet Protocol (IP) address of the COTURN server. After setting up the COTURN server, we deployed our WebSocket signaling server to a free-tier dyno on Heroku (a platform as a service (PaaS) that enables developers to build, run, and operate applications entirely in the cloud). The web server allowed the connection between multiple peers and the surgical simulation remotely. Once the signaling server was deployed, we updated the Unity application with the signaling server's URL from Heroku to enable communication between the web server and Unity. The next step of the deployment process was to deploy the WebRTC-integrated ViRCAST. We set up an NVIDIA RTX Virtual Workstation with Windows Server 2019 on Google Cloud located at Council Bluffs, Iowa, North America (us-central1-b). The NVIDIA workstation compute instance is configured with one NVIDIA Tesla T4 GPU, 24 CPUs, 156 GB of memory, 32 Gbps bandwidth, and 50 GB SSD. We then set up firewall rules for the instance to allow Hypertext Transfer Protocol Secure
(HTTPS) connections to the instance using Google Cloud's Virtual Private Cloud (VPC) network configuration. After setting up the GCP compute instance, we packaged the WebRTC-integrated ViRCAST in release mode as an executable and transferred it onto the NVIDIA workstation. We finally executed the surgical simulation application on the NVIDIA workstation running on Google Cloud and accessed the surgical simulation from the web application deployed on Heroku.
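As a reference for the ICE configuration step described above, pointing a browser peer at the COTURN server amounts to a small RTCPeerConnection configuration; the address and credentials below are placeholders, not the deployment's actual values.

```javascript
// Illustrative browser-side ICE configuration for the COTURN server.
// EXTERNAL_IP, username, and credential are placeholders.
const peerConnection = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:EXTERNAL_IP:3478" },
    {
      urls: "turn:EXTERNAL_IP:3478?transport=udp",
      username: "demo-user",
      credential: "demo-secret",
    },
  ],
});
```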
3 Experiment and Results

In this section, we evaluate the performance of WebRTC over real networks. We specifically focus on studying the performance of our ViRCAST simulation with WebRTC and haptic integration over a Google Cloud instance. We consider two types of WebRTC nodes: (I) a remote wireless node and (II) a remote wired node. The experiment setup can be seen in Fig. 7. Once the simulation is deployed on the cloud, the two nodes access the simulation through their browsers, and we gather each node's frame rate, packet loss, and jitter.
Fig. 7. Experiment Setup
3.1 Wireless Performance

The wireless node was at a residence in Conway, Arkansas (Location 1) and in Lakeland, Florida (Location 2). The node is a Lenovo AMD Ryzen 5 laptop with 12 GB RAM running Windows 11. The simulation was accessed through both the Google Chrome and Microsoft Edge browsers. In Tables 1 and 2, we present the total data sent, the number of packets sent during each simulation for each browser, and the frame rate for the wireless connections. For Location 1 (Chrome browser), there was a drop in packets at the initial start of the simulation and at minute 5, as shown in Fig. 8a. This was due to network jitter, as depicted in Fig. 8b, and it produced drops in frame rate, as shown in Fig. 8c. Overall, the simulation produced a steady frame rate of 31 fps, as shown in Table 1.
Table 1. Data and packets for wireless nodes gathered for running the simulation at Location 1 with a Chrome and Edge browser.

Browser | Metric       | Total   | Average | Variance | Frame Rate (fps)
--------|--------------|---------|---------|----------|-----------------
Chrome  | Data (Kbits) | 533,167 | 1,618   | 0.284    | 31
Chrome  | Packets      | 64,012  | 199     | 0.255    |
Edge    | Data (Kbits) | 437,494 | 1,298   | 0.36     | 30
Edge    | Packets      | 55,368  | 168     | 0.295    |
Table 2. Data and packets for wireless nodes gathered for running the simulation at Location 2 with a Chrome and Edge browser.

Browser | Metric       | Total   | Average | Variance | Frame Rate (fps)
--------|--------------|---------|---------|----------|-----------------
Chrome  | Data (Kbits) | 460,076 | 1,250   | 0.395    | 33
Chrome  | Packets      | 58,149  | 162     | 0.319    |
Edge    | Data (Kbits) | 390,916 | 1,075   | 0.654    | 28
Edge    | Packets      | 49,905  | 140     | 0.575    |
Fig. 8. Experimental results of (a) packets, (b) jitter, and (c) frame rate for wireless connection at Location 1 with the Chrome browser.
For Location 2 (Chrome browser), there were network spikes at minutes 1, 2, 3, 4, and 5, as shown in Fig. 9a. This was due to the network jitter
as shown in Fig. 9b. Overall, the simulation produced a steady frame rate of 33 fps as shown in Table 2 and Fig. 9c.
Fig. 9. Experimental results of (a) packets, (b) jitter, and (c) frame rate for wireless connection at Location 2 with the Chrome browser.
For Location 1 (Edge browser), there was a drop in packets at the initial start and at the end of the simulation, from minute 4 to minute 5, as shown in Fig. 10a. This reduced the frame rate, as shown in Fig. 10c, and was due to network spikes which produced high jitter, as depicted in Fig. 10b. Overall, the simulation produced a steady frame rate of 30 fps, as depicted in Table 1. For Location 2 (Edge browser), there was a drop in packets at the initial start and at the end of the simulation, from minute 4 to minute 4.60, as shown in Fig. 11a. This reduced the frame rate, as shown in Fig. 11c, and was due to network spikes and traffic which produced high jitter, as depicted in Fig. 11b. Overall, the simulation produced a steady frame rate of 28 fps, as depicted in Table 2; the network jitter at the end of the simulation affected the frame rate. We observed packet losses and jitter at the start of the simulations for all locations, which led to a drop in frame rate at the start of the simulations, as shown in Figs. 8, 9, 10 and 11. We attribute this to network bursts, retransmission losses, and the long Round-Trip Time (RTT) for the wireless node.
Fig. 10. Experimental results of (a) packets, (b) jitter, and (c) frame rate for wireless connection at Location 1 with the Edge browser.
Fig. 11. Experimental results of (a) packets, (b) jitter, and (c) frame rate for wireless connection at Location 2 with the Edge browser.
3.2 Wired Performance

The wired node was set up in our lab in Conway, Arkansas (Location 3). The node is a Supermicro desktop with 65 GB RAM running Windows 10. The simulation was accessed through both the Google Chrome and Microsoft Edge browsers.
In Table 3, we present the total data, the number of packets sent, and the frame rate recorded during each simulation for each browser for the wired connection.

Table 3. Data and packets for wired nodes gathered for running the simulation at Location 3 with a Chrome and Edge browser.

Browser | Metric       | Total   | Average | Variance | Frame Rate (fps)
--------|--------------|---------|---------|----------|-----------------
Chrome  | Data (Kbits) | 564,140 | 1,592   | 0.313    | 33
Chrome  | Packets      | 67,621  | 195     | 0.271    |
Edge    | Data (Kbits) | 612,315 | 2,007   | 0.186    | 33
Edge    | Packets      | 70,766  | 237     | 0.175    |
For Location 3 (Chrome browser), there was a drop in packets at the initial start and end of the simulation, as shown in Fig. 12a. This was due to network jitter, as depicted in Fig. 12b. Frame rates dropped due to the network jitter and the drop in packets, as shown in Fig. 12c, at minute 0:00 and minute 5:30. Overall, the simulation produced a steady frame rate of 33 fps, as depicted in Table 3.
Fig. 12. Experimental results of (a) packets, (b) jitter, and (c) frame rate for the wired connection at Location 3 with the Chrome browser.
For Location 3 (Edge browser), there was a drop in packets at the initial start of the simulation and at minute 2:40, as shown in Fig. 13a. This was due to high network jitter, as depicted in Fig. 13b. Frame rates dropped due to the network jitter and the drop in packets, as shown in Fig. 13c. Overall, the simulation produced a steady frame rate of 33 fps, as depicted in Table 3. The small packet loss and jitter led to a steady frame rate of 33 fps during the simulation, as shown in Fig. 13. We attribute the steady frame rate of 33 fps to the short Round-Trip Time (RTT) for the wired node.
Fig. 13. Experimental results of (a) packets, (b) jitter, and (c) frame rate for the wired connection at Location 3 with the Edge browser.
3.3 Hardware Lag Measurement

One key component of the research is how efficiently the specialized hardware interacts with the surgical simulation in real time with minimal lag. A situation where a user interacts with the hardware but the visual feedback is noticeably delayed is detrimental to any real-time simulation. Therefore, lag has to be tracked and measured. To track the lag, we added a timestamp for when the data is sent from the client and a timestamp on the server for when it receives the data from the client. During our testing, the lag was unnoticeably small from the beginning to the end of the simulation (Table 4).
Table 4. Sample hardware lag measurement data. We tagged each data point with a timestamp from the time the client sent it to when the server received it and applied it to the scene.

Time Sent (Client)        | Time Received (Server)    | Lag (ms)
yyyy-mm-dd#hh:mm:ss.SSS   | yyyy-mm-dd#hh:mm:ss.SSS   |
2022-02-28#16:02:03.7     | 2022-02-28#16:02:03.003   | 0.7
2022-02-28#16:02:03.7     | 2022-02-28#16:02:03.033   | 0.7
2022-02-28#16:02:03.7     | 2022-02-28#16:02:03.068   | 0.6
2022-02-28#16:02:03.7     | 2022-02-28#16:02:03.098   | 0.6
2022-02-28#16:02:03.7     | 2022-02-28#16:02:03.126   | 0.6
4 Discussion

One key metric expected from simulations is the frame rate. From our testing and data collection, after running the simulation two times at each location for each browser, the wired node streamed at a steady frame rate of 33 fps while the wireless connection gave a frame rate of 30 fps. We observed the drop in frame rate for the wireless connection; it was due to the long Round-Trip Time (RTT), which led to a loss of packets and increased the jitter in the streaming of the simulation. This led to poor video quality for the wireless node at some points. The wired node provided a steady frame rate of 33 fps, producing high video quality due to few packet losses and less jitter. In summary, our experiments produce results which prove that using WebRTC in the cloud is feasible, fosters remote collaboration, and removes hardware limitations. From the research, we found that the simulation on the wired connection produced better video quality and a higher frame rate, with fewer packet losses and less jitter, compared to the wireless connection. We observed that bursts of relatively high-bandwidth transmission over a short period and retransmission losses can degrade the frame rate on wireless networks, especially when the end-to-end RTT (Round-Trip Time) is long. Moreover, one important aspect of the research was the specialized hardware integration. From our tests, the hardware integration for the simulation produced an average lag of 0.7 ms, that is, the average time for data to move from the hardware and interact with objects in the simulation. This kept interactions with the simulation in synchronization with the hardware.
5 Conclusion

In this work, we presented a method of developing surgical simulations over WebRTC and cloud technologies with haptic integration. Our subject study was carried out with ViRCAST. Based on the data presented in the study, our proposed method was found to be effective for running surgical simulations. Our experiments revealed that the WebRTC-based simulation had an average frame rate of 33 fps, and the hardware integration resulted in a 0.7 ms real-time lag.
For future work, we plan to add live chat for messaging between participants, a recording feature so sessions can be watched later, polls that moderators can create for participants to vote on during sessions, and a whiteboard for illustrations and discussions.

Acknowledgement. This project was made possible by NIH/NIAMS 5R44AR075481-03 and the Arkansas INBRE program, supported by the National Institute of General Medical Sciences (NIGMS), P20 GM103429 from the National Institutes of Health (NIH), and partially supported by NIH/NIBIB 1R01EB033674-01A1, 5R01EB025241-04, 3R01EB005807-09A1S1, and 5R01EB005807-10.
References

1. Stone, R.J.: The (human) science of medical virtual learning environments. Philos. Trans. R. Soc. Lond. Ser. B, Biol. Sci. 366(1562), 276–285 (2011). https://doi.org/10.1098/rstb.2010.0209
2. Combs, C.D.: Medical simulators: current status and future needs. In: 2010 19th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises, pp. 124–129 (2010). https://doi.org/10.1109/WETICE.2010.26
3. Morris, D., Sewell, C., Barbagli, F., Salisbury, K., Blevins, N.H., Girod, S.: Visuohaptic simulation of bone surgery for training and evaluation. IEEE Comput. Graph. Appl. 26(6), 48–57 (2006). https://doi.org/10.1109/MCG.2006.140
4. Dedeilia, A., Sotiropoulos, M.G., Hanrahan, J.G., Janga, D., Dedeilias, P., Sideris, M.: Medical and surgical education challenges and innovations in the COVID-19 era: a systematic review. In Vivo 34(3 Suppl), 1603–1611 (2020). https://doi.org/10.21873/invivo.11950
5. Chen, S.Y., Lo, H.Y., Hung, S.K.: What is the impact of the COVID-19 pandemic on residency training: a systematic review and analysis. BMC Med. Educ. 21, 618 (2021). https://doi.org/10.1186/s12909-021-03041-8
6. Adesunkanmi, A.O., et al.: Impact of the COVID-19 pandemic on surgical residency training: perspective from a low-middle income country. World J. Surg. 45, 10–17 (2020). https://doi.org/10.1007/s00268-020-05826-2
7. Osama, M., et al.: Impact of COVID-19 on surgical residency programs in Pakistan; a residents' perspective. Do programs need formal restructuring to adjust with the "new normal"? A cross-sectional survey study. Int. J. Surg. 79, 252–256 (2020). ISSN 1743-9191. https://doi.org/10.1016/j.ijsu.2020.06.004
8. Sadiku, M.N., Musa, S.M., Momoh, O.D.: Cloud computing: opportunities and challenges. IEEE Potentials 33(1), 34–36 (2014). https://doi.org/10.1109/MPOT.2013.2279684
9. González-Martínez, J.A., Bote-Lorenzo, M.L., Gómez-Sánchez, E., Cano-Parra, R.: Cloud computing and education: a state-of-the-art survey. Comput. Educ. 80, 132–151 (2015). ISSN 0360-1315. https://doi.org/10.1016/j.compedu.2014.08.017
10. Sredojev, B., Samardzija, D., Posarac, D.: WebRTC technology overview and signaling solution design and implementation. In: 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1006–1009 (2015). https://doi.org/10.1109/MIPRO.2015.7160422
11. Tuijthof, G.J.M., van Sterkenburg, M.N., Sierevelt, I.N., et al.: First validation of the PASSPORT training environment for arthroscopic skills. Knee Surg. Sports Traumatol. Arthrosc. 18, 218 (2010). https://doi.org/10.1007/s00167-009-0872-3
Evaluation of WebRTC in the Cloud for Surgical Simulations
143
12. Goyal, S., Radi, M.A., Ramadan, I.K., Said, H.G.: Arthroscopic skills assessment and use of box model for training in arthroscopic surgery using Sawbones – “FAST” workstation. SICOT-J 2, 37 (2016). https://doi.org/10.1051/sicotj/2016024 13. Sawbones. Sawbones FAST Arthroscopy Training System (2018). https://www.sawbones. com/sawbones-fast-arthroscopy-training-system 14. Khan, Z.A., Mansoor, S.B., Ahmad, M.A., Malik, M.M.: Input devices for virtual surgical simulations: A comparative study. In: INMIC, pp. 189–194 (2013). https://doi.org/10.1109/ INMIC.2013.6731348 15. Chan, S., Conti, F., Salisbury, K., Blevins, N.H.: Virtual reality simulation in neurosurgery: technologies and evolution. Neurosurgery 72(suppl_1), A154–A164 (2013). https://doi.org/ 10.1227/NEU.0b013e3182750d26 16. Chen, E., Marcus, B.: Force feedback for surgical simulation. Proc. IEEE 86(3), 524–530 (1998). https://doi.org/10.1109/5.662877 17. Basdogan, C., De, S., Kim, J., Muniyandi, M., Kim, H., Srinivasan, M.A.: Haptics in minimally invasive surgical simulation and training. In: IEEE Computer Graphics and Applications, vol. 24, no. 2, pp. 56–64, March-April 2004. https://doi.org/10.1109/MCG.2004.1274062 18. Web Serial API - Web APIs | MDN (mozilla.org) 19. Farmer, J., et al.: Virtual rotator cuff arthroscopic skill trainer: results and analysis of a preliminary subject study, pp. 139–143 (2020). https://doi.org/10.1145/3404663.3404673
Building VR Learning Material as Scaffolding for Design Students to Propose Home Appliances Shape Ideas

Yu-Hsu Lee and Sheng-Wei Peng(B)

National Yunlin University of Science and Technology, Yunlin 64002, Taiwan R.O.C.
[email protected], [email protected]
Abstract. This study explores the development and evaluation of Virtual Reality in education, using home appliance products as the example to develop VR teaching aids (Gravity Sketch) and teaching materials for testing and application through the scaffolding teaching method. Eighteen master's students, who were familiar with the design process, and eighteen undergraduate students, who were unfamiliar with computer-aided design tools, from the Department of Industrial Design participated in two experiments. The scaffolding teaching method was employed over two semesters to construct and execute the theory, respectively. This study transforms the VR learning process of the MA students into a supplementary teaching aid to guide the second-year students, and explores the students' zone of proximal development through VR sketch modelling and Alias modelling tasks. The scaffold was removed after the learning progressed, and the students continued to work on various modelling details (e.g. R-angles and transition surfaces) in the Alias course. After the four-week VR course, most of the students were able to submit three different kinds of product shapes in Alias and were significantly more confident in mastering and manipulating 3D spatial sense and modifying product shapes. VR is a highly effective tool between hand drawing and CAD: it can help design students express their shape ideas in an intuitive and free way, and overcome the frustrating modelling barriers of using 3D software. Keywords: Virtual Reality · Gravity Sketch · Design Education · Scaffolding Theory
1 Research Background

In the product design process, students are often taught to express their shape ideas using hand sketches and then to evaluate and validate the shapes using Computer-Aided Design (CAD). Spatial ability plays an important role in this process: for example, when converting an orthographic projection into a perspective drawing or constructing a digital model in CAD software, the concepts in the mind need to be taken apart, reassembled, and then transformed into three-dimensional information. This takes time for students to learn effectively, and it is one of the challenges for the design educator to help (force) students to absorb and apply it quickly.
Students can now use the Gravity Sketch (GS) 3D software to sketch ideas and build up models in an immersive virtual space, visually expressing modelling concepts with Virtual Reality (VR) equipment. This study incorporates VR equipment as an intuitive interface tool into the traditional CAID course, allowing students to develop ideas for product modelling in an immersive space during the digital modelling course. It can reduce the frustration students experience when using 3D software due to complex interfaces and commands. In this study, VR teaching aids (Gravity Sketch) and teaching materials (video in VR) were developed to test and apply scaffolded teaching theories in household appliance product design. Firstly, the VR learning process for senior students (first-year MA) was transformed into learning courseware to support VR sketching and modelling for junior students (second-year undergraduate), and then the scaffold was removed to perform Alias modelling tasks that explore the students' zone of proximal development. Semi-structured interviews and acceptability assessments were conducted at the end of the teaching experiment to understand the students' acceptance and preference of VR learning.
2 Literature Review

The problems that students encounter when learning computer-aided design software usually stem from the steep learning curve and a lack of time to learn the commands or functions of the software. This results in many students lacking the motivation to learn 3D software and being limited in their visual expression of shapes by their knowledge of the tools. The aim of this study is to integrate VR into a traditional CAID curriculum by introducing an immersive environment that provides students with a better sense of space and stereoscopy. In order to investigate whether VR can be used as a new learning tool, the literature covers the areas of learning scaffolding, teaching and proximal development, spatial ability, and VR design applications and education.

2.1 Scaffolding Theory and Zone of Proximal Development

Bruner, Wood and Ross (1976) extended the former Soviet scientist L.S. Vygotsky's concept of the Zone of Proximal Development (ZPD) and proposed the theory of instructional scaffolding. Vygotsky (1934/1978) argued that in addition to assessing the actual level of development of students' existing abilities, it is necessary to understand the potential level of development of students with the help of others. Thus, instructional scaffolding theory emphasizes not only that teachers judge whether to provide or remove scaffolding to help students construct knowledge based on their abilities and level of progress in their individual cognitive and knowledge construction processes (Kao & Lehman, 1997), but also that higher levels of knowledge are acquired through a four-stage cycle: assistance from others, independent completion, automation (internalization), and de-automation (Gallimore & Tharp, 1999; Bodrova & Leong, 2006). Belland (2017) indicated that computer-based learning scaffolds can be modified for different age groups depending on the difficulty of the problem to be solved, if the goal is to solve the problem and the teacher is unable to provide one-on-one instruction. This study uses the supplementary textbooks constructed in the senior grades to understand the rules of modelling changes,
and allows students to complete modelling tasks independently in order to internalize (automate) the modelling rules so that they become the students' own modelling ability. The evaluation method uses modelling exercises in VR at different stages as a learning aid: if a student is able to complete the modelling proposal within the given time frame and finally produce a different product proposal using the 3D software originally taught, this can be taken as evidence of an internalized process for solving modelling problems (independent performance level).

2.2 Spatial Ability

Spatial abilities include the ability to think pictorially (image thinking), which is the ability to recognize static image patterns, and operational thinking, which is the ability to move or manipulate objects (Piaget & Inhelder, 1967). Samuel Lee (1996) indicated that good spatial ability allows one to correctly identify and grasp shapes and translate them in the mind to manipulate and change them, for example by observing and recognising two-dimensional objects (pictures), then imagining and translating them into three-dimensional objects in the mind, and being able to control the presentation of the objects at different angles. Sorby (2007) suggests that spatial ability has a direct impact on performance in engineering drawing and is a key success factor in engineering, while Bertoline (1995) suggests that spatial ability is required to translate conceptual ideas into sketches or three-dimensional objects, and that learners need to be trained and taught to use it further in hand drawing and engineering. In Cooper's (1988) study, it was found that providing physical models for students to observe and manipulate from different perspectives could improve learners' spatial abilities and enhance spatial logic. According to Yi-Feng Wei (2002), "Computer simulation is the most effective tool for enhancing spatial capabilities because it allows color, movement and rotation, animation and repetition, and can quickly represent two- or three-dimensional spatial objects that cannot be replicated in the real world". Professor Rowena Kostellow was involved in industrial design education for almost half a century and developed a method of teaching "visual compositional relationships" as a basis for a variety of designs (Hanna, 2002). Professor Kostellow was particularly concerned with the distinction between two and three dimensions, noting that computer-aided design and visual media can be used to explore fourth-dimensional relationships such as time and motion, but still cannot replace what the human eye and hand can do. The purpose of introducing VR equipment in this study is to try to overcome the lack of physical space on the computer screen: although it is still not possible to touch the objects directly, the immersive environment makes it possible to repeatedly examine the shapes one has constructed from different angles and distances in three-dimensional space. Through the nature of three-dimensional space, it is possible to deepen the memory and experience of the object while manipulating it.

2.3 VR Design Applications and Education

VR technology is widely used in entertainment, gaming, education, healthcare, etc., and is now increasingly being researched in product design applications such as early design stages, collaborative design, and 3D modelling and evaluation. VR is currently used in
the early stages of product design to provide a virtual 3D environment in which designers can freely create 3D sketches and models, enabling them to quickly express ideas. It also allows designers to communicate with others in a collaborative manner to analyse design features and assess feasibility. In his research findings, Israel (2009) states that paper sketching will not be replaced, but also that the greatest benefit of 3D immersive sketching is in the drawing itself: drawing in three dimensions can help to develop inspiration and improve the exploration of spatiality and spatial thinking, and he is optimistic that 3D sketching has the potential to develop into a tool to support creativity. Tovey (2000) combined hand-drawn and CAD tools to attach a designer's concept sketch (orthographic projection) to a CAD model by mapping it to produce a 3D-like sketch, providing a quick and appropriate physical model as part of the conceptual design phase. However, with advances in technology, VR now offers a better conceptual design environment. Virtual reality technology can also be used for digital modelling and product evaluation. The Svott company uses VR tools to create 3D design concepts and produce CNC-milled prototypes from 3D models. This new tool solves many of the problems in Svott's traditional workflow, enabling designers to communicate with modelers at an early stage in a realistic 1:1 scale via VR, significantly reducing design cycle times and costs. Daria Vlah (2021) compares VR with CAD. The results of the experiment suggest that while the use of VR to construct digital models is intuitive and fast, it lacks the level of detail of CAD and, at this stage of the technology, cannot provide adequate support for engineering design; however, the experiment also shows that VR offers a faster option for creating conceptual models, provides better spatial perception than CAD, and serves as a tool for transitioning from concept development to detailed design. From the above literature, it is clear that VR can currently provide a good aid for the early design stage or design evaluation, but its accuracy is not yet sufficient to support the back-end design process. This study considers VR as a tool for rapid visualization of concepts, and its intuitive operation allows students to better understand the principles of 3D design before learning to build such high-precision models in CAD. As VR devices continue to develop and the cost of training decreases, scholars have invested in VR in education. Kavanagh (2017) reviews the literature on previous introductions of VR into education. The literature suggests that the potential for students to explore and learn at their own pace through VR is appealing to educators attempting to teach students at different levels through VR (Angeloni, 2012; Chang, 2014; Chung, 2012; Gieser, 2013; Perez-Valle and Sagasti, 2012), although Joundi (2020) considers Gravity Sketch to be a tool with a steep learning curve based on his experimental results. This study pre-recorded audiovisual material that even beginners could learn from and stored it in a VR headset so that students could review the material and learn independently when the teacher was unable to teach one-on-one in the classroom. Cecil et al. (2013) argue that the introduction of VR in education can engage users because it is immersive. Unlike motivation, which refers to a person's willingness to complete a task, immersion makes a person more likely to persist. Jacobson et al. (2005) have a different view of immersion, arguing that VR provides an immersive experience that allows students to focus on the subject matter. On the other hand, this study agrees more with Cecil et al. that immersion is only useful when students are actively engaged in learning or performing a task, and
that VR, like other tools, tends to enhance learning when students are more motivated (Sutcliffe, 2003).
3 Method

3.1 Experimental Steps

In this study, the four action research steps of planning, action, monitoring, and reflection proposed by Kemmis et al. (2002) were used to test and apply the VR teaching materials and tools. In the first semester, 18 first-year students (9 females and 9 males, aged 23–25) of the Master of Industrial Design programme at National Yunlin University of Science and Technology (NYUST), who were already familiar with the design process, were invited to test the VR learning tools and VR materials (learning scaffold) in groups in the industrial design research course. In the second semester, 18 second-year students (6 females and 12 males, aged 19–21) in the Alias Surface Design course were taught with a modular learning scaffold for four weeks, implementing a guided discovery method for interpreting modelling ideas (Chen, Kui-Bo & Yen, 2008), with the ultimate goal of expanding the breadth and depth of modelling thinking. At the end of the experiment, semi-structured interviews and the System Usability Scale (SUS) were conducted with participants to investigate their acceptance and preference of VR learning. Equipment: 6 VR headsets (Meta Quest 2) were employed in this research. Due to the limited number of devices, the study was conducted in groups (3 people per device, 18 people in total). Students were asked to bring their own computers or tablets, so that if they had problems with the operation, the VR scenes could be projected on the screen for the researchers to understand the situation and provide assistance, and group members could also teach each other (Fig. 1). Gravity Sketch, which is currently widely used in art and product design, was chosen as the VR 3D software for this experiment.
Fig. 1. VR scenes projected on screen by senior students in the classroom.
3.2 Course Material Production Processes

The senior students responsible for building the scaffolding had no experience of using VR before the teaching experiment. It was found that although GS provides built-in instructional videos and many case studies are shared on the internet, the explanations are in English and mostly function-oriented, which students could not easily absorb in a short time. The syllabus was written by a researcher with a year's experience of using VR and covered the most commonly used tools, i.e. curves, rotational shaping, geometric planes, basic blocks, and Sub-D functions. Each new tool and its related functions were introduced week by week through teaching assignments (Fig. 2). In order to test the effectiveness of the teaching materials, students were asked to submit two design proposals for product modelling using VR after the four weeks of VR teaching and practice. A household appliance was chosen as the example to evaluate whether the VR teaching materials and tools could achieve the purpose of modelling in a short time.
Fig. 2. Tools commonly used in Gravity Sketch design applications.
In this study, VR controller manipulation and curve drawing are used as the pedagogical introduction. Curves are the basic function of GS (equivalent to the CAD concept of point, line, and surface). Since there is no definition of a point in GS, students start with curves and work through the course to learn the core functions of GS. Editing functions such as model rotation and geometric planes for object mirroring are taught after the creation tools (Fig. 3). Using the classic Alessi product as an example (Fig. 4), students need to use the model rotation and basic block functions to bring control points together and extrude a handle shape by editing. In this study, the Sub-D function is used to teach complex surfaces and geometric planes together. The rough model of a product shape is constructed using geometric planes, and the Sub-D function is used to transform it into complex surfaces; Fig. 5 shows examples of how to manipulate control points to quickly adjust the sharpness of a surface or to hollow out a surface by editing points. Once students are familiar with the VR equipment and the various drawing tools and have completed the tasks assigned in class, the researchers use the application of curves and geometric planes (Sub-D) as the teaching objectives and core tools for product design, demonstrating how to use the tools taught for rapid product proposals.
Fig. 3. Editing control points.
Fig. 4. Classroom work on the Alessi teapot.
The steps are shown in Fig. 6:
a. Open mirror mode and draw curves to create a complete product wireframe.
b. Construct geometric planes on the wire structure using snapping.
c. Once the rough shape is complete, convert the geometric planes to Sub-D and adjust the control points to handle the surface detail.
d. Once the main model is complete, make the rest of the accessories, such as buttons, wires, etc.
Fig. 5. In-class Sub-D teaching materials.
Fig. 6. Product construction step-by-step materials - example of an iron.
3.3 Course Material Testing

This experiment was designed to test the teaching methods and modify the content of the teaching materials, to confirm that the VR teaching tools and materials could achieve the purpose of modelling in a short time, and then to further explore the
possibility of using VR as a computer-aided design tool. Therefore, the students in the master's degree class were asked to hand-draw a handheld iron and a handheld vacuum cleaner, and the drawings were then imported into the VR device as references for constructing 3D models. Figure 7 shows two students' modifications of external details in VR modelling: in the left picture, the sharpness of the end of the handle has been modified and the detail at the top exit position has been drawn more clearly; in the right picture, the width ratio has been revised to make the body narrower in VR and the R-angle has been made more visible. The test was completed in about 30 minutes, and the students indicated that they would not be able to produce the same digital model in the same time using their familiar 3D tools. Most of the master's students expressed that using VR to draw in the pre-development stage was more imaginative than hand drawing, because they could observe and draw objects from any viewing angle, which was more efficient than drawing perspective drawings on paper. Students who are not good at hand drawing mentioned that being able to draw imaginary objects using VR gave them a sense of achievement, but at the same time they questioned the accuracy and controllability of VR models.
Fig. 7. Detail view of senior students’ modifications in VR.
3.4 Curriculum Design and Planning

This study adapts and extends the learning process of the master's degree students by dividing the VR learning material into three phases, i.e. introduction and interface, curve course, and surface course, which were presented in the undergraduate Alias Surface Design course. The four-week VR course is divided into modules with weekly audiovisual material and additional class assignments after teaching the basic functions of Alias, while the fourth week is a comprehensive exercise for the VR teaching material. During the first week of the VR course, the group leaders were asked to set up an account binding the VR headset (hardware) to GS (software), and the researcher imported pre-recorded audio and video material into the device. Each group was also asked to bring their own tablet or computer. The first week of the audio-visual lesson was an introduction to the environment, with basic functions such as controlling movement in the environment, zooming in and out on the environment or objects, adjusting the brush, etc. At the end of the lesson a review task was given. The task is
shown in the right picture of Fig. 8: students are asked to draw any object (curved lines or geometric blocks), change its color, duplicate it into two objects, and then scale up one of them.
Fig. 8. VR course week 1: environment introduction (left) and color changing (right) classroom tasks.
The second week is the curve course, which is divided into two parts. The first part is an introduction to the curve functions, including how to turn on the mirror setting to draw symmetrical curves in 3D space and how to edit curves using the editing function in the GS software, as well as adding, deleting, and moving control points to smooth a line (Fig. 9). The second part emphasizes the concept of three-dimensional drawing, including using the two hand grips to change perspective frequently when creating concepts, viewing or drawing objects from different viewing angles to make full use of three-dimensional space, and using the snapping function to organize sketches. After all students had watched the video, a class assignment was given to design a mask on any topic they wanted using the GS built-in reference model (human head), and to submit a file at the end of class to test their learning results.
Fig. 9. VR course week 2: curve course drawing concepts.
The third week of the course focuses on moving control points to attach a geometric plane to a drawn curve. The video includes building a rough model, turning on Sub-D control points, adjusting the shape of the surface, attaching control points, and more advanced ways of controlling the surface structure (e.g. transition surfaces). This week's task is to design a hand-held vacuum cleaner (Fig. 10). The students learn by doing through the audio-visual materials, importing the reference drawing to trace the product outline in planar mode, and using the curve editing taught in the previous lesson to assemble the three-dimensional wireframe of the product. Once the rough model was completed, Sub-D was turned on to transform the geometric planes into complex surfaces, details were adjusted by manipulating control points, and accessories were added as well. Again, at the end of the class, each student's working file was uploaded for the research assistants to record the learning results.
Fig. 10. VR course week 3: surface course learning sequence.
The final week's integrated exercise is designed to test the effectiveness of the learning. The task is to design a hairdryer (any theme), and students are encouraged to use their creativity to come up with unique shapes. During the test, if they come across a concept that is difficult to implement, they can discuss it with the teaching assistant for help.
4 Research Results

4.1 VR Teaching Outcomes

In the curve course, with the exception of five students who only achieved the content taught in the audio-visual materials, the rest of the students completed the class assignments with their own creativity. In the end, six students (marked with a tick) completed their work using geometric blocks that had not yet been taught, as shown in Fig. 11.
Fig. 11. Curve classroom work (mask): collected results.
This part of the course included the principles of 3D digital modelling. Although these had been taught in the original Alias course, it was still evident that some students did not yet understand them, so the models they initially drew (marked with a tick in Fig. 12) had unnatural wrinkles or lacked sharpness. However, with one-to-one tuition from the teaching assistants, the students were able to master the techniques and make the necessary adjustments in VR. On the other hand, those students who progressed more quickly were able to handle more detailed designs such as parts or buttons.
Fig. 12. Curve classroom work (handheld vacuum cleaner): collected results.
In the final week's integrated exercise, several students' work exceeded the researcher's expectations in terms of modelling detail: the control points were accurately used to adjust the alignment of the model and to clearly express the product
shape, as indicated by the tick marks in Fig. 13. Although there were still a few students who only made the basic outline of the hairdryer, the level of model alignment was much better than in the previous week's vacuum cleaner exercise, and it was no small feat for the second-year students to complete a 3D product shape in an average of one hour using the provided tools. In addition, many students were willing to take on the challenge of making special shapes, or of using multiple components to make products by breaking them into several parts, so the results show that some students already had the concept of product grouping and assembly.
Fig. 13. Integrated exercise (hairdryer): collected results.
4.2 Scaffolding Removal

After four weeks of VR training, the students returned to the Alias course during the scaffolding removal phase to work on various modelling details (such as R-angles and transition surfaces). The teacher reported that the students were more confident in mastering and manipulating their sense of 3D space and in modifying product shapes. The final assignment was also based on a vacuum cleaner, but students were required to develop three different shapes that could evolve from A to B and C. Seventeen students submitted their assignments (Fig. 14), of whom eight submitted assignments on unspecified topics due to unfamiliarity with the software and time constraints; they nevertheless followed the assignment criteria and developed three different product shapes.
Fig. 14. Student learning after scaffolding removal.
4.3 Acceptability Evaluation

At the end of the semester, the System Usability Scale (SUS) was administered for VR and Alias via a questionnaire to evaluate the students' acceptance of the two tools. The mean score for VR was 61 and for Alias 36 (Table 1). Although VR's score falls below the commonly cited SUS benchmark of 68, it is clear that VR scored significantly higher than Alias. In terms of rapid modelling, the students felt that VR was like drawing by hand and that shapes could be constructed quickly, but they still lacked confidence in the accuracy of the resulting models. Although Alias is capable of producing accurate models, it is too complicated and not intuitive enough to use, and with the language barrier of the English interface, most students felt that they needed handouts or audio-visual materials to supplement their revision after class.

4.4 VR Teaching Feedback

In this study, we were unable to conduct one-to-one interviews due to the impact of COVID-19, and instead used a semi-structured questionnaire to gather feedback on the introduction of VR into the traditional teaching curriculum. The undergraduate students agreed that Alias has a high technical threshold and that it is important to develop basic 3D concepts before taking the Alias Surface Design course; otherwise they would not be able to keep up with the pace of the teacher and fully absorb the knowledge, and they often needed to discuss the material with each other after class. Students felt that VR was like hand drawing in that it allowed them to draw concepts quickly and intuitively; VR gave a strong sense of space compared to a computer screen, and being in a virtual three-dimensional space where they could manipulate objects at any time gave them a sense of immersion. It helped them better understand 3D construction and offered a variety of modelling possibilities that they wanted to incorporate into their regular design process. However, some students were unable to use VR for long periods due to equipment limitations and physical factors, and had to take a break after using it for a while.
Table 1. VR and Alias SUS usability scale.

System Usability Scale (SUS)                                                                    VR     Alias
1. I think that I would like to use this system frequently                                      3.83   3.11
2. I found the system unnecessarily complex                                                     2.56   4.33
3. I thought the system was easy to use                                                         3.78   2.27
4. I think that I would need the support of a technical person to be able to use this system   3.22   4.11
5. I found the various functions in this system were well integrated                           3.56   3.44
6. I thought there was too much inconsistency in this system                                   2.72   2.94
7. I would imagine that most people would learn to use this system very quickly                3.67   2.16
8. I found the system very cumbersome to use                                                   2.61   3.61
9. I felt very confident using the system                                                      3.67   2.77
10. I needed to learn a lot of things before I could get going with this system                2.89   4.44
Total number of points (sum of item means)                                                     32.5   33.2
Total SUS score                                                                                61.2   35.8
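For readers unfamiliar with SUS scoring, the sketch below applies the standard SUS formula (odd-numbered items contribute the rating minus 1, even-numbered items contribute 5 minus the rating, and the sum is multiplied by 2.5) to the mean item ratings in Table 1. The function name is our own, and applying the formula to item means rather than per-respondent scores reproduces the reported totals only up to rounding.

```typescript
// Minimal sketch of standard SUS scoring applied to the mean item
// ratings in Table 1. Odd-numbered items are positively worded
// (contribution = rating - 1); even-numbered items are negatively
// worded (contribution = 5 - rating); the sum is scaled by 2.5.
function susScore(ratings: number[]): number {
  return ratings.reduce(
    (sum, rating, i) => sum + (i % 2 === 0 ? rating - 1 : 5 - rating),
    0
  ) * 2.5;
}

const vr = [3.83, 2.56, 3.78, 3.22, 3.56, 2.72, 3.67, 2.61, 3.67, 2.89];
const alias = [3.11, 4.33, 2.27, 4.11, 3.44, 2.94, 2.16, 3.61, 2.77, 4.44];

console.log(susScore(vr).toFixed(1));    // 61.3, matching the reported 61.2 up to rounding
console.log(susScore(alias).toFixed(1)); // 35.8, matching the reported 35.8
```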
Following the introduction of VR into the surface design course during the semester, two students reported a learning gap when returning to the Alias software (e.g. forgetting commands) and used VR as a tool to learn on their own after class. The most frequently cited feedback was that students would like the Alias course to have the same kind of teaching materials as the VR course, so that they could control the pace of their own learning and review the material repeatedly.
5 Conclusion

5.1 The Usefulness of VR Three-Dimensional Spaces

Some students do not have a good concept of three-dimensional space, which makes it difficult for them to understand how to manipulate the lines on the screen. When students first use the computer to edit 3D curves, they do not know in which direction to move the control points. It is difficult for students with weak spatial ability to notice their problems in operating 3D software; they usually find a problem only after they have been drawing for some time, and often have to start drawing from scratch. The high spatiality of the immersive VR environment efficiently reduces such problems. The key is that VR allows students to view or interact with objects from different viewing angles, for example by moving their bodies, or by using the controllers to easily rotate the angle of view or zoom in and out on objects, helping them to better understand the objects and to immediately identify and correct errors as they become aware of them.
5.2 VR Teaching Effects

In this study, the teaching materials were tested by senior students, and the editing function was used as the core structure of the teaching. The final material was streamlined and recorded as video in three course sections (introduction to the environment, curve course, surface course), which were then integrated. Students are able to view and work through VR with the video materials, so they can really take control of their own learning, with the researchers only needing to assist with technical issues. As a teaching tool for introducing surface design into the course, the biggest difference between VR and computer tools is that the immersive environment allows students to experience the object with a three-dimensional feeling, providing intuitive and free operation. Students without 3D skills are more willing to experiment with concepts and ideas, as complex commands and interfaces form less of a barrier when creating objects in GS.

5.3 Future Recommendations

In this study, it was not possible for each student to have their own VR headset due to the limitations of the equipment; in the group setting, if the student whose account was linked to the hardware was not present, the group members could not log in to project VR scenes, and the teacher could not know what problems the students were encountering. This is a limitation of using Meta Quest in the classroom. The best classroom situation would be one device per student, or for the teacher to create their own shared account. VR-GS has a learning threshold for both software and hardware, and it is recommended that future research applying VR to design experiments extend the learning time to ensure that the experiments are of a good standard. Although the use of VR as a teaching tool is new, a passive state could be observed in some students during the lessons. This phenomenon can be observed and verified in future experiments. Based on the experience of this study, if the main goal is to achieve good visualization of the shapes, we suggest using curves, geometric planes (Sub-D), and the editing functions as the main construction tools when developing VR-GS materials; this will solve most surface modelling problems. For quick ideas or visualization, curves, geometry, and clay can be used as the main construction tools. If VR is introduced into the curriculum in the future, the curriculum can be differentiated between VR and the main course; for example, the first part of the course can focus on VR and the second part on the main course, to reduce the learning gap for students returning to the main course.

Acknowledgement. The above research results, the VR teaching aids and teaching materials development, were supported by the Ministry of Education's Teaching Practice Research Project: VR Immersive Environment in Product Modelling Development and Evaluation of Teaching Materials and Teaching Aids - An Example of Home Appliance Products (Project No. PHA1100892).
References

Bodrova, E., Leong, D.J.: Tools of the Mind. Pearson Australia Pty Limited (2006)
Belland, B.R., Walker, A.E., Kim, N.J., Lefler, M.: Synthesizing results from empirical research on computer-based scaffolding in STEM education: a meta-analysis. Rev. Educ. Res. 87(2), 309–344 (2017)
Bertoline, G.R., Wiebe, E.N., Miller, C.L., Nasman, L.O.: Engineering Graphics Communication. Irwin, Chicago (1995)
Cooper, L.A.: The role of spatial representations in complex problem solving. In: Cognition and Representation, pp. 53–86. Routledge (1988)
Chung, L.-Y.: Virtual reality in college English curriculum: case study of integrating second life in freshman English course. In: Proceedings of the 26th International Conference on Advanced Information Networking and Applications Workshops, pp. 250–253. IEEE Press, Los Alamitos (2012)
Chang, Y.J., Wang, C.C., Luo, Y.S., Tsai, Y.C.: Kinect-based rehabilitation for young adults with cerebral palsy participating in physical education programs in special education school settings. In: EdMedia+ Innovate Learning, pp. 792–795. Association for the Advancement of Computing in Education (AACE), June 2014
Cecil, J., Ramanathan, P., Mwavita, M.: Virtual learning environments in engineering and STEM education. In: Proceedings of the 2013 IEEE Frontiers in Education Conference, pp. 502–507. IEEE Press, Los Alamitos (2014)
Vygotsky, L.S., Cole, M.: Mind in Society: Development of Higher Psychological Processes. Harvard University Press, Richmond (1978)
Kao, M.T., Lehman, J.D.: Scaffolding in a computer-based constructivist environment for teaching statistics to college learners (1997)
Gallimore, R., Tharp, R.: Teaching mind in society: teaching, schooling, and literate discourse. In: Lloyd, P., Fernyhough, C. (eds.) Lev Vygotsky: Critical Assessments, vol. 3, pp. 296–330 (1999)
Sorby, S.A.: Developing 3D spatial skills for engineering students. Australas. J. Eng. Educ. 13(1), 1–11 (2007)
Wei, Y.F.: 3D-computer aided drafting in engineering graphics. J. Hung. Eng. 4(1) (2002)
Hannah, G.G.: Elements of Design: Rowena Reed Kostellow and the Structure of Visual Relationships. Princeton Architectural Press, Princeton (2002)
Israel, J.H., Wiese, E., Mateescu, M., Zöllner, C., Stark, R.: Investigating three-dimensional sketching for early conceptual design - results from expert discussions and user studies. Comput. Graph. 33(4), 462–473 (2009)
Tovey, M., Owen, J.: Sketching and direct CAD modelling in automotive design. Des. Stud. 21(6), 569–588 (2000)
Kavanagh, S., Luxton-Reilly, A., Wuensche, B., Plimmer, B.: A systematic review of virtual reality in education. Themes Sci. Technol. Educ. 10(2), 85–119 (2017)
Gieser, S.N., Becker, E., Makedon, F.: Using CAVE in physical rehabilitation exercises for rheumatoid arthritis. In: Makedon, F. (ed.) Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments, pp. 1–4. ACM Press, New York (2013)
Perez-Valle, A., Sagasti, D.: A novel approach for tourism and education through virtual Vitoria-Gasteiz in the 16th century. In: Proceedings of the 18th International Conference on Virtual Systems and Multimedia, pp. 615–618. IEEE, Milan (2012)
Joundi, J., Christiaens, Y., Saldien, J., Conradie, P., De Marez, L.: An explorative study towards using VR sketching as a tool for ideation and prototyping in product design. In: Proceedings of the Design Society: DESIGN Conference, vol. 1, pp. 225–234. Cambridge University Press, May 2020
Jacobson, J., Holden, L.: The virtual Egyptian temple (2005)
Sutcliffe, A.: Multimedia and Virtual Reality: Designing Multisensory User Interfaces. Psychology Press, England (2003)
Digital Interactive Learning Ecosystems in Metaverse Educational Path Based on Immersive Learning

Yuan Liu1(B), Wei Gao1, and Yao Song2

1 Beijing Institute of Fashion Technology, Beijing 100029, China
[email protected]
2 Beijing Jingwei Hirain Technologies Co., Inc., Beijing, China
Abstract. Virtual technology presents us with opportunities in interactive learning ecosystems. Accordingly, the three factors of the learning ecosystem have expanded into new channels, and immersive learning appears to be a commonly discussed method under the metaverse layout. From the perspective of digital narrative, this paper explores the impact of online immersive learning on the construction of a virtual interactive learning ecosystem, along with the challenges it brings to the learning process. The main method includes case studies and a phased framework extension based on previous work (Liu, Ricco & Calabi, 2021b; Liu, Su, Nie & Song, 2022). As an output, we link the framework of four immersive typologies with several researchers' work to give advice on fields of application, teaching models, and narrative tools. Keywords: Digital Interactive Learning Ecosystem · Education in Metaverse · Immersive Learning
1 Learning Ecosystem and Its Digitalization

1.1 Digital Learning Ecosystem and Its New Stakeholders

The discussion of the learning ecosystem is not new; what is interesting is the emerging change taking place within the e-learning ecosystem because of technology. Sridharan and Corbitt (2010) define the e-learning ecosystem as consisting of a web-based "learning and teaching community, substance and content, principles and methods, systems and processes, and management of learning resources". It is not hard to tell that they focus on the modernization of the learning ecosystem, relying on new content such as audio, AR, and VR, and on new teaching communities such as web-based platforms and social media. Ficheman and de Deus Lopes (2008) think that reliance on technological development is an inevitable trend in education, as does the idea of "apply[ing] modern technology as a medium or tool" to create learning for new generations (Sarnok, Wannapiroon & Nilsook, 2019). It is worth noting that they mention the standards which enable the digital ecosystem to run, along with the three important levels in its operation process, including the digital learning ecosystem (digital learning environment and digital storytelling), the digital
storytelling learning ecosystem, and the digital storytelling learning & teaching community. Moreover, they also pay attention to the storytelling issue in the digital era: how to understand the relevant storyteller, the learning style, and the establishment of a related evaluation system. For such an ever-changing ecosystem, Kowch (2018) believes that the learning ecosystem includes three factors: changing instruction, a changing learning context, and the changing internet. He considers the intervention of different minds, such as how artificial intelligence can change design requirements, and the relationship between companies and sustainable learning environments. Sridharan, Deng and Corbitt (2010) claim that there are three e-learning ecosystem dimensions, namely "pedagogies, associated technologies, and management of learning resources", and they assess the effectiveness of sustainable e-learning models on different levels. Uden, Wangsa and Damiani (2007) also clarify three components of the e-learning ecosystem: content providers, consultants, and infrastructure. Digital ecosystems consist of human and digital populations, also described as biotic factors and the interactions "between biotic and abiotic factors". Biotic factors include actors (human species) and content (digital species), while abiotic factors include the interactions between biotic factors along with the applied technology. García-Holgado and García-Peñalvo consider users to be part of the ecosystem, and they also see human interaction as an important factor in learning ecosystems based on open source solutions (García-Holgado & García-Peñalvo, 2016; García-Holgado & García-Peñalvo, 2018). As for participants in the digital world, Ficheman and de Deus Lopes (2008) think that learners, teachers, and content creators are the most important stakeholders, and that the relevant participants are distributed in different communities and linked together through broader sets of populations of content. There is a phenomenon here: breaking away from the constraints of geography and time, participants are linked together by more subjective purposes such as interests and thoughts; just as Kowch (2018) states, the context of learning and the shifting internet itself form a reliable ecosystem frame. For new stakeholders in the digital learning ecosystem, the current literature tends to be traditional, and most scholars choose to interpret them as variants of designers, instructors, leaders, policymakers, and learners (Kowch, 2018), or as teachers, IT support staff, supervisors, teaching practicum advisors, and school mentor teachers, roles which rely more on the support of enterprise power (Sarnok, Wannapiroon & Nilsook, 2019). Students undoubtedly play a very important role during the activity, and the student-centered learning style is reflected in participatory activities, where they present ideas through digital storytelling. People have also questioned the traditional teaching role of teachers and believe that digital resident skills training is crucial (Ficheman & de Deus Lopes, 2008). Another important concept is the sustainability of the digital learning ecosystem, since it promotes sustainable learning for each student, creates media, and provides experiences through digital storytelling (Sarnok, Wannapiroon & Nilsook, 2019).
To achieve sustainability, more flexible (learner-centered) methods of transforming learning content, richer methods of integrating learning resources, and more effective technological support will all be considered. Meanwhile, relevant scholars have also raised negative influences on various aspects of e-learning, including a lack of understanding of the technology behind the pedagogy, the
inadequacy of popular learning management systems, and the sustainability of learning object repositories (Sridharan, Deng & Corbitt, 2010). Other influencing factors include environment, teaching skills, subject matter skills, support, content, instructor, technology, and organization (Wesson & Cowley, 2003). We notice another important idea called the digital storytelling learning ecosystem, which refers to "the learning of learners using electronic devices such as tablets, smartphones, desktops or laptops" and has a very high dependence on digital media and technology. This is enough to show that storytelling is a very important factor in discussing e-learning ecology.

1.2 The Layout of Education in Metaverse and Its Impact on Learning Ecology

The concept of "education in metaverse" is considered a teaching field concerned with how to construct a virtual space suitable for education across the physical layer, software layer, application layer, and analysis layer (Hua & Wang, 2021). Strictly speaking, the education reform in the metaverse is divided into two parts: one is the change in teaching methods brought by technologies such as AR and VR, and the other is the transformation of the nature of education itself (Liu, Ricco & Calabi, 2021b; Liu, Su, Nie & Song, 2022). We consider the theories of education in the metaverse from three aspects:

New Learning Patterns and Participation Mechanisms

The transformation of education in the metaverse is reflected in the innovation and iteration of learning forms, as the traditional education model can hardly meet future needs. Education must go through an iterative process from blended learning to chaotic learning, eliminating the separation of online and offline forms, for example through scenario-based learning. Some scholars examine the mechanism of knowledge transfer from the perspective of the metaverse, along with the ways and means of knowledge dissemination and the linear process of knowledge transfer. Ayiter's (2008) pilot project discusses the possibility of teaching architecture in VLEs. Under the idea of decentralization, the knowledge transfer mechanism will undergo a transformation from single-line screening to multi-line value synergy. Also, in discussing the concept of "field" education, the idea is divided into four aspects: "immersive learning of explicit knowledge, direct absorption of tacit knowledge, research-oriented space for knowledge creation, and virtualized space for knowledge sharing".

New Forms of Application

In contrast to the two-way knowledge flow between teachers and students, relevant scholars have proposed four application levels of education in the metaverse, namely "virtual reproduction, virtual simulation, virtual-real fusion, and virtual-real linkage" (Zhong, Wang, Wu, Zhu, & Jin, 2022). Yu Yongjin (2021) also mentioned in the Metaverse Education Forum that there are three stages in existing metaverse education: 1.0, deep immersion and interaction; 2.0, a mature education engine with extremely rich content; and 3.0, a rich education economy and digital asset production. It is not difficult to tell that, with the support of virtual technology, educational models and assets will be integrated into daily
life in new forms. Among them, immersive experiential learning and blended learning are the stars of future education.

New State of Development

One study believes that it is possible to start with practical application fields such as informal learning and vocational training, in order to simulate the discussion of future education under related topics in the metaverse (Zhong, Wang, Wu, Zhu, & Jin, 2022). Among them, science courses mainly rely on real-scene simulation, supporting the learning of mathematics, physics, and chemistry through virtual laboratories. Liberal arts and related subjects, such as history and design, mainly rely on subjective immersive experience.
2 Metaverse Learning Ecology: Digital Narrative Tools and Methods

To understand the nature of online immersive narrative and apply it to a wider educational context, we searched for materials based on keyword constraints such as "immersive website", "immersive educational experience" and "interactive website"; 11 cases were selected to give a first look at these problems. Most of the cases are based on the web platform, occasionally with VR technology as the carrier. Their common feature is a strong "narrative" approach, which is reflected in the way of expression, the purpose of the website, and the artistic performance based on the narrative itself. Some of these websites serve commercial promotion purposes (we have discussed the strong economic value of narrative development before), and some are based on educational purposes, such as first aid knowledge or personal safety knowledge. Others aim at better interpreting and shaping a brand. These narratives strongly involve consumers in the story in order to establish a stronger sense of identity with the brand. Based on Sarnok, Wannapiroon and Nilsook's (2019) three layers of the digital ecosystem1, we try to answer several questions by case study (Table 1):
1. Are there new stakeholders within the digital learning ecosystem?
2. What factors does digital storytelling contain to influence digital learning media?
3. How can sustainability be realized in the digital world?

1 (1) Digital Learning Ecosystem: Digital Learning Environment and Digital Storytelling. (2) Digital Storytelling Learning Ecosystem. (3) Digital Storytelling Learning & Teaching Community.
2.1 New Stakeholders in the Digital Learning Ecosystem

Surprisingly, traditional stakeholders are not clearly represented in the education-related material. Take OpenLearn and (UN)TRAFFICKED: the former is based on non-profit short stories that support disadvantaged people, where visitors learn through their own choices by playing the role of a "helper" in the lives of bullied children, Black people, the unemployed, and the elderly, exploring how to intervene in these special situations.
Table 1. Resource names and related links.
(UN)TRAFFICKED reproduces the dilemma faced by an Indian girl: how to find a job without falling into illegal traps, how to handle relationships within the family, and so on. Similarly, the webpage places the visitor in the subjective role of the girl through a first-person perspective, in order to convey information and very detailed situations. The stakeholders in these two typical cases have very clear levels, such as investors, non-profit organizations, universities, and social welfare institutions. The educator here is not in the form of a person or avatar but presents open and free content, where the story is told in the form of an "ending" as the visitor departs down the relevant "wrong" or "correct" routes. When the visitor reaches the relevant option, the storytelling changes its "ending", thus allowing the visitor to identify the correct information. We also cannot ignore the implantation of values in promotional activities, that is, using a certain learning narrative method to convey the value of a brand and achieve better commercial results than traditional advertising. The case AWGE uses a very retro web design and sells its products in the form of TV shopping (Fig. 1). PRIOR is the official website of a wedding company. It uses a very clever visual symbol, a white ribbon, to express the purity and cleanliness of the ceremony; using the ribbon to connect the independent content between pages is quite clever. These are all kinds of narrative creations from the perspective of business needs. The Hacienda Patrón tour is an immersive application developed for the Oculus platform, within which the visitor can walk through the production facility from the perspective of a bee (third-person perspective) to understand the production process and historical value of the brand. Similar cases include Boursin and Type Terms. It can be found that, with the support of new technologies and new forms of interaction, capital is no longer satisfied with the value brought by one-way communication but encourages more participation and emotional recognition of consumers' subjective values.
Fig. 1. Screenshots of the case studies and their narrative positioning.
2.2 New Biotic Factors and "Between Biotic and Abiotic Factors"

Uden, Wangsa and Damiani (2007) think that biotic factors usually refer to actors (human species) and content (digital species), while "between biotic and abiotic factors" sums up the interactions in between. There is no doubt that immersive learning in the online learning ecosystem has a very rich narrative form, and there are many interpretations of what is called digital storytelling. In our eleven cases, the means and methods of narration and the ways of providing immersion all vary. Some require music to guide interactive actions, while others provide a certain degree of immersive experience. Moreover, the perspective of the story is strongly considered; Sarnok's (2019) research shows that different visual experiences can support different stories. Among the eleven cases, eight have very clear narrative attributes and educational purposes, including OpenLearn, Lifesaver, Patrón, Let's Play, Because, Cyan Planet, Boursin, and (UN)TRAFFICKED. Among them, the three cases of OpenLearn, Lifesaver, and (UN)TRAFFICKED all feature a clear character "me". The role of "me" is to help visitors become more involved in the story. In Lifesaver (whose physical content is carried on the mobile or tablet platform), the visitor acts as a passerby who implements rescue measures and performs the corresponding first aid actions by operating their own mobile phone or tablet. These three educational games based on electronic platforms pay great attention to their "between biotic and abiotic factors", that is, how to help the visitor better integrate into the story through an exquisite story structure and interaction design. This kind of narrative form is usually called multi-line narrative, and the content and ending of the story change according to the visitor's operations. Patrón, Boursin, and Cyan Planet, which have a strong commercial purpose, focus on how to display their virtual content (it is worth noting that all three are based on VR platforms), making visits more interesting to increase the attractiveness of their brand or product. "Because" is a website with a very simple structure but subtle interaction. It presents music in a very controllable form: users need to hold down the space bar, music plays randomly according to how long the bar is held, and playback permissions are unlocked by interacting with the web content, such as selecting the musician's hairstyle, clothing, and shoes. Users seem to have a sense
of control; this feeling is provided not by a complete story but by the site's very high focus and operability. Through these cases, we find that achieving a feeling of immersion on a digital platform does not require relying on a complete story, such as describing every detail of first aid or depicting a whole scene exhaustively. As in traditional storytelling, narratives on digital platforms also need highlights and focus, and better narrative effects can be obtained by amplifying these key experiences. Moreover, the interactivity and dissemination of the platform provide very rich possibilities for new interactions, which cannot be fully summarized by a limited case study.
3 Discussion: The Development Path of Online Learning Ecology and Education in Metaverse
From the above study, the opportunities for virtual technology to intervene in innovative teaching are reflected in the following:
– Virtual technology can promote the understanding of abstract knowledge.
– The sense of presence stimulated by VR can improve the learning effect.
– Virtual technology is compatible with innovative teaching methods including problem-based learning, experiential learning, situational learning, concept teaching, and collaborative learning.
– Virtual technology allows students to explore in a highly interactive and immersive environment, and it also supports online teaching in the context of a major epidemic.
To translate these theoretical results into teaching activities, we also try to combine previous research related to immersive narratives with specific teaching methods to draw the corresponding relationship between the metaverse and the learning ecosystem. From Table 2 we can tell that the types of tools used for knowledge-based immersion and user-contributed immersion are more abundant (Liu, Ricco & Calabi, 2021b; Liu, Su, Nie & Song, 2022). In terms of sensory factors, several synesthesia methods based on "auditory-visual synesthesia" and "visual-auditory synesthesia" are used. Regarding the social nature of teaching activities, the four narrative models encourage open communication in public spaces; passive sensory immersion places restrictions on user behavior, while other immersive spaces are more tolerant. Social media plays a certain role in knowledge-based immersive activities, such as live music, public lectures, and the use of apps. Considering environmental factors, most narrative frameworks support the combination of physical props and virtual content. Among them, exploration-based immersion and user-contributed immersion are more inclined to interactive modes of motion tracking, body tracking, and facial capture as methods to add fun. Partial interactions, such as touch, gestures, and visual stimuli, can play an auxiliary role in the narrative and add value to subjective immersion. In terms of narrative, an abstract story can create a certain atmosphere through sensory metaphors, natural simulation, and scene creation. Sensory stimuli such as visual dynamics, spatial sound, and animation can all help to enhance immersion. The tool's value to narrative is reflected in five perspectives: virtual storytelling, first-person perspective, third-person perspective, audiovisual synesthesia, and facial analysis.
Table 2. Correspondence between types of education in metaverse, fields of application, teaching models and narrative methods.
Similarly, to expand specific application ideas, this article also maps the correspondence onto the types of educational tools used in the past. Table 2 explains in detail the correspondence between several scholars' research, and we take it as a theoretical supplement for the construction of a further-developed ecosystem.
4 Conclusion
In conclusion, education in the metaverse has a profound impact on the digital learning ecosystem. First, the progressive relationship of the learning mode is transformed from one-way dissemination to knowledge creation and sharing. Second, the subdivision of the disciplines to which virtual technology is applied becomes blurred: traditional science education tends to be authentic and knowledge-based, while the liberal arts tend toward scene-based learning with highly simulated experiences. Third, education in the metaverse has developed new methods for interdisciplinary projects or big science, such as product design involving mechanical knowledge, or design management courses involving business models. Last, its educational nature allows more stakeholders to get involved, providing new profit opportunities for platforms, tools, and institutions.
References
Ayiter, E.: Integrative art education in a metaverse: ground. Technoetic Arts 6(1), 41–53 (2008)
Cai, S., Jiao, X.Y., Song, B.J.: Opening another door to education—applications of the educational metaverse. Mod. Educ. Technol. (2022)
Ficheman, I.K., de Deus Lopes, R.: Digital learning ecosystems: authoring, collaboration, immersion and mobility. In: Proceedings of the 7th International Conference on Interaction Design and Children, pp. 9–12 (2008)
García-Holgado, A., García-Peñalvo, F.J.: Architectural pattern to improve the definition and implementation of eLearning ecosystems. Sci. Comput. Program. 129, 20–34 (2016)
García-Holgado, A., García-Peñalvo, F.J.: Human interaction in learning ecosystems based on open source solutions. In: Zaphiris, P., Ioannou, A. (eds.) LCT 2018. LNCS, vol. 10924, pp. 218–232. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91743-6_17
Hua, Z.X., Wang, M.X.: Research on the teaching field structure, key technologies and experiments of the educational metaverse. Res. Mod. Distance Educ. (33), 23–31 (2021)
Kowch, E.G.: Designing and leading learning ecosystems: challenges and opportunities. TechTrends 62(2), 132–134 (2018)
Kuster, M.W., Ludwig, C., Aschenbrenner, A.: TextGrid as a digital ecosystem. In: 2007 Inaugural IEEE-IES Digital EcoSystems and Technologies Conference, pp. 506–511. IEEE (2007)
Liu, G.P., Wang, X., Gao, N., Hu, H.L.: From virtual reality to metaverse: a new direction for online education. Res. Mod. Distance Educ. 33(6), 12–22 (2021a)
Liu, Y., Ricco, D., Calabi, D.A.: Immersive learning from basic design for communication design: a theoretical framework. In: 6th International Conference for Design Education Researchers DRS LEARNxDESIGN 2021. Engaging with Challenges in Design Education, vol. 3, pp. 756–771. Design Research Society (2021b)
Liu, Y., Su, H., Nie, Q., Song, Y.: The distance learning framework for design-related didactic based on cognitive immersive experience. In: Zaphiris, P., Ioannou, A. (eds.) Human-Computer Interaction, pp. 81–96. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-05675-8_8
McPherson, M.A., Nunes, J.M.: Critical issues for e-learning delivery: what may seem obvious is not always put into practice. J. Comput. Assist. Learn. 24(5), 433–445 (2008)
Phumeechanya, N., Wannapiroon, P., Nilsook, P.: Ubiquitous scaffolding learning management system. In: Proceedings of National Conference on Educational Technology 2015: NCET 2015, pp. 22–33 (2015)
Sarnok, K., Wannapiroon, P., Nilsook, P.: Digital learning ecosystem by using digital storytelling for teacher profession students. Int. J. Inf. Educ. Technol. 9(1), 21–26 (2019)
Sridharan, B., Deng, H., Corbitt, B.: Critical success factors in e-learning ecosystems: a qualitative study. J. Syst. Inf. Technol. (2010)
Uden, L., Wangsa, I.T., Damiani, E.: The future of e-learning: e-learning ecosystem. In: 2007 Inaugural IEEE-IES Digital EcoSystems and Technologies Conference, pp. 113–117. IEEE (2007)
Wesson, J., Cowley, L.: The challenge of measuring e-learning quality: some ideas from HCI. In: Davies, G., Stacey, E. (eds.) Quality Education @ a Distance. ITIFIP, vol. 131, pp. 231–238. Springer, Boston, MA (2003). https://doi.org/10.1007/978-0-387-35700-3_25
Zhong, Z., Wang, J., Wu, D., Zhu, S., Jin, S.Z.: Analysis of the application potential and typical scenarios of the educational metaverse. Open Educ. Res. (28) (2022)
3D Geography Course Using AR: The Case of the Map of Greece
Ilias Logothetis(B), Iraklis Katsaris, Myron Sfyrakis, and Nikolas Vidakis
Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Crete, Greece
{iliaslog,katsarisir,nv}@hmu.gr
Abstract. Augmented reality (AR) is utilized alongside game-based learning to increase student engagement in blended learning environments. This paper describes an AR-based edugame designed to improve learning of the geography curriculum in Greek elementary schools. The game utilizes two interaction techniques: via a virtual hand model and via a touch screen. Moreover, 3D objects represent prefectures and geographic regions to provide a more extensive presentation of a map. The game is puzzle-like, with an additional phase of matching names to the puzzle pieces. Players move the 3D objects to complete the puzzle and choose provinces to continue the game by locating counties. Furthermore, this paper investigates the impact of the Felder-Silverman learning styles and tests learning outcomes with the AR-based edugame used as a learning object.
Keywords: Gamification · VR · AR · MR · XR and Metaverse · Game-Based Learning
1 Introduction
The geography course syllabus in Greece is based on the traditional learning model with simple 2D maps in atlases. For that reason, many students often find understanding such geographic terms demanding. This leads students to lose interest and take a passive stance toward the course. Traditional learning methods, such as lectures and reading from textbooks, have long been criticized for being boring and ineffective in retaining information [1]. On the other hand, game-based learning has been gaining popularity as a more engaging and effective alternative. Studies have shown that incorporating gaming elements into the learning process can increase student motivation and engagement, as well as improve their retention of information and ability to transfer knowledge to real-life scenarios. In addition, game-based learning often allows for active problem-solving and collaboration, further enhancing the learning experience [2]. AR is a technology adopted widely in education due to the alternative presentation that it offers and the low-cost devices required to run it. The main requirement for AR is a common modern smartphone, which can be found almost everywhere. Moreover, AR has the potential to greatly enhance the educational experience by making learning
more interactive, engaging, and accessible. Finally, AR can help students visualize and understand complex concepts and create interactive simulations, allowing students to practice and experience situations in a safe and controlled environment; due to the 3D nature of AR, it can automatically provide a gamified experience [3]. To that end, the proposed AR game includes a 3D map of Greece and prompts learners to put geographic regions in the correct place, as in a puzzle game. The game also asks players to find where specific counties are located within a specific region. This representation in AR aims to strengthen the spatial understanding of learners and, through the puzzle interaction, to help them better remember the locations learned [4]. Research on AR games in geography courses has previously shown a positive impact on students [5–10], but most studies focus on the visualization of the maps and not on how the learners interact with them. This paper presents an alternative way of teaching geography in Greek elementary schools using AR and two different approaches to interaction - freehand interaction and interaction through a touchscreen - in a puzzle-like game. The study measures player satisfaction with the interactions and the presentation of the map. Furthermore, the importance of learning styles in such an environment is measured in terms of learning ability and the learners' overall impression of the game. The rest of the paper is structured as follows: Sect. 2 provides background information on the topics as well as a brief literature review of similar work, Sect. 3 describes the AR Geography Game, Sect. 4 explains the experimental setup of the study, Sect. 5 presents the results followed by a discussion, and Sect. 6 closes this paper with final thoughts and future directions.
2 Background
A study examined the impact of mobile AR technology on the performance, cognitive load, and perceptions of 95 university students in a geography course in Turkey. Results indicate that mobile AR improves performance and reduces cognitive load, and that students hold positive views of the technology; mobile AR is an effective tool for teaching geomorphology and geography [11]. Another study, involving 123 university students, examined the impact of AR on spatial orientation skills, as measured by a paired-sample t-test. Results show that the treatment group, which had used AR, had an average gain of 20.14 degrees in their spatial orientation skills, while the control group did not improve. The study suggests that AR technology can be an effective tool for developing students' spatial skills in geography education in higher education [6]. The main discovery of a further study was that an Augmented Reality Instructional Tool (ARIT) improves learners' performance and retention and promotes gender equality; teaching and learning geography can be enhanced by using ARIT [7].
2.1 Puzzle Games
Puzzle serious games are a type of video game that combines elements of puzzle-solving with more serious or educational themes. These games often have a specific message or goal, such as teaching players about history, science, or social issues, while also
challenging their problem-solving skills. The puzzles in these games can range from simple logic puzzles to complex spatial challenges, and they are integrated into the overall story or theme of the game. One of the key benefits of puzzle serious games is that they can provide a fun and engaging way to learn about important topics. For example, games about history can bring historical events to life and make learning about the past more interactive and enjoyable. They can also help players develop critical thinking and problem-solving skills, as well as reinforce what they have learned through the puzzles. Additionally, these games often have a strong emotional impact, as they frequently deal with serious or meaningful subjects, making the experience of playing them more memorable. One study examined the impact of using jigsaw puzzles on primary school students' learning of the Asia and world maps. Participants were divided into puzzle and no-puzzle groups. Results showed that collaborative puzzle-solving led to better retention; scores were higher for both maps, but the difference was significant only for the world map [12]. Another study aimed to improve spatial thinking and support social studies education in geography through a Flash-based puzzle game of a map of Japan. The game was tested on 28 fourth-grade students, who played it once a week for three weeks. Results showed an improvement in the ability to solve the puzzle within 5 min, from 20% to 80%. Additionally, a paper-based test found that 70% of students had better scores in identifying prefectures on the map of Japan after playing the game. These findings suggest that digital educational games can effectively foster spatial thinking and support geography education [13].
2.2 Geography Games
Geography serious games are a type of educational video game that uses geographic concepts and themes as a way to engage players and help them learn about the world around them. These games can take many forms, including simulations, strategy games, and puzzle games, and they often involve tasks such as mapping, navigation, and resource management. Using geography as a foundation, these games can give players a deeper understanding of the relationships between different countries, cultures, and physical landscapes. One of the main benefits of geography serious games is that they can help players develop a strong sense of spatial awareness and cultural sensitivity. They can also foster an understanding of the complexities of global issues, such as climate change, population growth, and resource management. Additionally, these games can provide a fun and interactive way to learn about different regions of the world and their unique physical, cultural, and political characteristics. Whether played in a classroom setting or at home, geography serious games can be a valuable tool for helping players expand their knowledge and appreciation of the world around them. One article describes the implementation of a geography game in 32 classrooms in Ontario, Canada, and the difficulties in assessing student learning with a game that lacks a built-in assessment system. 795 students participated in the study. Data were collected through classroom observations, interviews with teachers, and pre- and post-evaluations of students. The results indicate that students did learn from the gameplay, as evidenced by changes in their scores on multiple-choice and short-answer evaluations [8]. The
purpose of another article is to present an augmented-reality-based activity to aid in the teaching and learning of geography for students in the 6th year of an elementary school in a public educational institution. A study was conducted, and the results indicate that the students found the approach beneficial. The use of technology and the orientation activity helped not only in visualizing the content but also in establishing the connection between theory and practice [10]. A further study developed and implemented two versions of an educational game on geography, one in 2D and one in 3D, in eight elementary schools. The impact of both versions on motivation to learn and user experience was studied. Both versions had a positive impact on learning, but the 2D version had a greater impact on learning, while the 3D version had a greater impact on motivation and user experience [9].
2.3 Learning Styles
People have unique learning styles, yet most educational systems still follow a "one size fits all" approach. Keefe [14] defines learning styles as stable indicators of how people perceive, interact with, and respond to the learning environment. Many models exist that categorize learners into learning styles. While research has shown that incorporating learning styles into e-learning platforms can have a positive impact on the learning process [15], there are still difficulties in its implementation. According to [16], the concept of learning styles is fraught with a variety of significant issues. Firstly, there is a significant gap between an individual's preferred way of learning and what actually leads to effective and efficient learning. Secondly, a preference for a way of studying is not synonymous with a learning style. Most of the so-called learning styles are based on categorizing people into specific groups, but there is little to no evidence to support the idea that people can be classified into distinct groups. Finally, nearly all research that purports to support the existence of learning styles fails to meet the basic criteria for scientific validity. Despite the criticism of learning styles, education research still considers them an impactful aspect of learning. The Felder-Silverman model is a widely recognized learning-styles framework [17] based on the idea that learning style is a set of predispositions that affect how an individual processes and retains information. According to the Felder-Silverman model, there are four dimensions of learning style - Processing: Active vs. Reflective, Perception: Sensing vs. Intuitive, Input: Visual vs. Verbal, and Understanding: Sequential vs. Global - each representing different aspects of learners' preferences in information acquisition. The Felder-Silverman model is often used in education and training to help individuals understand their own learning styles and to develop effective strategies for learning and retaining information. It is also used to design educational materials and instructional methods that are effective for learners with a range of different learning styles [17]. In the games sector, according to Hwang et al. [18], active learners had better performance in the game, but no significant difference was found for reflective learners in any of the conditions. In the study of Khenissi et al. [19], the results showed that the sequential learning style is associated with puzzle games and the sensory learning style is associated with casual games.
This study [20] supports the notion that personalized learning can be improved by considering individual learning styles, as learners who prefer active processing benefited more from the game. It highlights the importance
of taking individual characteristics into account in the design of multimedia learning environments.
3 The AR Geography Game
The AR Geography map puzzle game provides a dynamic and educational mode of improving a wide array of skills and personal attributes. Through engaging in these games, individuals can develop their memory retention and recall abilities, as they must recall the geographical locations of various countries and cities. The aim of these games is also to enhance the player's understanding of geography, including the placement of major cities and communities, thus advancing their geographical literacy. Furthermore, the process of solving geography map puzzles requires players to employ their problem-solving skills as they work to fit the pieces together correctly. The game also fosters an improvement in spatial awareness as the player visualizes the shapes and locations of different countries and cities on the map. Upon successful completion of the puzzle, the player experiences a sense of achievement, which can promote their confidence and self-esteem. With the addition of AR, the game aims to create an interactive and entertaining experience that encourages players to explore their physical surroundings through augmented reality overlays and challenges, while promoting fun and enjoyment for players of all ages through engaging graphics and interactive elements. Finally, the AR Geography game consists of two phases - one for the placement of the geographic regions and one for the counties - where each uses a different interaction technique - the first phase uses freehand interaction, while the second phase uses the touch screen - to observe the advantages and disadvantages of each method. The first phase of the game exhibits puzzle-game behavior, with the regions placed around the map and requiring placement. The second phase is similar to a matching game, in which players select a geographic region and the game asks them to select the county that is displayed on their screen.
3.1 Development Phase
The game was built with the Unity 3D game engine [21] and AR Foundation [22] - a Unity library that acts as a wrapper for ARCore and ARKit. Furthermore, the Hand Interaction Toolset [23] was used to provide interaction with the physical hands. The current version of this toolset uses the MediaPipe library [24] for the hand-tracking feature, as it is faster and more stable than the previous hand-tracking model. To generate the map of Greece, the Real World Terrain [25] package from the Unity Asset Store was utilized. This package uses GIS systems that allow the user to select an area; it then retrieves the geographic information and generates a terrain in Unity representing the land in the selected location. Additionally, the package allows texture and terrain manipulation, so one can choose the detail and size of the resulting map.
Map Configuration. The resulting map from Real World Terrain is square. For that reason, to retrieve only the borders of Greece and to further separate the map
into geographic regions and, later on, into counties, Blender [26] was employed. The first step is to import the generated map of Greece from Real World Terrain into Blender. From the resulting square map, the borders of Greece were cut out as a single piece.
a. Map of Greece
b. Remaining Map
c. Splitting the Regions
Fig. 1. Map Separation in Blender
Then the geographic regions of Greece were outlined by hand and split into segments from the retrieved map. Next, a copy of each piece was made and split further into the counties of the respective region. Finally, the resulting 3D models were imported into the Unity 3D game engine. Figure 1(a–c) shows the process of cutting the map of Greece into geographic regions. The same process was followed for the separation of regions into counties.
3.2 The Game
The game starts with the application trying to detect horizontal planes. When the planes are adequately detected in the real world, the player can tap the smart device's screen to place the map in the desired location; a sketch of this placement step is given below. The starting map is an empty map of Greece with the geographic regions placed randomly around it (Fig. 2). After the map is placed, plane detection keeps running in the background to further assist the stability of the game, without a visual representation of the detected planes.
Puzzle Game with Physical Hand. The first phase of the game begins right after the map is placed in the environment. This phase is about learning the geographic regions of Greece. The player has to grab each region around the map and drag it to the correct location on the map. To fulfill this action, the players must use their physical hands. To complete this phase, the player must move within the space with the smart device in one hand. This is necessary because the spawning locations of the regions are not all reachable for the grab action from a single point; thus, the player needs to move around. The first phase ends when the player has correctly placed all the regions on the map.
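The tap-to-place step can be illustrated with AR Foundation's raycast API. This is a minimal sketch, not the authors' code: the class and field names are hypothetical, but `ARRaycastManager.Raycast` with `TrackableType.PlaneWithinPolygon` is the standard AR Foundation call for hit-testing detected planes from a screen point.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Places the map prefab on a detected horizontal plane when the player
// taps the screen; plane detection keeps running afterwards, but the
// map is only placed once.
public class MapPlacer : MonoBehaviour
{
    public ARRaycastManager raycastManager;  // hypothetical scene reference
    public GameObject mapPrefab;             // the empty map of Greece
    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();
    bool placed;

    void Update()
    {
        if (placed || Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Hit-test the tap position against detected planes.
        if (raycastManager.Raycast(touch.position, s_Hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            Pose pose = s_Hits[0].pose;  // closest hit comes first
            Instantiate(mapPrefab, pose.position, pose.rotation);
            placed = true;
        }
    }
}
```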
a. Generate the digital map on the physical environment
b. Grab a region to place
c. Drag the region into the correct position d. Place the region in the correct position
Fig. 2. The First phase
Raycasting against an invisible model of the region at its target location detects the placement of a fragment in the correct spot. The raycasters are configured to cast a wider-than-usual ray, so positioning is easier for the players. Once the player releases the grabbed object in the correct location, the region locks into position, filling a missing piece of the puzzle; the sketch below illustrates this release check.
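A minimal sketch of this release check, assuming the widened ray is realized with a sphere cast against the invisible target model; the class, field names, and radius are hypothetical, not taken from the game's source.

```csharp
using UnityEngine;

// Release check for a grabbed region: a sphere cast (the "wider-than-usual
// ray") from the dropped piece toward its invisible target model decides
// whether the piece snaps into the puzzle.
public class RegionPiece : MonoBehaviour
{
    public Transform targetSlot;     // invisible copy of the region, in place
    public float castRadius = 0.05f; // widened ray radius (assumed value)

    // Called when the player releases the grabbed piece.
    public bool TryPlace()
    {
        Vector3 toTarget = targetSlot.position - transform.position;
        if (Physics.SphereCast(transform.position, castRadius,
                               toTarget.normalized, out RaycastHit hit,
                               toTarget.magnitude + castRadius)
            && hit.transform == targetSlot)
        {
            // Lock the region into its slot, filling the puzzle piece.
            transform.SetPositionAndRotation(targetSlot.position,
                                             targetSlot.rotation);
            return true;
        }
        return false;
    }
}
```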
Prefecture Selection with Touchscreen. The second part of the game starts with the player performing a gesture over a geographic region, as shown in Fig. 3(a). This gesture triggers the next step, which aims to teach the counties of each geographic region. When the game detects the gesture, the selected region is split into counties. The game presents the name of a county of the selected region and prompts the player to tap on the outlined area representing this county within the region (Fig. 3); the smaller 3D pieces assist in this outlining effect. The outline of a correctly picked county changes to green, while a red outline indicates a wrong answer, as in the sketch that follows. Once all counties of a region are correctly selected, the model switches back to the bigger slice, the region is outlined in green, and it is no longer available for selection. The player continues with the remaining regions until all counties have been found. This phase swaps from physical hand interaction to touch interaction: in order to compare the preferences of the learners, touch interaction is used in this phase.
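A sketch of how such a tap check and outline recoloring could look in Unity; the paper does not show its implementation, so the class name, the use of a `LineRenderer` for the outline, and the name comparison are assumptions.

```csharp
using UnityEngine;

// Checks a tapped county against the prompted name and recolors its
// outline: green for a correct pick, red for a wrong one. The outline is
// assumed to be drawn with a LineRenderer on each county piece.
public class CountyQuiz : MonoBehaviour
{
    public string promptedCounty;  // county name currently shown on screen

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            bool correct = hit.transform.name == promptedCounty;
            var outline = hit.transform.GetComponent<LineRenderer>();
            if (outline != null)
                outline.startColor = outline.endColor =
                    correct ? Color.green : Color.red;
        }
    }
}
```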
a. Select region
b. Start Quiz
c. Answers
Fig. 3. The Second phase (Color figure online)
4 Experimental Setup
To evaluate the performance of the game, a study was conducted with 40 participants from the Software Engineering course of the Electrical and Electronic Engineering Department of the Hellenic Mediterranean University. Each participant came to the laboratory where the study took place. After giving informed consent, they took a pre-test in which they answered questions drawn randomly from a pool of questions on the geography of Greece. They also answered some background questions, for example, whether they had used AR applications before. After the pre-test, the subjects were introduced to the AR Geography Game and played until they completed the two phases. They then took a post-test with questions drawn randomly from the same pool as the pre-test. Finally, the subjects answered questions about their overall experience of the game, how they liked each element - hand interaction, touch interaction, and AR presentation - and whether they found it helpful for better understanding the course. In the last section of the form, the subjects were asked to suggest applications in which they would like to use their physical hands as the means of interaction.
5 Results and Discussion
To classify the participants into the Felder-Silverman learning-style dimensions, they answered the Index of Learning Styles questionnaire. The results are further classified into three levels, representing how strongly a dimension is expressed in an individual: low (values 1 to 3), medium (values 4 to 7), and high (values 8 to 11). Figure 4 shows a visual representation of this classification.
Fig. 4. Felder-Silverman Questionnaire Results
5.1 Pre-test, Post-test Comparison
With 40 participants evaluating the AR Geography game, the study reached the following results. Comparing the pre-test and post-test, 34 of 40 participants improved their knowledge, while 6 of 40 had worse results in the post-test. For the 6 subjects with lower scores, the Felder-Silverman dimensions were Global (4) and Sequential (2), Reflective (3) and Active (3), Visual (5) and Verbal (1), Sensitive (2) and Intuitive (4). Of the 34 subjects with better results, most improved by 0 to 4 questions (75%), 5% improved by 5 to 6 questions, and another 5% improved by 10 questions or more (Fig. 5). The highest improvement was found for the Sensitive, Visual, Active, and Sequential dimensions. The same dimensions also present the best average scores, as can be observed in Fig. 6. This is expected, as these dimensions are most related to such content. As can be observed from Fig. 6(c), Sequential and Global have the smallest distance between their average scores, while Active and Reflective have the largest. The overall differences are within one question, but the Reflective dimension is the only one with an average improvement lower than 2 questions.
Fig. 5. Pre-test Post-test Difference in Results
a. Sensitive vs Intuitive
b. Visual vs Verbal
c. Active vs Reflective
d. Sequential vs Global
Fig. 6. Results per Dimension and Average Scores
5.2 Game Elements Liking
Hand Interaction. Hand interaction was liked by most players (29), with an average score of 4.7 out of 5. In detail, Fig. 7 shows participants' answers to the questions related to liking of the hand interaction. The lowest scores are observed for the functionality of the hand interaction, due to environmental difficulties, mostly caused by lighting.
Touch Interaction. Touch interaction showed wide acceptance among the participants, and the transition between the two interaction techniques also left a good impression. 23 participants were very pleased with the transition between the two
Fig. 7. Hand Interaction Liking
interactions, and 25 participants greatly enjoyed that two interactions were available. While touch interaction was found very easy by most subjects, the majority selected that they would prefer physical hand interaction (57.5%), with 17.5% preferring touch interaction and 25% unable to decide which type of interaction they prefer. It is important to note that most of the participants considered touch interaction a good choice for the second phase of the game.
Presentation. The AR presentation of the map was reported by the participants to be helpful for better understanding the geography of Greece. Additionally, participants found this presentation visually appealing and very comprehensible. When participants were asked what the best aspect of the game was, the majority answered that they were fascinated by this type of presentation, suggesting that 3D maps should be incorporated into learning materials.
5.3 Gameplay
Participants reported the game as easy to play, with a minority of participants expressing difficulties in playing it (6 of 40). The game flow was found very entertaining by 17 subjects, and 16 subjects did not find the game boring at all. 15 subjects responded that they were fully immersed in the game, while 20 subjects - the majority - responded with a 4 out of 5 to this question, meaning that they were not fully immersed.
6 Conclusion and Future Work
This paper proposed an educational game for learning the geography of Greece, based on the course syllabus of the elementary school. The study focused on the alternative interaction between learner and map, and not only on the visualization of maps in 3D. The results suggest that alternative interactions - specifically the freehand interaction - increase interest in the course. Another finding that needs further research is which alternative interaction techniques participants find interesting.
The next steps of this study include repeating the study in the elementary schools at which it is aimed. Data need to be stored on which questions were answered incorrectly and which correctly; this will aid in better understanding the impact of the two interaction techniques on the learning ability of learners. Finally, the next versions of the game will include material about the cultural heritage and urban traditions of the country's communities.
References
1. Logothetis, I., Papadourakis, G., Katsaris, I., Katsios, K., Vidakis, N.: Transforming classic learning games with the use of AR: the case of the word hangman game. In: Zaphiris, P., Ioannou, A. (eds.) HCII 2021. LNCS, vol. 12785, pp. 47–64. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77943-6_4
2. Hao, K.C., Lee, L.C.: The development and evaluation of an educational game integrating augmented reality, ARCS model, and types of games for English experiment learning: an analysis of learning. Interact. Learn. Environ. 29(7), 1101–1114 (2021). https://doi.org/10.1080/10494820.2019.1619590
3. Saidin, N.F., Halim, N.D.A., Yahaya, N.: A review of research on augmented reality in education: advantages and applications. Int. Educ. Stud. 8(13), 1–8 (2015)
4. Maulana, H., Sato, T., Kanai, H.: Spatial augmented reality (SAR) system for agriculture land suitability maps visualization. In: Virtual, Augmented and Mixed Reality: Applications in Education, Aviation and Industry, pp. 314–328 (2022)
5. Schnürer, R., Dind, C., Schalcher, S., Tschudi, P., Hurni, L.: Augmenting printed school atlases with thematic 3D maps. Multimodal Technol. Interact. 4(2), 23 (2020). https://doi.org/10.3390/mti4020023
6. Carrera, C.C., Asensio, L.A.B.: Landscape interpretation with augmented reality and maps to improve spatial orientation skill. J. Geogr. High. Educ. 41(1), 119–133 (2017). https://doi.org/10.1080/03098265.2016.1260530
7. Adedokun-Shittu, N.A., Ajani, A.H., Nuhu, K.M., Shittu, A.K.: Augmented reality instructional tool in enhancing geography learners academic performance and retention in Osun state Nigeria. Educ. Inf. Technol. 25(4), 3021–3033 (2020). https://doi.org/10.1007/s10639-020-10099-2
8. Hébert, C., Jenson, J., Fong, K.: Challenges with measuring learning through digital gameplay in K-12 classrooms. Media Commun. 6(2), 112–125 (2018). https://doi.org/10.17645/mac.v6i2.1366
9. Zaharias, P., Chatzeparaskevaidou, I., Karaoli, F.: Learning geography through serious games: the effects of 2-dimensional and 3-dimensional games on learning effectiveness, motivation to learn and user experience. Int. J. Gaming Comput. Simul. 9(1), 28–44 (2017). https://doi.org/10.4018/IJGCMS.2017010102
10. Herpich, F., Nunes, F.B., De Lima, J.V., Tarouco, L.M.R.: Augmented reality game in geography: an orientation activity to elementary education. In: 2018 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 601–606 (2018). https://doi.org/10.1109/CSCI46756.2018.00121
11. Turan, Z., Meral, E., Sahin, I.F.: The impact of mobile augmented reality in geography education: achievements, cognitive loads and views of university students. J. Geogr. High. Educ. 42(3), 427–441 (2018). https://doi.org/10.1080/03098265.2018.1455174
12. Dang, S., Ved, A., Vemuri, K.: Geography map knowledge acquisition by solving a jigsaw map compared to self-study: investigating game based learning. Int. J. Game-Based Learn. 8(2), 80–89 (2018). https://doi.org/10.4018/IJGBL.2018040107
13. Yuda, M.: Effectiveness of digital educational materials for developing spatial thinking of elementary school students. Procedia - Soc. Behav. Sci. 21, 116–119 (2011). https://doi.org/10.1016/j.sbspro.2011.07.045
14. Keefe, R.A.O.: Implementing A∗ in Prolog, no. October, pp. 1–20 (1997)
15. Van Zwanenberg, N., Wilkinson, L.J., Anderson, A.: Felder and Silverman's index of learning styles and Honey and Mumford's learning styles questionnaire: how do they compare and do they predict academic performance? Educ. Psychol. 20(3), 365–380 (2000). https://doi.org/10.1080/713663743
16. Kirschner, P.A.: Stop propagating the learning styles myth. Comput. Educ. 106, 166–171 (2017). https://doi.org/10.1016/j.compedu.2016.12.006
17. Felder, R., Silverman, L.: Learning and teaching styles and libraries. J. Eng. Educ. 78, 674–681 (1988). https://www.engr.ncsu.edu/wp-content/uploads/drive/1QP6kBI1iQmpQbTXL08HSl0PwJ5BYnZW/1988-LS-plus-note.pdf
18. Hwang, G.-J., Chiu, L.-Y., Chen, C.-H.: A contextual game-based learning approach to improving students' inquiry-based learning performance in social studies courses. Comput. Educ. 81, 13–25 (2015). https://doi.org/10.1016/j.compedu.2014.09.006
19. Khenissi, M.A., Essalmi, F., Jemni, M., Graf, S., Chen, N.S.: Relationship between learning styles and genres of games. Comput. Educ. 101, 1–14 (2016). https://doi.org/10.1016/j.compedu.2016.05.005
20. Wouters, P., Van Der Meulen, E.S.: The role of learning styles in game-based learning. Int. J. Game-Based Learn. 10(1), 54–69 (2020). https://doi.org/10.4018/IJGBL.2020010104
21. Unity: Unity. https://unity.com/. Accessed 11 Mar 2022
22. ARFoundation: AR Foundation. https://unity.com/unity/features/arfoundation. Accessed 20 Oct 2022
23. Logothetis, I., Karampidis, K., Vidakis, N., Papadourakis, G.: Hand interaction toolset for augmented reality environments. In: International Conference on Extended Reality, pp. 185–199 (2022)
24. Zhang, F., et al.: MediaPipe Hands: on-device real-time hand tracking. arXiv preprint arXiv:2006.10214 (2020)
25. InfinityCode: Real World Terrain. https://assetstore.unity.com/packages/tools/terrain/real-world-terrain-8752. Accessed 20 Oct 2022
26. Blender: Blender. https://www.blender.org/. Accessed 20 Oct 2022
Educational Effect of Molecular Dynamics Simulation in a Smartphone Virtual Reality System
Kenroh Matsuda1(B), Nobuaki Kikkawa1, Seiji Kajita1, Sota Sato2, and Tomohiro Tanikawa3
1 Toyota Central R&D Labs., Inc., 41-1, Yokomichi, Nagakute, Aichi, Japan
[email protected]
2 The Department of Applied Chemistry, The University of Tokyo, 6 Chome-6-2 Kashiwanoha, Kashiwa, Chiba, Japan
3 Next Generation Artificial Intelligence Research Center, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
Abstract. Students have difficulty understanding complex molecular structures and chemical bonds from two-dimensional media such as textbooks and writing on the blackboard. Teachers also use molecular models or viewers to teach, but the cost of using them is not negligible. To easily share the molecular images of experts and to enhance the understanding and motivation of novices, we have developed a prototype of a standalone smartphone virtual reality (VR) interface with a molecular dynamics (MD) simulator for chemical education, called VR-MD. In this application, users can touch and move molecules whose coordinates are updated in real time by the MD engine. For teachability and safety, we adopted handheld VR glasses and a VR/augmented reality (AR) mode changer that responds to the state of the user's hand. As a demonstration experiment, we conducted on-site lectures at a high school in which one of these applications was provided to each student. A 7-point-scale and free-response questionnaire was administered following the lecture. The results confirmed the effectiveness of the program for improving comprehension and the motivation to learn.
Keywords: Chemistry lecture · Smartphone VR · Molecular Dynamics Calculations
1 Introduction
Expectations for the use of VR technology in education have increased in step with its recent development [1,2]. In the study of chemistry, it is crucial to understand the structures and fluctuations of ensembles of molecules, because important phenomena such as chemical reactions and phase transitions are caused by nanoscale dynamics. However, teaching these properties using only diagrams in a textbook or verbal explanation to students without prior images of the nanoscale
world is extremely challenging. Experts in chemistry have developed and used mental images of molecules, built up over years of experience conducting experiments and running simulations. MD calculations can accurately reproduce a wide range of molecular dynamics and are widely used by researchers in materials science. Molecular motions that can be seen and touched, as reproduced by MD engines in VR, could help students understand them more efficiently. An interactive MD (iMD) system was constructed by M. B. O'Connor et al. [3] and S. Seritan et al. [4]. In this system, MD calculations are performed in real time in a VR system, and molecules can be touched with controllers. This is being incorporated into university courses [5]. However, the cost of acquiring display equipment is a major problem. For the MD calculations to be useful, the time difference between the molecules' motion and the controller's actions must be small. Therefore, short-distance network communication with a computer cluster is indispensable, which requires a special room and skilled staff. The cost of studying with this equipment is significant: previously, each student received about 10 min of hands-on training with a dedicated assistant before the lecture [4,5]. It would be desirable to use VR materials simultaneously in lectures with several dozen students. To keep costs low, a realistic device would be a simple one with VR lenses attached to a smartphone. The advantages of Smartphone VR include its low cost and ease of use. Many people now have their own smartphones, and inexpensive plastic VR lenses are available. This is therefore an inexpensive system with the potential for individual use by all students. In addition, the ease of use of this setup, which does not interfere with lectures, is an important factor for the acceptance and long-term use of this system by students and teachers [6]. A major limitation may be seen in the limited computational resources available in a smartphone, but this is not a critical problem, considering the update interval of MD computations, namely around several ms per frame [7]. This rate is close to that required for VR, over 60 fps (i.e., ≤ 16.7 ms/frame) [8]. Thus, a smartphone MD application is feasible if the number of atoms is limited. In this study, we created VR-MD, a standalone Smartphone VR application that implements MD calculations for chemistry education (a concept film can be seen in [9]). This application provides users with an experimental means of observing, touching, and grasping molecules in motion in a VR world. In the following sections, we present the requirements and possible implementations of the educational features of VR-MD. Then, we report a case study of the use of this application in a high school lecture, followed by an analysis of a student questionnaire. Finally, we provide a summary of this work.
2 Lecture Settings and Requirements
In 2021, we gave an experimental on-site lecture on intermolecular interactions at Tokyo Metropolitan Musashi Senior High School and Junior High School. From a preliminary survey regarding the effective use of VR for on-site lectures, we identified the following requirements:
– All students in the class should use the VR-MD system individually.
– The students should easily be aware of their surroundings during the experience and be able to stop immediately if they feel that the interaction is becoming dangerous: approximately 30 students attended the on-site lectures. They gathered in one classroom, with a maximum of three students at each long desk. The long desks were fixed in place, but they could be moved some distance by moving the chairs. Each student's personal space was not large enough to eliminate the risk of injury from students bumping into each other, the desks, or the chairs.
– The application should run standalone (i.e., without requiring the network), because we did not know until the day of the lecture whether stable network access would be available in the classroom. Real-time display and manipulation of the VR molecules over the network carried the risk of disrupting the lecture due to an unexpected loss of network communication. Therefore, it was desirable for the VR devices to be standalone.
– Students should be free to observe, touch, and move the molecules.
3 Hardware
To ensure that all students could experience VR simultaneously, we provided smartphones (Apple iPhone 7) and handheld VR goggles (Fig. 1). Unlike headband VR goggles, which attach to the head, handheld VR goggles can be put on and taken off immediately. Because they do not fully cover the eyes, it is easy for the user to see what is going on in the immediate environment during the experience. Although an experience of this type is less immersive, it is also expected to reduce the possibility of missing the teacher's instructions.
Fig. 1. Handheld Smartphone VR
We implemented a six-degrees-of-freedom (6DoF) and hand-tracking system similar to that used in a VR head-mounted display such as the Meta Quest 2 by using the rear camera and sensors of a smartphone. Although Smartphone VR natively implements 3DoF and viewpoint manipulation in general, these were not suitable for observing the three-dimensional structures of
molecules and moving them. In addition, we implemented a feature that seamlessly switches between the AR (passthrough) mode and the VR mode. Even with handheld VR glasses, it is difficult for users to see anything but the straight-ahead direction during the VR experience. By switching between the AR and VR modes according to the situation, users can understand their surroundings and prevent accidents.
4 Implementation
This VR application consists of a molecular dynamics engine, 6DoF, hand tracking, a hand-molecule interaction function, and an AR/VR switching function, all built with Unity [10]. The HandMR asset [11] was used to implement the 6DoF and hand-tracking operations, with modifications added to improve the hand-tracking behavior. We implemented the AR/VR switching function using some features of HandMR. To realize standalone real-time MD simulation on a Smartphone VR, we developed an MD engine using C# on Unity. We also modeled the contacts between the molecules and the user's hand with a site-site interaction model; as sketched below, this approach can imitate many-to-many contacts and is easy and natural to implement in the MD engine. The details are presented in the following subsections.
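The functional form of the hand-molecule forces is not given in this excerpt, so the following is only an assumed illustration of a site-site contact force: a truncated soft repulsion for the repulsive (black and red) sites and a harmonic attraction for the attractive (blue) sites, evaluated pairwise over all site-atom pairs to give many-to-many contacts. All names and constants are hypothetical.

```csharp
using UnityEngine;

// Per-pair site-site force between one hand site and one atom. Summing
// this over all sites and atoms yields many-to-many contacts.
public static class HandForces
{
    public static Vector3 SiteForce(Vector3 atom, Vector3 site,
                                    bool attractive,
                                    float range = 0.1f,   // cutoff, assumed
                                    float strength = 5f)  // stiffness, assumed
    {
        Vector3 d = atom - site;
        float r = d.magnitude;
        if (r >= range || r < 1e-6f) return Vector3.zero;

        return attractive
            ? -strength * d                      // harmonic pull toward the site
            : strength * (range - r) * (d / r);  // soft push away, fades at range
    }
}
```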
4.1 Hand Tracking
Fig. 2. Finger sites.
Three-dimensional positions of the user's finger sites (Fig. 2) are obtained using the HandMR asset. This asset uses the Google MediaPipe Hands library [12] to estimate the positions of the black sites from the real-time camera image on the smartphone. The red and blue sites indicate the midpoints of the black sites. We used the black and red sites as repulsive interaction sites with the constituent atoms, and the blue sites as attractive interaction sites. The black sites at the fingertips were also used as auxiliary sites to calculate the attractive interaction. While developing this system, we found that the finger positions reported by the HandMR asset showed temporal discontinuities, which caused problems for smooth user manipulation via the VR hands. For this reason, we introduced a robust Kalman filter [13] to correct the finger sites. In the Kalman filter, the positions $X^{obs}_k$ of the $k$-th finger site obtained from HandMR are taken as observations of the true position $\hat{X}_k$ with a noise of zero-mean normal distribution with covariance $\Sigma^{obs}$:

$$\Gamma^{obs}_k(t) = H\hat{\Gamma}_k + W^{obs}_k(t), \quad W^{obs}_k(t) \sim N(O, \Sigma^{obs}) \tag{1}$$

where $\Gamma^{obs}_k(t) = [X^{obs}_k]$, $\hat{\Gamma}_k(t) = [\hat{X}_k, \hat{V}_k]^{\top}$, and $H = [1, 0]$, respectively. The vector $O$ denotes a zero vector and $\hat{V}_k$ denotes the true velocity of position $k$. We also assumed that the true state $\hat{\Gamma}_k$ is determined by uniform motion with noise of zero-mean normal distribution with covariance $\hat{\Sigma}$:
$$\hat{\Gamma}_k(t) = F\hat{\Gamma}_k(t-\delta t) + \hat{W}_k(t), \quad F = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix}, \quad \hat{W}_k(t) \sim N(O, \hat{\Sigma}) \tag{2}$$
where $N(\mu, \hat{\Sigma})$ denotes a normal distribution with mean $\mu$ and covariance $\hat{\Sigma}$. Following these assumptions, the estimated state was $\Gamma_k(t) = [X_k(t), V_k(t)]^{\top}$, where $X_k(t)$ and $V_k(t)$ are the corrected position and velocity of the $k$-th site and can be calculated from the previously estimated state $\Gamma_k(t-\delta t)$ and its covariance $\Sigma(t-\delta t)$ as follows:

$$\tilde{\Gamma}_k \leftarrow F\Gamma_k(t-\delta t) \tag{3}$$
$$\tilde{\Sigma} \leftarrow F\Sigma(t-\delta t)F^{\top} + \hat{\Sigma} \tag{4}$$
$$K \leftarrow \tilde{\Sigma}H^{\top}\,(H\tilde{\Sigma}H^{\top} + \Sigma^{obs})^{-1} \tag{5}$$
$$\Gamma_k(t) \leftarrow \tilde{\Gamma}_k + K\,f\!\left(\Gamma^{obs}_k(t) - H\tilde{\Gamma}_k,\; cH\tilde{\Sigma}H^{\top}\right) \tag{6}$$
$$\Sigma(t) \leftarrow (I - KH)\tilde{\Sigma} \tag{7}$$

where

$$f(x, a) = \begin{cases} a & x > a \\ x & -a \le x \le a \\ -a & x < -a \end{cases} \tag{8}$$
The limiter $f(x, a)$ gives robustness to the estimated state $\Gamma_k(t)$. The quantities $\Sigma^{obs}$, $\hat{\Sigma}$, and $c$ are parameters. The finger positions corrected by this filter improve time continuity to a degree sufficient to operate the VR hand naturally; a sketch of the per-axis update is given below. Although the VR hand lagged behind the real hand in the AR mode, this lag did not cause any discomfort in the VR mode.
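A per-axis sketch of Eqs. (1)-(8), assuming a scalar position observation per coordinate and a diagonal process-noise covariance; the noise variances and the clamping factor `c` are placeholder values, not the authors' tuning.

```csharp
using UnityEngine;

// Robust constant-velocity Kalman filter for one coordinate axis:
// state [x, v], F = [[1, dt], [0, 1]], H = [1, 0]; the innovation is
// clamped by f(x, a) with a = c * (H * Sigma~ * H^T) as in Eq. (6).
public class RobustKalman1D
{
    // State estimate and symmetric 2x2 covariance (s00 s01; s01 s11).
    float x, v;
    float s00 = 1f, s01 = 0f, s11 = 1f;

    public float ObsVar = 1e-2f;   // Sigma^obs (assumed value)
    public float ProcVar = 1e-3f;  // Sigma-hat, added to both diagonal entries (assumed)
    public float C = 3f;           // clamping factor c (assumed)

    public float Update(float observed, float dt)
    {
        // Predict: Gamma~ = F Gamma, Sigma~ = F Sigma F^T + Sigma-hat (Eqs. 3-4).
        float px = x + dt * v;
        float pv = v;
        float p00 = s00 + 2f * dt * s01 + dt * dt * s11 + ProcVar;
        float p01 = s01 + dt * s11;
        float p11 = s11 + ProcVar;

        // Gain: K = Sigma~ H^T (H Sigma~ H^T + Sigma^obs)^-1 (Eq. 5).
        float innovVar = p00 + ObsVar;
        float k0 = p00 / innovVar;
        float k1 = p01 / innovVar;

        // Robust correction: clamp the innovation before applying the gain (Eqs. 6, 8).
        float a = C * p00;
        float innovation = Mathf.Clamp(observed - px, -a, a);
        x = px + k0 * innovation;
        v = pv + k1 * innovation;

        // Covariance update: Sigma = (I - K H) Sigma~ (Eq. 7).
        s00 = (1f - k0) * p00;
        s01 = (1f - k0) * p01;
        s11 = p11 - k1 * p01;
        return x;
    }
}
```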
4.2 Molecular Dynamics Engine
An adequate MD engine was crucial, as Unity's pseudo-mechanical physics engine did not work well with realistic molecular interactions. Although the use of fictional molecular interactions might have been a solution, we wanted to avoid user misunderstanding caused by fictional motion. In addition to the adequacy of the molecular interactions, we had to achieve an update of atomic positions in the MD time step faster than the frame rate of the smartphone. To display the VR movie and respond to the user's hand motion, the frame rate should be 60 fps. This means that the MD updates should also be faster than 60 fps, that is, 16.7 ms per MD update. Fortunately, since the leading MD simulators [6,14–16] achieve update intervals of several microseconds, the necessary requirements for the VR-MD simulator can be achieved. Therefore, we set the performance target of our MD implementation on the iPhone 7 to a 16.7 ms update interval for a 100-atom molecular system. In an MD simulation, the atoms move according to the discretized Newton's equation of motion or a variant of it [17]. In our implementation, we used the velocity Verlet integrator with the BAOAB-type Langevin thermostat [18]:

$$v_i \leftarrow v_i + \frac{F_i}{2m_i}\,\delta t \tag{9}$$
$$x_i \leftarrow x_i + \frac{v_i}{2}\,\delta t \tag{10}$$
$$v_i \leftarrow v_i \exp(-\gamma\delta t) + R_i\,\sqrt{\frac{k_B T_0}{m_i}\left[1 - \exp(-2\gamma\delta t)\right]}, \quad R_i \sim N(O, I) \tag{11}$$
$$x_i \leftarrow x_i + \frac{v_i}{2}\,\delta t \tag{12}$$
$$v_i \leftarrow v_i + \frac{F_i}{2m_i}\,\delta t \tag{13}$$
where $x_i$, $v_i$, $F_i$, and $m_i$ are the position, velocity, force, and mass of atom $i$. The left arrow indicates that the left side is updated by the right side. The values of $x_i$ and $v_i$ at time $t$ are updated to those at time $t + \delta t$ through this chain of updates (sketched below), and by repeating these updates we obtain the trajectory of the atoms. The third update controls the temperature of the system around $T_0$ with a relaxation time of $1/\gamma$ using Gaussian random numbers; the symbol $k_B$ denotes the Boltzmann constant. The force $F_i$ is calculated from the analytic differential of the Amber-type classic potential $U(\{x_i\})$ [19,20] and the external forces $F^{hand}_{ik}$ from the $k$-th interaction site on the VR hand:

$$F_i = -\frac{\partial U(\{x_i\})}{\partial x_i} + \sum_k F^{hand}_{ik} \tag{14}$$

$$U(\{x_i\}) = 4\sum_{i<j} f^{LJ}_{ij}\,\varepsilon_{ij}\cdots$$
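The update chain of Eqs. (9)-(13) translates almost line-for-line into C#. The sketch below is an illustration of the scheme, not the VR-MD source: it assumes a per-atom step with the force recomputed before the final half kick, and uses a Box-Muller draw for the Gaussian vector R_i.

```csharp
using UnityEngine;

// One BAOAB step (Eqs. (9)-(13)) for a single atom. The force field
// (Eq. (14)) is passed in as a delegate; kB, T0, and gamma follow the text.
public static class Baoab
{
    public static void Step(ref Vector3 x, ref Vector3 v,
                            System.Func<Vector3, Vector3> force,
                            float m, float dt, float gamma,
                            float kB, float T0)
    {
        v += force(x) * (dt / (2f * m));   // Eq. (9),  half kick
        x += v * (dt / 2f);                // Eq. (10), half drift

        float a = Mathf.Exp(-gamma * dt);  // Eq. (11), Langevin thermostat
        float sigma = Mathf.Sqrt(kB * T0 * (1f - a * a) / m);
        v = v * a + sigma * Gaussian();

        x += v * (dt / 2f);                // Eq. (12), half drift
        v += force(x) * (dt / (2f * m));   // Eq. (13), half kick at new position
    }

    // R_i ~ N(O, I) via the Box-Muller transform.
    static Vector3 Gaussian()
    {
        float Normal()
        {
            float u1 = Mathf.Max(Random.value, 1e-7f);
            float u2 = Random.value;
            return Mathf.Sqrt(-2f * Mathf.Log(u1))
                   * Mathf.Cos(2f * Mathf.PI * u2);
        }
        return new Vector3(Normal(), Normal(), Normal());
    }
}
```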
(p > 0.05).

Table 2. Concentration on a virtual space with different environments

Environment                              Mean  Median  Standard Deviation
Classroom virtual space (no avatar)      6.27  6       2.26
Classroom virtual space (with avatars)   6.09  7       1.88
Virtual space with nothing around        6.72  8       3.17

Discussion. In this experiment, we investigated the differences between the environments when watching a lecture video. The results of Friedman's test showed no significant difference between the three environments when watching the lectures. Some participants felt that the virtual classroom was closer to the actual lecture because it was a classroom, while others felt that other things came into view and reduced their concentration. Thus,
the way people perceive the environment is different, which may have led to the lack of differences in test scores and perceived concentration. In this experiment, only one type of avatar was used, and there was no movement during the lecture. This result may vary depending on the variety of avatars and their behavior.
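The analyses in these experiments rely on the Friedman test. As a reference for how the reported statistic can be obtained, the sketch below ranks each subject's scores across the k conditions and computes chi^2_F = 12/(n k (k+1)) * sum_j R_j^2 - 3 n (k+1), which is compared against a chi-square distribution with k - 1 degrees of freedom. This is the standard formulation of the test, not the authors' analysis code.

```csharp
using System.Linq;

// Friedman test statistic for n subjects measured under k conditions.
// Ties within a subject receive average ranks.
public static class Friedman
{
    public static double Statistic(double[][] scores) // scores[subject][condition]
    {
        int n = scores.Length, k = scores[0].Length;
        var rankSums = new double[k];

        foreach (var row in scores)
        {
            for (int j = 0; j < k; j++)
            {
                // Average rank of row[j] within this subject's row.
                int less = row.Count(s => s < row[j]);
                int equal = row.Count(s => s == row[j]);
                rankSums[j] += less + (equal + 1) / 2.0;
            }
        }

        double sumSq = rankSums.Sum(r => r * r);
        return 12.0 / (n * k * (k + 1)) * sumSq - 3.0 * n * (k + 1);
    }
}
```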
5 Experiment 2
In Experiment 2, participants were asked to memorize the Japanese translations of English words and were then given a free recall test. The memorization time was 5 min, and the location was one of the rooms in the laboratory. No writing utensils were used during memorization in any of the environments, to match the conditions.
5.1 Experiment 2.1
Conditions of the Experiment. The experiment was conducted with 14 subjects (10 men and 4 women) aged between 19 and 21 years (M = 20.1). Before the experiment, the participants were given a questionnaire asking them to rate their VR experience and their ability to remember English words on a 10-point Likert scale. In Experiment 2.1, memorization was performed in three different environments. The first was a laboratory room, as shown in Fig. 8. The second and third memorization sessions were performed in virtual space: in a virtual space imitating a classroom, as shown in Fig. 9, and in a space with only a screen, as shown in Fig. 10. A free recall test was administered and scored after memorization was completed. After each memorization session and free recall test in each environment, participants were asked to complete a questionnaire to assess their ability to concentrate. To reduce the influence of the order of the environments, the participants were divided into six order groups. Three of the nine lists of English words, lists 1 to 3, were used for memorization.
Fig. 8. Laboratory
Fig. 9. Classroom Space
Fig. 10. Black Space
Result of Experiment. The results of the free recall test in this experiment and the results of the questionnaire are presented in Figs. 11 and 12.
Fig. 11. Test Results
Fig. 12. Concentration
Table 3 summarizes the means and standard deviations of the free recall tests conducted in the three environments. A Friedman test was performed on these results and confirmed a statistically significant difference in the medians of the three conditions (p < 0.05).

Table 3. Results of tests on virtual spaces with different environments

                                   Mean  Median  Standard Deviation
Laboratory room in real space      23.6  24.5    4.5
Classroom in virtual space         22.4  23      4.6
Virtual space with nothing around  19.5  19      5.19

Table 4 summarizes the means and standard deviations of perceived concentration in each environment. A Friedman test was performed on these results and confirmed a statistically significant difference in the medians of the three conditions (p < 0.05).

Table 4. Concentration on a virtual space with different environments

                                   Mean  Median  Standard Deviation
Laboratory room in real space      7.9   8       1.7
Classroom in virtual space         6.6   6.5     2.0
Virtual space with nothing around  6.3   6       2.5

Discussion. In this experiment, we investigated the differences between the environments during the memorization of English words. The results of the Friedman test showed that there was a difference in the free recall scores across the three environments. Comparing the medians and means, the test results are
better in the order of the real room, the virtual space imitating a classroom, and the black virtual space. This may be because recall did not work well in the virtual spaces due to the excitement associated with VR. In particular, we thought that in the black room, the subjects' sense of anxiety might be increased by the black enclosed space around them, and as a result, their recall performance might be reduced.

5.2 Experiment 2.2
Conditions of the Experiment. The experiment was conducted with 10 subjects (8 men and 2 women) aged between 20 and 24 years (M = 22.3), a different group from Experiment 2.1. As in Experiment 2.1, we administered a preexperiment questionnaire asking participants to rate their experience with VR and their ability to remember English words on a 10-point Likert scale. Participants were asked to memorize words in three different virtual spaces for 5 min each, after which a free recall test was performed in the lab. The first environment is the room shown in Fig. 13, which is the same as the third room in Experiment 2.1. For the other two rooms, the background color was changed to red and blue, respectively; these rooms are shown in Figs. 14 and 15. As in Experiment 2.1, scoring and a questionnaire survey were conducted after the memorization and free recall tests in each of the environments. To reduce the influence of the order of the environments, the order was divided into six groups. The lists of English words to be memorized, lists 4 to 6, were different from those used in Experiment 2.1.
Fig. 13. Black Space
Fig. 14. Red Space (Color figure online)
Fig. 15. Blue Space (Color figure online)
Result of Experiment. The results of the free recall test in this experiment and the results of the questionnaire are presented in Figs. 16 and 17.
Fig. 16. Test Results
Fig. 17. Concentration
Table 5 summarizes the means and standard deviations of the free recall tests conducted in the three virtual space environments. A Friedman test was performed on the results of the experiment and confirmed a statistically significant difference among the three conditions (p < 0.05).

Table 5. Results of tests on virtual spaces with different environments

                     Mean  Median  Standard Deviation
Black Virtual Space  19.4  20.5    5.1
Red Virtual Space    21.6  22      5.5
Blue Virtual Space   23.5  26      5.1
Table 6 summarizes the means and standard deviations of the perceived concentration in each virtual space. A Friedman test was performed on the experimental results and did not confirm a statistically significant difference in the medians of the three conditions (p > 0.05).

Table 6. Concentration on a virtual space with different environments

                     Mean  Median  Standard Deviation
Black Virtual Space  6.2   6       1.9
Red Virtual Space    6.2   6.5     1.9
Blue Virtual Space   7.0   7       1.7
Discussion. In this experiment, we investigated the effect of color differences in the environment during the memorization of English words. The results of the Friedman test indicate that there is a difference between the three color environments in terms of the test scores for the memorized English words. Comparing the medians and means, the test scores are better in the order of the blue, red, and black
virtual spaces. No significant differences were found in perceived concentration across the environments. This suggests that the colors of the environment affected only memory. In real rooms, a blue room has a calming effect on the mind, while a red room increases the level of excitement; this effect seems to carry over to the virtual space, which suggests why the blue virtual space may have supported memorization better than the red virtual space.
6 Conclusion and Future Prospects

6.1 Conclusion
In the future, lecture videos will increasingly be projected in virtual spaces, and more lectures will be delivered remotely. Most current research focuses on whether lectures and meetings can be held in virtual space as well as they can in real space. On the other hand, there have been few studies on what kind of virtual space people can concentrate better in, and such studies are very important. In this study, we focused on the difference between environments in virtual space. An experiment was conducted to determine whether the environment affects lecture-based learning through video lectures and self-study-based learning through word memorization. As a result, in lecture-type learning with video lectures, there were no differences due to the presence or absence of avatars or to differences in the environment. On the other hand, in self-study-type learning through word memorization, we showed that the surrounding physical environment and the surrounding colors in the virtual space affected the learning effect.

6.2 Future Prospects
Due to several limitations, future work remains to be done on this topic; these issues give directions for future research. In the first study, there was only one type of avatar, and the avatar could only perform sitting actions; it would be possible to investigate how the results change with more types of avatars and avatar movements. In terms of color comparison, only three colors were used in this experiment, but by examining other colors, it would be possible to determine which aspects of the colors are more influential. One of the advantages of VR is that the environment can be easily changed, and there are many factors in the environment. In this study, we conducted an experiment on the physical environment and color. However, there are many other environmental factors, such as the size of the room and the distance to the screen. Such changes can be easily made in a virtual space, and experiments focusing on these points should be possible. Through such studies, it will be possible to investigate which elements of the environment influence learning.
Perceived Effects of Mixed Reality in Distance Learning for the Mining Education Sector Stefan Thurner1(B) , Sandra Schön1 , Martin Ebner1 , Philipp Leitner1 , and Lea Daling2 1 Educational Technology, Graz University of Technology, Graz, Austria
[email protected] 2 Institute of Information Management in Mechanical Engineering, RWTH Aachen University,
Aachen, Germany
Abstract. Mixed reality as a tool for teaching has so far made only limited use of its possibilities. However, it brings a plethora of new opportunities, with benefits ranging from interactivity to more vividness. These factors could improve numerous areas of teaching. The mining sector in particular would benefit from new methods combined with mixed reality. Therefore, the MiReBooks project was launched: various applications have been developed that can vividly present content using 3D models, virtual field trips, and other methods. To verify and further improve these tools, an evaluation phase was conducted. During two test lectures in distance learning, a total of 23 participants answered a posttest questionnaire. The results showed that teaching quality could be maintained well by the mixed reality application even in distance learning. Students were satisfied with the methods used, attributed good usability to the tool, and felt integrated into the classroom. At the same time, the team realized that the quality of the lesson depends heavily on the quality of the materials and the expertise of the lecturer. It also became clear that other factors, such as the technical infrastructure and support, are particularly important in this format.

Keywords: Mixed Reality · Evaluation for Mixed Reality · User Evaluation · Evaluation Methods
1 Introduction

Technology-enhanced learning has attracted increasing scholarly interest in the recent past. This also includes the use of Mixed Reality (MR). Virtual and augmented reality bring numerous possibilities and potential for new types of teaching. Various studies have shown possible applications in the past [1]. These differ in scope and format, which is why precise planning is necessary to determine which tools, settings, and formats are helpful for the respective learning context. Especially within the mining sector, new technologies for teaching could help to overcome recent challenges. Research shows that there have been massive changes in this area in recent years. In addition to the representation of complex processes using two-dimensional learning material [2], new challenges have emerged [3]. In many countries, for example, unprofitable mining
operations had to be closed. At the same time, more and more state-owned enterprises were privatized. One of the big problems is also the social acceptance of the sector itself, because there is a disparity between the rising demand for many materials and mining education becoming less and less attractive in the eyes of many students [4]. Wagner [3] speaks of the mining sector being perceived as a “dangerous and environmentally damaging low-technology industry.” To tackle these challenges, a framework of tools, texts and applications was developed within the course of the MiReBooks project. These should assist lecturers and students and help to improve the image of the mining sector as well as the overall learning experience [5]. In this work, a three-step study is described, which was carried out within the project. Two test lectures with 23 volunteers, answering a post-test questionnaire, were held in a distance learning format. The research question was: “Can the developed tool and MR in general meet the challenges of the mining industry education sector while maintaining their quality in a remote setting?”.
2 The MiReBooks Project

The European Institute of Innovation & Technology (EIT) Raw Materials initiated the MiReBooks project in 2018. As mentioned above, the focus was on improving the learning experience and the overall image of the mining education sector. Within the international project, multiple Mixed Reality tools were designed, developed, and evaluated in an iterative process. The experience from these phases was also used to create a foundation of additional learning materials such as handbooks [6].

2.1 Mixed Reality in Education

Within the last years, MR technologies have brought change to different sectors and processes. Research shows that there are more and more projects that try to integrate these tools into the educational sector [7]. The benefits of Mixed Reality as a teaching tool have been researched and discussed in a plethora of articles. The technology can help to make learning content more accessible and better illustrated. It can also help in developing new forms of communication, collaboration, and problem solving [8, 9]. Granic et al. [10] state that it can increase the motivation of learners, and according to Mellet-d'Huart [11], MR could also be a chance to improve distance learning. However, many lecturers still have little experience with using Mixed Reality, and universities have yet to learn how to integrate it comprehensively [1].

2.2 Developed VR-Tools

As mentioned, multiple MR tools have been developed within the scope of the MiReBooks project. This includes AR software as well as different applications for immersive VR experiences. For this work, the focus will be on the Virtual Reality part of the framework, which is accessible through a desktop version for the lecturer and a version for head-mounted displays (HMDs), in this case the Oculus Quest 2.
The application can be used to display 3D models, 360° videos or a combination of both. One use case is the visualization of mining equipment and machines as shown in Fig. 1. The students see a model of the corresponding machine, while the lecturer can explain several details. The view of the students can also be altered by changing the focus, the viewing direction or even animating the model. Alternatively, the lecturer can give each participant free access to their viewports. This allows the students to rotate or pan the model and inspect the illustrated machines further on their own.
Fig. 1. The lecturer guides a group of students through a VR-session of visualizing concepts along a 3D model of a machine.
The second main function of the application is virtual field trips. By displaying 360° videos of mining sites, a group of students can explore different scenarios relevant to mining education. The lecturer can guide the group through this experience by adding notifications and markers in real time or by focusing the line of sight on specific parts of the video. In some cases, it is also possible to seamlessly alternate between 360° videos and the corresponding 3D models. For curating and designing content within this framework, an authoring tool as well as a guiding handbook have also been developed.
3 Research Design

During the runtime of the project, there were multiple evaluation steps in an iterative manner. The study described in this article was carried out through two test lectures in a distance learning setup. The test lectures were 60 min each and differed in their topic. The first lecture was held on 19.07.2021 and discussed Underground Longwall Mining, while the second was about Continuous Surface Mining and took place on 23.07.2021. Overall, 23 participants took part in this process, recruited among students from RWTH Aachen University, Montanuniversität Leoben, and the Freiberg University of Mining and Technology. 13 students participated in the first lecture, which was held by a lecturer from RWTH Aachen University, and 10 students visited the second lecture, held by a lecturer of the Freiberg University of Mining and Technology. Four of these participants took part in both lectures. The course of this evaluation phase can be seen in Fig. 2.
Fig. 2. Graphical illustration of each step within this evaluation phase.
As mentioned before, the test lectures were held in a distance learning setup. The lecturer invited each participant and additional observers to an online video chatroom through Zoom. However, each lecture was designed so that it could also be transferred easily to regular presence teaching. The lectures themselves included a mixture of frontal teaching of basic concepts along slides and immersive phases with the HMDs, where the participants were able to inspect 3D models or take a guided virtual field trip to a mining site.
Fig. 3. Controls within the MiReBooks application with the Oculus Quest 2 HMD (MRE – Institute of Mineral Resources Engineering. RWTH Aachen University)
For the technical setup, Meta Oculus Quest 2 HMDs were used because these standalone devices are lightweight and relatively affordable while including all features needed for the MiReBooks software. The desktop version allowed the lecturer to guide the students through the immersive parts of the lecture while they were connected with each other through the headsets. Both lectures had a similar course of events. First, the lecturer gave a short technical instruction for the immersive parts of the lesson. The controlling scheme can be seen in Fig. 3.
Fig. 4. Virtual field trip to a mining site, while the lecturer uses onscreen notifications.
During the main part of each lecture, the lecturer started with an explanation of basic concepts through screensharing and slides. To enhance the knowledge of the students, they switched to VR multiple times, as marked by a special notification on the slides. Here, the students could inspect 3D models or experience a virtual trip to a mining site, as seen in Fig. 4. During these parts, the lecturer used the didactic tools of the desktop application to guide the students' views or add more information through onscreen notifications. At the end of each lecture, there was an informal evaluation of learning goals through Mentimeter, in which the students answered simple questions about the topic of each lecture. As a last step, students and lecturers were asked to fill in an online questionnaire. A schematic illustration of the time schedule of each lecture can be seen in Fig. 5. More details of this process can be found in Daling et al. [12]. With this method, the research question "How suitable do the participants consider Mixed Reality in general and the developed software in particular for the usage in mining education?" was investigated.
Fig. 5. Schematic overview of both test lectures.
3.1 Online Questionnaire

To inspect the impact of MR as a tool in this distance learning setting, several aspects were examined: the perceived usability, the user experience, and the suitability of MR as a tool for remote teaching. For this step, an online questionnaire was designed, consisting of open and closed questions based on established methods such as the System Usability Scale [13], the Technology Acceptance Model [14], and the iGroup Presence Questionnaire (http://www.igroup.org/pq/ipq/index.php). More details about this part of the evaluation can be found in Daling et al. [12]. For the evaluation described in this article, the existing questionnaire by Daling et al. [12] was expanded with further questions to examine the topics in focus and aspects that had received little consideration thus far. In this process, eight closed and three open questions were designed for the students. In addition, the lecturers received five more open questions. The closed questions could be answered along a five-point Likert scale. The specifically designed questions are listed below.

For the students, a group of closed questions was designed that could be answered along a five-point Likert scale:

• I had the feeling that I did not miss out on anything working remotely.
• The distance learning setup did make my learning experience worse.
• My concentration was high during the lecture.
• I lost track during the lesson and could not follow along sometimes.
• I think my motivation would not differ from presence teaching when using the MR tools remotely.
• I felt isolated and not actively integrated in the lecture.
• I had enough guidance so that I could follow along well.
• More guidance through the lecture would have been better for me.

In addition to these questions, there were also three open questions for the learners:

• Would you like to try out the system in class too?
• Did you encounter any problems while using the tool?
• Did anything influence your concentration or motivation during the lecture while using VR?

Lastly, there was a group of open questions specifically designed for the lecturers:

• How do you think you could use this tool in your lectures in the future?
• Which challenges and obstacles did you encounter during these test lectures?
• Which aspect of the tool did you like the most and why?
• Which aspect of the tool did you dislike the most and why?
• Which chances do you see in this technology?
The data analysis was performed using Microsoft Excel. The closed questions were analyzed statistically, while the open questions were coded and analyzed using the methodology of qualitative content analysis according to Kuckartz [15]. The analysis was roughly based on the knowledge gained in Daling et al. [12]. The open questions were coded and sorted into different categories.
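For reference, the SUS values reported in Sect. 4 follow the standard scoring rule of the System Usability Scale [13]; a brief sketch of that rule is shown below (the function name and the example responses are ours, not data from the study).

```python
def sus_score(responses):
    """Score one completed SUS questionnaire.

    responses: ten answers on the 1-5 scale, item 1 first.
    Odd-numbered items contribute (answer - 1), even-numbered
    items (5 - answer); the sum is scaled by 2.5 to 0-100.
    """
    assert len(responses) == 10
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

# Hypothetical participant with a strongly positive rating pattern:
print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # -> 87.5
```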
4 Results

Overall, both test lectures were received positively by the participants and lecturers. They praised the possibilities of the application and the many features it offers for their learning progress. As depicted in Daling et al. [12], the study showed high usability (M = 83.70, SD = 12.64) and good suitability (M = 4.65, SD = 0.63), further underpinned by the results of the open questions. The informal evaluation via Mentimeter also showed good results, as the students were able to understand the content of each lecture rather well. However, this was an unstructured survey outside the regular evaluation process, so these results are not discussed further.

4.1 Answers to the Closed Questions

Participants mostly indicated that they did not feel they had missed out on important details as a result of the distance learning setting. Both lectures had high scores. No participant answered that they strongly agreed when asked if they had the feeling of missing out on something. For lecture 1, nine students, and for lecture 2, five students expressed overall disagreement, as shown in Fig. 6; no answer gave strong agreement, and only one student agreed. Overall, both units were received quite similarly.
Fig. 6. Results of the question about missing out due to the distance learning setting (N1 = 13, N2 = 10)
In the second question, participants gave feedback on whether they saw a disadvantage in the remote teaching format. Again, the students were very positive about this, with more than 50% agreeing strongly in both cases; strong disagreement was not chosen in either lecture. As shown in Fig. 7, in both lectures about half of the students answered that they had not perceived any negative impact on their learning experience. However, there were individuals who appeared to feel disadvantaged by the format.
Fig. 7. Answers to if distance learning have worsened the learning experience (N1 = 13, N2 = 10).
The next question was about concentration during the lecture. The results again showed high values. The most prominent category was general agreement in both cases, with strong agreement the second most chosen answer in both lectures; strong disagreement was not chosen. This is illustrated in Fig. 8. Some participants stated that they had occasionally lost focus during the lecture. However, Fig. 9 shows that the majority could follow well and that there were only isolated cases: while both lectures showed strong agreement, with more than half choosing this answer, one student in lecture 1 strongly agreed, and one student in lecture 2 agreed.
Fig. 8. Question about the concentration during the lecture (N1 = 13, N2 = 10)
Question 5 showed far more distributed answers. Students gave answers across all categories. Figure 10 illustrates the larger distribution of responses for both lectures. This divided opinion will be explored further during the explanation of the open questions.
Fig. 9. Responses about whether students had lost their focus or had difficulty following along (N1 = 13, N2 = 10).
Fig. 10. Question about perceived motivation in the distance learning format (N1 = 13, N2 = 10).
In question 6, participants gave feedback on the feeling of being isolated. In the first test lecture, 10 out of 13 students answered that they did not feel isolated at all; in lecture 2, it was 5 out of 10. In both cases, there was one subject who felt at least partially isolated, as illustrated in Fig. 11. This part of the questionnaire was closed with two questions about guidance and assistance. Both lectures showed that students were satisfied with the support provided. Lecture 1 showed general agreement, with all students giving positive feedback, while in lecture 2 three participants were neutral about this aspect. At the same time, most students did not feel they needed more guidance; only one student in lecture 1 and three students in lecture 2 expressed general agreement. These questions are illustrated in Figs. 12 and 13.
Fig. 11. This question dealt with the perceived feeling of isolation (N1 = 13, N2 = 10).
Fig. 12. Did students have enough guidance through the lectures? (N1 = 13, N2 = 10).
Fig. 13. Perceived need of guidance through the lecture (N1 = 13, N2 = 10).
4.2 Answers to the Open Questions

The answers to the open questions further validated the insights gained through the closed questions. Overall, the participants praised the possibilities of the software, especially during the immersive parts of the lectures. One student answered: "Certainly, it can be very helpful in gaining knowledge about the hidden side of machinery and operational aspects of mining, plus it can enable a virtual mine visit with enough vision of many details." The participants liked the visualization of abstract processes, and it was mentioned that this technology is a great way to give a first insight in introductory lectures on certain topics. On the other side, the participants also gave feedback on the pain points and challenges of using the MiReBooks application. Based on the answers to the open questions, three main categories of problems were formulated: bugs and technical issues, challenges caused by the VR technology itself, and, lastly, didactic problems caused by planning and instructions.

Most answers about problems could be attributed to the first category. Students gave feedback that the stereoscopic view was irritating. Others said that their positioning during the VR experience felt off. One of the participants even said that they could not follow along very well because of the wrong visualization of graphics and notifications. A subcategory of bugs and technical issues was the stability of the software. Feedback was given that the application sometimes crashed or lost connection to the Wi-Fi signal, thus leading to an interruption of the learning experience. As expected, multiple participants complained about their well-being during or after the usage of HMDs. It was noted that the interface itself was too bright or had a problematic contrast, as mentioned by one student: "Very high brightness and white backgrounds irritated my eyes". This led to headache or motion sickness in some instances. For the last category, multiple problematic aspects of the general design of the lectures were formulated. One challenge was the constant change between slides on the monitor and the immersive experience with HMDs. A participant answered: "It would have been helpful to keep the slide with the instructions on how to use the tool up during using it (functions of the different buttons on the controls)". It showed that students had problems memorizing the functionality of the software without having a guideline visible. Improvements between the two lectures also delivered results, as it was noted that the removal of the stereoscopic view was received positively.

The participants further answered that multiple things challenged their concentration. For instance: "In the 360° part. The overview of the system is maybe too bright (white background) because of the constantly switches between underground and the white overview". Some also had problems orienting themselves during the immersive experience: "It sometimes was a bit confusing when something was drawn on the screen because at first I didn't really know where it is and had to search it". Others said that the lack of seeing the real world, and noise coming from there, irritated them and aggravated their learning process. Lastly, the design of the Oculus Quest headset itself was perceived as problematic by some users: "My nose is too small, so the glasses do not fully cover my face".
Concerning the answers of the lecturers, the application was perceived as a good introduction: "For some introduction sessions on various mining methods. Specific contents and calculations still require more classic tools, but for a good realistic introduction
it can be really helpful". Both professors praised the annotation function as a very helpful tool, especially when answering questions that arise during the immersive part of the lecture. One lecturer mentioned the required effort for the usage in other subjects as a possible disadvantage, since the content must be created first. However, the same person also mentioned the advantage of being able to easily integrate the software into lessons as long as content is available. Another big challenge mentioned was classroom management. One of the lecturers replied: "Sometimes I was not sure if everyone is following me, even though there are green lights at the bottom indicated. It is something one needs to get used to". Despite these challenges, the two instructors were very positive about the application and would like to integrate it into their regular teaching. One presenter also saw the software beyond university education: "It should be integrated into the education process at a larger scope. Not only at universities, but it also can be interesting for raw material professionals to train the staff".
5 Discussion

The results of this evaluation phase were very positive overall. The feedback of the participants showed the big potential of the application as well as the main challenges when using it. Daling et al. [12] already demonstrated the positive effects of the developed tools, with promising results for the suitability of MR as a distance learning tool. The results of the study presented in this article further validated these findings and delivered additional feedback. Compared to the results of the study by Daling et al. [6] from 2020, most of the positive findings could be confirmed, and known challenges were further investigated. It appeared that students found the distance learning format, in combination with the MR experience, to be enriching. Student feedback suggests that the quality of the lecture could be maintained in a remote format. However, the data also show that this cannot be stated without reservations. Some participants stated that they had problems with the special design of the units. These statements can mostly be attributed to the problems addressed by the open questions. Besides points of irritation due to the design, such as too high contrast, or the general experience in VR, the technical stability was criticized by individuals. Nevertheless, only a few students stated that they had technical problems during the unit; however, it was these problems that seemed to affect the lesson the most. For example, it was those participants whose application had crashed or was disturbed by unstable Wi-Fi signals who gave lower marks in the closed questions. The general orientation and some details of the implementation, such as the field of view, also apparently had an effect on these scores. Here, the students' answers match the answers of the lecturers, who especially mentioned classroom management as one of the biggest challenges in working with the software. During the units, the observations of the study participants also showed that many had to familiarize themselves with movement in the VR room before following along with the lecture. It was observed several times that individuals had to turn around again and again in order to be able to follow what was happening. This difficulty was exacerbated by the distance learning format and
the minor technical problems that were encountered in isolated cases. This again shows how volatile the learning situation can be in such a setting. While similar problems can be solved much faster and more easily in face-to-face classes, they mark one of the biggest challenges in MR distance learning. However, the results also show that students could follow along with the lecture very well most of the time. They also felt well integrated into the group. Most of the disagreement concerned motivation. There were participants who would not prefer presence over remote teaching in this format. At the same time, some students said that their motivation would have been even higher in a traditional lecture. This could be attributed to multiple reasons; for instance, the spatial imagination of the individual learners can influence the overall learning success [16]. As shown before, some participants had more difficulties orienting themselves in the VR room. This makes it clear that the tool should not be seen as a general substitute for conventional face-to-face teaching. It can also be noted that the need for additional guidance strongly differs between students, which means that improvements should be made for accessibility. Overall, the study showed very positive tendencies for the tested application and methods, and besides the mentioned challenges, it can be said that the quality of the lecture is highly dependent on the quality of the VR content.

5.1 Limitations

Although an initial survey was conducted in an earlier phase of the project, it used a very early version of the software, and its study design cannot be compared with the evaluation discussed here. Thus, a potential pretest was omitted, and the findings are based on the results of the posttest only. The comparison of the two lectures should also be viewed with a grain of salt: although both took place in a very similar setting, the units differed in terms of the lecturers, the students, and their topics. Furthermore, it must be noted that the samples of 13 and 10 learners, respectively, were quite small. The selection of test persons was also not completely randomized and was based on volunteers who already had experience with, or at least interest in, Mixed Reality. In addition, due to an error in the evaluation process, personal data was not collected; the project team tried to collect it afterwards, but due to the low response rate this attempt was abandoned. Lastly, the specific situation in which the survey took place must also be mentioned. In the summer of 2021, extensive restrictions were still in place due to the ongoing pandemic. This created unique factors in the creation and implementation of the units: students had already taken many distance education courses during this second year of Covid-19, so on the one hand they had more experience with remote formats, but on the other hand they had missed being present at the university. To this end, there have already been other test lectures within the project in a similar format; however, their results were not discussed in this article.
6 Conclusion and Outlook

This article explored a study that showed the promising possibilities of MR technologies as a learning tool. It also showed that the technology and the developed application are suitable for remote teaching. Through the evaluation of two similar test lectures, participants gave feedback on the chances and challenges of using this tool and the technology in general. The units themselves were received well and revealed the strengths and weaknesses of the system. Many aspects could subsequently be further validated through the survey data. The additional knowledge, combined with existing results from former studies, also brought numerous new insights and findings. One of the main findings was that the technology is able to support the learning experience in a remote setting for the mining education sector. Moreover, the results suggest that the tool can help to address some general challenges within this sector. Participants were also positive about the quality of their learning experience when the lecture was transferred to a distance learning session. Therefore, the research question "How suitable do the participants consider Mixed Reality in general and the developed software in particular for the usage in mining education?" could be answered in a positive way. Overall, students and lecturers found the technology very suitable for usage in the classroom. Especially in the context of the test lectures and their subjects, the results and statements attested to the good suitability and advantages for both sides. However, there were also some new challenges and concerns. The time required for content creation was one of the main factors: the planning of a VR unit is associated with considerable additional work. In addition, the quality of the unit depends even more directly on the quality of the materials. The technical aspect also proved to be a challenge. The learning situation thus has the chance to add value, but at the same time it is very volatile. Future studies could focus on further validating these findings with a bigger sample size or transfer the technology to other topics or completely different areas. To gain more insight into the capabilities of MR when transferring material into a remote format, there could be a direct comparison with traditional lectures or mixed settings. Finally, future studies could use different evaluation tools and methods to explore completely different aspects.

Acknowledgements. This work is part of the project "Mixed Reality Books (MiReBooks)" and was funded by the EIT Raw Materials. The authors are responsible for the contents of this publication.
References

1. Kommetter, C., Ebner, M.: A pedagogical framework for mixed reality in classrooms based on a literature review. In: Theo Bastiaens, J. (ed.) Proceedings of EdMedia + Innovate Learning, pp. 901–911. Association for the Advancement of Computing in Education (AACE), Amsterdam, Netherlands (2019)
2. Kalkofen, D., Mori, S., Ladinig, T., Daling, L.: Tools for teaching mining students in virtual reality based on 360° video experiences. In: IEEE VR Fifth Workshop on K-12+ Embodied Learning through Virtual & Augmented Reality, Atlanta, USA (2020). https://doi.org/10.1109/VRW50115.2020.00096
3. Wagner, H.: How to address the crisis of mining engineering education in the western world? Mineral Resour. Eng. 8(4), 471–481 (1999)
4. Galvin, J., Roxborough, F.: Mining engineering education in the 21st century – will universities still be relevant? In: The AusIMM Annual Conference, Ballarat (1997)
5. Thurner, S., Daling, L., Ebner, M., Ebner, M., Schön, S.: Evaluation design for learning with mixed reality in mining education based on a literature review. In: Zaphiris, P., Ioannou, A. (eds.) HCII 2021. LNCS, vol. 12785, pp. 313–325. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77943-6_21
6. Daling, L., Kommetter, C., Abdelrazeq, A., Ebner, M., Ebner, M.: Mixed reality books: applying augmented and virtual reality in mining engineering education. In: Geroimenko, V. (ed.) Augmented Reality in Education. SSCC, pp. 185–195. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-42156-4_10
7. Dede, C.J., Jacobson, J., Richards, J.: Introduction: virtual, augmented, and mixed realities in education. In: Liu, D., Dede, C., Huang, R., Richards, J. (eds.) Virtual, Augmented, and Mixed Realities in Education. SCI, pp. 1–16. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-5490-7_1
8. Sternig, C., Spitzer, M., Ebner, M.: Learning in a virtual environment: implementation and evaluation of a VR math-game. In: Virtual and Augmented Reality: Concepts, Methodologies, Tools, and Applications, pp. 1288–1312. IGI Global (2018). https://doi.org/10.4018/978-1-5225-5469-1.ch062
9. Schiffeler, N., Stehling, V., Haberstroh, M., Isenhardt, I.: Collaborative augmented reality in engineering education. In: Auer, M.E., Ram B., K. (eds.) REV2019 2019. LNNS, vol. 80, pp. 719–732. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-23162-0_65
10. Granic, A., Nakic, J., Marangunic, N.: Scenario-based group usability testing as a mixed methods approach to the evaluation of three-dimensional virtual-learning environments. J. Educ. Comput. Res. 58(3), 616–639 (2020)
11. Mellet-d'Huart, D.: Virtual reality for training and lifelong learning. Themes Sci. Technol. Educ. 2(1–2), 185–224 (2012)
12. Daling, L., et al.: Evaluation of mixed reality technologies in remote teaching. In: Zaphiris, P., Ioannou, A. (eds.) HCII 2022, vol. 13329, pp. 24–37. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-05675-8_3
13. Brooke, J.: SUS: a quick and dirty usability scale. Usability Eval. Ind. 189(3) (1996)
14. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
15. Kuckartz, U.: Qualitative Inhaltsanalyse: Methoden, Praxis, Computerunterstützung (Qualitative content analysis: methods, practice, computer support), 3rd edn. Beltz, Weinheim Basel (2016)
16. Huk, T.: Who benefits from learning with 3D models? The case of spatial ability. J. Comput. Assist. Learn. 22(6), 392–404 (2006). https://doi.org/10.1111/j.1365-2729.2006.00180.x
Developing an Augmented Reality-Based Interactive Learning System with Real-Time Location and Motion Tracking Ching-Yun Yu1 , Jung Hyup Kim2(B) , Sara Mostowfi2 , Fang Wang3 , Danielle Oprean4 , and Kangwon Seo2 1 Department of Electrical Engineering and Computer Science, University of Missouri,
Columbia, MO 65201, USA [email protected] 2 Department of Industrial and Manufacturing Systems Engineering, University of Missouri, Columbia, MO 65201, USA {kijung,seoka}@missouri.edu, [email protected] 3 Department of Engineering and Information Technology, University of Missouri, Columbia, MO 65201, USA [email protected] 4 Info Science and Learning Technology Department, University of Missouri, Columbia, MO, USA [email protected]
Abstract. This study aims to develop an interactive learning solution for engineering education by combining augmented reality (AR), Near-Field Electromagnetic Ranging (NFER), and motion capture technologies. We built an instructional system that integrates AR devices and real-time positioning sensors to improve the interactive experience of learners in an immersive learning environment, while the motion, eye-tracking, and location-tracking data collected from the devices worn by learners enable instructors to understand their learning patterns. To test the usability of the system, two AR-based lectures were developed with different difficulty levels (Lecture 1 - Easy vs. Lecture 2 - Hard), and the System Usability Scale (SUS) was collected from thirty participants. We did not observe a significant usability difference between Lecture 1 and Lecture 2. Through the experiment, we demonstrated the robustness of this AR learning system and its unique promise in integrating AR teaching with other technologies.

Keywords: Augmented Reality · Real-time Tracking · Motion Capture
1 Introduction

Augmented reality (AR) is an interactive experience that combines the physical world with computer-generated elements. Researchers have demonstrated the effectiveness and benefits of applying AR to education [1–6]. Unlike previous AR studies in learning, we focus more on integrating various technologies to enhance the usability and capability of
AR educational platforms. Some researchers state that learners may have an immersive feeling and a tangible connection in AR environments because hand gestures contribute to their ongoing cognitive processes [7]. Microsoft HoloLens 2, a self-contained holographic projection headset, has been shown to be an excellent AR device [8]. Many applications for educational AR environments have been developed using HoloLens. However, due to the variety of human gestures, it is challenging for HoloLens to accurately recognize unique hand gestures to trigger specific functions. In addition, performing basic "air clicking" on virtual buttons can be difficult for users with no experience using AR devices. Users often touch the surface of the button rather than pressing it down enough to successfully enable the click function. These problems can easily frustrate learners during the learning process and cause them to lose confidence in the AR learning system and become reluctant to use it again in the future. To overcome these challenges, we proposed a comprehensive learning solution to enhance the interaction with AR environments. Some research indicated that location-based AR applications could make learners feel immersed in the learning process [9]. Hence, we designed and developed a way of interacting with the AR system through users' location by tracking real-time movement [10]. To accurately collect indoor location-tracking data, we used an advanced indoor real-time positioning technology, Near-Field Electromagnetic Ranging (NFER), which shows an average range error as low as 34.0 cm for receivers in positioning [11]. We integrated the Q-Track NFER system [12] with HoloLens so that users could easily navigate multiple AR instructional modules by moving their position. Furthermore, we had the participants wear Xsens motion capture sensors [13] to capture their body movement data during the experiment. We analyzed these motion data to identify unique hand gestures that users would not normally make and integrated the motion capture sensors with the AR system through the Xsens software development kit (SDK). This enabled us to develop the interactive capability necessary to recognize these gestures for triggering specific functions. We also collected two other types of data during the experiment to help us understand the learning patterns of the participants so that we can improve teaching quality and even predict learning behavior. First, we collected eye-tracking data from the participants and recorded their field of view with HoloLens to assess their visual attention. By analyzing these data, we could find out whether the participants' eyes were following the instructions of the virtual instructor, i.e., whether they were distracted during the learning process. Second, the Q-Track real-time positioning sensor worn by the participants provides records of their movements during the experiment. To evaluate learning outcomes, we asked participants multiple questions after viewing each AR instructional module. At the end of the experiment, we also provided feedback questionnaires for the participants to fill out to gather possible improvements to the system.
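One simple way to screen such gaze logs for distraction is to compute the fraction of samples in which the logged gaze target matches the object the virtual instructor is currently referring to. The sketch below assumes a CSV log with 'timestamp' and 'target' columns and a scripted lecture timeline; both the column names and the object names are illustrative assumptions, not the actual HoloLens output format.

```python
import csv

def attention_ratio(gaze_log_path, script):
    """Fraction of gaze samples that land on the currently expected object.

    script: list of (start_s, end_s, expected_target) tuples describing
            which virtual object the instructor refers to at each time.
    """
    hits = total = 0
    with open(gaze_log_path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["timestamp"])
            for start, end, expected in script:
                if start <= t < end:
                    total += 1
                    hits += row["target"] == expected
                    break
    return hits / total if total else 0.0

# Example: the instructor points at the formula panel for the first 30 s,
# then at the avatar panel (hypothetical object names).
lecture_script = [(0, 30, "FormulaPanel"), (30, 60, "AvatarPanel")]
```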
2 Methods

2.1 System Design and Development

We conducted a quasi-experimental intervention design to test the usability and learning performance within our interactive learning solution for engineering education. To begin, we designed two ergonomics lectures based on the instructional materials compiled by a professor who has been teaching in the field of engineering education for over ten years. We built fifteen 3D scenes using Unity to represent the two lectures: the first lecture contains seven scenes, and the second contains eight. To create an immersive learning experience, we placed a large semicircular blackboard in each scene, consisting of five smaller connected panels (Fig. 1). This design makes it easy and comfortable for users to view and interact with the virtual space used for the lecture when they stand in the center of the scene and face forward. These five panels display figures, a human avatar, formula calculations, problem statements, and tables of figures.
Fig. 1. The immersive learning environment.
For complex 3D models and animations, we used Autodesk 3ds Max to create and export them as Filmbox (FBX) files and then imported them into the Unity game engine. To guide the users' gaze during the learning process, we created a 3D animated virtual instructor to simulate the real-world scenario of a professor in class (Fig. 2). For the virtual instructor's voice, we used Murf AI, a high-quality speech generation software, to generate natural-sounding AI voices based on our input scripts. The Xsens motion capture system was used to create realistic character movement simulating the professor's body movements in class. Finally, the virtual instructor's animation, voice, and the panel display of lecture contents are synchronized to provide a coherent and easy-to-follow virtual lecture.
Fig. 2. The 3D scene of AR lecture 2–6 built with Unity.
We used Microsoft HoloLens 2 as the projection device for the AR environments. Microsoft Mixed Reality Toolkit 3 (MRTK3), a mixed reality development framework, was used for the input system and for the building blocks for spatial interactions and UI. In addition, we used the eye gaze data provider of MRTK to collect eye-tracking data from the participants, including timestamps, the names and coordinates of the virtual objects touched by the participants' gaze, and the linear distances between the coordinates of the participants' eyes and the coordinates of the virtual objects. If the participants' gaze did not intersect with any virtual object at a given time, no data was collected for that moment. It is worth noting that each AR scene generates an eye-tracking data file for each run: if the participants watched a scene several times, multiple data files for that scene were generated. In other words, even if the researchers were not present during the experiment, we could still clearly identify which participants watched which scenes repeatedly.

To analyze the participants' learning behavior, we divided each data file into the data in class and the data when answering questions after class. By visualizing the in-class data on a two-dimensional plane, we could easily determine whether the participants were distracted during the learning process. Moreover, analyzing whether the participants' gaze followed the instructions of the virtual instructor holds unique promise for discovering learning patterns and predicting learning behavior. To validate the eye-tracking data, we also used the video capture function of HoloLens to record the field of view of the participants during the experiment.

We exported each of the Unity scenes built separately as a Visual Studio solution for the Universal Windows Platform. After pairing Visual Studio with HoloLens over Wi-Fi, we deployed these solutions to HoloLens, creating fifteen AR applications. To allow users to easily navigate through these AR applications based on their positions, an accurate and fast indoor tracking technology that could be integrated with the AR system was needed. The Q-Track NFER system met these needs. It consists of four components: the router, the locator receiver, the real-time positioning sensor, and the real-time positioning software (Fig. 3). Since the system uses a Transmission Control Protocol (TCP) socket-based protocol, after receiving the location signal sent by the real-time positioning sensor using NFER technology, the locator receiver transmits the information through the router using Wi-Fi to the real-time tracking software running on TCP port 15752 on a laptop. We developed a client program in C# based on the Application Programming Interface (API) of the Q-Track NFER system to determine which
We developed a client program in C# based on the Application Programming Interface (API) of the Q-Track NFER system to determine which AR scene should be triggered by the received location coordinates. Since we divided the experiment site into seven areas for lecture one and eight areas for lecture two, the client program could easily determine which area the current location belongs to based on the pre-defined boundaries, and then open the Microsoft Windows Device Portal for HoloLens through the browser automation tool Selenium [14] to run the corresponding AR application and project the scene onto the HoloLens.
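A minimal sketch of such a client is given below. It is an assumption-laden illustration, not the actual Q-Track API: we pretend the tracking software streams plain "x,y" coordinate lines as text over the TCP socket and that the lecture areas are axis-aligned rectangles.

    using System;
    using System.IO;
    using System.Net.Sockets;

    // Hypothetical sketch of the location client: reads positions streamed by
    // the real-time tracking software and maps each position to a pre-defined area.
    class LocationClient
    {
        // Axis-aligned rectangular area, one per AR scene.
        record Area(int Id, double MinX, double MaxX, double MinY, double MaxY)
        {
            public bool Contains(double x, double y) =>
                x >= MinX && x <= MaxX && y >= MinY && y <= MaxY;
        }

        static void Main()
        {
            Area[] areas =
            {
                new(1, 0, 2, 0, 2),  // illustrative boundaries in meters
                new(2, 2, 4, 0, 2),
                // ... one entry per lecture area
            };

            using var client = new TcpClient("localhost", 15752); // tracking software port
            using var reader = new StreamReader(client.GetStream());

            string? line;
            while ((line = reader.ReadLine()) != null) // assumed "x,y" text messages
            {
                var parts = line.Split(',');
                double x = double.Parse(parts[0]), y = double.Parse(parts[1]);
                foreach (var area in areas)
                {
                    if (area.Contains(x, y))
                    {
                        // In the real system, this branch would drive Selenium to open
                        // the Windows Device Portal and launch the matching application.
                        Console.WriteLine($"Trigger AR scene for area {area.Id}");
                    }
                }
            }
        }
    }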
Fig. 3. System architecture and components.
2.2 System Deployment and Experiment Setup

We recruited thirty undergraduate engineering students from the University of Missouri, enrolled in the same engineering course in which the redesigned lectures would be given. We designed a comprehensive learning process and conducted a pilot study to test the robustness of the developed AR instructional system. Because only a few participants had experience with AR devices, we provided a ten-minute training session, explaining in detail how to wear and calibrate the HoloLens and the real-time tracking sensors, the display of the AR environments, ways to navigate through multiple AR scenes, and how to answer the assessment questions after viewing each module. After making sure that the participants had no questions, we equipped them with the HoloLens and sensors (Fig. 4). Regarding the order of wearing, we first put the HoloLens on the participants and had them perform eye calibration, following the audio instructions from the HoloLens and looking in each designated direction, to ensure the accuracy of the collected eye-tracking data. We then attached Q-Track real-time positioning sensors to the participants' waists to record their movements during the learning process; the collected data can be played back and exported with the Q-Track real-time positioning software to analyze the participants' learning patterns. Finally, we attached Xsens motion capture sensors to the participants to accurately collect their body movement data as the basis for developing gesture recognition capabilities. Since we were only interested in upper-body motion, we used sensors for eleven body parts, i.e., head, sternum, pelvis, right shoulder, left shoulder, right upper
arm, right forearm, left forearm, right hand, and left hand. Participants were asked to stand and move around following the voice instructions of the Xsens MVN software to calibrate the sensors and ensure the accuracy of the collected data. Using the MVN software, the recorded motion data could be saved as a 3D avatar simulation video, or each sensor's direction, position, velocity, acceleration, joint angle, and center of mass could be exported as an Excel file. With these details, we used JMP statistical software to identify distinctive hand gestures that participants would not make incidentally during the learning process, and we used the Xsens SDK to develop programs that recognize these gestures and trigger specific functions integrated into the AR system.
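As a toy illustration of the kind of recognizer this enables (our sketch, not code built on the Xsens SDK), the following detects a deliberate "hand raised above head" trigger from exported per-frame vertical positions of the head and right-hand sensors:

    using System;

    // Toy gesture detector operating on exported per-frame sensor data,
    // assuming vertical (z) positions in meters for head and hand sensors.
    static class GestureDetector
    {
        // True when the hand stays clearly above the head for a minimum
        // number of consecutive frames, so brief crossings do not trigger it.
        public static bool HandRaised(double[] headZ, double[] handZ,
                                      double margin = 0.10, int minFrames = 30)
        {
            int run = 0;
            for (int i = 0; i < Math.Min(headZ.Length, handZ.Length); i++)
            {
                run = handZ[i] > headZ[i] + margin ? run + 1 : 0;
                if (run >= minFrames) return true; // about 0.5 s at 60 Hz
            }
            return false;
        }
    }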
Fig. 4. A student participant wearing HoloLens and sensors in the testing area.
After the participants put on the required equipment, they could start the AR lessons. We divided the experimental area into seven and eight areas based on the two sets of AR lectures containing seven and eight scenes, respectively, and marked an X at a specific point in each area (Fig. 5). We prepared a table equipped with a Q-Track sensor on which participants filled in the quiz sheet; the table also served as a navigator for switching AR scenes. The client program we developed immediately received the position information through the locator receiver as soon as the participants moved the table onto the X mark in a certain area illustrated in Fig. 5. After determining the area, it operated the Windows Device Portal to run the corresponding AR application and project the scene onto the HoloLens. We did not use the movement of the Q-Track real-time positioning sensor attached to the participants for switching AR scenes, because the areas we delineated were not as large as the developed AR scenes: as participants moved around to explore the AR environment, they might walk into other areas, causing the AR scene to be switched unintentionally. Hence, we used the table with its Q-Track real-time positioning sensor as the basis for positioning; the table had to remain on the X mark during the class and was moved to the next X mark only when the participants wanted to switch scenes. Additionally, since the projection position of the AR scene on HoloLens depends on the initial gaze of the participant, the scene would be projected to a position that is difficult to view if a participant happened to look at the ceiling or floor. Therefore, we marked each area with a number on the corresponding wall, and participants were asked to stare at the number marker until they saw the AR scene before looking away. At the end of each
AR scene, the virtual instructor asked the participants to fill out the quiz sheet on the table, which included a question assessing learning outcomes and a short quiz for that lesson, before moving to the next area.
Fig. 5. Experiment setup conditions.
The two lectures were about ergonomics, covering biomechanics, static equilibrium, human body diagrams, and center-of-mass calculation. We differentiated them in terms of knowledge difficulty. Lecture 1 focused mostly on declarative information, i.e., basic concepts with definitions that were easier to understand (Easy Level), while Lecture 2 involved more procedural knowledge in the form of complex formula calculations that depended on knowing the previous declarative information (Hard Level). By determining whether the difficulty of the lectures affected the usability of the AR instructional system, we could reasonably demonstrate the system's robustness. The first metric of our study was the result of a formative assessment following each Unity scene, using one to two multiple-choice quiz questions. For our second metric, we asked participants to fill out the System Usability Scale (SUS) [15] after completing both Lectures 1 and 2 with the AR instructional system. Several characteristics made the use of SUS attractive. First, it can evaluate almost any type of user interface. Second, it has only ten questions, reducing the amount of fatigue experienced by participants. Third, it yields a single score from zero to one hundred, which is easy for researchers from different disciplines to understand. Fourth, it is non-proprietary and cost-effective to use. The SUS consists of ten statements, with even-numbered items worded negatively and odd-numbered items worded positively.

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.
Each statement is answered on a five-point scale (strongly disagree, disagree, neutral, agree, strongly agree), with score contributions ranging from zero to four. The score contribution for positively-worded items (1, 3, 5, 7, and 9) is the scale position minus one, and the score contribution for negatively-worded items (2, 4, 6, 8, and 10) is five minus the scale position. The overall SUS score is obtained by adding these contributions and multiplying the result by 2.5.
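As a concrete illustration of this scoring rule (our sketch, not code from the study), the computation can be written as follows:

    using System;

    static class Sus
    {
        // Computes the overall SUS score from ten responses on a 1-5 scale
        // (1 = strongly disagree ... 5 = strongly agree), following the
        // scoring rule described above.
        public static double Score(int[] responses) // responses[0] = statement 1
        {
            if (responses.Length != 10)
                throw new ArgumentException("SUS requires exactly ten responses.");

            int sum = 0;
            for (int i = 0; i < 10; i++)
            {
                bool positive = i % 2 == 0;         // statements 1, 3, 5, 7, 9
                sum += positive ? responses[i] - 1  // scale position minus one
                                : 5 - responses[i]; // five minus the scale position
            }
            return sum * 2.5;                       // overall score from 0 to 100
        }
    }

For example, a respondent who answers "agree" (scale position 4) on every statement contributes 3 per positive item and 1 per negative item, giving (5 × 3 + 5 × 1) × 2.5 = 50.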
3 Results

3.1 Demographics of the Participants

For both Lecture 1 and Lecture 2, all thirty undergraduate engineering students we recruited completed the full experimental process, from training, equipment donning and calibration, and watching the AR modules and answering questions, to filling out the questionnaires. From the basic information provided by these participants, we compiled demographics to help contextualize the learning performance results (Table 1).

Table 1. Demographics of the thirty participants we recruited.

              Age                            Gender
  Mean   Std Dev   Median   Mode        Male   Female        N
  21     1.3       21       21          25     5             30
3.2 Learning Performance

To compare the learning performance between Lecture 1 (Easy) and Lecture 2 (Hard), we averaged the scores of the thirty participants in both lectures. The average score for Lecture 1 is 95.71, and the average score for Lecture 2 is 81.43. We also used a t-test to compare the two lectures (Table 2). There is a significant performance difference between Lecture 1 and Lecture 2 (Fig. 6).

3.3 System Usability Scale (SUS)

We calculated the SUS scores of the thirty participants for Lecture 1 (Easy) and Lecture 2 (Hard) respectively. The results show that the average SUS score for Lecture 1 is 66.75 and for Lecture 2 is 68.17 (Table 3). There is no significant difference between Lecture 1 and Lecture 2 (Fig. 7). We can conclude that the difficulty of the lectures did not affect the usability of the AR instructional system.
Table 2. Statistical results of learning performance between the two levels.

  Difficulty Level   N    Mean    SD
  Easy               30   95.71   7.64
  Hard               30   81.43   14.11

  Lower 95%   Upper 95%   T-Statistic   P-Value
  91.57       99.86       4.88          < 0.05

Then, the test subjects completed the questionnaire in a unified classroom, and the whole test process was kept within 40 min. The test process is shown in Fig. 5.
Fig. 5. Test process
4.2 MF Test

MF Theory. MF theory refers to an individual's concentration on an activity and the pleasure experienced from it, and it is one of the theoretical frameworks for studying individuals' continued use of new media tools. At its heart is the MF experience, the overall feeling of being immersed in an activity. As an experience with an active purpose, the MF experience can stimulate people to participate in activities spontaneously and repeatedly and to produce positive results. The MF experience is the optimal experience for
users, so applying MF theory to educational game testing has certain research significance. In summary, this paper sets questions covering three aspects – the conditions, the experience itself, and the results of the MF experience – to investigate whether players have an MF experience when operating science education games, and whether it promotes players' continuous learning.

Test Subjects and Questionnaire. The test subjects in this case were 70 students at a primary school in Hangzhou, including 35 boys (50%) and 35 girls (50%), aged between 12 and 13 years. Prior to the test, the 70 students were surveyed about their learning style preferences and their basic information (gender, age, grade, knowledge of VR equipment and graphic instruction, etc.). In the MF test, the professional MF questionnaire developed by Pearce et al. [14] was used to score participants' participation, enjoyment and sense of control on a five-point Likert scale. Workshop members translated and adjusted the questionnaire according to the actual needs; the reliability of the revised questionnaire was 0.91, indicating high reliability and validity. The pre-test and post-test knowledge questions were all drawn from teaching materials jointly developed by second-stage workshop members and researchers in related fields. After repeated testing and improvement, ten multiple-choice questions of varying difficulty were finally selected for the pre-test and post-test. The preference statistics, MF questionnaires, and knowledge pre- and post-test questionnaires were distributed and collected on the "Wenxing" platform, and SPSS data analysis software was then used for statistics and analysis of the collected data.

Test Process. To reduce the subjects' tension, the researchers gave a preliminary explanation before the experiment so that the subjects fully understood its process. The subjects were asked to rate the learning methods (VR equipment versus pictures and texts) according to their preference, so that user preferences could be analyzed. Before the MF test, basic information on all subjects was collected and a knowledge pre-test was conducted. The pre-test results showed no significant individual differences among the 70 students (p = 0.407 > 0.05). The students (N = 70) were then randomly assigned to an experimental group (N = 35) and a control group (N = 35). At the beginning of the test, the experimental group wore headsets and studied in a VR scene, while members of the control group learned by browsing a graphic-and-text version of the material. After the test, all students immediately answered the MF questionnaire and the knowledge post-test questionnaire. The whole MF test process was kept within 65 min. See Fig. 6 for the test process.
Fig. 6. Test process
5 Data Analysis and Statistics

5.1 Participative Test Result Analysis

Effectiveness Analysis of the Participatory Method. SPSS was used to conduct an independent-samples t-test on the effectiveness results of the PD method for the two groups of students. As shown in Tables 1 and 2, the average score of the experimental group (M = 85) was higher than that of the control group (M = 75.5), and there was a significant, substantial difference between the two groups in academic achievement (t = −5.02, p = 0.000*** < 0.05). It can be seen that science education games based on the PD method have a positive impact on students' acquisition of marine knowledge.
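As a sanity check of ours (not shown in the paper), the reported t value follows from the standard two-sample statistic applied to the values in Table 1:

\[ t = \frac{\bar{x}_{ctrl} - \bar{x}_{exp}}{\sqrt{s_{ctrl}^{2}/n_{ctrl} + s_{exp}^{2}/n_{exp}}} = \frac{75.50 - 85.00}{\sqrt{2.838^{2}/10 + 5.270^{2}/10}} \approx \frac{-9.50}{1.89} \approx -5.02 \]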
Table 1. Results

  Group                    N    Mean    SD      SEM
  The control group        10   75.50   2.838   0.898
  The experimental group   10   85.00   5.270   1.667
Table 2. Results of the t-test on the effectiveness of the participatory method

                                          Levene's test           t-test for equality of means
                                          F       Sig.     t       df      Sig.    Mean diff.   Std. error   95% CI lower   95% CI upper
  Results   Equal variances assumed       3.348   0.084    −5.02   18      0.000   −9.50        1.89         −13.48         −5.52
            Equal variances not assumed                    −5.02   13.82   0.000   −9.50        1.89         −13.57         −5.44
5.2 Analysis of MF Test Results

Statistics of Learning Style Preference. The scores corresponding to the different learning styles were sorted and recorded; the lower the score, the lower the users' preference, and the higher the score, the higher the preference. The resulting distribution of learning style preference (Table 3) shows that the subjects clearly preferred learning based on VR devices.

MF Experience Analysis. An independent-samples t-test was conducted on the MF test results of the two groups of students using SPSS. As shown in Tables 4 and 5, the mean MF experience value of the experimental group (M = 4.19) was higher than that of the control group (M = 3.27), and there was a significant, substantial difference between the two groups in MF experience (t = 6.75, p = 0.000*** < 0.05). It can be seen that different learning styles affect students' MF experience: VR science education games have a positive impact on students' MF experience, and VR-based immersive learning can improve students' MF experience in the process of learning marine knowledge.

Analysis of Academic Performance. SPSS was used to conduct an independent-samples t-test on the pre-test results of the two groups of students. As shown in Table 6, the mean score and standard deviation of the experimental group were M = 33.14, SD = 9.93, and those of the control group were M = 34.86, SD = 7.02. There was no significant difference between the two groups (t = 0.83, p = 0.108 > 0.05).
Table 3. Learning style preference statistics

  Learning style        Preference score
  VR device learning    4.53
  Graphic learning      3.73
Table 4. MF value results

         Group                    N    Mean   SD     SEM
  FLOW   The control group        35   3.27   0.54   0.09
         The experimental group   35   4.19   0.61   0.10
It can be seen that there is no difference in the knowledge reserve of the 70 students before this test activity. SPSS was then used to conduct an independent-samples t-test and a one-way analysis of variance on the post-test results of the two groups. The t-test results (see Table 6) showed that the mean post-test score of the experimental group (M = 90.86, SD = 9.51) was higher than that of the control group (M = 74.86, SD = 7.43). There was a significant difference between the two groups in post-test results (t = 7.85, p = 0.000*** < 0.05), and the effect size (Cohen's d) was 1.88, which is very large. The one-way analysis of variance showed a significance p-value of 0.78, greater than 0.05, satisfying homogeneity of variance. It can be seen that different learning styles affect students' academic performance. This VR science education game has a positive impact on students' understanding of marine knowledge and can encourage students to learn more marine knowledge independently (Table 6).
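As a check of ours (the paper reports only the value), the stated effect size follows from the equal-sample-size form of Cohen's d applied to the post-test means and standard deviations in Table 6:

\[ d = \frac{\bar{x}_{exp} - \bar{x}_{ctrl}}{\sqrt{(s_{exp}^{2} + s_{ctrl}^{2})/2}} = \frac{90.86 - 74.86}{\sqrt{(9.51^{2} + 7.43^{2})/2}} \approx \frac{16.00}{8.53} \approx 1.88 \]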
Table 5. T-test results of the MF questionnaire

                                          Levene's test           t-test for equality of means
                                          F       Sig.     t       df      Sig.    Mean diff.   Std. error   95% CI lower   95% CI upper
  FLOW      Equal variances assumed       0.290   0.592    −6.75   68      0.000   −0.92        0.14         −1.20          −0.65
            Equal variances not assumed                    −6.75   67.12   0.000   −0.92        0.14         −1.20          −0.65
Table 6. Statistical results of the knowledge test before and after

  Category    Group                    N    Mean    SD     t      F
  Pre-test    The experimental group   35   33.14   9.93   0.83   0.70
              The control group        35   34.86   7.02
  Post-test   The experimental group   35   90.86   9.51   7.85   61.56
              The control group        35   74.86   7.43
6 Conclusion

The PD method has by now been applied in many projects at home and abroad and has achieved excellent results. Users' good performance has shown that user-assisted design can improve design efficiency and user satisfaction, and timely user feedback can provide new ideas for design output and related theoretical research. This study establishes a VR science educational game design model through the collaboration enabled by PD's "workshop" design tool. The model is proposed to help design and develop scientific and entertaining science educational games, and to provide a theoretical basis and practical reference for the development of other educational games in the future. VR science education games designed jointly with students can accommodate students' individual differences, meet their diverse needs, and effectively mobilize their learning initiative. This helps alleviate problems such as students' lack of participation in the traditional teaching mode in our country, and educational games can also connect with science museums to attract more students to learn independently. VR technology and science education games combine teaching with digital technology to build an immersive virtual learning environment for students, who can fully grasp the knowledge through engaging interaction with the virtual environment.
By analyzing, with scientific measurement methods and SPSS software, the influence of VR science education games on students' MF experience and their understanding of marine knowledge, the results show that VR technology has a positive impact on students' MF experience, that VR-based immersive learning can improve the effectiveness of marine knowledge popularization, and that these findings contribute to promoting the application of VR technology in the field of education.
References

1. Men, L.: Participatory design method and model. Comput. Technol. Dev. (02), 163–166+170 (2006)
2. Large, A., Beheshti, J., Nesset, V., Bowler, L.: Designing web portals in intergenerational teams: two prototype portals for elementary school students. J. Am. Soc. Inf. Sci. Technol. 55(13), 1140–1154 (2004)
3. Xu, X., Xie, Q., Zhang, S.: Children's participatory intelligent product design method using kansei engineering. Packag. Eng. 40(18), 129–134 (2019)
4. Xu, X., Li, F., Yang, P.: Review of product interaction design based on flow theory. Packag. Eng. 41(24), 14–21
5. Zha, H., Yuan, X.: Study on user participation design of library space reengineering based on value co-creation – based on the case of McKeldin Library user participation design at the University of Maryland. New Century Library (12), 73–78 (2020)
6. Zhu, X.: Case study and inspiration of user participatory space and service design in American university libraries. Books Inf. (05), 87–93 (2018)
7. Zhuo, T.: Research on user participatory design of smart home products. Ind. Des. (11), 30–31 (2017)
8. Spinuzzi, C.: The methodology of participatory design. Tech. Commun. 52(2) (2005)
9. Fang, F.: Theoretical basis and application model of educational games. Shanghai Jiao Tong University (2007)
10. Zhang, L., Hu, M., Shang, J.: Research on the qualitative analysis of gamified learning experience. China Distance Educ. (03), 35–41+80–81 (2020)
11. Wang, C., Li, H., Shang, J.: Application and development prospect of educational games based on virtual reality and augmented reality. China E-Educ. (08), 99–107 (2017)
12. Liu, D., Liu, X., Zhang, Y., Lu, A., Huang, R.: Potential, progress and challenge of virtual reality technology application in education. Open Educ. Res. 22(04), 25–31 (2016)
13. Chan, J.C.P., Leung, H., Tang, J.K.T., Komura, T.: A virtual reality dance training system using motion capture technology. IEEE Trans. Learn. Technol. 4(2), 187–195 (2011)
14. Pearce, J.M., Ainley, M., Howard, S.: The ebb and flow of online learning. Comput. Hum. Behav. 21(5), 745–771 (2005)
15. Liu, R., Ren, Y.: Study on flow experience and empathic effect in immersive virtual environment. Res. Audio-Vis. Educ. 40(04), 99–105 (2019). (in Chinese)
16. Zhang, Z.: New paradigm of furniture design research under the background of big data: methods, tools and paths. Furniture Interior Decoration (5) (2021)
17. Chen, N., Zhang, Z., Dai, Y.: Research on the design of home nursing beds for the aged based on INPD and entropy weight method. Packag. Eng. 1–14
18. Zhang, Z., Du, W., Ding, W.: Research on intergenerational shared furniture from the perspective of future community. Packag. Eng. 43(S1), 122–127 (2022)
19. Wen, D., Zhang, Z.: Research on furniture design strategy of young people living alone based on KANO and TRIZ theory. Furniture Interior Decoration 29(03), 5–10. (in Chinese)
Learning with Robots
Collaborative Learning with Social Robots – Reflections on the Novel Co-learning Concepts Robocamp and Robotour

Aino Ahtinen(B), Aparajita Chowdhury, Nasim Beheshtian, Valentina Ramirez Millan, and Chia-Hsin Wu

Tampere University, 33100 Tampere, Finland
{aino.ahtinen,aparajita.chowdhury,nasim.beheshtian,valentina.ramirezmillan,chia-hsin.wu}@tuni.fi
Abstract. This article presents the Robostudio space for collaborative learning (co-learning) around and with social robots, together with two novel co-learning concepts. The first concept, Robocamp, is a home-based one-month learning model for family members' co-learning with a robot: a social robot is loaned to families, and weekly hands-on tasks to be conducted with the robot are provided. The second concept, a co-learning workshop called Robotour, takes place in Robostudio; there, university students and primary school pupils together gain an understanding of different aspects of social robots. Both concepts aim to foster cooperation between different learner groups and to increase their knowledge about social robots, interaction with them, and how to operate or program them. We present the current state of the art in the area of educational robots, as well as initial evaluations of our concepts with the authentic target groups, and we reflect on the concepts in light of their benefits and potential. According to the evaluation findings, Robocamp provided an encouraging environment that allowed all the participating family members to take part in the collaborative activity and to think critically about the limitations of the robots. Robotour successfully raised different learner groups' curiosity towards robots, which further enhanced their creativity. The knowledge in this article can be utilized by researchers, designers, and teachers who are interested in the development and implementation of co-learning activities around and with social robots.

Keywords: Social Robots · Educational Robots · Co-learning · Co-learning Space
1 Introduction

In this article, we explore and reflect on collaborative learning with social robots. Collaborative learning (co-learning) refers to a learning model in which learners learn and engage actively together (Yuen et al. 2014). Our emphasis is on co-learning in which humans from different learner groups learn together while utilizing social robots
as tools or learning platforms. Learning with social robots is multidimensional. In addition to human-robot interaction (HRI) and social robotics, we can also learn many other skills, such as creativity and teamwork (Yuen et al. 2014; Kandlhofer and Steinbauer 2016; Arís and Orcos 2019; Ryokai et al. 2009). Generally, robotic projects benefit from the multiple backgrounds and abilities of the team members (Yuen et al. 2014), which makes robots a potential platform to learn with. Social robots can also act as efficient icebreakers between learners (Chowdhury et al. 2020; Sun et al. 2017; Beheshtian et al. 2020), which can make them beneficial tools to support the team formation phase as well.

Co-learning with robots has started to gain interest (e.g., Relkin et al. 2020; Govind et al. 2020; Eck et al. 2014; Chung and Santos 2018; Ahtinen et al. 2023). However, prior work mainly focuses on constructing and programming a robot by utilizing robotic toolkits, not social robots. Social robots are interesting platforms because they have some human-like features, they are interactive and multimodal, and they can have several different form factors, roles and tasks.

Robostudio¹ is a co-learning space that was established at Tampere University in Finland in spring 2022. Robostudio owns several types of social robots, such as Pepper², QTRobot³, Nao⁴, Cozmo⁵, Alpha Mini⁶, and Alpha 1E⁷. There are also some non-humanoid robots, such as Temi⁸ and Spot⁹. Knowing how to interact with and operate social robots is related to technological literacy, i.e., the knowledge and understanding of what a robot is, how robots work, what they look like and what they do (Jäggle et al. 2019; Kandlhofer et al. 2019). As Björling and Rose (2019) aptly describe: "Many North American teens are immersed in technology from the time they were born and are most likely to have long-lasting relationships with robotic technologies in their future work, education, and home settings" (p. 2). We assume that this statement can be applied to many other nationalities as well.

The fundamental idea of Robostudio is to promote knowledge of robots from multiple viewpoints by utilizing robots and co-learning between different learner groups, e.g., university students, school pupils and seniors. For example, school pupils and university students can join a robotic design project together, and both parties can bring in their own perspective for learning based on their interests, skills and worldviews. In addition to learning about HRI and social robotics, these shared projects aim at enhancing knowledge regarding different people and cultures, teamwork, creativity and problem solving. A regular annual activity in Robostudio is the "User Experience in Robotics" course, a master's level hands-on project course about social robotics. As part of the project, the university students organize workshops with school pupils. There, the pupils learn about robots, while the students learn about facilitation and
get design-oriented knowledge for their robotic design projects. Additionally, and as the focus of this article, we have developed two novel co-learning concepts in Robostudio. The contribution of this article lies in the presentation of these two co-learning concepts and their evaluation results. The first concept, Robocamp, is a home-based learning model in which a social robot is loaned to families for one month and the family members act as co-learners. Our second concept, Robotour, brings university students and some other learner group, for example children or seniors, together in a co-learning workshop. This learning model includes co-learning stations, which are designed and facilitated by the university students. In this paper, we describe initial findings from the evaluations of these co-learning concepts based on two explorative field studies and qualitative data collection. Based on the findings and related work, we reflect on the benefits and potential of co-learning between different learner groups with social robots.

1 https://www.tuni.fi/en/research/robostudio
2 https://www.aldebaran.com/en/pepper
3 https://luxai.com/robot-for-teaching-children-with-autism-at-home/
4 https://www.aldebaran.com/en/nao
5 https://www.digitaldreamlabs.com/pages/cozmo
6 https://www.ubtrobot.com/AlphaMini/index.aspx
7 https://www.ubtrobot.com/Alpha1E/index.aspx
8 https://www.robotemi.com/
9 https://www.bostondynamics.com/products/spot
2 Related Work

This related work section focuses on two areas. First, in Sect. 2.1, we describe how and why social robots are currently utilized for educational purposes and what the special characteristics of educational robots are; this provides a good grounding in the benefits of social robots in education. Second, we present research on co-learning with robots (Sects. 2.2 and 2.3), which summarizes what is already known about using robots as tools for co-learning.

2.1 Educational Robots and Their Benefits

Robots can be considered an essential area to learn about in contemporary life (Kandlhofer et al. 2019). Social robots are autonomous or semi-autonomous machines that have the ability to interact and communicate with human beings and obey the behavioral norms set by humans (Bartneck and Forlizzi 2004). Interacting with and operating social robots is an essential skill for humans in the future, as social robots are entering many parts of life, such as education (e.g., Van den Berghe et al. 2019; Belpaeme et al. 2018; Ahtinen and Kaipainen 2020), customer service (e.g., Zalama et al. 2014; Stock and Merkle 2018; Aaltonen et al. 2017) and healthcare (Yang et al. 2017; Dawe et al. 2019; Cifuentes 2020). Human-robot interaction (HRI) is a research field that is "dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" (Goodrich and Schultz 2008, p. 1). Under HRI, a specific branch studies child-robot interaction (CRI), focusing on children as the users of robots (e.g., Salter et al. 2008). Robots are typically motivational educational tools for children (Yuen et al. 2014; Petre and Price 2004) and can raise curiosity and creativity in learners (Zawieska and Duffy 2015). They can be utilized in various educational areas, such as STEM (science, technology, engineering and mathematics) (Anwar et al. 2019; Jung and Wong 2018), languages (Van den Berghe et al. 2019; Belpaeme et al. 2018), and the learning of general soft skills such as teamwork (Yuen et al. 2014; Kandlhofer and Steinbauer 2016; Arís and Orcos 2019). Alimisis and Kynigos (2009) explain that robotic activities in education can be divided into two broader categories: robotics as a learning object and robotics as
a learning tool. Robotics as a learning object refers to education where the learners learn about robotics by working with robots. Robotics as a learning tool, on the other hand, means that robots are used to assist the learning of some other subject, e.g., language or mathematics; in that case, we often speak of robot-assisted learning. Social robots have an ability to raise children's curiosity towards learning, as noticed in many studies, e.g., Han et al. (2005), Ahtinen and Kaipainen (2020), and Zawieska and Duffy (2015). There are many reasons why children typically consider robots interesting and motivational (Tanaka et al. 2015; Zaga et al. 2015), among them their playfulness and their ability to give feedback and be reactive (Ahtinen and Kaipainen 2020). Yuen et al. (2014) mention that robots provide concrete, authentic, accessible and motivating learning experiences for children. Social robots are designed to be human-like, social and physically present (Belpaeme et al. 2018; Leite et al. 2014). In addition to verbal interaction, social robots have abilities for non-verbal communication, e.g., using gestures, expressions, movements and proximity, thanks to their physical embodiment. Physical embodiment increases the feeling of social presence and thus improves multimodal communication, perceived trust, pleasurable experiences, attractiveness and the perception of how helpful the robot is (Deng et al. 2019). The physical embodiment of the robot also enables movements and expressions to be designed for it. For example, De Wit et al. (2018) found a higher level of engagement during children's learning activities when the robot used gestures. Likewise, Leite et al. (2014) found that the use of facial expressions on robots had a positive impact on long-term interaction with robots.

2.2 Co-learning About Robots in Families

Robot-related learning projects are mostly diverse, and they have the potential to bring people with different interests, ideas and skills to work together (Yuen et al. 2014). For example, one project can include activities such as programming, constructing and designing (Yuen et al. 2014), so robotic projects can naturally enable collaboration between different learners and team members. In the collaborative robotics projects presented so far, group members have generally worked together on constructing and programming a robot by utilizing robotic toolkits (Yuen et al. 2014; Relkin et al. 2020; Eck et al. 2014). General characteristics of robotics projects are hands-on tasks and learners working with tangible objects, i.e., robots. Lately, the number of studies targeting family-based co-learning about robots and robotics has increased. Robotic projects benefit from the different interests and skills of family members – the family can form a multi-faceted team for a robotics project. Previous research in this area has mainly studied co-learning with robotic toolkits (i.e., not social robots), and how children and parents build and program them collaboratively, typically in informal workshop or camp settings (i.e., not at home). The InterActions project presented by Bers (2007) conducted a series of five workshops in which 4–7-year-old children and their parents were instructed to use a LEGO MINDSTORMS robotics kit to build a robotic project. The project explored the challenges and opportunities related to multigenerational learning experiences for children and parents, and found considerable potential in this type of cross-generational learning.
The study results indicated that both
parties gained knowledge of robot programming. Additionally, their confidence and competence regarding technology were enhanced. The families were given the opportunity to take the robotic kit home; however, the authors did not describe the actual learning experiences in home settings. Relkin et al. (2020) studied how parents supported their children's informal learning experiences with robots. The study invited children aged 5–7 years and their parents to participate in 1–2 h KIBO Family Day workshops. The KIBO robot used in the study is screen-free and can be programmed with wooden blocks. Families who participated in the workshops had the chance to get familiar with and interested in robotics through open-ended and collaborative approaches, and the workshops successfully raised families' curiosity about programming. The study summarized that during the activity, parents played the role of coaches, while kids engaged as playmates and planners. Eck et al. (2014) studied cross-generational learning about robotics during a scientific kindergarten experiment day, where the participating children visited several hands-on experiment stations with their grandparents. They recognized the value of discovery and experimentation in learning by stating that "in general the concept of discovering and experimenting represents a valuable pedagogical approach within the area of pre-school education, fostering the learning process of children in a holistic way" (p. 15). According to their findings, the cross-generational concept worked out well; some children were even motivated to build their own robots at home after the event by using, e.g., Lego Mindstorms. Yet another study, by Chung and Santos (2018), explored the Robofest Carnival, an informal learning program with multiple learning stations with robotic tasks. The parents were integrated into the program to manage the learning stations. The study showed that parents can enhance children's learning motivation and aspiration by providing instructions in the STEM challenges, and it concluded that parents indeed have a valuable role in robotic co-learning projects. We have started exploring family members' co-learning with social robots in the home context (Ahtinen et al. 2023). Based on our study with eight families, we have noticed that home is a unique context for co-learning. It provides freedom for all family members as learners to adopt a personal perspective for learning based on their interest, level of knowledge about robots, and willingness to learn. In addition, home feels like a safe space for learning, thus providing comfortable settings for learning together.

2.3 Co-learning Between School Children and Adults

Previous research has also explored collaborative robotic activities between children and adults who do not belong to the same family (Angel-Fernandez and Vincze 2018; Arnold et al. 2016; Wegner et al. 2013). Angel-Fernandez and Vincze (2018) conducted a study with children between 6 and 18 years old and two master's students to identify the significance of storytelling sessions with children. The master's students were responsible for running the workshops, in which the children learned about the robot and implemented a story for it. First, all children were introduced to the basic programming of the robot, and they carried out a short driving task with it, including activities like starting, stopping and avoiding obstacles.
After the initial introduction, the bigger group was divided into subgroups: the first group was responsible for creating the story and props,
the second group was responsible for implementing the story on the robot, and the third group was responsible for coordinating the work of the other two groups. According to the findings, although the young participants faced difficulty with programming and the adult participants faced difficulty in collaborating on the storytelling part, most of the participants strongly agreed that they learned best in teams and liked working with other people. In addition, more than half of the participants had fun working with the robots, and the authors mention that the participants seemed engaged during the activity. Arnold et al. (2016) conducted a study with children (6–11 years old) and researchers to explore the expectations and needs of children regarding a social robot as a friend. They conducted a four-stage study session. During the coming together, the researchers provided a space for the children to interact with adults to create openness and equality among children and adults. During circle time, the director of the session asked a "question of the day" (a question related to the design activity of the day) to get participants thinking about the design problem. During the design activity, groups of 2–3 children and 1–2 researchers were formed and provided with a bag of art supplies to build a low-fidelity prototype of their robot friend. The researchers collaborated with the children by asking questions about their design decisions and providing suggestions. During the presenting and wrapping up, the children presented their designs, and the director wrote down the presented ideas and features. According to the findings, the children preferred to work on individual designs rather than in groups; however, they enthusiastically explained the story behind their robot friend. The adults, in turn, gained a deep understanding of children's needs and wants towards the robot, which changed their existing perceptions of children's expectations. With the children as active designers, the adults had an opportunity to see the differences and similarities in children's design requirements.

2.4 Insights from the Related Work

Based on the related work, social robots seem to be a potential learning tool for various learners. They can raise learners' interest and curiosity, they are tangible and present in the physical space together with the learners, and they provide good opportunities for hands-on tasks around them. In addition, robots are useful in group-based learning, such as collaborative learning, because they seem to have a natural tendency to break the ice between people. Robotics projects also benefit from multidisciplinary teams including people from different disciplines and backgrounds, who bring in different skills, competences and perspectives. We therefore wanted to generate novel concepts for robotic co-learning that could be utilized by other researchers and instructors as well. Collaborative learning about robots, either at home or in co-learning spaces, is beneficial because, as robots enter many sectors of life, such as education, health care and customer service, knowledge about robots can become an essential skill for everybody.
3 Co-learning Concepts and Evaluation Methodology

We have developed two novel co-learning concepts in Robostudio, in which different learner groups learn about robots together. Next, we present these co-learning concepts and their evaluation methods.

3.1 Robocamp Co-learning and Its Evaluation

The first concept is called Robocamp. It was developed and evaluated during the COVID-19 pandemic in 2021. Robocamp is a home-based learning model where a social robot is loaned to a family's home for one month and the family members act as co-learners. Home can act as a comfortable learning space for different family members learning together. In Robocamp, we provide the families with instructions and weekly tasks around the robot, including, e.g., familiarizing themselves with the robot, basic programming of the robot, and designing applications for it. The idea is to collaboratively learn about robots and their programming, so that all family members can participate in the learning.

We evaluated the Robocamp co-learning concept with eight voluntary families in Finland in 2021. The trial lasted for four weeks, and during this period the families used the Alpha Mini robot and its block-based programming environment uCode. Alpha Mini was selected to represent social robots in Robocamp because it is interactive, easy to use and transport, and has a friendly appearance, making it well suited to children's education. We explained in the study introduction that Alpha Mini is just one example of a social robot. At the beginning of each week, the families received new hands-on tasks to be conducted with Alpha Mini. The tasks are described in Table 1.

The eight voluntary families comprised 32 participants: 16 adults and 16 children. Each family had two parents, while the number of children varied from one to three. All families had elementary-school-aged children (6–15), and four families also had younger children. The children in this study had an average age of 8.4 years.

The data was collected using online methods due to the pandemic situation. Two rounds of semi-structured online interviews were conducted with the families in Teams. The interviewer had a pre-defined script of open-ended questions, which was slightly adapted during the interview based on emerging themes and topics. The first interview focused on the family's experience of and interest in robots, initial experiences with Alpha Mini, and expectations towards Robocamp. At the end of the trial, a second interview was carried out to discuss the family's co-learning experiences throughout the one-month Robocamp. This interview included, e.g., the following themes: the family's co-learning experiences, collaboration between the family members, and benefits and challenges in learning. The participating families were also asked to keep an online diary on pre-defined themes on a Mural canvas (Mural) during the entire research period. The Mural included a structured diary template for each week to report, for instance, how their co-learning went, how the programming felt, and whether they faced any challenges. The Mural also had an open section for recording additional ideas, experiences, images, and screenshots of program code. There was a separate diary view for each week of the trial.
Table 1. The weekly topics and challenges of Robocamp.

  Week 1 – Familiarizing with Alpha Mini and other social robots:
    1) connecting Alpha Mini to the network and application, 2) exploring and trying out the skills and features of Alpha Mini, 3) searching for other social robots online

  Week 2 – Programming Alpha Mini:
    1) familiarizing with the programming environment and trying out some pre-defined programming tasks, e.g., Alpha Mini introducing itself, walking and doing yoga, 2) ideating own programs and programming them

  Week 3 – Social robot as an encourager for sustainable behavior:
    1) exploring the idea of a social robot as an encourager of sustainable behavior, 2) ideating and storyboarding the family's own scenarios for such a robot, 3) programming part of the idea on Alpha Mini

  Week 4 – Sustainability game design on a social robot:
    1) ideating a creative game for Alpha Mini to support sustainable behavior, 2) programming part of the game on Alpha Mini
The families recorded their diary on Mural with digital sticky notes. An inductive content analysis (Mayring 2000) was conducted on the qualitative data. The transcribed interview and diary data were coded in spreadsheets and grouped into emerging themes. The analysis resulted in more than 20 themes, for example, approachability of the robot, learner roles, limitations of the robot, the robot's embodiment, and family dynamics in learning. Some of the themes have already been reported elsewhere (Ahtinen et al. 2023); here we focus on the basic evaluation findings related to Robocamp.

3.2 Robotour Co-learning Workshop and Its Evaluation

Our second concept is called Robotour. There, university students and some other learner group, for example children or seniors, learn together. This learning model consists of co-learning stations facilitated by the university students. At each co-learning station, there is at least one robot and some collaborative learning tasks around it. The learning tasks can include, for example, programming, story creation, responding to a quiz, or doing physical exercises with the robot. In addition, we encourage facilitators to undertake data security measures for safe and responsible human-robot interaction, such as taking care of the audio and image data acquired and stored by the robots. These data security aspects are also learned and discussed at each learning station. An essential part of the co-learning station is reflective discussion
about robots and the learnings. In this learning model, the learners have the possibility to interact with several robots as they visit several stations. The basic structure of the workshops is the following: 1) welcome and intro, 2) hands-on learning activities with the robots, 3) wrap-up. We evaluated this co-learning model in spring 2022, when we arranged three workshop sessions with university students and school pupils from different levels (4th, 6th and 8th grade; age range of pupils 10–14 years). The workshops were arranged in the university's robotic co-learning space, Robostudio. The first workshop, with the 8th grade (9 pupils, 1 teacher, 6 university students), was 1.5 h long. The second workshop, with the 4th grade (15 pupils, 1 teacher, 10 university students), was 2 h long. The third workshop, with the 4th grade (18 pupils, 1 teacher, 10 university students), was similar to the second one. The research data was collected through hand-written observations during the workshops, the materials produced by the learners in the learning tasks, an audio-recorded focus group discussion with the university students after the workshop, essay texts written by the pupils as homework, and an audio-recorded semi-structured interview with the schoolteachers. The data was analyzed with qualitative content analysis (Mayring 2000), similarly to the Robocamp data. In this article, we report the findings from the school pupils' perspective based on the content analysis of their written essays, while the perspectives of the other parties will be reported elsewhere.
4 Findings

Here, we describe the evaluation findings concerning both co-learning concepts, Robocamp and Robotour.

4.1 Robocamp Findings

Co-learning Experience. Most of the participating families mentioned that co-learning with Alpha Mini was a positive experience. Family 1 (F1) described that the family collaborated in working with the robot; their children wanted to take Alpha Mini even on holiday trips. The siblings collaborated while exploring the robot: for example, the older child helped the younger one with the language of the robot, and the younger child explained some things about the robot to the older one. F2 described a collaborative activity within the family to get familiar with the robot's capabilities and limitations. Siblings collaborated in this family as well – the older sibling launched applications from Alpha Mini for his 3-year-old sibling. F3 considered the Robocamp activity to be "collaborative learning between parents and kids". F4 commented that the Robocamp activity was pleasant time spent together. F6 explained that the whole family collaborated to fix the robot's technical problems. F7 defined the robotic activity as a whole-family activity, and they only worked with the robot when all family members were present: "We had a big group on ideation with lots of ideas and we had the main programmer, who implemented the technical work." (Father, F7). In most families, busy schedules often prevented the whole family from attending the robotic learning time. However, all family members participated in the exploration
of the Alpha Mini, discussions about the robotic activities, and ideation of the programs. Sometimes, the family members collaborated in pairs. Thus, we can call Robocamp learning collaborative learning in which all the family members could participate based on their own willingness, interest and perspective.

Learning About the Limitations and Potentials. Another perspective on social robots raised during Robocamp concerned the robots' technical issues and limitations: "Somehow an engineer inside me arose, and it was a must to check for example the voice control in the robot." (Mother, F2). The families realized that quite a lot of technical development is still required for these robots to become more intelligent: "I did a couple of tasks [with Alpha Mini]. For me this appears to be a little bit like a toy. Maybe for me, a more interesting device would be more intelligent one, and not like a telephone's answering machine." (Father, F2). Some parents also questioned whether the enthusiasm towards the robot would last: "Still, I am a bit suspicious about for how long it [Alpha Mini] can raise interest and if it made sense to purchase one for us, but I believe there are some occasions when they are beneficial." (Mother, F3). Having the robot as part of life for a longer time also enabled the participants to figure out practical challenges, such as when to charge the robot for it to be ready for use, and where to store it. On the other hand, some parents who were critical towards the robots' current limitations still saw their potential for further development: "But I agree that there is a huge potential on them, and I can understand they are an important key forward. Time will show the direction, towards which they will be developed." (Father, F3). Some parents also recognized the role of social robots in society, especially their valuable role in education and learning. Some mothers (F2, F7 and F8) had noticed that the children were eager to listen to what Alpha Mini asked them to do: "They are quite happy with the robot, and they are already following what Mini can do, so I think it would add some value to teach something to the kids." (Mother, F8). Thus, having a social robot in children's education was considered beneficial and valuable.

4.2 Robotour Findings

Overall Robostudio Experience. Based on the essays, most pupils described the Robostudio workshop as a pleasant and positive experience. They described Robostudio as "nice", "interesting" and "fun". In their writings, pupils mentioned it was interesting to see a great variety of social robots and their capabilities. Overall, many pupils stated they were able to learn new things about robots and their features. The pupils also came up with good improvement ideas: for example, the interaction could last longer, and there could be more activities at each learning station. They wished for an opportunity to see how each robot is programmed and how they are built, and they were interested in having an opportunity to ask questions about robots. Some pupils wished to see more different robots, like industrial robots. Two pupils mentioned that they did not consider the Robostudio workshop a really good experience, and a few stated that it would have been more interesting if everybody could have seen all the robots.

Learnings About Robots. In the essays, pupils emphasized the variety of different robots they met and worked with: "There were many robots, and they all were very
different, and it was nice to see what they are capable of." (Pupil, 6th grade); "I find it interesting that there exists so many different kinds of robots!!" (Pupil, 4th grade). Overall, pupils found the robots interesting, and they liked working with them. The Clicbot and Pepper robots seemed to raise the most interest among the pupils, most probably because they were novel to them. About the Clicbot robot, pupils stated that they liked it because they were able to drive it by themselves and do a small transportation task with it. They also wrote that it was interesting to build different shapes for Clicbot, and its ability to climb on the glass wall was considered impressive: "All robots were very interesting, but above all was the robot, which could climb on the wall with small support." (Pupil, 8th grade). The size of the Pepper robot was liked, as well as its appearance, which was considered "very cute", "friendly" and "human-like". One pupil considered Pepper's body and eyes cute: "Pepper's hands were lovely! The body was cute, as well as the eyes. So handsome." (Pupil, 6th grade). Pupils got many insights and ideas related to the usage of robots; for example, they raised ideas about how robots could be used as part of learning, for everyday purposes, and for entertainment. One 4th grade pupil commented that social robots are our future. An 8th grade pupil wondered whether robots could help in learning complicated topics, such as physics and math. Another 8th grade pupil commented that even though robots are impressive, they do not reach the same level as in sci-fi movies. Some pupils mentioned that they learned about the programming of robots and their sensors.
5 Discussion

Robostudio is a novel space for collaborative learning with social robots, where the usual mode of work is for different learner groups to work together on robot-related hands-on tasks. Similar to the findings of Yuen et al. (2014), we have noticed that robot-related learning projects are diverse and have the potential to bring learners with different interests, ideas and skills to work together. Additionally, we have realized that working around social robots through co-learning activities can create positive team cohesion, since the hands-on approach and the tangibility of the objects naturally make the learners active in their tasks. Robots seem to have the ability to break the ice between learners, as also noticed previously by Chowdhury et al. (2020), Sun et al. (2017) and Beheshtian et al. (2020). This aspect can be helpful in the initial phase of co-learning, when the learners from different groups are forming the team and starting to collaborate.
Co-learning Workshop Robotour – Raising Curiosity Towards Robots. Co-learning workshops arranged in a co-learning space such as Robostudio are a good opportunity for school pupils to get to know different robots and try out interaction with them. In line with previous research (Yuen et al. 2014; Petre and Price 2004; Zawieska and Duffy 2015), we believe that learning with robots can enhance pupils’ sense of curiosity and creativity. The co-learning model provides opportunities for participation in tasks which may not be possible in everyday school settings. The model is responsible in the sense that data security aspects are taken care of by minimizing the robots’ data collection and storage. The model is responsible also from the perspective of participation, as it allows for opportunities to take the role of observer. This means that
the participants are not forced to do any activities with the robots; they can choose their level of participation, from very active hands-on participation to quieter observation in the background. For many pupils, the co-learning workshops may be the first time they meet robots, so it is important to provide an opportunity for them to watch from a distance if they do not feel comfortable around robots. In addition, the reflective nature of the co-learning workshops allows for all kinds of discussions and questions, including discussion of possible risks and threats, or fears towards the robot. This approach can enhance pupils’ sense of discovery and experimentation (Eck et al. 2014), as they can learn about different aspects of robots during the discussions. Based on our observations, the model in which younger pupils and university students work together seems to provide good opportunities for creating good group dynamics. Bers (2007) explored families as multigenerational robotics-based communities of practice. Similarly, we believe different learner groups in a robotic co-learning workshop can form a well-functioning community of practice for learning. In the future, it would be beneficial to arrange a series of co-learning workshops to strengthen the group dynamics as well as to provide more opportunities for going beyond the very basic level of learning. For example, the first workshop could deal with basic getting together and interaction with different robots, and the second session could focus on robot programming or construction. We would also like to emphasize the role of creativity as part of these co-learning workshops. First, it is beneficial to give the university students freedom to design and organize the learning tasks and stations, because then they can use their own creativity and thinking for planning, which also raises their motivation in arranging the activity. Second, the co-learning tasks can be designed in a way that includes, for example, ideation or design, which makes the participating pupils think innovatively and creatively.
Robocamp at Home – Beyond the Novelty Effect of Robots, Towards Critical Thinking. Social robots have a very strong novelty effect, which can have a high impact on user experience results in short-term studies. In most cases, the initial user experience with the robot is very positive, because robots seem to be an interesting technology for people, but over time the novelty effect can wear off (e.g., Leite et al. 2009; Kanda et al. 2004). With our longer co-learning setup and the robot at home, we were able to get beyond the immediate novelty effect caused by the robot. As the family members were allowed to explore the robot flexibly in open-ended tasks, they were able to learn about and discuss the challenges and limitations related to the robot. Some participants drew attention to the critical perspective on the Alpha Mini robot and social robots in general. Longer co-learning at home can be a beneficial learning model for technological literacy skills, i.e., essential knowledge about emerging technologies such as social robots. In shorter interactions with these robots, especially when a human supports the interaction, the experience and learnings from the robot can be very limited and biased towards the positive, because the issues and restrictions may not appear. For example, Jäggle et al.
(2019) studied technological literacy related to educational robots in a setup where young people visited a three-hour program at a technical university to get to know the robots, the robots’ applications, and the robots’ capabilities. To adopt some level of critical thinking, it is important to get to know the limitations of these technologies in addition to their potential and benefits. Critical thinking and limitations may not surface in shorter interactions with the robots, and
thus, a longer learning period can be beneficial. We can conclude that families can act as great multi-faceted co-learning teams with social robots. With our learning model, learning is open-ended and versatile, resulting in multi-faceted learnings and insights about the robot’s design, interaction, role in society, and limitations. Even though all family members may not participate equally actively, the learnings are discussed within the families, and even the members with supportive roles can gain their own insights and learnings. It is plausible to arrange small learning and collaboration units within the families, e.g., a one-to-one unit of child and parent or a unit of siblings. This discovery corresponds to the findings of Bers (2007), who explored families as multigenerational robotics-based communities of practice. All in all, collaborative learning between different learner groups seems to be a meaningful learning concept in robotic projects, as every learner can adopt their own personal perspective for learning; everybody included has the possibility to participate and learn from one another. The main benefits include learning from others, getting different points of view, getting help from robots for ice breaking, possibilities to improve technological literacy, and a great platform for creative projects. We also see a great benefit in robots that are physically embodied and tangible. Working around these robots brings people to act on hands-on tasks together in the same physical space, which has benefits for getting to know each other, interaction, and mutual understanding.
Limitations and Future Work. The qualitative and explorative nature of our research enabled us to investigate different phenomena around co-learning with social robots in the authentic home context as well as in the co-learning space context. We were able to explore these phenomena in depth with real social robots. Our setting offered possibilities to explore co-learning in natural environments. Characteristic of a qualitative in-depth study, the limited number of participants affects the generalizability of the findings and conclusions. Thus, generalizations of the findings need to be made cautiously. We recruited the participant samples considering their willingness to contribute to the study; therefore, the participants may have had a greater interest in robotic activities. In addition, providing specific types of robots, based on which robots were available in Robostudio, may have affected the experiences and learnings about social robots due to their capabilities and restrictions. A study with different robots might have provided different learnings and experiences for the participants. In the future, the Robocamp co-learning model could include different types of robots to be explored by more diverse samples of families to gain more insights about different robots. More research is needed on the Robotour co-learning workshops, including different tasks and more sessions.
6 Conclusion

In this paper, we have presented and evaluated two novel co-learning concepts: one in the home context (the Robocamp concept) and one in a collaborative learning space called Robostudio (the Robotour concept). Robocamp was a one-month-long study in which families used and programmed an Alpha Mini robot at home using block-based programming. The families received new tasks and challenges each week, and two rounds of semi-structured interviews were conducted with each family. According to the findings, Robocamp
encouraged families to work collaboratively as a team. However, families expressed concerns about the limitations of the robot and its capability to engage in long-term interaction. Robotour, on the other hand, was a pop-up co-learning activity in which school pupils and university students participated together. The co-learning stations were managed by the university students, and each station involved at least one robot and some collaborative tasks around it. According to the findings, the event raised curiosity about and awareness of robots. In addition, the children started to come up with ideas about the usage of robots in different areas, such as education and entertainment, which enhanced their creativity. This article provides insights for researchers, designers and teachers who wish to design and implement co-learning activities around social robots.
Acknowledgment. We express our gratitude to our study participants and thank the Faculty of Information Technology and Communication Sciences at Tampere University for providing funding for this research.
References

Aaltonen, I., Arvola, A., Heikkilä, P., Lammi, H.: Hello Pepper, may I tickle you? Children’s and adults’ responses to an entertainment robot at a shopping mall. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pp. 53–54, March 2017
Ahtinen, A., Beheshtian, N., Väänänen, K.: Robocamp at home: exploring families’ co-learning with a social robot: findings from a one-month study in the wild. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (2023)
Ahtinen, A., Kaipainen, K.: Learning and teaching experiences with a persuasive social robot in primary school – findings and implications from a 4-month field study. In: Gram-Hansen, S.B., Jonasen, T.S., Midden, C. (eds.) Persuasive Technology. Designing for Future Change. LNCS, vol. 12064, pp. 73–84. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45712-9_6
Alimisis, D., Kynigos, C.: Constructionism and robotics in education. In: Teacher Education on Robotic-Enhanced Constructivist Pedagogical Methods, pp. 11–26 (2009)
Angel-Fernandez, J.M., Vincze, M.: Introducing storytelling to educational robotic activities. In: 2018 IEEE Global Engineering Education Conference (EDUCON), pp. 608–615. IEEE, April 2018
Anwar, S., Bascou, N.A., Menekse, M., Kardgar, A.: A systematic review of studies on educational robotics. J. Pre-College Eng. Educ. Res. (J-PEER) 9(2), 2 (2019). https://doi.org/10.7771/2157-9288.1223
Arís, N., Orcos, L.: Educational robotics in the stage of secondary education: empirical study on motivation and STEM skills. Educ. Sci. 9(2), 73 (2019)
Arnold, L., Lee, K.J., Yip, J.C.: Co-designing with children: an approach to social robot design. In: ACM Human-Robot Interaction (HRI) (2016)
Bartneck, C., Forlizzi, J.: A design-centred framework for social human-robot interaction. In: RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759), pp. 591–594. IEEE, September 2004
Beheshtian, N., Kaipainen, K., Kähkönen, K., Ahtinen, A.: Color game: a collaborative social robotic game for icebreaking; towards the design of robotic ambiences as part of smart building services. In: Proceedings of the 23rd International Conference on Academic Mindtrek, pp. 10–19, January 2020
Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., Tanaka, F.: Social robots for education: a review. Sci. Robot. 3(21), eaat5954 (2018). https://doi.org/10.1126/scirobotics.aat5954
Bers, M.U.: Project InterActions: a multigenerational robotic learning environment. J. Sci. Educ. Technol. 16(6), 537–552 (2007)
Björling, E.A., Rose, E.: Participatory research principles in human-centered design: engaging teens in the co-design of a social robot. Multimod. Technol. Interact. 3(1), 8 (2019)
Chowdhury, A., Ahtinen, A., Kaipainen, K.: “The superhero of the university”: experience-driven design and field study of the university guidance robot. In: Proceedings of the 23rd International Conference on Academic Mindtrek, pp. 1–9, January 2020
Cifuentes, C.A., Pinto, M.J., Céspedes, N., Múnera, M.: Social robots in therapy and care. Current Robot. Rep. 1(3), 59–74 (2020). https://doi.org/10.1007/s43154-020-00009-2
Chung, C., Santos, E.: Robofest carnival—STEM learning through robotics with parents. In: 2018 IEEE Integrated STEM Education Conference (ISEC), pp. 8–13. IEEE, March 2018
Dawe, J., Sutherland, C., Barco, A., Broadbent, E.: Can social robots help children in healthcare contexts? A scoping review. BMJ Paediat. Open 3(1), e000371 (2019). https://doi.org/10.1136/bmjpo-2018-000371
Deng, E., Mutlu, B., Mataric, M.J.: Embodiment in socially interactive robots. Found. Trends Robot. 7(4), 251–356 (2019). https://doi.org/10.1561/2300000056
de Wit, J., et al.: The effect of a robot’s gestures and adaptive tutoring on children’s acquisition of second language vocabularies. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 50–58, February 2018
Eck, J., Hirschmugl-Gaisch, S., Kandlhofer, M., Steinbauer, G.: A cross-generational robotics project day: pre-school children, pupils and grandparents learn together. J. Autom. Mob. Robot. Intell. Syst. 8 (2014)
Goodrich, M.A., Schultz, A.C.: Human-robot interaction: a survey. Now Publishers Inc. (2008)
Govind, M., Relkin, E., Bers, M.U.: Engaging children and parents to code together using the ScratchJr app. Visitor Stud. 23(1), 46–65 (2020)
Han, J., Jo, M., Park, S., Kim, S.: The educational use of home robots for children. In: ROMAN 2005, IEEE International Workshop on Robot and Human Interactive Communication, pp. 378–383. IEEE, August 2005
Jäggle, G., Lammer, L., Hieber, H., Vincze, M.: Technological literacy through outreach with educational robotics. In: Merdan, M., Lepuschitz, W., Koppensteiner, G., Balogh, R., Obdržálek, D. (eds.) Robotics in Education. AISC, vol. 1023, pp. 114–125. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-26945-6_11
Jung, S.E., Won, E.S.: Systematic review of research trends in robotics education for young children. Sustainability 10(4), 905 (2018)
Kanda, T., Hirano, T., Eaton, D., Ishiguro, H.: Interactive robots as social partners and peer tutors for children: a field trial. Human-Comput. Interact. 19(1–2), 61–84 (2004)
Kandlhofer, M., Steinbauer, G.: Evaluating the impact of educational robotics on pupils’ technical and social skills and science-related attitudes. Robot. Auton. Syst. 75, 679–685 (2016)
Kandlhofer, M., et al.: Enabling the creation of intelligent things: bringing artificial intelligence and robotics to schools. In: 2019 IEEE Frontiers in Education Conference (FIE), pp. 1–5. IEEE, October 2019
Leite, I., Castellano, G., Pereira, A., Martinho, C., Paiva, A.: Empathic robots for long-term interaction. Int. J. Soc. Robot.
6(3), 329–341 (2014)
Leite, I., Martinho, C., Pereira, A., Paiva, A.: As time goes by: long-term evaluation of social presence in robotic companions. In: RO-MAN 2009 – The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 669–674. IEEE, September 2009
Mayring, P.: Qualitative content analysis [28 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 1(2), Art. 20 (2000). http://nbn-resolving.de/urn:nbn:de:0114-fqs0002204
Mural: https://www.mural.co/
Petre, M., Price, B.: Using robotics to motivate ‘back door’ learning. Educ. Inf. Technol. 9(2), 147–158 (2004)
Relkin, E., Govind, M., Tsiang, J., Bers, M.: How parents support children’s informal learning experiences with robots. J. Res. STEM Educ. 6(1), 39–51 (2020)
Ryokai, K., Lee, M.J., Breitbart, J.M.: Children’s storytelling and programming with robotic characters. In: Proceedings of the Seventh ACM Conference on Creativity and Cognition, pp. 19–28, October 2009
Salter, T., Werry, I., Michaud, F.: Going into the wild in child–robot interaction studies: issues in social robotic development. Intel. Serv. Robot. 1(2), 93–108 (2008)
Stock, R.M., Merkle, M.: Can humanoid service robots perform better than service employees? A comparison of innovative behavior cues. In: Proceedings of the 51st Hawaii International Conference on System Sciences, January 2018
Sun, Z., Li, Z., Nishimori, T.: Development and assessment of robot teaching assistants in facilitating learning. In: 2017 International Conference of Educational Innovation through Technology (EITT), pp. 165–169. IEEE, December 2017
Tanaka, F., Isshiki, K., Takahashi, F., Uekusa, M., Sei, R., Hayashi, K.: Pepper learns together with children: development of an educational application. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 270–275. IEEE, November 2015
van den Berghe, R., Verhagen, J., Oudgenoeg-Paz, O., Van der Ven, S., Leseman, P.: Social robots for language learning: a review. Rev. Educ. Res. 89(2), 259–295 (2019)
Wegner, C., Minnaert, L., Strehlke, F.: The importance of learning strategies and how the project “Kolumbus-Kids” promotes them successfully. Eur. J. Sci. Math. Educ. 1(3), 137–143 (2013)
Yang, C.Y., Lu, M.J., Tseng, S.H., Fu, L.C.: A companion robot for daily care of elders based on homeostasis. In: 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 1401–1406. IEEE, September 2017
Yuen, T., et al.: Group tasks, activities, dynamics, and interactions in collaborative robotics projects with elementary and middle school children. J. STEM Educ. 15(1) (2014)
Zaga, C., Lohse, M., Truong, K.P., Evers, V.: The effect of a robot’s social character on children’s task engagement: peer versus tutor. In: Tapus, A., André, E., Martin, J.-C., Ferland, F., Ammi, M. (eds.) Social Robotics, pp. 704–713. Springer International Publishing, Cham (2015). https://doi.org/10.1007/978-3-319-25554-5_70
Zalama, E., et al.: Sacarino, a service robot in a hotel environment. In: Armada, M.A., Sanfeliu, A., Ferre, M. (eds.) ROBOT2013: First Iberian Robotics Conference. AISC, vol. 253, pp. 3–14. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-03653-3_1
Zawieska, K., Duffy, B.R.: The social construction of creativity in educational robotics. In: Szewczyk, R., Zielinski, C., Kaliczynska, M. (eds.) Progress in Automation, Robotics and Measuring Techniques. AISC, vol. 351, pp. 329–338. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-15847-1_32
Can a Humanoid Robot Motivate Children to Read More? Yes, It Can! Janine Breßler
and Janett Mohnke(B)
TH Wildau, Hochschulring 1, 15745 Wildau, Germany {janine.bressler,janett.mohnke}@th-wildau.de
Abstract. The ability to read and write is an important requirement for active participation in social life – maybe even the most important one. It is a prerequisite for education and in most cases also for professional success. So, it is important that children learn how to read. In this paper, a case study is presented which investigates the question of how humanoid robots can support children’s motivation to read. For this, we considered the following questions: Can humanoid robots, like the NAO robot from Aldebaran Robotics, motivate children to read more and with pleasure? Is there a setup which is technically stable enough to work outside a lab environment and without developer support? The underlying project uses children’s curiosity about humanoid robots to create an exciting learning experience. The idea can be summarized as follows: children read to a NAO robot and communicate with the robot about the content of the book. What started as a research project is now an application that is readily used in several public libraries in Germany. Currently, our so-called Reading-NAO can interact in German and in English. The paper also introduces the idea, its realization, practical experiences and future plans for the project.

Keywords: Reading promotion · NAO · humanoid robots · social robots · educational robots
1 Introduction

The ability to read and write, in combination with good reading comprehension, is an important requirement for active participation in social life – maybe even the most important one. It is a prerequisite for education and in most cases also for professional success. Children benefit directly from reading books and having books read to them: reading promotes active language use. Children who read books have a much larger vocabulary. It also promotes correct writing, the ability to engage with oneself, the ability to concentrate, mental independence and judgement [1]. So, to prepare children for life, it is one of the main goals of primary school education worldwide to teach children to read properly and to gain reading comprehension. The ability to read fluently is the first step towards this goal. However, not every child who can read well also has sufficiently good reading comprehension, which often remains unrecognized in the classroom [2]. Offers that motivate children to read together and talk about what they have read
can support the process of gaining reading comprehension. This should not be limited to the time children spend in school. If we take a look at the most popular leisure activities of children between the ages of 6 and 13, we find that reading books only ranks 14th in a survey conducted in Germany in 2020 [3]. In the same survey, playing digital games is ranked 5th, after meeting friends, playing outside, doing sports and watching TV. So, is there a way to use some of children’s preferred pastime activities to motivate more reading? Let us consider the NAO robot (see Fig. 1). NAO is an approximately 57 cm tall humanoid robot. It was designed by the French company Aldebaran Robotics for use in teaching and research1. The current version of the model was launched in 2018. The NAO robot is very popular and now in use worldwide at colleges, universities, libraries and many other teaching and research institutions. There are many interesting application scenarios in which a NAO robot is used to support, educate, entertain or motivate children. Here are three examples: There are several projects that aim to support therapies for children with autism spectrum disorders (ASD) to improve social skills, reduce the therapist’s workload and more [4, 5]. At the Children’s Hospital of Philadelphia, the NAO robot participates in activities or therapies to distract and delight the pediatric patients [6]. Children who have to stay in the hospital often miss their friends and start to feel socially isolated. This problem is addressed by the Swiss project Avatar Kids, which helps those children to maintain their social contacts with friends and family [7]. The observation that children in particular are very curious about personal contact with the NAO robot initiated our case study. The questions to be considered are: Can humanoid robots, like the NAO robot from Aldebaran Robotics, motivate children to read more and with pleasure? If so, is there a setup which is technically stable enough to work outside a lab environment and without developer support? After providing some background information about the participating project partners, we motivate the idea by considering related work, describe the concept of the application which was developed to investigate the formulated questions, and explain the field tests and their evaluation. We end with a conclusion in which we summarize the results of the study and outline some future plans.
2 Background

In this section, we give some background information about the participating project partners. The case study described here is joint work between the RoboticLab Telematics of TH Wildau and several public libraries in Germany. The team of the RoboticLab Telematics has the leading role in the so-called Reading-NAO project. It has been working on the use of humanoid robots for about eight years2. In particular, the question of how the use of such robots can contribute to solving current problems of society is an important focus of the research. NAO robots have played an important part in this. Since 2012 they have been used in the Telematics program at TH
1 Aldebaran Robotics Homepage: https://www.aldebaran.com/en (last accessed 2023/02/10).
2 iCampus Wildau Homepage: https://icampus.th-wildau.de/ (last accessed 2023/02/10).
Wildau3 to teach students to program complex so-called embedded systems and to gain experience with the possibilities and limitations of the use of today’s humanoid robots (see Fig. 1 for some examples). Results of the students’ projects are often used further in projects of the RoboticLab Telematics.
Fig. 1. RoboticLab Telematics at TH Wildau: Student projects and final theses, e.g., quizzes, games, NAO as fitness coach4
Together with the Wildau Public Library5, RoboticLab Telematics developed the concept of the Lese-NAO in 2018 (see Sect. 4). The library team also accompanied part of the first field tests (see Sect. 5). The next step was taken in cooperation with the Public Library in Frankfurt (Main)6, Germany. With the help of the team’s library educator in Frankfurt, the original concept was adapted to make it suitable for public events for primary school children on site in Frankfurt, where it has been in use since
3 The telematics program at TH Wildau: https://www.th-wildau.de/index.php?id=29447 (last accessed 2023/02/10).
4 RoboticLab Telematics (students’ projects): https://icampus.th-wildau.de/cms/roboticlab/studierendenprojekte (last accessed 2023/02/10).
5 Public Library Wildau Homepage: https://www.wildau.de/Stadtbibliothek-743968.html (last accessed 2023/02/10).
6 Public Library Frankfurt (Main) Homepage: https://frankfurt.de/service-und-rathaus/verwaltung/aemter-und-institutionen/stadtbuecherei (last accessed 2023/02/10).
April 2021 (see Sect. 6). A prize7 won by the Frankfurters attracted the attention of other libraries in Germany. So, the application was introduced in other institutions as well. The Humboldt Library in Berlin-Reinickendorf8 has been offering reading afternoons with the NAO since August 2022, and the Hamburg Libraries9 since March 2022. For this study, they provided detailed feedback on their experiences (see Sect. 6 as well).
3 Related Work

In this section, case studies are first used to provide information about the use of learning and therapy robots. This is followed by studies that focus specifically on the promotion of learning to read.

3.1 Learning and Therapy Robots

The idea of using learning and therapy robots to teach or promote skills is not new. There are many studies and projects in which social humanoid robots, such as the NAO robot, are used and a higher learning success compared to conventional learning methods has been observed. For example, in the 2014 case study by Alemi et al. [8], a NAO robot was used in a classroom to support a human classroom teacher for 5 lessons to teach English as a foreign language in junior high school. Compared to students who only had a human teacher for this foreign language, the students who were additionally supported by the NAO retained significantly more vocabulary, as shown by subsequent tests. In a case study from 2018, the social robot NAO was used to carry out a specific educational activity in a school environment. The robot acted as a mediator to teach numeracy to primary school students. The robot’s activity was aligned with the curriculum. The results of the case study are promising; with the presence of the robot, the students were motivated and showed a better understanding of the mathematical concepts being taught [9]. NAO robots are also used for therapy purposes for children with Autism Spectrum Disorder (ASD) and help the children to better recognize and understand emotions and social behaviour. For children with ASD, interaction with a robot can even lead to the display of interactive and social behaviour that is uncommon in their normal interactions with people [10, 11].

3.2 Promoting Reading Skills

Many settings exist in which children read aloud, alone or in small groups, to someone listening. An especially interesting one is the Reading to Dogs (RTD) intervention [12, 13].
7 German Reading Award 2021: https://journalismus-buecher-pfundtner.de/deutscher-lesepreis-2021-in-sechs-kategorien-verliehen/ (last accessed 2023/02/10).
8 Humboldt Library Berlin-Reinickendorf: https://www.berlin.de/stadtbibliothek-reinickendorf/ (last accessed 2023/02/10).
9 In German Bücherhallen Hamburg: https://www.buecherhallen.de (last accessed 2023/02/10).
Here the basic idea is that children read aloud to dogs. In an exploratory study in the UK, which examined teachers’ views of RTD in their schools, participants of the survey had a generally very positive perspective on RTD. They especially perceived benefits for children’s motivation and confidence to read, which, in their opinion, were even greater than for their reading frequency or skill [13]. The RTD idea inspired the principal concept of the Reading-NAO (see Sect. 4). That the NAO robot can also be well suited for acquiring reading skills is shown by a recent pilot study from 2022, in which a research team, together with a team of primary school teachers, developed scenarios for how the NAO robot can be integrated into the English language lessons of the 1st and 2nd grade in order to improve the students’ language skills and motivate them to read books. This resulted in three different parts: an introduction and two stories. In the introduction, the NAO introduces itself, i.e., presents its possibilities; the introduction serves to make the students familiar with the NAO robot. In one of the developed stories, participants were asked to read a storybook in English, and then some questions from the same book were put to the participants. With the help of the NAO robot’s speech recognition, the participants were able to choose the correct answer from a selection of possible answers via dialog. In the second story, a different, visual approach is taken with the help of the NAO robot. The results obtained during the pilot test were so encouraging that further research is planned in this area. The students, a total of four aged 5, 9, 10 and 11, mostly showed great interest in interacting and conversing with the robot, even though it was their first contact with a humanoid robot [14].
4 Concept

The concept section will first give an insight into the architecture of the Reading-NAO application using an example in the library; then the implementation of a session will be described, and the last subsection will go into the technical details of the application.

4.1 Architecture Overview

The architecture of the Reading-NAO application can be divided into three parts: the web-based Content Management System (CMS), a Web-Application (WebApp) running on a tablet, and the Robot-Application for the NAO robot (see Fig. 2).
Fig. 2. Architecture overview for the Reading-NAO application
Below, the steps needed to technically conduct a session with the NAO robot and one or more children are listed. They explain how the three parts of the application as seen in Fig. 2 work together.
In the first step, the library educator selects the appropriate books to be available for reading with the NAO robot. She or he thinks up the comprehension questions and answers for the quiz on the respective book. In the second step, the IT colleague enters the questions and answers into the CMS using the web interface and then tests them (see Fig. 2, left side). The third step comprises the technology setup for a session: the whole Reading-NAO application is implemented as a stand-alone system, which allows it to be used out of the box and independently of the local infrastructure; it can be used self-sufficiently in any facility, such as fairs, schools or libraries. For this, the CMS and the WebApp are started. Both are located on a mini-PC. Once the WebApp is available, it can be accessed via the tablet. In addition, the NAO is started with the robot-application. The whole application is located in an independent local WiFi network. The final and fourth step is to conduct the session with the NAO, one or more children, and the library educator. During a session, the NAO robot talks to the children, and the children or the library educator interact directly with it or respond via the WebApp on the tablet. For this, the WebApp communicates with the web-based CMS on the one hand and with the robot-application on the other. All the necessary data for a book, i.e., the questions and answer options, are requested via the web-based CMS (see Fig. 2, left side). The different behaviours of the NAO robot, which include movements, sounds and texts, are triggered via the WebApp depending on the situation. In addition, all necessary information, such as the questions to be asked about a book and the answer options, or which answer the child has chosen and whether it is correct, is also sent from the WebApp to the NAO. To act synchronously, the WebApp receives feedback from the NAO as soon as its behaviour is finished (see Fig. 2, right side). In principle, the Reading-NAO application can be used in German and English. The following section will go into more detail about the workflow of the Reading-NAO application.

4.2 Workflow of a Session with the NAO Robot

The basic workflow of a session with a NAO robot and the Reading-NAO application can be divided into three parts: Introduction, Preparation and Perform. Figure 3 gives an overview of the workflow of the Reading-NAO application. In the Introduction part, it is first checked whether the child is meeting the NAO robot for the first time or whether it has been there before. If it is the first time, the NAO explains to the child in the so-called Get-to-know-you meeting how to interact with it, which actions are allowed and which are not. The Preparation part is used to determine the type and characteristics of the appointment: for instance, whether one child or several children want to act10 with the NAO, and the activity itself: read to the NAO first and do the quiz, do the quiz right away, or learn the alphabet. The difficulty level specifies the grade level; the favourite colour
10 The so-called group mode (several children do the session together) was added during the pilot project with the Public Library Frankfurt (Main) – see Sect. 6. In this case, the NAO communicates differently. The NAO does not address a child directly but addresses a group of children.
can also be selected to give the app a personal touch; the background of the WebApp will subsequently be displayed in the chosen favourite colour. This part is usually done by the library educator.
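For illustration, the session parameters determined in the Preparation part can be thought of as one settings object that is passed to the application. The following is a minimal sketch in Python; the field names and values are our own assumptions, not the actual Reading-NAO data model.

    # Hypothetical grouping of the session parameters from the Preparation part.
    # Field names are illustrative assumptions, not the real Reading-NAO data model.
    from dataclasses import dataclass

    @dataclass
    class SessionSettings:
        group_mode: bool        # one child or a group of children
        activity: str           # "read_and_quiz", "quiz_only" or "alphabet"
        difficulty: int         # grade level of the participating children
        favourite_colour: str   # used as the background colour of the WebApp
        language: str = "de"    # the application supports German and English

    settings = SessionSettings(group_mode=True, activity="read_and_quiz",
                               difficulty=2, favourite_colour="blue")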
Fig. 3. Reading-NAO: The workflow of the Reading-NAO application using the example of the reading and reading quiz
In the Perform part, the child starts to read aloud and the NAO robot “listens”. When the reading is finished, the quiz begins: the NAO robot asks a question and provides several options for an answer. The child decides on an answer and selects it on the tablet.11 The robot gives the child feedback on whether the answer was correct. If the answer was wrong, it gives the correct answer. When all the questions have been asked and answered, the number of correctly answered questions is displayed on the tablet, the NAO robot rejoices with the child about the achievement, and says goodbye.
11 In the group mode, the children decide together what answer is correct and the library educator selects it on the tablet.
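To make the Perform part more concrete, the following is a minimal, hypothetical sketch of the quiz loop on the robot side, using the text-to-speech proxy of the NAOqi Python SDK. The question format, the feedback phrases and the get_answer_from_tablet callback are illustrative assumptions, not the authors’ actual implementation.

    # Illustrative sketch of the Perform part's quiz loop (not the authors' code).
    # Assumes the NAOqi Python SDK and a reachable NAO robot.
    from naoqi import ALProxy

    ROBOT_IP, PORT = "192.168.1.10", 9559  # robot address in the local WiFi network
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

    def run_quiz(questions, get_answer_from_tablet):
        """Ask each question, read out the answer options and give feedback."""
        correct = 0
        for q in questions:
            tts.say(q["text"])
            for option in q["options"]:
                tts.say(option)
            chosen = get_answer_from_tablet()  # answer selected on the WebApp/tablet
            if chosen == q["correct"]:
                correct += 1
                tts.say("Great, that is correct!")
            else:
                tts.say("Not quite. The correct answer is: " + q["correct"])
        return correct  # shown on the tablet at the end of the session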
4.3 Technical Overview

With regard to the technical part, a basic overview is given in Fig. 4. For the implementation of the web-based CMS (Fig. 4, left side), the open-source framework Drupal12 was used, which offers not only the administration of data through a database system but also the administration of web pages, making it possible to conveniently enter and manage data. With the help of a RESTful web services module, which Drupal provides, the communication between WebApp and CMS is realized. The WebApp is realized as a single-page web application with the JavaScript framework Vue.js13 (Fig. 4, centre). In addition to communication with the CMS, the WebApp serves to visualize information and offers an additional interaction option with the NAO robot. Furthermore, the WebApp coordinates the actions of the NAO robot. For this, the WebApp and the NAO robot communicate via a bidirectional socket connection.
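As an illustration of the WebApp–CMS communication, the following sketch shows how a book’s quiz data might be requested from the Drupal REST interface. It is written in Python for brevity (the actual WebApp is a Vue.js application), and the endpoint path and JSON field names are assumptions, not the real API.

    # Hypothetical request for a book's quiz data from the Drupal CMS via REST.
    # The endpoint path and JSON structure are assumptions for illustration.
    import requests

    CMS_BASE = "http://192.168.1.5"  # mini-PC hosting the CMS in the local network

    def fetch_book_quiz(book_id):
        response = requests.get(
            "{}/api/books/{}?_format=json".format(CMS_BASE, book_id), timeout=5)
        response.raise_for_status()
        data = response.json()
        # Assumed structure: a list of questions, each with answer options
        return [{"text": q["question"],
                 "options": q["answers"],
                 "correct": q["correct"]} for q in data["quiz"]]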
Fig. 4. Technical overview for the Reading-NAO application
The application on the NAO robot consists of two parts. The first part was realized in Python and implements the control flow and the logic of the application (Fig. 4, right side). Movements, sounds and speech outputs for the NAO robot are encapsulated in so-called behaviours and were created via the Choregraphe14 application, a multi-user desktop application. The behaviours form the second part of the application for the NAO robot and are persistently stored on it. Whenever the NAO has to perform a certain behaviour in a situation, such as cheering, the Python application triggers the corresponding behaviour on the NAO. The web-based CMS and the WebApp are each started in a Docker15 container. This has the advantage that the functionalities of the applications can be executed in any environment. Especially for implementing the out-of-the-box principle, this is a suitable solution.
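The triggering of stored behaviours can be sketched with the ALBehaviorManager module of the NAOqi Python SDK; the behaviour name “cheering” and the robot address below are illustrative, not taken from the project.

    # Sketch of triggering a Choregraphe behaviour stored on the NAO.
    # Uses the NAOqi ALBehaviorManager API; the behaviour name is illustrative.
    from naoqi import ALProxy

    ROBOT_IP, PORT = "192.168.1.10", 9559
    behavior_mgr = ALProxy("ALBehaviorManager", ROBOT_IP, PORT)

    def trigger(behaviour_name):
        """Run a behaviour installed on the robot, if it exists."""
        if behavior_mgr.isBehaviorInstalled(behaviour_name):
            behavior_mgr.runBehavior(behaviour_name)  # blocks until finished
        else:
            print("Behaviour not installed: " + behaviour_name)

    trigger("cheering")  # e.g. after a correct quiz answer

Since runBehavior blocks until the behaviour has finished, the calling application knows exactly when the robot is done, which matches the feedback mechanism described in Sect. 4.1.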
5 Field Tests and Evaluation

In this section, we provide information about the first field tests and their results. They took place in the library of TH Wildau at the end of 2018. During a period of 6 weeks, 21 children (10 boys, 11 girls) participated. The children’s ages covered the range from
12 https://www.drupal.com (last accessed 2023/02/10).
13 https://vuejs.org (last accessed 2023/02/10).
14 http://doc.aldebaran.com/2-1/software/choregraphe/index.html (last accessed 2023/02/10).
15 https://www.docker.com (last accessed 2023/02/10).
kindergarten age to 4th grade. Each test lasted 1 to 1.5 h. One or two children participated at a time. Most children were accompanied by an adult who also observed the tests. For evaluation purposes, the tests were accompanied by two supervisors from the Reading-NAO team: one caregiver and one observer. See Fig. 5 for some impressions.
Fig. 5. Some impressions from the first field tests in 2018 – incl. a thank-you-drawing by two participants (Johanna and Lenia)
Each session worked as follows: first, participating children could interact with the NAO via the Get-to-know-you application. Then the reading and quiz part took place (see Sect. 4.2). During the entire time, the caregiver helped the children if needed. The observer took evaluation notes. For this, the observer used an evaluation form focusing on the following items: general observations, including concentration and concentration while listening when someone else was reading aloud, and the user experience with the application (NAO and tablet). The most important findings are summarized in Table 1. At the end of a session, the children were asked for their opinion and for further ideas or suggestions for improvement. Here is a summary of the given answers: all participating children liked their session with the Reading-NAO and would like to get the chance to participate again. In particular, they liked how the NAO reacted to a given answer, that it was funny, told jokes and laughed, and that the NAO told all possible answers to a question. Most children wished for more interaction with the robot and the chance to play games with it at the end. Our findings can be summarized as follows: the question of whether the possibility of reading with the NAO robot motivates children to do so can be clearly answered in the affirmative. The children appreciate having as many opportunities for interaction with the NAO as possible. Actions that have nothing directly to do with the actual reading are also desired and can be added at the end. It is important that the NAO is fun, non-judgmental, but motivating and encouraging, so that children can relax. The children should perceive the Reading-NAO as a friend and not as a teacher. It is okay if the NAO is not perfect. The quiz on the read content is important because it strengthens
Table 1. Summary of important observations during the field tests
Observation type: General
Impressions:
• As soon as the NAO starts talking, the children relax
• Because of NAO’s (funny/friendly) manner, the children quickly became open-minded
• All participating children could concentrate well
• Reading time was between 7 and 20 min
• During the Get-to-know-you application, children listened patiently
• It was challenging for the children to listen the whole time while another child was reading

Observation type: User experience (problems seen)
Impressions:
• Some children felt uncomfortable because the NAO was “staring” at them all the time
• Children’s answers given by voice were sometimes difficult for the NAO to understand
• The font size on the tablet was too small for beginning readers
• There are German words that could not be pronounced well by the NAO, e.g., Lasagne, Postbote, Bär, Monsterhund. However, children said this was funny. Finding: NAO isn’t perfect
• For children who could read well, the NAO was too slow in giving all possible answers, because they could read them faster on the tablet and became impatient
the user’s experience of really interacting with the NAO. It can also promote reading and listening comprehension. From a technical point of view, the observations during each session and the conversations with the children afterwards helped to improve the application and the workflow of a session.
6 Migration and Adaptation for a Public Library

The second part of the case study serves to explore the challenges and opportunities that arise if the application is to find a place in a library’s offerings to its patrons. The findings on this are described in the following section. First, a pilot project was successfully launched with the public library in Frankfurt (Main) in 2020 (see Sect. 2). Later, the Reading-NAO application was also included in the programs of the Humboldt Library in Berlin and of the Hamburg Libraries (see Sect. 2 as well).

6.1 Implementing the Application

To implement Reading-NAO in a library, the steps shown in Table 2 are required.
Table 2. Steps for introducing the Reading-NAO application in an institution

Step 1: The necessary hardware has to be organized (see Sect. 4.1 for details)
Step 2: The library educator must develop a concept for the planned events: How many children should the event be designed for? What age group should be addressed? Books need to be selected, a quiz needs to be developed. Both must be entered into the system (see Sect. 4.1, Step 1). The actual course of the event must be planned
Step 3: The application has to be adapted to the concept. This mostly concerns what the NAO says when
Step 4: A training of future caregivers on how to use the application must be provided
Steps 1, 3 and 4 were in all three cases accompanied by the RoboticLab Telematics. The goal in each case was to enable the teams to help themselves in the future, which has worked out successfully. An important requirement is that there is a member of the team who has basic IT skills for the future administration and adaptation of the application.

6.2 Overview

The Reading-NAO application has been in use in a library since April 2021. Almost 170 children aged 5 to 10 have participated in one of the offered reading events. All three library teams have used the group mode, in which several children read a book together – each child reads a part of it. Afterwards, the questions of the quiz are answered together. There is a library educator who moderates the event and a second colleague of the library who manages the application, including entering the answers that the children have chosen into the tablet. A summary can be seen in Table 3. The specific process of a reading event varies from library to library, based on the workflow described in Sect. 4.2. Let us see two examples. First, the team of the Hamburg Libraries explains its procedure: “The event for school classes took place in such a way that we first conducted a picture book cinema with the NAO - a librarian reads a book aloud and the pictures are shown on a screen. The robot comments on the story and occasionally asks the children a few questions about the content. We then used the reading app in "group" mode. The book "Cowboy Klaus and the Desert Wanda" was read aloud by the children. Each child read one page (or as much text as they were confident with). After a certain number of pages, NAO asked questions about the content. The children were asked to vote in the group for an answer and shout it out to NAO. The questions and answer choices for the book were created by the children’s librarians and entered through the CMS.” The open events using the reading app were designed so that children could volunteer to read one or more pages of the books "The Very Hungry Caterpillar" and "Outside My Door on a Mat". Following each read-aloud session, NAO asked questions about
Table 3. Overview of Reading-NAO events in different libraries

First session: Public Library Frankfurt (Main): April 2021; Humboldt Library Berlin-Reinickendorf: August 2022 (see footnote 16); Hamburg Libraries: March 2022
Number of children reached so far: Frankfurt: ca. 54; Berlin: 31; Hamburg: 83
Age of children (in years): Frankfurt: 8–10; Berlin: 7–9; Hamburg: 5–8
Frequency of the offer: Frankfurt: monthly; Berlin: monthly; Hamburg: as booked by school classes or organized for special events (see footnote 17)
Children per session: Frankfurt: max. 5; Berlin: max. 5 (+ parents); Hamburg: variable
Mode (group or single): group in all three libraries
the book. One session lasted about 10–15 min and was repeated, with a short break, with more children.”18
In Berlin-Reinickendorf, the procedure takes about 45 min and is as follows: the NAO welcomes the children, which it does a little differently each time. Then a library educator reads a book (a new one at each appointment) and shows the pictures on a big screen, a so-called picture book cinema. A quiz with NAO about this book follows. The children then read from a book for beginning readers provided by the library. The book has been divided into sections for the children by the library educator, so that the book is read in its entirety by all the participating children together. Another quiz with NAO about this specific book follows. At the end, the children get to make a wish to NAO: dance, talk, imitate an animal. The IT colleague of the team thinks up a new action each time. In all three libraries the application operates stably. After the introduction period, the support of the RoboticLab Telematics has not been necessary. The general concept allows adaptation to the teams’ own ideas. A drawback is the time necessary for preparing an event. The following statement given by the Humboldt Library in Berlin-Reinickendorf explains, on behalf of all teams, why they still like to offer the reading events: “Preparation: Library educator selects the appropriate books with the appropriate picture book quiz from the publisher’s site for the book she is reading. She comes up with the questions for the quiz. The IT colleague enters the questions into the database and tests them. The technology setup in the children’s library takes about 15–20 min. The cost of this event format is quite high compared to our other events, but it is worth it! Our
16 First test run in June 2022.
17 Long Night of Literature and events in district book halls.
18 Katrin Weihe, Hamburg Libraries (Bücherhallen Hamburg), statement given by e-mail on 2023/02/07.
library educator is always surprised at how patiently the children listen to each other; after all, they are "strangers." Even children who are less able to read dare to do so! Although parents, who are strangers, are also listening. The enthusiasm about meeting a humanoid robot carries the situation and thus promotes social competence (listening, waiting, being considerate of slow and/or quiet readers). Motivation for participation often comes from our open robotics afternoons. It’s usually very crowded there (50–60 kids trying out different robots) and the NAO isn’t always there. The read-aloud event is then more or less an "exclusive event" with only a few children, and one is allowed to spend 45 min with the NAO. The parents of course appreciate the reading aloud, especially that the children dare and want to read aloud. They also find the NAO exciting, of course.”19
There are also some ideas for further developments: there could be more possibilities for direct interactions between the NAO and the children when using the group mode. The team in Berlin would also like to take the Reading-NAO to a school and support reading promotion there, but they don’t have the staff for that.
7 Summary and Conclusion

In this study, we investigated the following questions: Can humanoid robots, like the NAO robot from Aldebaran Robotics, motivate children to read more and with pleasure? Is there a setup which is technically stable enough to work outside a lab environment and without developer support? For this, we used the Reading-NAO application, which is a joint project of the RoboticLab Telematics of TH Wildau and four public libraries in Germany: the Wildau Public Library, the Public Library Frankfurt (Main), the Humboldt Library in Berlin, and the Hamburg Libraries. The project started in 2018 with a concept development, the implementation of the first version of the Reading-NAO and subsequent field tests with 21 children aged 5 to 10 years at TH Wildau. As a result of these field tests, the question of whether the possibility of reading with the NAO robot motivates children to do so can be answered in the affirmative. However, this was still experimental. In a second stage of the study, the Reading-NAO application was introduced at the Public Library Frankfurt (Main) and adapted for use in the library’s program with the help of a library educator from Frankfurt. One of the main enhancements was the addition of the so-called group mode, so that more than one or two children can join a Reading-NAO session at once. The introduction in Frankfurt was successful20. So, the third stage of the study began: the introduction of the adapted Reading-NAO application to two libraries that were not previously involved in the project or in the development of the application. Both institutions developed their own programs for the public based on the Reading-NAO concept and have successfully used them for months. This rolling out of the application to other libraries was critical to the quality of the study because it allowed the application to be tested in practice, independent of the developers, by library educators, not engineers.
19 Christiane Bornett, Humboldt-Bibliothek Berlin-Reinickendorf, statement given by e-mail on 2023/02/07.
20 German Reading Award 2021: https://journalismus-buecher-pfundtner.de/deutscher-lesepreis-2021-in-sechs-kategorien-verliehen/ (last accessed 2023/02/10).
The success reinforces the findings from the tests in Wildau: incorporating a NAO robot helps to motivate elementary school children to read. The library educator of the Public Library Frankfurt (Main) summarizes it as follows: “The robot is so attractive that even children who are not yet good at reading aloud register and dare to read aloud. In this respect, the pedagogical approach works well. It also appeals to children who are otherwise rather distant from reading.”21 The use of the Reading-NAO application in practice also shows that it is possible to build a setup that works stably without the application developers on hand and can be adapted to individual needs. This also provides a positive answer to the second question of the study. However, it has a price: the Reading-NAO is designed as a stand-alone system that does not depend on the infrastructure of the facility where it is used (see Sect. 4.3), so it minimizes the effort for the IT administration. Still, a team member is needed who is able to oversee the application IT, set it up for events, make minor adjustments, and resolve technical issues. Content also needs to be developed in a way that makes pedagogical sense; quizzes for books need to be created and entered into the system. Only such a content concept makes it an attractive and valuable application with which the children have fun. The head of the library in Berlin summarizes their experiences with the Reading-NAO application as follows: “The event series is personnel-intensive, communication with the robot could be more direct, BUT: It is such an extraordinary and unique format that we are happy to put up with it. And from a reading promotion perspective: full marks for reading motivation.”22 The basic idea of the Reading-NAO is: children read to a NAO robot and communicate with the robot about the content of the book by answering questions that the NAO asks about the content of the book they read. This is explained in more detail in Sect. 4 of the paper. The concept is flexible enough to be extended to other scenarios like learning a foreign language, practicing math, or doing quizzes about various school topics. It can also be adapted to other areas. Here, the RoboticLab Telematics has gained first experiences in cooperation with an adult education center to teach adult illiterates. Last but not least, a successful and permanent use of the application requires that further developments and adaptations to future requirements and needs are possible and that an exchange on pedagogical and content concepts can take place. As a first step, the RoboticLab Telematics plans to establish an IT-based network for the exchange of experience and content concepts between participating institutions to support this.
Acknowledgments. There are many people who put a lot of knowledge, effort, creativity and time into the project. We want to thank Amanda Klinger and Tina Lüthe who wrote the final theses
21 Tania Schmidt, library educator, Public Library Frankfurt (Main), statement given by e-mail on 2023/02/07.
22 Christiane Bornett, Humboldt Library Berlin-Reinickendorf, statement given by e-mail on 2023/02/07.
for their bachelor’s degrees in the Telematics program of TH Wildau. For this, they developed main parts of the initial concept of the Reading-NAO application, implemented it and organized the first field tests (see Fig. 5). In 2020, we started our pilot project with the team of the public library in Frankfurt (Main). We thank Elfriede Ludwig, Tania Schmidt and their team for their trust in us, for their contribution and for the development of the Read Aloud 4.0 program. We want to thank Oskar Lorenz, student of the telematics program at TH Wildau and part of the RoboticLab Telematics team, who developed the implementation material and the platform that was necessary for a smooth introduction and adaptation of the application for the public library in Frankfurt (Main). He also planned the introductory workshops and conducted them online. His work has been very valuable for the introduction of the application in the Humboldt Library in Berlin and the Hamburg Libraries. For this study we needed the feedback of our practice partners. We thank Christiane Bornett (Humboldt Library Berlin-Reinickendorf), Tania Schmidt and Elfriede Ludwig (Public Library Frankfurt (Main)) and Kathrin Weihe (Hamburg Libraries) for the time and effort they have taken to share their experiences and findings with us. We also would like to thank Henning Wiechers, Alfredo Azmitia, Eike Rackwitz and Julia Reinke for putting their talents, their support and their time into the project. Furthermore, we thank all children, their parents and grandparents who participated in our field tests. Their input has been very important for the success of this project. And last but not least, we would like to thank the anonymous reviewers for their careful reading of our paper and their valuable comments, which helped us to improve its quality.
Learning Agile Estimation in Diverse Student Teams by Playing Planning Poker with the Humanoid Robot NAO. Results from Two Pilot Studies in Higher Education

Ilona Buchem(B),
Lewe Christiansen, Susanne Glissmann-Hochstein, and Stefano Sostak
Berlin University of Applied Sciences, Luxemburger Str. 10, 13353 Berlin, Germany [email protected]
Abstract. Agile estimation is performed by teams to predict the relative effort needed to finish project tasks. Estimating in diverse teams may be challenging due to different subjective perspectives. Planning poker is a game-based technique applied by agile teams to empower all team members to jointly estimate and reach a consensus about the predicted effort. This paper presents results from two studies on learning agile estimation in which student teams were supported by the humanoid robot NAO acting as facilitator of the game. The paper describes the design of the robot-assisted planning poker simulation, the programming of the application, and the evaluation results from two pilot studies with 29 bachelor and master students in different study programs and with different cultural backgrounds. The evaluation aimed to investigate students’ perceptions of the robotic facilitator, self-assessment of learning outcomes related to agile estimation, and possible effects of different cultural dimensions on the perceptions of the robot-assisted simulations and the learning outcomes. The results show that both bachelor and master students, independent of their cultural background, perceived the NAO robot as a trustworthy, friendly and likeable facilitator of the planning poker game. Students with a European background rated the possibility of befriending NAO slightly higher compared to non-European students. Students also reported that playing planning poker in teams with the support of the robot helped them to understand agile estimation. The participants recommended using the simulation game “Planning poker with NAO” in future classes on project management and estimation.

Keywords: Humanoid robots · NAO robot · agile estimation · planning poker · diversity · student teams · agile project management
1 Introduction

Agile estimation is usually performed by teams and focuses on predicting the relative effort needed to finish tasks in a project. Estimating effort in diverse teams may be challenging due to the different subjective perspectives of the participants, whose assessment of the effort tends to be affected by their individual reasoning, experiences and understanding
of the project tasks. Planning poker is a game-based technique applied by agile teams to estimate effort by empowering all team members to jointly estimate and reach a consensus about the predicted effort [1]. While agile methods have been on the rise in industry, educators and practitioners alike have called for accommodating agile methodologies in higher education curricula, for example in courses on software development, project management and product engineering. Agile methods in project management courses have only recently entered the curricula of business and engineering education, ensuring that study programs are up to date and in line with common practices outside of academia. In fact, common agile practices in companies have informed the design of the agile project management courses in higher education presented in this paper. As agile practices evolve, for example from Scrum and Kanban to Scrumban, educators are well advised to include such evolving practices in their curricula. For example, according to the 16th annual State of Agile Report [2], the most popular agile methodologies in 2022 included Scrum (87% of respondents reported leveraging Scrum), Kanban (56%), Scaled Agile Framework (SAFe) (53%), Scrum of Scrums (28%), Scrumban (27%), and Lean Management (8%). Designing curricula related to agile practices in higher education has to take such trends into account. Agile approaches in general emphasise close collaboration in teams, self-organisation of teams, and the empowerment of team members in making decisions, including the estimation of effort for delivering outcomes and finishing tasks. Skills in effort estimation are a critical factor for the success of agile teams [3]. However, agile estimation may be challenging for team members, since it aims at a comparative assessment of the relative size or complexity of tasks and is based on the subjective assessments of team members. Therefore, reaching a consensus may take a long time and may be challenging for teams [3]. In fact, the 16th annual State of Agile Report showed that agile estimation was still a challenge for approx. 16% of the survey participants in 2022 [2]. Additionally, reliable estimation seems to be affected by the level of knowledge and experience of the evaluators: as pointed out by [3], reliable estimates can best be generated when the estimation is done in a team of knowledgeable experts. These and other challenges of agile estimation have been addressed by game-based techniques, such as planning poker, which were designed to help team members reach a consensus in a more efficient way [3]. Several studies have shown that planning poker can provide more reliable estimates compared to estimation done by single experts [4]. This may result from group discussions, which are an important part of planning poker: group discussions allow team members playing planning poker to identify important aspects of each task which could otherwise be overlooked by individual experts [4]. Also, estimation in planning poker tends to provide more accurate and more realistic estimates compared to other expert-based methods, which tend to be affected by the optimism bias [4]. Thus it is not surprising that planning poker as a team-based estimation technique was used by 58% of the teams surveyed for the 15th annual State of Agile Report in 2021 [5].
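To make the consensus mechanic described above concrete, the following minimal sketch (our illustration, not code from any cited tool; all names are invented) runs estimation rounds for one backlog item: each player picks a card privately, all cards are revealed at once, and the round repeats after discussion until the estimates agree.

```python
from typing import Callable, Dict

# Modified Fibonacci deck, one of the card scales commonly used in planning poker.
DECK = (0, 1, 2, 3, 5, 8, 13, 21, 40, 80, 100)

def estimate_item(item: str,
                  players: Dict[str, Callable[[str, int], int]],
                  max_rounds: int = 5) -> int:
    """Repeat estimation rounds until all revealed cards agree."""
    for round_no in range(max_rounds):
        # Each player picks a card privately; all cards are revealed together.
        cards = {name: pick(item, round_no) for name, pick in players.items()}
        if any(c not in DECK for c in cards.values()):
            raise ValueError(f"invalid card in {cards}")
        if len(set(cards.values())) == 1:       # consensus reached
            return next(iter(cards.values()))
        # Between rounds, the high and low estimators explain their reasoning.
    # One possible team rule if no consensus emerges within max_rounds:
    return max(cards.values())

# Example: three players whose picks converge in the second round.
picks = {"Ana": lambda item, r: 5,
         "Ben": lambda item, r: 8 if r == 0 else 5,
         "Cem": lambda item, r: 5}
print(estimate_item("Take out paper waste", picks))  # -> 5
```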
Learning how to perform agile estimation in teams is becoming an important learning outcome in business and engineering studies and a valuable skill for students who are likely to work in agile teams and projects during their professional career. Planning poker itself may be used as an effective didactic method and as an engaging group-based
activity for learning agile estimation [6]. From a didactic point of view, planning poker can be applied as a collaborative game-based simulation in teams of students. Some of the rare studies on planning poker in education have shown that planning poker can be successfully used in engineering programs as an applied estimation methodology in software business systems [6] and as a follow-up simulation exercise to lectures on relative estimation [7]. Teaching relative estimation in agile teams in higher education needs to take into consideration that students usually have less experience in working on projects and may encounter broader problems in estimating due to this lack of experience [8]. However, there is a lack of documented research about the integration of agile estimation into teaching and learning in higher education [9]. Additionally, there is hardly any documented research on how planning poker is used as a didactic method to support students in learning agile estimation. This paper aims to contribute to filling this research gap. Furthermore, to the best of our knowledge, there is also no publication reporting on the use of a humanoid robot as a facilitator of planning poker. The application of humanoid robots to facilitate learning activities, such as planning poker, can be seen as a new approach to creating engaging and motivating learning experiences in classroom settings [10, 11]. In general, humanoid robots such as NAO can be used both as embodied social agents for interaction and as tools for teaching and learning [10, 11]. The study by [10] explored teachers’ perspectives on how the NAO robot could support learning activities in the classroom, showing that teachers envisioned the NAO robot as a helpful technology for providing guidance, information and emotional support to students, especially in lecture-style and group-based activities [10]. While humanoid robots may be assigned various roles in different educational settings, such as teacher/tutor, peer/co-learner and novice [10], their dual role in education as didactic tools and social actors makes them unique learning technologies [11]. Some of the key benefits of using humanoid robots compared to other technologies in education lie in their embodied form and physical presence, which have been shown to enhance interest and motivation and to elicit higher rates of compliance and higher performance while learning [10]. Nevertheless, there is still little research on learning activities with humanoid robots in realistic settings that takes into account both the group dynamics and the complexity of a real classroom environment [11]. This paper presents results from a study on learning agile estimation through playing planning poker in student teams with the help of the humanoid robot NAO, which plays the double role of facilitator of social interaction and information provider. Planning poker with NAO was applied as a didactic method to enhance learning about agile estimation in business and engineering study programs in real classroom settings. The paper presents the design of the robot-assisted planning poker simulation, the programming of the application, and the evaluation results from two studies with a total of 29 students from two different study programs.
The evaluation aimed to investigate students’ perceptions of the NAO robot as the facilitator of planning poker, self-assessment of key learning outcomes related to agile estimation as well as possible effects of different cultural dimensions in a diverse sample of students on the perceptions of the robotic facilitator and the learning outcomes. The three key research questions of the study were: (1) How did students perceive the robot-assisted planning poker simulation and the NAO
robot as the facilitator of the game?; (2) How did students self-assess their learning outcomes related to agile estimation after playing planning poker in teams facilitated by the robot?; and (3) Were there any cultural differences related to the perception of the robot as the facilitator of the planning poker game and to the self-assessment of learning outcomes in group-based agile estimation? The remainder of the paper is structured as follows. Following this introduction, Sect. 2 presents the design and programming of the simulation “Planning poker with NAO”, which was improved in three iterations with user tests. Section 3 outlines the design of both studies, with bachelor students in business studies and master students in engineering studies. The results from both studies are described in Sect. 4 and focus on students’ perceptions of the NAO robot as the facilitator of planning poker, students’ self-assessment of learning outcomes, and the exploration of possible effects of cultural dimensions on perceptions of the robotic facilitator and on learning outcomes after playing planning poker with NAO. Finally, Sect. 5 ends the paper with a discussion of the study results and recommendations for future research.
2 Simulation Design

The simulation “Planning poker with NAO” was developed by a small project team composed of two subject-matter experts and one developer. The two subject-matter experts were teachers in agile project management; the developer was a student in the bachelor program Humanoid Robotics. The simulation was designed and developed iteratively in three stages, in which prototypes were tested with the target group of university students and continuously improved with regard to both design and programming. The programming was done with the Choregraphe software and additional Python code. The three main iterations were:

First Iteration. The first prototype was designed based on an analysis of the theoretical concept and the subject-matter experts’/teachers’ experiences with practical implementations of planning poker. To begin with, a script was created in a Google Document and edited collaboratively by the members of the project team. This was followed by programming in Choregraphe and the first user test with a short evaluation. In order to evaluate the first prototype, a pre-test with a small group of three students was conducted to get early feedback from future users on the overall design of the simulation and on specific aspects such as the robot’s speech quality and the comprehensibility of the explanations given by the NAO robot during the simulation. The first test paid special attention to whether students without any previous knowledge about agile estimation and planning poker could follow the simulation. The evaluation used a grading system for different rating categories such as “language comprehensibility”, “explanation comprehensibility” and “speech quality”. The pre-test revealed a number of weak points in the verbal explanations and formulations used in the first prototype. The first version was improved based on the students’ feedback and iterated further.

Second Iteration. The second prototype included an improved explanation of planning poker and agile estimation. The explanation was better tuned to students without any previous experience in planning poker. Also, the robot’s speaking rate was decreased for a
better understanding by non-native speakers of English. This version of planning poker was tested in the first study with 15 bachelor students in the course on agile project management in the bachelor program Digital Business (B. Sc.). The study comprised playing planning poker facilitated by NAO and an evaluation consisting of an online survey and a follow-up discussion with the study participants in class. This study used self-created planning poker cards with the modified Fibonacci scale (0, 1, 2, 3, 5, 8, 13, 21, 40, 80, 100), which is one of the scales used in planning poker, next to the original Fibonacci scale (0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89), the linear scale (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10), and the progression numerical scale (0, 1, 2, 4, 6, 8, 16, 32, 64). The agile backlog, i.e. a prioritised list of deliverables, used in this study included eight items to be estimated by an agile team. The items were not related to any specific project but rather addressed everyday tasks which any student could relate to, such as “Take out paper waste” or “Buy milk, bread and butter”. Using such everyday items aimed at ensuring that the interaction in teams was enjoyable and that students were not overwhelmed with too much information.

Third Iteration. The third prototype was developed based on the results from the first study. The key changes compared to the second prototype included: an improved dialog design with questions and answers instead of larger chunks of information; an improved and shortened explanation of planning poker and agile estimation; the addition of short interactive activities in the explanation part of the game, such as having one team member read out all numbers on the planning poker cards with the modified Fibonacci scale, in order to engage students and enhance understanding of the numeric values on the cards; more variation in NAO’s responses during the estimation of the items, to make the interaction more exciting; and a reflection question at the end of the simulation, to encourage students to recapitulate what they had experienced and learned. Moreover, the number of items to be estimated was reduced from eight to four due to time limitations during the class. Additional information about each item was also supplemented in the explanation part of the game to make the content of each item more understandable to students. For example, the item “Take out paper waste” in the modified version was enriched with additional information and resulted in the following item: “Take out paper waste. Here is some information about this task: You are on the fifth floor, so you have to go downstairs to bring the paper waste to the bin on the ground floor. There are five bins with paper waste altogether.” The third prototype was tested in the second study with 14 master students in the course on business process modelling and information management in the master study program Industrial Engineering/Project Management (M. A.). The design of the simulation “Planning poker with NAO” as tested in the second study is visualised in the figure below (Fig. 1). Based on this design, the simulation was developed using the Choregraphe software (version 2.8.6) and additional Python code. The programmed simulation is composed of three main parts, i.e.
(1) “Beginning” with “Start”, “Introduction”, and “Explanation”; (2) “Planning Poker” with “Backlog”, “Estimation”, “Data”, and “Decision”; and (3) “Ending” with “Report”, “Wrap-up” and “Ending” (Fig. 1). After the start of the application, the NAO robot introduces students to planning poker and agile estimation. Then, NAO explains how participants may
Fig. 1. Design of the simulation “Planning poker with NAO”.
interact with the robot, for example by speech and by pressing the foot bumpers. Next, NAO explains how to play planning poker using cards with the modified Fibonacci scale. The actual play of planning poker begins with NAO presenting all items from the backlog to the team of students. After the presentation of all items, NAO reads one item and students estimate it by choosing one of the poker cards. In the next step of the estimation, students reveal their cards to the other players in the team and discuss their choices with each other. Once consensus in the team has been reached, one of the team members reports the final result to NAO, and the robot saves the number the team agreed upon. This process is repeated until all items from the backlog are done, i.e. consensus has been reached for each of them. Then, NAO creates a report in the form of a PDF file, in which each result per item and per team is saved for further reference. In order to create the report, a Python script takes every entry, matches the items to the numbers and saves the PDF in Choregraphe. Finally, NAO wraps up the simulation, asks students a reflection question and reviews key learnings from the game.
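The paper states that a Python script matches the estimated items to the agreed numbers and saves a PDF report, but the script itself is not shown. A standalone sketch of that matching-and-reporting step might look as follows; the reportlab library, the function name and the file path are illustrative assumptions, not the authors' implementation (which runs inside Choregraphe on the robot).

```python
from reportlab.pdfgen import canvas

def save_report(team: str, results: dict, path: str = "poker_report.pdf") -> None:
    """Write one line per backlog item with the estimate the team agreed on."""
    pdf = canvas.Canvas(path)  # default page size is US letter (612 x 792 pt)
    y = 750
    pdf.drawString(72, y, f"Planning Poker with NAO - results for team {team}")
    for item, points in results.items():
        y -= 20
        pdf.drawString(72, y, f"{item}: {points} points")
    pdf.save()

save_report("Team A", {
    "Take out paper waste": 3,           # illustrative estimates, not study data
    "Buy milk, bread and butter": 2,
})
```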
3 Study Design and Participants

The simulation “Planning poker with NAO” was applied and evaluated in two pilot studies with a total of 29 university students, i.e. 15 bachelor and 14 master students. Both pilot studies were conducted on campus as part of regular classes. In both cases, the simulation was integrated into a weekly seminar as a learning activity. Participants in both pilot studies included the students, the course teacher and the programmer of the NAO robot. The simulation in both courses started with a short introduction from the course teacher and the programmer, who gave students some information about the programming of the robot and how to interact with it. Next, students played planning poker in small teams of six students with the support of NAO (Fig. 1). After playing planning poker, students were invited to fill in the online survey. Finally, the participants discussed their learning experience in class.
The bachelor students studied in their third semester of the bachelor program Digital Business (B. Sc.); the master students studied in their first semester of the master program Industrial Engineering/Project Management (M. A.). Students from the bachelor study program “Digital Business” were enrolled in the course on Agile Project Management (APM). The APM course is taught in the 3rd semester by a university professor and an agile coach, both integrating diverse theoretical and practical perspectives on agile project management in a unified co-teaching approach. The APM curriculum covers a wide range of agile approaches including Scrum, Kanban, Lean and SAFe, which represent popular agile approaches and skills in high demand on the labor market. Planning poker is used in this course to introduce students to agile estimation in teams. Students from the master program Industrial Engineering/Project Management (M. A.) were enrolled in the course “Business Process Modelling and Information Management”. This course is taught in the 1st semester and comprises weekly seminars and practice-based classes. Students in this course learn how to develop feasible transformation concepts for a selected company, creating User Story Maps and Minimum Viable Products (MVP). Planning poker is introduced as one of the methods to estimate the effort in transformation projects.

Table 1. The socio-demographics of the study sample, n = 29.

Sample size: pilot study 1 (bachelor): 15 students; pilot study 2 (master): 14 students.
Gender: bachelor: 73.3% male, 26.7% female; master: 57.1% male, 42.9% female.
Age: bachelor: 13.3% under 20, 60% 20–24, 13.3% 25–29, 13.3% 30–34; master: 0% under 20, 42.9% 20–24, 50% 25–29, 7.1% 30–34.
Cultural background: bachelor: 60% Western European, 13.3% Asian, 6.7% Middle Eastern, 6.7% Eastern European, 6.7% Northern European; master: 50% Western European, 14.3% Asian, 21.4% Middle Eastern, 7.1% Eastern European, 7.1% Southern European.
Poker experience: bachelor: 6.7% had played planning poker; master: 0% had played planning poker.
Robot experience: bachelor: 73.3% had interacted with a humanoid robot (HR); master: 14.3% had interacted with HR.
Students participating in both studies had different cultural backgrounds. The aggregated sample of 29 students was composed largely of students with a Western European (55%), Asian (13.8%), or Middle Eastern (14.5%) cultural background. Most students were male (65.2%) and between 20 and 24 years old (51%). Both cohorts of students differed in the level of their previous experience in interacting with a humanoid robot. In the sample of bachelor students from the first study, 11 out of 15 (73.3%) had already interacted with a humanoid robot (HR) before; in this study program, humanoid robots such as Pepper and NAO have been used in different courses. Compared to this, in the sample
of the master students in the second study, only 2 out of 14 (14.3%) had interacted with a humanoid robot before. Both bachelor and master students had hardly any previous experience in playing planning poker: only one bachelor student had played planning poker before, while none of the master students had this experience. The key socio-demographic data of the study sample is summarised in Table 1 above.
4 Results

Our work aimed to examine the following: first, how students perceived planning poker with NAO in general as a robot-assisted group-based learning activity, as well as NAO’s specific role as facilitator of this activity; second, how students assessed their learning outcomes in agile estimation after playing planning poker with NAO; and third, whether cultural dimensions had any effects on the perceptions of the robot and of the learning outcomes. The online survey, conducted as part of the follow-up evaluation of “Planning poker with NAO”, investigated students’ perceptions of the NAO robot as the facilitator of the game, students’ self-assessment of learning outcomes related to agile estimation, and possible cultural differences in the perception of the robot and the learning outcomes. The data from the online survey was analysed with IBM SPSS (version 29.0.0.0), using a range of descriptive and inferential statistical methods. The results related to the three key research questions of the study are presented in the sections below.

4.1 RQ1: Perceptions of the NAO Robot as the Facilitator of Planning Poker

The first research question was: How did students perceive the robot-assisted planning poker simulation and the NAO robot as the facilitator of the game? Students’ perceptions of the NAO robot as the facilitator of the planning poker game were measured using the Human-Robot Interaction Evaluation Scale (HRIES), which is based on a multicomponent approach to anthropomorphism [12]. The HRIES scale is composed of 16 semantic items (adjectives) organised into four sub-dimensions, i.e. Intentionality, Sociability, Animacy, and Disturbance [12]. Each sub-dimension comprises a set of four adjectives, i.e. (1) rational, self-reliant, intelligent and intentional (Intentionality); (2) warm, likeable, trustworthy and friendly (Sociability); (3) human-like, real, alive and natural (Animacy); and (4) scary, creepy, weird and uncanny (Disturbance) [12]. While the first three sub-dimensions represent positive perceptions of a robot, the disturbance dimension represents negative perceptions related to uncomfortable feelings and anticipations [12]. The HRIES scale had already been applied in a previous study by [13], in which the NAO robot acted as the facilitator of a daily scrum meeting; that study showed that students perceived the robotic facilitator as a warm, likeable, trustworthy, friendly, alive and rational interaction partner [13]. Similar to this predecessor study, the results of the study described in this paper showed that the highest rating was reached for the sociability sub-scale and the lowest rating for the disturbance sub-scale. However, the 16 semantic
items of the HRIES scale used in the study presented in this paper were assessed by students on a scale from 1 (“fully disagree”) to 5 (“fully agree”), and not on the 7-point scale proposed by the HRIES authors [12]. The decision to use a 5-point rather than a 7-point scale was motivated by the need to keep the respondent burden low in view of the high number of questions and items in the online survey [14]. Given the 5-point scale, the highest average rating, reached for the sociability sub-scale, was M = 3.09 (Min 1.24; Max 4.00), and the lowest average rating, reached for the disturbance sub-scale, was M = 2.07 (Min 1.00; Max 5.00). The semantic item with the highest rating was “rational” (M = 3.45), an item in the intentionality sub-scale, followed by three items from the sociability sub-scale, i.e. “trustworthy” (M = 3.28), “friendly” (M = 3.17) and “likeable” (M = 3.10). Figure 2 below summarises the key mean values for all 16 semantic items of the HRIES scale. These results indicate that the participants in both pilot studies perceived the NAO robot in its role as facilitator of the planning poker game as a sociable agent (trustworthy, friendly, likeable) as well as a rational agent, whose behaviour is based on rational reasons for action. At the same time, the low ratings for the four semantic items in the disturbance sub-scale, i.e. scary, creepy, weird and uncanny, indicate that the application of the NAO robot as the facilitator of planning poker did not trigger any strong uncomfortable feelings or anticipations towards the robot, and may be an indicator of students feeling at ease with the robotic facilitator of the game [12].
Fig. 2. Results of the human-robot interaction scale (HRIES), n = 29.
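To make the scoring procedure concrete, the sketch below computes the four HRIES sub-scale means from the sixteen item ratings. The item-to-sub-scale mapping follows the description above; the sample ratings are invented for illustration and are not the study's data.

```python
import statistics

# HRIES sub-dimensions and their four semantic items each [12].
HRIES = {
    "Intentionality": ["rational", "self-reliant", "intelligent", "intentional"],
    "Sociability":    ["warm", "likeable", "trustworthy", "friendly"],
    "Animacy":        ["human-like", "real", "alive", "natural"],
    "Disturbance":    ["scary", "creepy", "weird", "uncanny"],
}

def subscale_means(ratings: dict) -> dict:
    """Average the 1-5 ratings of the four items belonging to each sub-scale."""
    return {name: statistics.mean(ratings[item] for item in items)
            for name, items in HRIES.items()}

# Invented ratings of one respondent on the 5-point scale used in the study:
example = {"rational": 4, "self-reliant": 3, "intelligent": 3, "intentional": 3,
           "warm": 3, "likeable": 3, "trustworthy": 4, "friendly": 3,
           "human-like": 2, "real": 2, "alive": 2, "natural": 2,
           "scary": 1, "creepy": 2, "weird": 2, "uncanny": 1}
print(subscale_means(example))
```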
Furthermore, students were asked to rate a set of statements about their perceptions of the planning poker simulation with NAO. The items belonging to this category were self-constructed and rated by students on a scale from 1 = very low to 5 = very high. The results showed that participants in both studies found the interaction with the NAO robot to be interesting (M = 4.24, SD = .739), exciting (M = 3.97, SD = .865), and motivating (M = 3.48, SD = .829). These results may indicate that the design and the
implementation of the robot-assisted learning activity was successful in creating an engaging and motivating learning experience. Finally, students were asked whether they would be able to make friends with NAO. The mean value of M = 2.79 shows that students were somewhat hesitant, and the ratings were spread out (SD = 1.424). This aspect was explored in more detail under research question 3.

4.2 RQ2: Self-assessment of Learning Outcomes

Furthermore, the study explored to what extent students felt they could learn agile estimation through playing planning poker in teams with the help of NAO’s facilitation of the game. The second research question of the study was: How did students self-assess their learning outcomes related to agile estimation after playing planning poker in teams facilitated by the NAO robot? The online survey included three related questions: (1) How confident are you in agile estimating after playing planning poker with NAO?; (2) How easy was it for you to estimate tasks during planning poker with NAO?; and (3) How helpful was the interaction with the NAO robot for understanding the planning poker method? The respective items were rated by students on a scale from 1 = not at all to 5 = very much. The highest rating was reached for the learning outcome “understanding planning poker” (M = 4.17, SD = .711), followed by “ease in agile estimation” (M = 3.58, SD = .688) and “confidence in agile estimation” (M = 3.28, SD = .960). These results indicate that the design and implementation of planning poker with NAO led to a positive self-assessment of learning outcomes by students. Additionally, students in both pilot studies found that playing planning poker with NAO helped them understand how robots function (M = 3.38, SD = 1.049), which can be considered an additional learning outcome. Furthermore, students agreed that the simulation planning poker with the NAO robot should be used in the future to help students understand how to do estimation and planning in agile projects and teams (M = 3.55, SD = .910).

4.3 RQ3: Cultural Differences

The third research question of the study was: Were there any cultural differences related to the perception of the NAO robot as a facilitator of the planning poker game and to the self-assessment of learning outcomes related to agile estimation in teams? The study explored possible cultural differences in the perception of NAO as the facilitator of planning poker and in students’ self-perception of the attained learning outcomes. In order to determine whether there were any statistically significant differences in the perception of the NAO robot as the facilitator of planning poker among students with different cultural backgrounds, a one-way ANOVA was computed. Altogether, four groups with at least two cases were included in the analysis, i.e. Western European (group 1, n = 16), Asian (group 2, n = 4), Middle Eastern (group 3, n = 4) and Eastern European (group 4, n = 2). The result of the one-way ANOVA for these four groups was not significant (p = .0523) at the pre-defined 95% confidence level. In the next step, the sample was grouped differently, this time into two cultural groups to ensure more cases per group, i.e. European (group 1, n = 20) and non-European
(group 2, n = 9). This grouping made it possible to compute t-tests for independent samples in relation to the HRIES scale and the other items related to the perception of the NAO robot and the self-assessment of learning outcomes described in the sections above. The analysis revealed only one statistically significant difference, related to the perception of the NAO robot as a possible friend (p = .037). The comparison of means showed that students with a European background rated the possibility of befriending the NAO robot slightly higher (M = 2.80, SD = 1.609) compared to students with a non-European background (M = 2.87, SD = .972). This was the only statistically significant difference in perceptions of NAO. Next, a t-test for independent samples was computed in relation to a further cultural dimension, i.e. the study program (bachelor in business vs. master in engineering). These t-tests aimed to determine whether there were any significant differences in the perception of NAO based on the HRIES scale and in the attainment of the learning outcomes between students from the two study programs. The t-test revealed only one statistically significant difference, related to the learning outcome “understanding how robots function”. The Levene’s test significance value was p = .012, and the mean values indicated that the bachelor students in digital business reached a higher understanding of how robots function (M = 3.67, SD = .724) compared to the master students in engineering (M = 3.07, SD = 1.269). Finally, a t-test for independent samples was computed in relation to the gender dimension (male/female) and the perceptions of the NAO robot. There were no statistically significant differences here either (Levene’s test p = .807), and similarly none related to gender and the self-assessment of learning outcomes. Thus it can be concluded that neither the HRIES ratings nor the learning outcomes differed significantly between female and male students.
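The inferential tests reported in this section were computed in SPSS; for readers who prefer code, an equivalent analysis can be sketched in Python with scipy.stats as follows. The group sizes match the paper, but the score arrays are randomly generated placeholders, so the printed p-values will not reproduce the reported ones.

```python
import numpy as np
from scipy import stats

# Placeholder per-student scores (e.g., HRIES item ratings); group sizes
# follow the paper, but the values themselves are randomly generated.
rng = np.random.default_rng(0)
western        = rng.normal(3.0, 0.7, 16)
asian          = rng.normal(3.0, 0.7, 4)
middle_eastern = rng.normal(3.0, 0.7, 4)
eastern        = rng.normal(3.0, 0.7, 2)
european       = rng.normal(2.8, 1.6, 20)
non_european   = rng.normal(2.9, 1.0, 9)

# One-way ANOVA across the four cultural groups (paper: p = .0523).
f_stat, p_anova = stats.f_oneway(western, asian, middle_eastern, eastern)

# Levene's test checks the equal-variance assumption of the t-test.
lev_stat, p_levene = stats.levene(european, non_european)

# Independent-samples t-test for the two cultural groups (paper: p = .037
# for the "befriending NAO" item); equal_var follows the Levene result.
t_stat, p_ttest = stats.ttest_ind(european, non_european,
                                  equal_var=p_levene > .05)
print(f"ANOVA p={p_anova:.3f}, Levene p={p_levene:.3f}, t-test p={p_ttest:.3f}")
```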
5 Discussion

This paper presented the design, programming, implementation and evaluation of the game-based learning activity “Planning poker with NAO” in two pilot studies with bachelor and master students. The study explored students’ perceptions of the humanoid robot NAO in its role as facilitator of the planning poker game, the self-assessment of the learning outcomes related to agile estimation, and possible effects of cultural differences on students’ perceptions of the robot and the learning outcomes. The results indicate that both bachelor and master students, independent of their cultural background, perceived the NAO robot as a trustworthy, friendly and likeable facilitator of the planning poker game, which seems to be in line with the results of the predecessor study by [13]. Students with a European background rated the possibility of befriending NAO slightly higher compared to non-European students, which was the only statistically significant difference between students with different cultural backgrounds. Students also reported that planning poker with NAO helped them to understand agile estimation and recommended using planning poker with NAO in classes on agile estimation and project management in the future. In general, it can be concluded that the simulation “Planning poker with NAO” was perceived by students as an engaging and motivating social learning experience, which
seems to be in line with the results of similar studies in other content areas, as reported for example by [11]. Planning poker with NAO helped students attain key learning outcomes, including understanding how estimation in agile teams works in practice and gaining a good level of confidence in estimating effort in teams. However, the study has clear limitations. It must be emphasised that the results of the statistical analysis should be regarded with caution and only as tentative, due to the small study sample and the fact that most students had a European background. Future studies should test the simulation with larger samples and try to achieve a balanced distribution of students with different cultural backgrounds in order to measure possible cultural effects in a more robust way. Next iterations of the simulation should further improve the design to increase students’ confidence in agile estimation. Future studies could also explore differences between human- and robot-facilitated planning poker as a didactic approach to teaching and learning agile estimation in teams. Finally, future implementations could enhance the robot’s capabilities with conversational AI in order to better understand the content of students’ discussions during agile estimation and to generate meaningful feedback for learning and consensus-building.
References

1. Sudarmaningtyas, P., Mohamed, R.B.: Extended planning poker: a proposed model. In: 2020 7th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), pp. 179–184 (2020)
2. Digital.ai: 16th Annual State of Agile Report (2022). https://info.digital.ai/rs/981-LQX-968/images/AR-SA-2022-16th-Annual-State-Of-Agile-Report.pdf. Accessed 15 Jan 2023
3. Alhamed, M., Storer, T.: Playing planning poker in crowds: human computation of software effort estimates. In: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pp. 1–12 (2021)
4. Mahnic, V., Hovelja, T.: On using planning poker for estimating user stories. J. Syst. Softw. 85(9), 2086–2095 (2012)
5. Digital.ai: 15th State of Agile Report (2021). https://info.digital.ai/rs/981-LQX-968/images/SOA15.pdf. Accessed 18 Dec 2022
6. Rojas Puentes, M.P., Mora Méndez, M.F., Bohórquez Chacón, L.F., Romero, S.M.: Estimation metrics in software projects. J. Phys. Conf. Ser. 1126 (2018)
7. Chatzipetrou, P., Ouriques, R.A., Gonzalez-Huerta, J.: Approaching the relative estimation concept with planning poker. In: Computer Science Education Research Conference (CSERC 2018) (2018)
8. Zhang, Z.: The Benefits and Challenges of Planning Poker in Software Development: Comparison Between Theory and Practice. Auckland University of Technology (2017). https://openrepository.aut.ac.nz/handle/10292/10557. Accessed 18 Dec 2022
9. Lopez-Martinez, J., Ramirez-Noriega, A., Juarez-Ramirez, R., Licea, G., Martinez-Ramirez, Y.: Analysis of planning poker factors between university and enterprise, pp. 54–60 (2017)
10. Ceha, J., Law, E., Kulić, D., Oudeyer, P.-Y., Roy, D.: Identifying functions and behaviours of social robots for in-class learning activities: teachers’ perspective. Int. J. Soc. Robot. 14, 1–15 (2021). https://doi.org/10.1007/s12369-021-00820-7
11. Ekström, S., Pareto, L.: The dual role of humanoid robots in education: as didactic tools and social actors. Educ. Inf. Technol. 27, 12609–12644 (2022)
12. Spatola, N., Kühnlenz, B., Cheng, G.: Perception and evaluation in human-robot interaction: the human-robot interaction evaluation scale (HRIES)—a multicomponent approach of anthropomorphism. Int. J. Soc. Robot. 13, 1517–1539 (2021)
13. Buchem, I., Baecker, N.: NAO robot as scrum master: results from a scenario-based study on building rapport with a humanoid robot in hybrid higher education settings. In: Nazir, S. (ed.) Training, Education, and Learning Sciences. AHFE International Conference, vol. 59, pp. 65–73. AHFE Open Access (2022)
14. Dolnicar, S.: 5/7-point “Likert scales” aren’t always the best option: their validity is undermined by lack of reliability, response style bias, long completion times and limitations to permissible statistical procedures. Ann. Tour. Res. 91, 103297 (2021)
Thinking Open and Ludic Educational Robotics: Considerations Based on the Interaction Design Principles

Murilo de Oliveira(B), Larissa Paschoalin, Vitor Teixeira, Marilia Amaral, and Leonelo Almeida

Federal University of Technology, Curitiba, Paraná, Brazil {murilopaulo,larissapaschoalin,vittei.2000}@alunos.utfpr.edu.br, {mariliaa,leoneloalmeida}@utfpr.edu.br
Abstract. Digital technology is increasingly part of people’s daily lives, including in education. With the growing interest in the educational use of robotics, one of the observed barriers is the incorporation of this content into schools, since such inclusion cannot be guaranteed by technology access alone: it requires development that is contextual, critical and interactive. Educational robotics demands content whose complexity suits different school levels, as well as creativity to solve problems. Therefore, the playful presentation of this content proves to be an excellent choice, especially for children, as it promotes children’s logical thinking and physical and social skills through playful activities. Based on this context, this paper draws on interaction design principles to evaluate the use of gamified artifacts to support introductory programming courses for children. In this study, an interaction test of Roboquedo was evaluated. Its guiding principles are, in addition to the use of free technology with potential for collaboration, low project cost and a methodological approach based on social constructionist epistemology. Based on the interaction analysis, it is possible to reflect on the adjustments needed to improve the applied interaction design principles and deliverables, as well as on further possibilities for educational robotics and digital inclusion.

Keywords: Education Robotics · Participatory · Interaction Design Principles
1 Introduction

The adoption of digital technology in education has become a reality in Brazilian communities. In this sense, it can be said that interest in the pedagogical use of robotics has expanded, one of the obstacles being the introduction of this content in schools (both in elementary and high school), since insertion cannot be guaranteed only by access to technology [1]. For these segments to be truly included, development needs to be meaningful; thus, as some authors on robotics in education point out, the environment (and, depending on the context, the school curriculum) must be appropriate [1]. According to the National Common Curricular Base (BNCC) [7], this introduction considers that the development of competences is related to the critical use of a technology, both
directly – with emphasis on the technology used – and transversally – with the objective of diversified learning [7, 12]. Therefore, introducing children to programming concepts involves multiple factors, as learning new languages and logical thinking is necessary to achieve unique solutions, along with aspects of creativity [5]. The complexity of the educational content must hence be compatible with methods for different school levels, and creativity in problem-solving is desirable. In this sense, the playful presentation of this type of content proves to be a viable option, since it develops children’s logical reasoning and physical and social skills [14]; games help to reduce the abstraction of learning [3] and promote a contextualized view of everyday life, encouraging the child to use creativity and curiosity in their favor [8]. In this context, this paper employs principles of interaction design [13] and social constructionism [10] to develop playful artifacts that support introductory programming lessons for children aged 6 to 11 years. The artifact in question is the Roboquedo (in Portuguese, “Robô + Brinquedo”, a combination of the words robot and toy), developed by PET-CoCE1, an artifact with low financial cost that uses free technologies and has collaborative potential; it will be detailed in a later section. This article is divided into six sections: 1) this introduction to the research; 2) a theoretical foundation that considers educational robotics and its relationship to digital inclusion and playfulness; 3) the presentation of Roboquedo, exemplifying the modules that make up the toy analyzed in this article; 4) the methodology of the interaction tests; 5) the results of the interaction tests, reflected upon through the lens of interaction design; and finally 6) concluding reflections based on the analysis and discussion obtained from the tests described in this study.
2 Reflections on Robotics in Education

According to the BNCC [7], digital technologies are increasingly present in people’s daily lives, so it is necessary to provide, in the school context, knowledge that helps in approaching future problems and technologies that are still unknown, developing logical and critical thinking from socially constructed knowledge [12]. For this, according to Freire [9], it is important to understand the context and needs of communities and people so that they can develop, in addition to knowledge and a problematization of reality, the critical awareness of being embedded in social reality and being able to transform it. In this sense, the implementation of computational thinking, as defined by the BNCC [7], includes the development of skills related to the critical use of a technology, both directly and transversally. Therefore, when dealing with robotics in education, two aspects are considered: robotics as a learning object or as a learning tool [1, 7]. The first strand, robotics as a learning object, maintains its focus on the study of robotics as a distinct subject; that is, elaborated educational activities will be developed

1 PET (Tutorial Education Program) groups are formed by university students under the tutelage
of professors, with the objective of offering students the opportunity to carry out activities within the pillars of research, teaching and extension, aiming at a differentiated training of their members.
and directed to approach real problems, in which students may be involved in the construction of robots, code, and artificial intelligence as learning objects, in addition to the development of programming logic [1, 7]. However, even with the focus on robotics as a learning object, the introduction of this autonomous knowledge contributes to the development of skills in other areas, since robotics also addresses content in communication, mathematics, and science, among others. In addition, it is possible to notice the development of collaborative, creative, abstract problem-solving, and critical thinking skills that may be needed in other areas of teaching [1]. As a tool for learning, robotics is used to learn and teach other school subjects. In this context, robotics is developed as an interdisciplinary learning activity based on cross-cutting projects established mainly in science, mathematics, computer science, and technology, offering support and a complement to learning in general [1]. This research appropriates robotics as a learning object, since the objective of this work is to analyze the teaching of robotics in a playful way through computational artifacts, paying attention to the principles of interaction design. In addition, it is important to emphasize that this research considers digital inclusion necessary in order to achieve the expected learning effects.

2.1 Digital Inclusion for Social Transformation

The way people interact with technology has changed over the decades, but what is visible in public education policy, especially in Latin American countries, is the digital divide. According to Silveira [15], this phenomenon is related to unequal access to digital technology: a portion of the socio-economically vulnerable population has precarious access to these technologies and is consequently excluded from a new social dynamic characterized by the extensive use of digital technology, creating a reality commonly referred to as digital exclusion. For Bonilla and Pretto [6], digital exclusion and inclusion go far beyond the apparent duality between those who have access to digital technology (the included) and those who do not (the excluded). According to the authors, this framing is problematic because it ignores the complexity of the problem and produces initiatives to combat digital exclusion that are disconnected from the social aspects rooted in its very genesis [6]. The process of digital inclusion goes beyond statistics, that is, beyond the technical and administrative measures aimed at increasing the number of people with access to digital technologies; it also demands a qualitative perspective that allows a critical analysis of the quality of this access. For digital technologies to be a means for social transformation, the digital inclusion actions carried out in schools must not be limited to technical qualification, but must also address issues of the social dimension that are present in the daily lives of school communities. Because of this, the present research adopted as the basis for its methodological approach the epistemology of social constructionism in the Latin American perspective of Montero [10]. The central point of this perspective, especially as applied to the school community, is to understand that knowledge is constituted as a social construction, that is, it essentially comes from the social relations that are established contextually through history, which in turn are
constituted from the interaction dynamics between subjects, as social actors, and their own realities [10]. According to Montero [10], everyone involved in the learning process in a school community is a social actor and thus a knowledge builder. Therefore, this approach presupposes an inclination towards methodological actions of a participatory nature [10], with an emphasis on a dialogic relationship which, in contrast to the banking view of education, not only problematizes but also implies a practice toward the transformation of reality that best meets the needs and aspirations of the school community [9]. In the context of this research, this practice translates into digital inclusion in the learning process of children through educational robotics, based on a critical construction of the meaning of technology, especially with regard to issues of access, use, distribution, creation, and remodeling. This new construction regarding robotics in education aims at the autonomy and freedom of school communities in the process of appropriating technologies, which, according to Almeida and Riccio [2], takes shape in technological developments with open and free access. For the authors, by having free access to digital technologies, the school communities involved gain greater control over the production and use of these technologies, allowing this knowledge to be shared more easily with other learning groups, thus building a more decentralized and democratic network for transmitting and creating information [2]. As a result, considering inclusive educational robotics entails considering open and free2 educational robotics. In addition to these two characteristics, it is also considered important that educational robotics for children actively involves them in the teaching and learning process, since this constitutes a process of collective construction. Thus, it is important that robotics in children’s education be approached in a playful way, since play, according to Vygotsky [17], is an essential part of child development. It is in the act of playing that children stimulate their process of creation and imagination, which marks the beginning of the development of their ability to work with abstract thinking and to socialize with other people and their environment [17]. To achieve a ludic and inclusive character in educational robotics, it is preferable to have a perspective that is not centered only on the artifact and its eventual conception, but in which the design process that makes it possible is articulated within the previously elucidated framework.

2.2 Interaction Design Principles

As the objective of the research is to think of an artifact to help teach robotics to children in a playful and inclusive way, the evaluation and construction process was based on the principles of Interaction Design (ID) proposed by Rogers, Sharp and Preece [13], namely: visibility, feedback, constraints, consistency and affordance. The first principle, visibility, as the name suggests, refers to how the functionality of the system is presented to the user. According to the authors [13], it is important to be

2 The concept of “open access” for Swan [16]: “Must be available for download, copy, distribu-
tion, print, search or link to the full texts of these papers, crawl for indexing, pass them as data to a software, or use for any other lawful purpose”.
able to easily identify the functionality provided by a particular interface. This means considering how best to arrange features and how to present them visually to make the system more intuitive to use. Related to this principle, there is also feedback, which is the ability of the system to respond appropriately to an interaction made by the user; this response can be “[…] audio, tactile, verbal, visual, and combinations of these” (p. 28) [13]. The principle of constraints “[…] refers to determining ways of restricting the kinds of user interaction that can take place at a given moment” (p. 28) [13]. This principle therefore prevents users from making mistakes when using the interface, which could ruin the interaction process or damage the artifact itself. In this sense, creating a consistent design is very important, as it prevents the user from having to relearn how to interact with an interface to perform common tasks, which would end up making the interaction process slower and less intuitive. This is the principle of consistency, which is defined as “[…] designing interfaces to have similar operations and use similar elements for achieving similar tasks” (p. 29) [13]. Finally, affordance “[…] is a term used to refer to an attribute of an object that allows people to know how to use it” (p. 30) [13]; that is, it is a principle that refers to the characteristics of interfaces that give hints on how a given task should be performed, for example, “[…] a door handle affords pulling, a cup handle affords grasping, and a mouse button affords pushing” (p. 30) [13]. According to the authors, these principles do not dictate what an interface, whether physical or digital, should or should not have; rather, they support the artifact development team with criteria for analyzing which functionalities are needed in the interfaces in order to ensure better interaction with the user [13].
3 Roboquedo

As already mentioned, Roboquedo was developed with the aim of introducing logical thinking and programming into early childhood education in an inclusive and playful way [5, 17]. Roboquedo consists of the following elements: a robot (which can vary in format, as described in more detail in Sect. 3.1), a physical map with activities for children to perform, and a directional controller that currently takes two forms, one tangible and one digital, presented throughout this section (see Fig. 1). The purpose of the main activity is for children to drive the robot along the route suggested on the map via a physical interface or a mobile device. As in a board game, the robot starts at the first square of the map and moves on after the dice is rolled: the rolled value indicates the number of squares to be visited, and the participating group provides direction commands through the controls. This cycle continues until the robot reaches the last square, having passed through the entire map as on a board; a sketch of this loop is given below. This activity facilitates learning concepts such as command, data input and output, processing, and laterality; collaboration/cooperation can also be worked on, as it involves children in a social exchange of knowledge and experiences [10]. The following subsections provide details for each element of Roboquedo [5].
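As a concrete illustration of this turn-taking dynamic, the following minimal C++ sketch simulates the loop of rolling the dice and advancing square by square. It assumes a standard six-sided die and the 13 squares described in Sect. 3.2; the sketch is illustrative only and is not part of the Roboquedo artifact, where the children themselves steer the robot.

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>

// Illustrative Roboquedo game loop: roll, drive the robot one square at a
// time, perform the square's action, repeat until the last square.
int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    const int lastSquare = 13;               // "Square 13: Home (arrival)"
    int position = 1;                        // "Square 1: Start"
    while (position < lastSquare) {
        int roll = std::rand() % 6 + 1;      // assumed six-sided die
        std::cout << "Rolled " << roll << "\n";
        for (int step = 0; step < roll && position < lastSquare; ++step) {
            ++position;                      // group issues direction commands
            std::cout << "Robot driven to square " << position << "\n";
        }
        std::cout << "Perform the action of square " << position << "\n";
    }
    return 0;
}
```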
Fig. 1. Dynamics of Roboquedo.
3.1 The Robot

The robot was originally designed to look like a turtle, as shown in Fig. 2a, but inside it houses all the electronics and mechanics, consisting of: an Arduino board, two motors, a motor driver board, a Bluetooth module, three wheels, and an acrylic structure to hold the components [5].
Fig. 2. Robot versions.
The program running on the Arduino decodes the command (forward, backward, turn left, or turn right) received through the Bluetooth module and performs the action requested by either control: the tangible table (touchable interface) or the mobile device. After decoding the received command, the Arduino drives the motors through the driver board according to the direction requested by the interactive interface, as shown in Sect. 3.3. For both interfaces, the motors rotate at programmable speeds. These technical features support the consistency principle [13] through the movements performed in response to the command requested on the input device. It is important to point out that Arduino was chosen as Roboquedo's electronics platform so that its programming can be accessed and made available to everyone free of charge.
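To illustrate the decode-and-drive cycle just described, here is a minimal Arduino-style sketch. It assumes single-character commands arriving over a serial Bluetooth module (the convention used by common "Bluetooth RC Controller" apps) and an H-bridge motor driver; the pin numbers, command characters, and speed value are illustrative assumptions, not Roboquedo's actual firmware.

```cpp
// Decode one-character drive commands received over Bluetooth and drive
// two motors through an H-bridge. All pins and characters are assumed.
const int LEFT_FWD = 5, LEFT_BWD = 6;     // PWM-capable H-bridge inputs
const int RIGHT_FWD = 9, RIGHT_BWD = 10;
const int SPEED_PWM = 180;                // programmable speed (0-255)

void drive(int lf, int lb, int rf, int rb) {
  analogWrite(LEFT_FWD, lf);  analogWrite(LEFT_BWD, lb);
  analogWrite(RIGHT_FWD, rf); analogWrite(RIGHT_BWD, rb);
}

void setup() {
  Serial.begin(9600);                     // Bluetooth module on hardware serial
  pinMode(LEFT_FWD, OUTPUT);  pinMode(LEFT_BWD, OUTPUT);
  pinMode(RIGHT_FWD, OUTPUT); pinMode(RIGHT_BWD, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    switch (Serial.read()) {              // one character per command
      case 'F': drive(SPEED_PWM, 0, SPEED_PWM, 0); break;  // forward
      case 'B': drive(0, SPEED_PWM, 0, SPEED_PWM); break;  // backward
      case 'L': drive(0, 0, SPEED_PWM, 0); break;          // turn left
      case 'R': drive(SPEED_PWM, 0, 0, 0); break;          // turn right
      case 'S': drive(0, 0, 0, 0); break;                  // stop
    }
  }
}
```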
From the first version, a second one was built by 3D printing (Fig. 2b). This version has a design inspired by platform games popular among children, with the intention that, by having an appearance resembling a toy, the robot will stimulate the imagination [17]. In addition, a third version was created using styrofoam and paint for decoration (Fig. 2c). Unlike the previous versions, this one was designed without the characteristics of a turtle, being closer to a fictional robot.

3.2 The Map

The map on which the robot travels is a physical square mat, 2.5 m on a side, allowing the robot to move across the board while the activities develop. Figure 3 shows the design of this map/mat.
Fig. 3. Map in the form of a mat to develop the robot’s actions.
The map draws on board-game ideas and presents squares. Each square suggests, in simple and representative language, an action to be performed by the children: list animals, dance, spin the turtle, and put the turtle in the pond. After passing through all the squares (12 in total), the turtle must reach the final destination, its house. The squares are: "Square 1: Start"; "Square 2: Spin"; "Square 3: List names of forest animals"; "Square 4: Turtle Dance"; "Square 5: Go back one square"; "Square 6: Watch out for the alligator"; "Square 7: Alligator"; "Square 8: List the names of aquatic animals"; "Square 9: Take the turtle for a bath"; "Square 10: Try to cross the alligator path again"; "Square 11: List names of flying animals"; "Square 12: Take the turtle to the picnic"; and "Square 13: Home (arrival)". The principles of consistency and visibility [13] can be seen in the map, which illustrates the squares from the beginning to the end of the board. In each square it is also possible to find
stimuli proposed for the children: for example, the children's own movements (dancing) or the movements they perform with the robot (placing the robot in the lake indicated on the map). The turtle performs the action requested on the data input device on the map, based on the feedback principle [13].

3.3 Tangible Interface and Digital Interface

Two forms were designed to control the robot: a tangible one and a digital one. The digital interface is provided by the free application "Bluetooth RC Controller" (Fig. 4) for Android devices, available on the Play Store, and can be accessed from any mobile device with this operating system. It communicates with the Arduino board via Bluetooth.
Fig. 4. Application interface that controls the robot.
The interface is made up of several buttons, but only the arrow-shaped buttons were programmed to control the robot. Each arrow pressed indicates the direction the robot should take on the map, as shown in Fig. 4. The interface embodies the visibility principle [13] by using directional arrows that indicate the direction the robot will take on the map. Furthermore, it is also supported by the feedback principle [13], as the robot immediately executes the action defined by the selected arrow. The visual interface, without a predominance of text and with indications (drawings of arrows representing its actions) to help the robot navigate the map, is in line with the preliminary concepts of computational thinking [7] for children and also with the possibility of exploring the rules of the game as metaphors for the rules of everyday life [17]. The second control option is performed on a tangible table following the principles of affordance and feedback [13]. The tactile interface resembles a docking toy, with holes for placing directional arrows. The table is made of acrylic (see Fig. 5) with rounded corners according to safety standards [4]. This material was used to provide a view of the inside of the data entry device [5], offering an opportunity to address minimal knowledge of electronic operation. The artifact is intended to foster learning skills [1]. Part of this consists of defining a step (abstraction) and understanding and using a sequence of steps and instructions
Fig. 5. Tangible interface for controlling the turtle robot [5].
(algorithms) to perform an action, that is, to operate the robot so that it moves on the map. Children can also recognize, through the robot, the parts needed to assemble (decompose) it, such as the wheels, the casing, etc. In addition, activities such as the "List of 5 land animals" present on the map were designed to enable children to identify patterns in animals and separate them into groups (pattern recognition). These developments occur because Roboquedo's content was designed to provide digital inclusion for children, developing the computational thinking concepts (abstraction, algorithms, decomposition, and pattern recognition) addressed in the reference curriculum in technology and computing and in the BNCC [7, 12], since, according to the BNCC [7], learning to use technologies must be guaranteed. However, in order to build and evaluate Roboquedo and ensure that it has the expected performance and characteristics, that is, that it is playful and inclusive, it is important that the school community also participate in this process in order to guarantee a collective construction of the educational robotics [10]. Therefore, the research was devoted to facilitating interaction tests involving the research team and the teachers in conjunction with the children's experience, building on the previously presented principles of interaction design.
4 Paths Followed in the Interaction Tests

Based on preliminary studies, the work was segmented into four stages [5]. First, the context of the study was defined and, based on this definition, the requirements were listed so that an alternative design for Roboquedo could be created. After creating a concrete and interactive digital version, the team assessed the suitability, safety, and resilience of the artifact. It should be noted that in previous research Roboquedo had no interaction with a target audience [5], only demonstrations at public events. Thus, starting from the last point of those studies [5], this article proposes the continuation of the research, which consists of evaluating the project according to the principles of interaction design and considering the participation of the school community in this process, as previously discussed. Thus, uniting the principles of interaction design with social constructionism [10] and the
dialogic education proposed by Freire [9], this research sets out to evaluate an artifact that is playful and provides digital inclusion [15] through robotics as learning, meeting the needs expressed by the community [9]. To achieve this, two interaction tests were carried out with Roboquedo in person at a partner institution during 2022, intended to investigate how the aforementioned interaction design principles enable the promotion of inclusive, participatory, and open robotics learning. These interaction trials generated field notes, which formed the basis for the analysis presented here. The categories of analysis were based on the principles of interaction design [13]. The two interaction tests were carried out with two primary school classes under the guidance of their respective teachers. The first class consisted of 20 children aged 6 to 7 years; the second class had 23 children in the same age group. Each test lasted two hours and took place in a space separate from the classroom. Each test yielded reports containing the children's use experiences, as well as reports from the teachers of these classes, to evaluate the alternatives developed because, as the authors point out: "The activities to discover requirements, design alternatives, build prototypes, and evaluate them are intertwined: alternatives are evaluated through the prototypes, and the results are fed back into further design or to identify alternative requirements" (p. 50) [13]. Therefore, based on the reports and observations of user experiences, issues of inclusion and playfulness of the artifact are discussed in the results obtained through the evaluation of the interaction design principles.
5 Results

This section presents the results obtained with the two interaction tests3 and their triangulation with the studies on the principles of interaction design [13], playfulness [17], robotics as learning [1], digital inclusion [3], and participation and social constructionism [10], based on Sects. 2 and 4. The objective of these tests was to evaluate the principles of interaction design so as to examine the intersection of these issues and how they affect Roboquedo and the people involved with the artifact.

5.1 First Interaction Test

The objective of the first interaction test was to observe, considering the teacher's conduct, the children's interaction with the artifact from the perspective of the principles of interaction design [13]. Therefore, in this first test, the children were able to interact with the robot, the map, and the robot's control, which in this case was the digital interface on a mobile device (tablet). In addition, for the activity to be carried out in an organized way and to give value to the data obtained, the children were divided into pairs: one to throw the dice and the other to control the robot, taking turns with the tasks as the dynamics progressed.

3 Research project approved by the Research Ethics Committee under number CAAE 35555420.7.0000.5547. Implementation period: 2020 to 2022.
As for the interface, the test revealed the need for the principle of constraints [13]: the interface contains buttons and information that are not necessary for the dynamics and therefore confuse its usability. Some children tried to control the robot using the arrows in the middle of the interface (Fig. 4) and the other function buttons that seemed available but were not programmed. Even so, some controlled the robot intuitively, owing to its affordance [13], and the vast majority showed enthusiasm and willingness to take their turn, proving it to be a playful activity [17]. More details will be discussed in Sect. 5.3. The principle of affordance [13] also appeared in "Square 2: Spin", as the children found it difficult to understand its meaning and how to make the robot turn. However, there was still constant participation in the activities proposed in each square on the carpet, with the children performing the actions together and making themselves available to help each other, demonstrating the social relationships addressed by social constructionism [10]. At first glance, the robot drew attention because it looked like characters from popular platform games; however, due to this similarity and the principle of consistency [13], some children believed that the dynamics would have some relationship with the games and thus also expected the robot to have other functions, which can generate some initial frustration. Some adjustments to the robot were necessary during the dynamics, and it was noted that the maintenance generated curiosity in the children regarding the assembly of the robot, such as its interior, electronics, functioning, and programming, demonstrating robotics as learning [1] and development of the concept of decomposition in computational thinking [7, 12]. Based on the teacher's reflections on the first interaction test, a second test was proposed in which the children, guided by the teacher, could participate in the creative process of making changes to improve the artifact.

5.2 Second Interaction Test

To organize the second interaction trial and allow all children to participate in the use of Roboquedo, the activity was divided into three stations with three groups: Group 1, Group 2, and Group 3. The children spent fifteen minutes at each station; when the time was up, they switched stations, as shown in Fig. 6a. This organization into groups was intended to further encourage the participatory process, not only among the children but also between the children and the teachers [10], as well as the research group, so that the collected observations could start from a horizontal and dialogic relationship [9] and thus better ensure that everyone involved could build knowledge during the test [10]. The first station (Fig. 6b) consisted of using Roboquedo. Here, the procedure was the same as in the first interaction test, but the third version of the robot was used; because its design was not similar to that of a turtle, the children showed confusion when a square on the map referred directly to this animal, as in "Square 9: Take the turtle for a bath", generating conflict with the principle of consistency [13].
Fig. 6. Division of stations.
Another observation was that some children were already familiar with the digital control, while others did not know how to handle the tablet. On investigation, it became clear that children who already had experience with some type of technology (such as toys that use a remote control) found it easier to use the control. From the perspective of Silveira [15], this situation gives an indication of how digital exclusion can be reflected in the classroom: since there is inequality in access to certain technologies, some children end up having less difficulty than others in learning to interact with the artifact. As a result, it is clear that considering participatory activities can be a good way to seek a more inclusive educational robotics. Meanwhile, at station two (Fig. 6c), the children were encouraged to propose, through drawings, improvements to the digital interface. Various materials were made available for the children to explore their creativity, such as colored pencils, markers, and crayons, and thus it was possible to analyze the principle of visibility [13]. When explaining the activity, the teacher reinforced that the drawing needed to have up, down, and side arrows (to make the robot walk), in accordance with affordance [13]; beyond that, children could add whatever they wanted: ornaments, colors, buttons with new features, etc. At this station, a great difference was observed between the groups that had already passed through station 1 (Groups 2 and 3) and the one that had not (Group 1). The materials prepared by the children who had already handled the tablet interface were similar to the original control, reflecting the principle of consistency [13]. In addition, for some teams, a paper print of the original interface was also made available as a reference; however, this considerably influenced the children, who tried to copy the printed elements.
Finally, at the last station (Fig. 6d), the objective was to stimulate the construction of a character in modeling clay to evaluate new alternatives that could replace the turtle in Roboquedo. Upon arriving at the station, the children expressed excitement at seeing the clay, and creative materials were produced (see Fig. 7), such as an ice cat, a fire dog, a fairy, a turtle, a koala head, a rainbow, and a pig with its baby, among others. At this station, it was possible to observe the principle of visibility [13] and the participation of children bringing their realities to the modeling clay [17] and socializing with each other: in addition to producing their own material, the children talked to each other, gave tips on the work of others, and shared part of the clay they used. This socialization based on the exchange of knowledge of their contexts is a good indicator of a participatory social construction, as Montero points out [10]. The children also discussed personal matters about travel and their respective families, demonstrating an improvement in the relationship and in the didactic contents based on the knowledge they already had [7, 9]. In general, the activity demonstrated the children's creativity and communication, which is also seen as one of the points to be developed through robotics as learning [1].
Fig. 7. Materials produced at the station.
Group 1 started at station 2 as its first workshop activity, and the sheet with the printed interface remained on the table throughout the activity with this group. At first, the group was more reserved and shy about drawing, asking for permission before adding anything to the drawing. Afterwards, due to the playful nature of the activity [17], the children became more relaxed and excited, giving ideas for robot functionality [10]. After station 2, the children went to station 3, where, after listening to the proposal for the activity and resolving some doubts, they began to produce, obtaining a very creative result [8]. Finally, Group 1 ended its rotation at station 1, carrying out activities with Roboquedo for the development of computational thinking [7]. Group 2 started its activities at station 1. In this group, it was noted that the tablet was not appropriate because it was larger than the children's hands and the device sometimes fell. This activity also indicated that the children became distracted while counting the turns in "Square 2: Spin", introducing noise into the feedback
principle [13]. Afterwards, at station 2, Group 2 was assigned to draw; however, before their arrival, the printed interface was removed so that it would not influence them. The children in the group talked a lot about personal matters, family members, and so on while drawing, and this was reflected in the drawings (with things that reminded them of their families, such as the soccer team they support), demonstrating that drawing is a playful activity that develops children's social skills [14]. Like Group 1, Group 2 started out more reserved but gradually loosened up over the course of the activity. Each child quite often asked for the opinion of the others in the group and of the instructors, and the presence of collective construction could be noted [10]. Some ideas for the robot emerged from this group, such as "making cereal," "autopilot," and "it could talk," encouraging the participation of children in improving the artifact [9, 10]. Group 3 started its activities at station 3, the clay station, as shown in Fig. 7. While at this station, the children felt the desire to make various models, not necessarily related to the theme of the station, although some did not finish what they started.

5.3 Assessments Carried Out

As previously discussed in Sect. 3, Roboquedo presented some ID principles, including consistency and visibility in the map [13], consistency and feedback in the robot [13], and visibility and affordance in the interface [13]. Even so, the analysis of the tests makes it possible to reinforce the presence of these principles and to identify improvements with regard to other principles. In the two interaction tests, the children demonstrated that they understood that the robot was controlled by the buttons on the digital interface, thus achieving the principle of affordance [13]. However, the buttons showed the need for the constraint principle [13] to avoid ambiguity in attempts to control the robot through them. Another situation concerns the consistency principle [13], as the children expected the robot to perform the same actions as in the virtual games that inspired it. In addition, the third version of the robot was not aligned with the context of the map, generating conflicts of understanding. However, it was possible to perceive that the map itself embodies the principle of consistency [13], being metaphorically attributed to a board (throwing the dice, moving squares, and performing the action of each square) and not presenting difficulties in its execution rules. Another factor that gives the participatory dynamics a ludic and dialogic aspect [9, 10] is the curiosity it awakens [8]. Even with the electronic parts of the robot covered, the children sought to know and understand its operation, reflecting the affordance principle [13], and it was often necessary to remove the exterior of the robot, as shown in Fig. 8, fostering robotics as learning through concepts of decomposition [1, 7, 12]. In the second interaction test, it was possible to verify that the principles of interaction design – visibility, feedback, and affordance – were obeyed in the digital interface. These principles were reached when considering the materials produced by the children. In these materials, the indications demonstrated the principle of visibility [13], as the drawings contained buttons that explicitly presented their functions. In addition, it was possible to notice suggestions of sound and visual feedback, which is related to the
Fig. 8. Children investigating the inside of the robot.
principle of feedback [13]. In the suggestions, clarity was observed in each suggested function, such as directional arrows on the buttons, as expected under affordance [13]. The materials produced by the children focused on the action of controlling the robot. In addition, many of the materials were made with several colors, which may suggest the adoption of a chromatic palette different from the current one, with different colors for each type of command and for the interface as a whole, following the principle of consistency [13].
6 Final Considerations

The results of both interaction trials indicate that Roboquedo, with its different robot versions, can be appropriate in the educational context. Still, some suggestions for improvement need to be considered. Examples of perceived improvements, based on the principles of interaction design, are: a more attractive digital interface with bright colors, as suggested by the children in the activity; the removal of obsolete buttons; and the application of constraints (principle of constraints) to avoid ambiguities involving the combined use of buttons. Improvements are also needed in the physical use of the digital interface by the children, considering another device that is smaller and more comfortable for children to handle. For the robot, the need for changes based on the visibility principle was noted; however, any change in appearance needs to be consistent with the map, meeting the principle of consistency. Also, by the principle of feedback, the children pointed out the need for sound effects from the robot to indicate certain actions. The children also showed confusion in counting the spins in "Square 2: Spin" (affordance); thus, it is necessary to implement a spin counter in a more intuitive format so that the activity is even more playful. Moreover, in addition to the ID principles, some improvements to the mechanical part of the robot will be necessary. The material used during the dynamics proved fragile, since in some situations it was necessary to stop the dynamics to make adjustments to the robot, which may compromise the proposal of a playful artifact for learning.
Given these results, future activities of the project should involve implementing the improvements observed and analyzed here and a new cycle of interaction tests to evaluate the artifact. Beyond the artifact evaluation aspect, it is necessary to think about educational robotics from a digital inclusion perspective, since there is inequality in access to technologies, especially regarding the quality of this access, which can become an obstacle in children's learning process.

Acknowledgements. To the Tutorial Education Program – Connections of Knowledge – Ministry of Education (MEC), Secretariat of Higher Education (SESu), and Secretariat of Continuing Education, Literacy and Diversity (SECAD).
References
1. Alimisis, D., Kynigos, C.: Constructionism and robotics in education. In: Teacher Education on Robotic-Enhanced Constructivist Pedagogical Methods, pp. 11–26 (2009)
2. de Almeida, D., Riccio, N.C.R.: Autonomia, liberdade e software livre: algumas reflexões. Inclusão digital (2011)
3. Almeida, P.: Educação lúdica: técnicas e jogos pedagógicos. São Paulo (2003)
4. Associação Brasileira de Normas Técnicas: NBR NM 300-1: Segurança de Brinquedos (2004)
5. Barros, G., et al.: Learning interactions: robotics supporting the classroom. In: Stephanidis, C., Antona, M., Ntoa, S. (eds.) HCI International 2021 - Posters. CCIS, vol. 1421, pp. 3–10. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78645-8_1
6. Bonilla, M.H.S., De Luca, N.: Inclusão digital: polêmica contemporânea. EDUFBA (2011). https://doi.org/10.7476/9788523212063
7. Brasil. Ministério da Educação (MEC): Secretaria de Educação Básica. Base Nacional Comum Curricular (2017). http://basenacionalcomum.mec.gov.br. Accessed 07 Jan 2023
8. Dallabona, S.R., Mendes, S.M.S.: O lúdico na educação infantil: jogar, brincar, uma forma de educar. Revista de divulgação técnico-científica do ICPG 1(4), 107–112 (2004)
9. Freire, P.: Pedagogia do oprimido. rev. e atual. Paz e Terra, Rio de Janeiro, pp. 95–101 (2011)
10. Montero, M.: Introducción a la psicología comunitaria: Desarrollo, conceptos y procesos. Paidós (2004)
11. Projeto de Pesquisa Aprovado no Comitê de Ética em Pesquisa sob número CAAE 35555420.7.0000.5547. Período de realização: 2020 a 2022
12. Raabe, A., Brackmann, C.P., Campos, F.R.: Currículo de referência em tecnologia e computação: da educação infantil ao ensino fundamental. Centro de Inovação para a Educação Básica-CIEB (2018)
13. Preece, J., Rogers, Y., Sharp, H.: Interaction Design: Beyond Human-Computer Interaction. Wiley, Indianapolis (2011)
14. Santos, S.M.P.: O lúdico na formação do educador. Vozes, Petrópolis (1997)
15. Silveira, S.A.: Exclusão digital: a miséria na era da informação. Editora Fundação Perseu Abramo (2001)
16. Swan, A.: Policy guidelines for the development and promotion of open access. United Nations Educational, Scientific and Cultural Organization, Paris, France (2012)
17. Vygotsky, L.S.: Play and its role in the mental development of the child. Sov. Psychol. 5(3), 6–18 (1967)
Authoring Robot Presentation Behavior for Promoting Self-review

Kenyu Ito(B) and Akihiro Kashihara

The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo, Japan
[email protected]
Abstract. Presentation is important for researchers to present their work. Researchers must consider not only what to present but also how to present it with nonverbal behavior. Also, researchers are required to review their presentations in advance. We focus on self-review. The most common way for researchers to self-review is to make a video of the presentation and then check it out. However, they would have quite uncomfortable feelings due to their looks and voice on the video. We have accordingly developed a system that uses a robot as a presentation avatar to reproduce the presentation researchers make. The system reduces psychological resistance and helps learners review the appropriateness of the nonverbal behavior they have conducted in a model-based manner. On the other hand, it does not allow them to become aware of alternative nonverbal behavior for achieving the same presentation intention. In order to allow learners to become aware of alternative nonverbal behavior, we propose an authoring environment where learners can review and author (including reconstruct) nonverbal behavior in their presentation. This paper also reports a case study with the system whose purpose was to ascertain whether it enhances learners' awareness of alternative nonverbal behavior.

Keywords: Presentation · Nonverbal Behavior · Self-Review · Robot · Authoring
1 Introduction

Presentation is one of the most important activities for researchers. They are often required to review their presentation in advance, which is necessary to refine it. In the presentation, researchers must consider not only what to present but also how to present it with nonverbal behavior. In particular, nonverbal behavior should be conducted according to presentation intention [1]. It is consequently important to review nonverbal behavior. There are two types of reviews: peer review and self-review. In this study, we focus on self-review. The most common way for researchers as learners to self-review is to make a video of the presentation, and then to check it out. However, they would have uncomfortable feelings due to their looks and voice on the video [2]. The psychological resistance prevents them from self-reviewing. We have accordingly developed a system that uses a
robot as a presentation avatar to reproduce the presentation learners make [3, 4]. We have also designed a presentation behavior model and provided a checklist including points to be reviewed [5]. The results of the case study with the system suggest that it allows learners to reduce their psychological resistance to self-review and that the checklist could promote their awareness of points to be modified in their presentation behavior. On the other hand, we have ascertained that the system allows learners to check the appropriateness of the nonverbal behavior they have conducted but does not allow them to become aware of alternative nonverbal behavior for achieving the same presentation intention. Such awareness contributes to improving nonverbal behavior in the presentation. In order to resolve this problem, it is necessary to provide learners with opportunities to consider alternative nonverbal behavior and to improve self-review skills. The main issue addressed in this paper is how to promote awareness of alternative nonverbal behavior in learners' presentations. Our approach to this issue is to allow learners not only to review but also to reconstruct/author the nonverbal behavior in their presentation that the robot reproduces based on the presentation behavior model. In this paper, we demonstrate a self-review system with a robot we have been developing. In this system, the robot reproduces learners' presentations, and they reconstruct/author the nonverbal behavior conducted by the robot. This paper also reports a case study with the system whose purpose was to ascertain whether reconstruction and authoring of nonverbal behavior contribute to enhancing awareness of alternative nonverbal behavior. The results suggest the possibility that learners become aware of alternative nonverbal behavior in their presentation.
2 Presentation Self-review

2.1 Self-review

Presentations generally consist of three components: slide contents, oral explanations, and nonverbal behaviors. In reviewing a presentation, it is important to check all of them. In particular, nonverbal behavior, including gesture, gaze, and paralanguage, is vital for transmitting the slide contents with oral explanations. It must also be conducted according to the presentation intention. It is accordingly essential to review nonverbal behavior [1]. In this work, we focus on how to help unskilled researchers as learners self-review their presentations. In labs, researchers can get a review of their presentation from peers including lab members, which is called peer review. Conducting self-review before peer review allows learners to compare self-review results with peer review results and to notice points to be modified that could go unnoticed in their self-review. This contributes to improving their review skill. The most common way for learners to self-review is to make a video of the presentation and then to check it out. However, they would have quite uncomfortable feelings due to their looks and voice on the video [2]. The psychological resistance prevents them from self-reviewing. In addition, it is difficult for learners to self-review due to their lack of knowledge about what should be reviewed.
Fig. 1. Presentation behavior model
2.2 Previous Work

In order to reduce psychological resistance and improve skills in self-review, we developed a self-review system, which uses a robot as the presentation avatar to reproduce the presentation learners make according to the presentation behavior model shown in Fig. 1 [3, 4]. We also designed a checklist including points to be reviewed according to this model [5]. The presentation behavior model represents the correspondence of presentation intentions to the nonverbal behavior for achieving them. The model has three layers: presentation intention, presentation behavior category, and components of presentation behavior. Each behavior category represents nonverbal behavior achieving its corresponding intention and has several basic components for composing nonverbal behavior. The results of the case study in our previous work suggested that the system reduced psychological resistance in self-review and that the checklist promoted awareness of points to be modified in the presentation. Although the system helps learners review the appropriateness of the nonverbal behavior they have conducted in a model-based manner, it does not allow them to become aware of alternative nonverbal behavior for achieving the same presentation intention. Such awareness contributes to improving nonverbal behavior. In this work, we address this issue as follows.
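To make the three-layer structure concrete, the following is an illustrative C++ data sketch relating intentions, behavior categories, and components; the type names and example values are assumptions drawn from the paper's description, not the authors' implementation.

```cpp
#include <string>
#include <vector>

// Three-layer presentation behavior model (illustrative):
// intention -> behavior categories -> basic components.
struct BehaviorCategory {
    std::string name;                     // e.g., "deictic gesture"
    std::vector<std::string> components;  // e.g., "face direction", "pitch"
};

struct PresentationIntention {
    std::string name;                     // e.g., "attention to slide content"
    std::vector<BehaviorCategory> categories;
};

int main() {
    PresentationIntention attention{
        "attention to slide content",
        {{"deictic gesture", {"pointing at the slide"}},
         {"gaze", {"face direction"}},
         {"paralanguage", {"voice volume", "pitch"}}}};
    (void)attention;  // a real system would map this onto robot motions
    return 0;
}
```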
2.3 Purpose

In order to allow learners to become aware of alternative nonverbal behavior and to improve their self-review skills, we propose an authoring environment where learners can review and author (including reconstruct) the nonverbal behavior in their presentation that the robot reproduces based on the presentation behavior model. We expect that repeatedly authoring and reviewing the nonverbal behavior conducted by the robot allows learners to enhance their awareness of alternative nonverbal behavior achieving the same intentions as the behavior they originally conducted.
3 Nonverbal Behavior Authoring System

3.1 Framework

Figure 2 shows the framework of the nonverbal behavior authoring system. It has three phases: recording, reproduction, and authoring. In phase 1, the system requires learners to make a presentation with their presentation documents (P-documents) including slides. In addition, learners need to decide where and with what intention to perform nonverbal behavior on each slide before they make a presentation based on the presentation behavior model. We currently restrict presentation intentions to detailed ones related to "attention to oral/slide content". We also restrict nonverbal behavior to that obtained from the corresponding components involving gestures (face direction, deictic gesture) and paralanguage (voice volume, pitch). The system captures the presentation data, including P-documents, motion, and voice, via a PC and Microsoft Kinect v2.
Fig. 2. Framework for authoring nonverbal behavior with robot
Fig. 3. Checklist for self-review
In phase 2, the system uses a robot called NAO as a presentation avatar, and the robot reproduces the presentation based on the captured data. The robot reproduces the captured oral explanations with a higher pitch than the original, whose purpose is to reduce discomfort in self-review. In this self-review process, learners review the robot's presentation behavior with the checklist shown in Fig. 3, with which they can check the review points as to face direction, deictic pointing, and paralanguage. The system expects the learners to ascertain whether their nonverbal behavior is suitable/sufficient for achieving their presentation intentions. In phase 3, the system provides learners with opportunities to author the robot's presentation behavior based on the presentation behavior model. In this phase, if learners find unsuitable/insufficient points or alternative nonverbal behavior that would improve the robot's presentation (e.g., a learner thinks, "I can draw attention to the slide more by pointing at it."), they are allowed to reconstruct or author the robot's presentation behavior. They can then check the authored presentation behavior. The system also expects them to consider new nonverbal behavior as a result of authoring the robot's presentation.
4 Case Study

4.1 Purpose and Procedure

We conducted a case study whose purpose was to ascertain whether the nonverbal behavior authoring system enhances learners' awareness of alternative nonverbal behavior. The participants were 9 graduate and undergraduate students in informatics and engineering. We set two conditions for self-review: presentation self-review with authoring of robot presentation behavior (WA condition), and presentation self-review without robot presentation, using a recorded video of the presentation (VS condition). We asked the participants to conduct self-review twice, first under the VS condition and then under the WA condition.
Figure 4 shows the procedure of the case study. First, we explained to the participants how a presentation should be conducted with nonverbal behavior according to the presentation behavior model. The participants were then required to prepare their presentation with a P-document we provided in advance. They were also required to decide where and with what intention (attention control, attention awakening, or both) to perform nonverbal behavior on each slide of the presentation, and to fill out two forms representing the initial presentation scenario, as shown in Fig. 5.
Fig. 4. Procedure of the experiment
Fig. 5. Initial scenario and checklist for self-review
After preparing the presentation, the participants made the presentation, which was recorded. After an interval of a few minutes, they were required to self-review their
presentation twice, first under the VS condition and then under the WA condition. Under the VS condition, the participants were required to fill out the checklist shown in Fig. 5, indicating whether their nonverbal behavior was suitable/sufficient for achieving their presentation intentions and whether any nonverbal behavior was needed to further improve the presentation. If they wanted to add nonverbal behavior that was not included in the initial scenario, they were allowed to write it into the initial scenario in red. The scenario after nonverbal behavior was added from self-review under the VS condition is called the "VS scenario". Under the WA condition, the participants were allowed to use the user interface described in Sect. 4.2 to review their presentation reproduced by the robot along with the video. After that, they were allowed to author the robot's presentation behavior. They were also required to fill out the checklist shown in Fig. 5. The scenario after nonverbal behavior was added from self-review under the WA condition is called the "WA scenario". We then administered a self-review questionnaire that included 7 questions on a 5-point Likert scale at the end of each self-review (Table 1) and a post-questionnaire (Table 2). Table 1 shows the contents of the self-review questionnaire, consisting of questions about self-review. Table 2 shows the contents of the post-questionnaire, consisting of questions about self-review, impressions, and motivation. In addition, we asked the learners about the user interface in the post-questionnaire.

Table 1. Self-review questionnaire
Table 2. Post self-review questionnaire
Figure 6 shows the system used in the case study. We placed a monitor displaying the user interface in front of a participant, with the robot and a monitor displaying slide contents on the right. To evaluate the extent to which authoring of the robot presentation behavior enhanced awareness of nonverbal behavior for improving the presentation, we compared the initial, VS, and WA scenarios, and calculated the number of nonverbal behaviors included in each scenario.
Fig. 6. The system used in the case study
Fig. 7. User interface used in the experiment
4.2 User Interface

The system provides a user interface for learners. Figure 7 shows an overview of the user interface. It consists of three parts: a part for entering and displaying the oral text corresponding to the explanation of the shown slide (part 1), a part for selecting behavior categories and basic components for the nonverbal behavior used in the slide (part 2), and a part displaying a list of the nonverbal behavior used in the slide (part 3). Learners take the following steps to author their presentation using the user interface: select a slide; check the presentation using the video; enter the oral text of the slide presentation in part 1; and select a sentence or words and add nonverbal behavior in part 1, which involves selecting the corresponding behavior category and basic components in part 2. They can repeatedly reproduce their presentation with the robot according to the WA scenario and author the robot's presentation.

4.3 Result and Discussion

Figure 8 shows the average number of nonverbal behaviors that were evaluated as unsuitable/insufficient in the initial scenario. From the two-sided t-test, there were no significant differences between the average numbers in the WA and VS conditions (t(8) = 0.821, p = 0.435, n.s., d = 0.36), although self-review under the WA condition was more elaborate than self-review under the VS condition. This result suggests that authoring the robot presentation does not interfere with self-review and that it could enhance self-review of nonverbal behavior in the robot presentation.
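As background for the t(8) statistics reported here and below, a paired two-sided t-test over the n = 9 participants (an assumption consistent with the repeated-measures design; the paper does not spell out the computation) takes the form

\[ t = \frac{\bar{d}}{s_d/\sqrt{n}}, \qquad \bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i, \qquad s_d = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2}, \]

where \(d_i\) is the within-participant difference between the WA and VS conditions; the test then has \(n - 1 = 8\) degrees of freedom, which is where the "t(8)" notation comes from.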
Fig. 8. The average number of nonverbal behaviors that were decided to be performed in the initial scenario and evaluated as unsuitable/insufficient after each self-review.
Figure 9 shows the average number of nonverbal behaviors in each scenario, summarized by behavior category. From the two-sided t-test, there was a significant difference between the average numbers in the WA and VS scenarios for attention control (t(8) = 2.511, p = 0.012 < 0.05). Moreover, there was a tendency toward a significant difference between the average numbers in the WA and VS scenarios for attention awakening (t(8) = 2.193, p = 0.060 < 0.10). Figure 10 shows the average number of components of nonverbal behavior that were added or removed from the initial scenario to the WA/VS scenario. From the two-sided t-test, there was a significant difference between the average numbers in the WA and VS scenarios (t(8) = 3.049, p = 0.016 < 0.05). Figure 10 also shows the average number of presentation behavior categories that were added from the initial scenario to the WA/VS scenario. From the two-sided t-test, there was a significant difference between the average numbers in the WA and VS scenarios (t(8) = 4.395, p = 0.002 < 0.01). These results suggest that authoring the robot presentation promotes awareness of alternative nonverbal behavior for learners' presentations.
Fig. 9. The average number of nonverbal behaviors in each scenario, summarized by behavior category.
Fig. 10. Left: the average number of components of nonverbal behavior that were added or removed from the initial scenario. Right: the average number of presentation behavior categories that were added from the initial scenario.
5 Conclusion

In this work, we proposed and developed a nonverbal behavior authoring system with a robot, in which the robot reproduces learners' presentation behavior and learners author it for improvement. The results of the case study with the system suggest that authoring the robot presentation enhances awareness of alternative nonverbal behavior in learners' presentations. In the future, we plan to conduct another case study with more participants and to refine the nonverbal behavior authoring system. In addition, we are developing a user interface for authoring that is more intuitive and easier to operate. We hope that the new system will promote self-review, including improvement of nonverbal behavior.
References
1. Goto, M., Ishino, T., Kashihara, A.: Evaluating nonverbal behavior of presentation robot for promoting audiences' understanding. In: 82nd SIG on Advanced Learning Science and Technology (SIG-ALST), pp. 13–18 (2018). (in Japanese)
2. Holzman, P.S., Rousey, C.: The voice as a percept. J. Personal. Soc. Psychol. 4(1), 79–86 (1966)
3. Inazawa, K., Kashihara, A.: A presentation avatar for self-review. In: The 25th International Conference on Computers in Education (ICCE), pp. 345–354 (2017)
4. Seya, R., Kashihara, A.: Improving skill for self-reviewing presentation with robot. In: Proceedings of the 28th International Conference on Computers in Education (ICCE), vol. 1, pp. 312–317, virtual conference (2020)
5. Inazawa, K., Kashihara, A.: Promoting engagement in self-reviewing presentation with robot. In: 6th International Conference on Human-Agent Interaction (HAI), pp. 383–384 (2018)
Praise for the Robot Model Affects Children's Sharing Behavior

Qianxi Jia(B), Jiaxin Lee, and Yi Pang

Central China Normal University, Wuhan 430070, Hubei, People's Republic of China
{qianxijia,jiaxinlee,pangyi}@mails.ccnu.edu.cn
Abstract. Social robots are increasingly appearing in every aspect of young children's lives, and their use in homes and kindergartens is becoming commonplace. Social robots have been shown to promote the development of prosocial behaviors in young children, such as helping and cooperating, but little is known about whether robot models that mimic prosocial behaviors can persuade young children to engage in such behavior. At the same time, young children also learn from the behavioral results of others, so based on previous studies, we explored whether behavioral praise for a robot model that mimics prosocial behavior can promote the development of children's sharing behavior. 58 children aged 5–6 years were randomly assigned to two experimental conditions (praise robot model; no-praise robot model). Children in the praise-robot group heard the experimenter praise the robot for sharing, while children in the no-praise group heard no verbal feedback. The results showed that descriptive praise of the strongly prosocial robot model promoted sharing behavior in children. This indicates that the vicarious reinforcement account within social learning theory applies to children learning prosocial behaviors from robots. This result provides a useful reference for the future application of robots to promote the development of prosocial behavior in children.

Keywords: Children · Social robot · Prosocial behavior · Share
1 Introduction

Robots are increasingly integrated into every aspect of people's lives, and their application scenarios are very diverse. They can be used in many industries and scenarios such as retail, logistics, medical care, education, and security. As a type of robot, social robots are capable of autonomously interacting with humans in socially meaningful ways [16], providing assistance to humans, and enabling measurable progress in rehabilitation, learning, and other activities [11]. There are numerous studies on the interaction between children and social robots, and social robots have been proven to promote children's language, concept learning, and other aspects of learning [12, 15]. In addition, researchers have begun to pay attention to the role of social robots in the development of children's prosocial behaviors. For example, studies have shown that social robots can promote children's cooperation, helping, and other behaviors [3, 26]. But we still do not know much about whether robot models that mimic prosocial behavior can persuade young children to engage in such behavior, so research on this issue is needed.
Human children show prosocial behavior before the age of two [27]. Children's prosocial behavior is voluntary behavior aimed at benefiting others, toward which individuals take many kinds of actions, such as helping and cooperating [4, 7]. Carrying out this behavior brings no return to the individuals who perform it and requires them to pay certain costs (such as time and money) [18]. Promoting the development of prosocial behavior in young children benefits the children themselves: a tendency toward early prosocial behavior has a positive impact on children's academic performance and social preference 5 years later [5]. Studies have also pointed out that children show greater happiness when they share gifts with others [1]. Bandura believed that people learn by watching and imitating role models, and this tendency is more pronounced in childhood. He developed the concept of observational learning, in which motor skills, attitudes, and other behaviors are acquired through observing the behavior of others [20]. Sharing behavior is one kind of prosocial behavior. A study investigated the impact of robot models imitating prosocial behavior on children's sharing behavior and showed that, compared with weakly prosocial robot models, strongly prosocial robot models prompted children to share more stickers [21]. The results of that study demonstrate that social learning theory also applies to young children learning prosocial behaviors from robots. At the same time, social learning theory holds that children also learn from the behavioral results of actors: when children observe the behavior of others being rewarded, they are more inclined to show such behavior themselves. This is called vicarious reinforcement [20]. Vicarious reinforcement affects children's sharing behavior, which increased when the experimenter praised the model's generosity [23]. Praise is considered an important form of social reinforcement, generally regarded as verbal reinforcement that can increase the praised behavior [17]. Praise refers to a person's positive evaluation of another person's products, performance, and attributes [13]. Praise can be divided into general praise and descriptive praise; descriptive praise is also called behavior-specific praise. It specifically identifies the behavior being praised (such as "Well done for sharing the toy with your brother!"), while general praise is a statement of approval that does not point out the action (e.g., "Good job!") [22]. The type of praise is one of the factors affecting the effect of praise. Compared with general praise, descriptive praise better promotes students' task completion and improves their academic self-concept [6]. Teachers' use of descriptive praise can increase children's adaptive behavior [10]; in addition, compared with general praise, children who hear descriptive praise are more likely to recognize the praised behavior [29]. Therefore, in this study, we also used descriptive praise for the strongly prosocial robot model. Based on previous studies [21, 23], this study explores the effect of praise for the strongly prosocial robot model on children's sharing behavior. We hypothesized that children in the group that heard the robot model being praised would share more stickers than those in the group that did not hear the robot being praised. On the one hand, this can be explained by the theory of vicarious reinforcement.
On the other hand, in children's interactions with social robots, adults such as teachers and parents inevitably participate, and they will ultimately decide whether social
robots can be used in the education of students. Their attitude towards robots also affects children's attitude towards robots [19, 25]. Thus, when we praise the generosity of a robot model that mimics prosocial behavior, it may also encourage young children to engage in that behavior. It should be emphasized that the shared objects themselves also affect children's sharing behavior. Children share less of what they own and more of what belongs to the class [8]. Most 4–6-year-olds did not share "owned" items they had won in a game, compared to "accidentally acquired" items [28]. Children's sharing behavior decreased by 10.4% when they were asked to share rewards they had won compared to rewards given to them for free [2]. Therefore, in this study we set up a game situation in which children participated in a game to receive a sticker reward before deciding whether to share. When the children in the praise-robot-model group shared more of the rewards they had won, this also seemed to indicate the effectiveness of the experimental manipulation.
2 Method

2.1 Participants

We randomly recruited 58 children aged 5–6 to participate in our experiment (23 boys and 35 girls), and parental consent was obtained for all children before they participated in the game. There are several reasons why we chose 5- and 6-year-olds. First, children at this age develop more mature sharing behaviors than younger children and are influenced by social comparison [24]. Second, children aged 5–6 share a prize won in a competition quite differently from an occasionally received prize [2]. Finally, children in this age group can easily compare the sizes of numbers up to 3, reducing their cognitive burden when playing the game.

2.2 Tools and Materials

We used a robot named "Wukong" (as shown in Fig. 1). It is 245 mm high, 149 mm wide, and 112 mm thick, and weighs about 700 g. The robot's joints, such as the arms, knees, and neck, are designed to be movable, which makes it easy for the robot to perform actions (such as walking). We used the app accompanying the robot to program it. As described above, we needed to let children win the game to get a sticker reward. Following previous studies [21, 23], we used PowerPoint to present a dice-throwing game lasting three rounds on a laptop computer (as shown in Fig. 2); the round number was marked in the upper left corner of the page in each round. The dice appearing in the game was a three-dimensional model made in a 3D drawing tool and then inserted into PowerPoint. The dice was white, and the dots on each face were black. To reduce children's cognitive load during the game, we adjusted the number of dots on each face of the dice to ensure that only 1, 2, or 3 dots could be rolled during the game. At the same time, we added an animation effect to the dice: when children clicked the space bar on the keyboard, the dice would start to rotate, and when
it stopped, only one of the six sides would appear. This arrangement allowed the dice to rotate in our game as it would in the real world. Two dice appeared on the computer screen during the game: one belonged to the robot and the other to the child. A picture of the Wukong robot was placed next to the robot's dice, while a cartoon child image appeared next to the child's dice.
Fig. 1. Screenshot of the Wukong robot video presented in the experiment
Fig. 2. Screenshot of the dice game interface
2.3 Procedure

Studies have indicated that children's learning outcomes do not differ when they face real robots versus virtual robots [14]. In this experiment, the robot was therefore presented by playing pre-recorded robot videos, and the robot's language and actions were programmed in advance.
Several female experimenters administered the experiment; they had been trained in advance to reduce mistakes during the procedure. When the experiment started, a female experimenter with a computer and a child were in the experimental environment. The experimenter first asked the child about their willingness to participate in the game, and the child was told that they could quit at any time. At the beginning of the experiment, the experimenter orally asked the child for basic information, such as name and age, to build a sense of familiarity. Next, she gave the child a brief introduction to the toy dice used in the game and then explained the rules. Before the game began, the experimenter played the video of the robot's self-introduction (the robot said: "Hello children, my name is Wukong. Today I will play this fun dice game with you. I am very good at dice games, and I will try to win!").

There were three rounds, and the wins and losses were fixed in advance; the winner of each round was determined by comparing the number of dice points of the robot and the child. In the first round, the robot lost and the child won; in the second round, the robot won and the child lost; in the third round, the robot and the child drew. We adjusted the number of stickers relative to the reference experiments [21] to avoid children's sharing being affected by receiving too many or too few stickers: the winner got three stickers, the loser got two stickers, and in a tie both the robot and the child got two stickers. During the introduction of the rules, children were told that they would be asked whether they wanted to share stickers with children who had no stickers. After each round, the experimenter asked the robot and the child how many stickers they wanted to share with the children without stickers; children could choose to share or not, and the corresponding number of stickers was then deducted. At the end of the game, the children were given real stickers based on the results of the experiment.

We modeled a strongly prosocial robot. When it won the game, it shared two of its three stickers (saying: "I got three stickers, and I want to share two of them with the kids who have no stickers"), and when it lost or drew, it shared both of its two stickers (saying: "I was rewarded with two stickers, and I want to share both stickers with the children who have no stickers!"). The robot thus received seven stickers in total and shared six, a ratio similar to previous studies [21, 23]. The number of stickers obtained by the robot and the child was recorded on paper by drawing circles; when the robot or the child decided to share stickers, the experimenter drew an "×" in the corresponding number of circles for that round. Children were informed in advance that crossed-out circles could not be exchanged for real stickers after the game was over.
The experimental materials and procedures of the two conditions were identical; the only difference was that after the robot shared its stickers, children in the control group heard no comments from the experimenter about the robot's behavior, while children in the experimental group heard the experimenter praise the robot's behavior. When the robot got three stickers and shared two, the experimenter said: "Wukong, I want to praise you for sharing your two stickers
with the children without stickers, you are great!". When the robot got two stickers and shared both, the experimenter said: "Wukong, I want to praise you for sharing all your stickers with the other children without stickers, you are great!".

At the end of the game, children answered three questions. The experimenter first asked whether they liked the robot, with five answer options on a Likert scale (like very much, like, neutral, dislike, dislike very much). The experimenter then asked the children the reasons for their choice (e.g., "Why do you like/dislike this Wukong robot?"), and finally each child was asked the reason for sharing, or not sharing, their stickers with another child who did not have any.
3 Results

Of the 58 children who participated in our experiment, 29 were assigned to the praise-robot-model condition and 29 to the no-praise condition. Twenty-eight (97%) of the children in the praise group shared stickers at least once, 27 (93%) of the children in the no-praise group shared stickers at least once, and overall 55 (95%) of the 58 children shared stickers at least once.

We hypothesized that children who heard praise for the strongly prosocial robot model would share more stickers than those who did not. An independent-samples t-test showed a significant difference in the number of stickers shared between the two groups, t(56) = 3.347, p < 0.05, supporting the experimental hypothesis. Children who heard the experimenter praise the strongly prosocial robot model shared 4.76 stickers on average (SD = 1.28), while those who did not hear praise shared 3.45 stickers on average (SD = 1.68); the average number of stickers shared by children in the praise group was thus higher.

An interview was added at the end of the experiment. There was no difference in how much children in the praise and no-praise groups liked the robot, t(56) = 0.856, p > 0.05: how much the children liked the robot was not affected by whether they heard the experimenter praise it. We coded and classified the answers to the interview questions. Combing through the answers to the question "Why do you like/dislike this robot?" yielded 58 pieces of effective information. The main reason children liked the robot was its sharing behavior (23), followed by the fun of the game itself (15); the robot's appearance and its intelligence (being able to talk, etc.) were also reasons children liked Wukong (12). Six children answered "don't know", and two children said they did not like the robot because they thought it had a bad voice and talked strangely. Sorting the reasons children gave for sharing stickers yielded 55 reference points. The most frequent reason was sympathy for, and a desire to help, the children who had no stickers (20), followed by sharing with good friends (15); parents' requirement to be a good child was also a reason to share (5). In addition, 13 children gave no clear reason for sharing. Notably, only 2 children clearly stated that they shared because the robot also shared.
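The reported test statistic can be checked directly from the summary statistics above. The following is a minimal sketch in Python, assuming equal variances and using SciPy's ttest_ind_from_stats; the numbers are taken from the text, not from the raw data.

# Reproduce the independent-samples t-test from the reported summary
# statistics (group means, SDs, and sizes), assuming equal variances.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=4.76, std1=1.28, nobs1=29,  # praise group
    mean2=3.45, std2=1.68, nobs2=29,  # no-praise group
    equal_var=True,
)
print(f"t(56) = {t:.3f}, p = {p:.4f}")  # t(56) ≈ 3.34, p < 0.05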
4 Discussion

We tested whether praise for a strongly prosocial robot model affects children's sharing behavior using a single-factor, two-level between-subjects experimental design. An independent-samples t-test was used for data analysis, and the results supported the experimental hypothesis: the experimenter's praise for the strongly prosocial robot model promoted children's sharing behavior, increasing the number of stickers children shared, which is consistent with existing research. This indicates that vicarious reinforcement can affect children's sharing behavior [23] and that robots, as social agents, can promote the development of children's prosocial behavior; social learning theory is applicable to young children learning prosocial behaviors from robots [21].

Remarkably, even though most of the children shared at least one sticker, three of the 58 children in the study chose not to share any stickers at all: two from the no-praise group and one from the praise group. This suggests that the internal mechanism of children's observational learning is relatively complex. Whether praise is effective depends on how children interpret it [29]; modeling and vicarious reinforcement may not always promote children's imitation. Since children's interpretation of the praise was not examined in the experimental setting, it can be explored further in future studies.

Among the reasons for children's fondness for the robot, its prosocial behavior accounted for a high proportion, indicating that a robot model with prosocial behavior can promote children's fondness for robots. In addition, the robot's appearance and some of its intelligent functions (such as being able to speak and say hello) also affected children's fondness for it. The robot video used in the experiment was recorded in advance with some actions (such as waving when greeting); the human-like appearance of the robot and the natural interaction it afforded gave children a better game experience.

When children were asked about their reasons for sharing stickers, their responses indicated the complexity of prosocial motivation, which may stem from the desire for social approval, adherence to internalized moral values, or responses of compassion and guilt [9]. Most of the children shared in order to help the children without stickers, saying that they did not want to make them sad, which confirms that empathic helping is a major factor in prosocial behavior [30]. However, only two children clearly stated that they shared because the robot also shared stickers; it is therefore necessary to further explore the internal mechanism by which children learn prosocial behaviors from robots through observation.

Our study confirms the feasibility of vicarious reinforcement in social learning theory for children learning prosocial behaviors from robots, but the research still has shortcomings. First, we only examined the impact of descriptive praise of the model on children's sharing behavior; later studies could explore the effects of general praise and gestural praise of prosocial robots on children's prosocial behavior.
Secondly, during the experiment, the robot was presented to children through video playback, so we cannot know whether the results would differ if the children and the robot actually played the game face to face. Finally, with the improvement of living standards, stickers may not be very attractive to children. Therefore, it
can be discussed whether children's sharing behavior would differ if the reward were not stickers but other items.
References

1. Aknin, L.B., Hamlin, J.K., Dunn, E.W.: Giving leads to happiness in young children. PLoS ONE 7(6) (2012)
2. Balcı, A., Kotaman, H., Aslan, M.: Impact of earning on young children's sharing behaviour. Early Child Dev. Care 191(11), 1757–1764 (2021)
3. Beran, T.N., Ramirez-Serrano, A., Kuzyk, R., Nugent, S., Fior, M.: Would children help a robot in need? Int. J. Soc. Robot. 3, 83–93 (2011)
4. Brownell, C.A., Svetlova, M., Nichols, S.: To share or not to share: when do toddlers respond to another's needs? Infancy 14(1), 117–130 (2009)
5. Caprara, G.V., Barbaranelli, C., Pastorelli, C., Bandura, A., Zimbardo, P.G.: Prosocial foundations of children's academic achievement. Psychol. Sci. 11(4), 302–306 (2000)
6. Chalk, K., Bizo, L.A.: Specific praise improves on-task behaviour and numeracy enjoyment: a study of year four pupils engaged in the numeracy hour. Educ. Psychol. Pract. 20(4), 335–351 (2004)
7. Dunfield, K.A., Kuhlmeier, V.A.: Classifying prosocial behavior: children's responses to instrumental need, emotional distress, and material desire. Child Dev. 84(5), 1766–1776 (2013)
8. Eisenberg-Berg, N., Haake, R.J., Bartlett, K.: The effects of possession and ownership on the sharing and proprietary behaviors of preschool children. Merrill-Palmer Q. Behav. Dev. 27(1), 61–68 (1981)
9. Eisenberg, N., Spinrad, T.L., Morris, A.S.: Prosocial development. In: Zelazo, P.D. (ed.) The Oxford Handbook of Developmental Psychology, vol. 2, pp. 300–325. Oxford Handbooks Online (2013)
10. Fullerton, E.K., Conroy, M.A., Correa, V.I.: Early childhood teachers' use of specific praise statements with young children at risk for behavioral disorders. Behav. Disord. 34(3), 118–135 (2009)
11. Gavrilova, L., Petrov, V., Kotik, A., Sagitov, A., Khalitova, L., Tsoy, T.: Pilot study of teaching English language for preschool children with a small-size humanoid robot assistant. In: 12th International Conference on Developments in eSystems Engineering (DeSE), pp. 253–260. IEEE Press (2019)
12. Ioannou, M., Bratitsis, T.: Teaching the notion of speed in kindergarten using the Sphero SPRK robot. In: 17th International Conference on Advanced Learning Technologies (ICALT), pp. 311–312. IEEE Press (2017)
13. Kanouse, D.E., Gumpert, P., Canavan-Gumpert, D.: The semantics of praise. New Direct. Attribut. Res. 3, 97–115 (1981)
14. Kennedy, J., Baxter, P., Belpaeme, T.: Comparing robot embodiments in a guided discovery learning interaction with children. Int. J. Soc. Robot. 7, 293–308 (2015)
15. Konijn, E.A., Jansen, B., Mondaca Bustos, V., Hobbelink, V.L., Preciado Vanegas, D.: Social robots for (second) language learning in (migrant) primary school children. Int. J. Soc. Robot. 14, 827–843 (2022)
16. Lee, K.M., Peng, W., Jin, S.A., Yan, C.: Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human–robot interaction. J. Commun. 56(4), 754–772 (2006)
17. Neapolitan, J.: The effects of different types of praise and criticism on performance. Sociol. Focus 21(3), 223–231 (1988)
18. Oliveira, R., Arriaga, P., Santos, F.P., Mascarenhas, S., Paiva, A.: Towards prosocial design: a scoping review of the use of robots and virtual agents to trigger prosocial behaviour. Comput. Hum. Behav. 114 (2021)
19. Oros, M., Nikolić, M., Borovac, B., Jerković, I.: Children's preference of appearance and parents' attitudes towards assistive robots. In: 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 360–365. IEEE Press, Madrid (2014)
20. Peng, D.L. (ed.): General Psychology, 4th edn. Beijing Normal University Publishing Group, Beijing (2012). (in Chinese)
21. Peter, J., Kühne, R., Barco, A.: Can social robots affect children's prosocial behavior? An experimental study on prosocial robot models. Comput. Hum. Behav. 120, 106712 (2021)
22. Polick, A.S., Carr, J.E., Hanney, N.M.: A comparison of general and descriptive praise in teaching intraverbal behavior to children with autism. J. Appl. Behav. Anal. 45(3), 593–599 (2012)
23. Presbie, R.J., Coiteux, P.F.: Learning to be generous or stingy: imitation of sharing behavior as a function of model generosity and vicarious reinforcement. Child Dev. 42, 1033–1038 (1971)
24. Samek, A., et al.: The development of social comparisons and sharing behavior across 12 countries. J. Exp. Child Psychol. 192 (2020)
25. Song, H., Deetman, M., Markopoulos, P., Ham, J., Barakova, E.I.: Learning musical instrument with the help of social robots: attitudes and expectations of teachers and parents. In: 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 351–357. IEEE Press (2022)
26. Strohkorb, S., Fukuto, E., Warren, N., Taylor, C., Berry, B., Scassellati, B.: Improving human-human collaboration between children with a social robot. In: 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 551–556. IEEE Press, New York (2016)
27. Svetlova, M., Nichols, S.R., Brownell, C.A.: Toddlers' prosocial behavior: from instrumental to empathic to altruistic helping. Child Dev. 81(6), 1814–1827 (2010)
28. Wang, H.M., Chen, H.C., Zhang, G.Z.: Sharing behaviors on occasionally gained and possessive objects in older children aged 4–6 years. Psychol. Dev. Educ. 21, 37–43 (2005). (in Chinese)
29. Weeland, J., et al.: Does caregivers' use of praise reduce children's externalizing behavior? A longitudinal observational test in the context of a parenting program. Dev. Psychol. 58(7) (2022)
30. Williams, A., O'Driscoll, K., Moore, C.: The influence of empathic concern on prosocial behavior in children. Front. Psychol. 5, 425 (2014)
Assessment of Divergent Thinking in a Social and Modular Robotics Task: An Edit Distance Approach at the Configuration Level

Louis Kohler(B)
and Margarida Romero
LINE, Université Côte d’Azur, Nice, France [email protected], [email protected]
Abstract. Creativity is a complex human process evaluated through psychometric tasks, mostly in relation to divergent thinking (DT), the capacity to generate new ideas. In social robotics, creativity has been supported in different interactions with social robots, in which Human-Robot Interaction (HRI) supports the participants' creativity. However, as scores are typically used for such evaluations, they struggle to capture the creative process inherent in a task. In this paper, we focus on creative problem solving in Human-Robot Interaction, considering the edit distance by analyzing the evolution of the different configurations of a set of modular robotic cubes during task resolution. For this objective, we consider the three DT components of fluidity, flexibility, and innovation as evaluated in the Alternate Uses Test. We then operationalize the edit distance (ED), usually used for quantifying the minimum number of operations required to transform one configuration into another; ED is a way to compute the differences between two intermediate creative solutions. We engaged 224 participants in playing the CreaCube task and then analyzed the videos to identify each configuration over time. The results allow us to maintain the hypotheses related to the DT components in relation to ED, and we discuss the implications of this study for social robotics. Keywords: Problem-Based Learning · Educational Robotics · Social Creativity · Edit Distance · Divergent Thinking
1 Introduction

Creativity is a complex human activity that can be observed in a wide diversity of tasks. Nevertheless, the study of human creativity has mainly been developed in individual semantic and drawing tasks, focusing on divergent thinking (DT) as the capacity to generate a diversity of ideas. The evaluation of DT has been developed through individual measures such as the Alternate Uses Test (AUT) [1], whose scores are based on the capacity to generate new ideas about alternate uses of familiar objects within a certain time frame. The Divergent Thinking Test (DTT) takes a similar approach, combining written and graphic elements. The DTT [2, 3] is composed of twelve drawings to
be completed within a limited time to measure an individual's creativity in relation to fluency, openness, flexibility, originality, elaboration, and naming. These tests follow the tradition of psychometric measures in the field of psychology. Evaluating creativity in collective contexts is a challenge, which has led to more ecological measures that consider not only the result of a test but also the creative process across the task duration. These evaluations have been developed based on the analysis of videos and on learning analytics in the context of Computer Supported Collaborative Learning (CSCL) activities [4]. In social robotics, creativity has been studied as a way to support participants' creativity in terms of idea generation (DT) but also in terms of the duration of creative engagement [5]. Social robotics in the educational field has also permitted the study of the support of creative activities through humanoid robots, which support a diversity of HRI activities [6]. Collaboration between a social robot and a learner has also supported the Creative Problem Solving (CPS) process in activities such as the Tower of Hanoi [7]. In these different social robotics activities aiming to support creativity, the robots and the learning situation support the creative process at the learner level, which shows interindividual differences (Fig. 1).
Fig. 1. Creative support of a social robot to the CPS task [7].
The individual differences in the creative process have led to a corpus of research on creativity as an individual capacity [8] or competency [9]; creativity can also be the result of a team-oriented process [10–12]. In this study, we consider creativity as an individual or collaborative reflective iterative process [13] that aims to design a new, innovative, and pertinent way to respond to a potentially problematic situation, one that is valued by a group of references in a context-specific situation [14, 15]. While some primary teachers consider creativity to be related specifically to the arts, excluding the sciences and technology domains, there is a body of research addressing creativity both as a transversal competency and within scientific disciplines such as mathematics. In mathematics education, creativity is considered an essential element for everyone [16], but it has also been studied specifically in gifted education, especially in the context of problem solving. In a more inclusive way, creativity can be identified in different approaches to problems, some of which may lead to less plausible solutions [11].

Our study aims to analyze an educational robotics task, considering not only the score on the number of ideas in terms of fluidity, flexibility, and innovation but also the edit distance as a measure for understanding how the intermediate configurations are developed during the task. The ED aims to consider the specificities of CPS as a process in which there is an "execution gap" between "the means made available in the task and the representation of the goal of the task that the subject could potentially achieve" [3]. The ED is a way to consider the difference between two intermediate
solutions in the CPS: the higher the ED, the more the intermediate solutions differ from one another in terms of the modular robotics configuration.
2 Edit Distance in Modular and Social Robotics

2.1 Educational Robotic Tasks

ED in modular and social robotic tasks aims to operationalize the HRI operations made by the participant in relation to the modular artifacts or social robot. In this study, we analyze an educational robotic task that is an ill-defined CPS activity aiming to engage the participants in divergent thinking by allowing a high but finite number of configurations (n = 672). Once the experiment starts, participants must listen to a set of instructions provided by a voice device (which they can activate as many times as they want throughout the entire experiment) that explains the objective: to build an autonomous vehicle capable of moving from one specific point to another using four robotic cubes. The entire experiment is therefore carried out at the interface between the participant and the material. The four cubes are magnetized, and each of them offers a characteristic needed to create a configuration:

– Wheels: a cube with wheels, allowing the vehicle to move when powered by the battery
– Battery: a cube that provides energy to the other cubes when activated (via a button)
– Sensor: a cube that detects the presence of an object in front of its sensors
– Inverter: a cube that reverses the operation of every other cube (except the battery). For example, the sensor will send a "positive" signal when there is nothing in front of it, rather than when it detects an object.

Thus, a configuration is defined by the composition of these four cubes articulated together. Every configuration made by the participants is recorded on video (focusing only on their hands) in the database. The video is then analyzed through an interface (Fig. 2) that records the information in JSON files.
Fig. 2. Placements of different components of the experiment.
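As an illustration of how such configuration observations might be stored in JSON, the following sketch shows a hypothetical record for one observed configuration. The field names (participant_id, time_s, shape, cubes) are our own assumptions, not the project's actual schema.

# Hypothetical JSON record for one observed configuration; the field
# names are illustrative assumptions, not the project's actual schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class ConfigurationEvent:
    participant_id: str
    time_s: float   # time since task start, in seconds
    shape: str      # macro-level shape code, e.g. "F000"
    cubes: str      # cube arrangement, e.g. "ISBW"

event = ConfigurationEvent("P001", 42.5, "F000", "ISBW")
print(json.dumps(asdict(event)))  # {"participant_id": "P001", ...}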
Assessment of Divergent Thinking in a Social and Modular Robotics Task
339
Fig. 3. Encoding table used to record all the data on configurations
The implementation of such a model allows us to translate the data into a usable and analyzable corpus for understanding the learning objectives [17]. Subsequently, a more precise description of each configuration is given according to the placement of the cubes within the configuration and a specific reading direction, following the format FXXX-CCCC. The string XXX refers to the shape shown above (Fig. 3), ranging from "000" to "066". CCCC then gives the arrangement of the cubes in the configuration, with "B" for battery, "W" for wheels, "S" for sensor, and "I" for inverter. However, for the same shape (e.g., F000 in the example below, Fig. 4), we can have different cube placements within the configuration, respectively F000-ISBW and F000-WSBI.
Fig. 4. Coding of the configurations according to the cube type: ISBW (left) and WSBI (right).
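To make the encoding concrete, the following is a minimal parsing sketch in Python, assuming the FXXX-CCCC format described above; the helper name parse_configuration is our own.

# Parse a configuration code such as "F000-ISBW" into its shape and
# cube-arrangement parts, validating both against the format above.
def parse_configuration(code: str) -> tuple[str, str]:
    shape, cubes = code.split("-")
    assert shape.startswith("F") and 0 <= int(shape[1:]) <= 66, "unknown shape"
    assert sorted(cubes) == sorted("BWSI"), "cubes must be a permutation of B, W, S, I"
    return shape, cubes

print(parse_configuration("F000-ISBW"))  # ('F000', 'ISBW')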
2.2 An Adapted Definition of the Edit Distance

The ED is an often-used method for string similarity retrieval. It corresponds to the minimum number of insertions, deletions, and substitutions required to transform one string into the other; calculating the ED therefore means looking for the optimal path between two strings. ED is a tool that has often been used in other contexts, such as biology for DNA sequence similarity [18].
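For reference, a standard Levenshtein edit distance over plain strings can be computed with dynamic programming, as in the following sketch; this is the classic definition, before the adaptation to 3D configurations described below.

# Classic Levenshtein distance: minimum number of insertions, deletions,
# and substitutions needed to turn string a into string b.
def edit_distance(a: str, b: str) -> int:
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (0 if equal)
            ))
        prev = curr
    return prev[len(b)]

print(edit_distance("ISBW", "WSBI"))  # 2 (two substitutions)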
In this study, it is necessary to define the ED in the sense of the activity, because the strings assigned to the configurations are different. Indeed, as the configurations are 3D objects, a letter of one of our strings can be in a different spatial plane from its adjacent one. Thus, the ED from a form F000-ISBW to F010-ISBW is not zero, even though the cube strings are equivalent ("ISBW"), as represented in Fig. 5.
Fig. 5. Differences between two different configurations (F000 and F010) having the same subconfiguration (ISBW).
Deletion, assembly, and rotation (the latter only for a group of cubes) are therefore seen as actions that each correspond to a "cost", or effort, to transform one configuration into another, allowing such modifications to be studied. This ED methodology, applied to the CreaCube task in educational robotics for the evaluation of DT, aims to overcome the limits of psychometric tests and to analyze tangible tasks with different types of materials or objects-to-think-with [19] and "visuo-spatial constructive play objects" (VCPOs) [8], including blocks, bricks, and planks, which can be extended to tangible interactive modular robotics to support DT assessment in Human-Robot Interaction (HRI) activities.
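Under this adapted definition, the ED between two configurations can be read as the total cost of the recorded transformation actions. The sketch below assumes unit costs and an action log as input; both the cost values and the action labels are illustrative assumptions, since the costs are not specified at this level of detail.

# Adapted ED as the summed cost of the actions (deletion, assembly,
# rotation of a cube group) recorded between two configurations.
# Unit costs are an illustrative assumption.
ACTION_COSTS = {"deletion": 1, "assembly": 1, "rotation": 1}

def adapted_edit_distance(actions: list[str]) -> int:
    return sum(ACTION_COSTS[a] for a in actions)

# e.g. turning F000-ISBW into F010-ISBW by detaching one cube and
# reattaching it in another spatial plane:
print(adapted_edit_distance(["deletion", "assembly"]))  # 2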
3 Research Questions and Hypotheses

Our study aims to characterize CPS by combining the analysis of the three components of DT (fluidity, flexibility, and innovation) with the analysis of the ED and the time duration of each edition between configurations. The goal is to have a longitudinal indicator for characterizing the creative process in a modular robotics task. Among the components of DT, fluidity refers to the total number of configurations created, regardless of their differences. Thus, when analyzing the DT components, we expect to observe a lower ED for fluidity than for flexibility and innovation (H1). This would mean that the number of actions needed to move from one configuration to a similar one remains lower than the number needed to reach a new or rare configuration, indicating a low-cost cognitive process. Along the same lines, we analyze the time duration for each of the DT components. Since flexibility and innovation describe more complex processes than fluidity, we hypothesize that both innovation and flexibility require more time between configurations than fluidity (H2).
Finally, we hypothesized that the joint action of ED and time duration will help to differentiate the three DT components (H3). Our expectation is that a larger ED and a longer time duration lead to greater innovation, while a smaller ED and a shorter time duration lead to higher fluidity scores.

4 Methodology

4.1 Analysis of Divergent Thinking in Modular Robotics Through the CreaCube Task

We analyze CPS using the CreaCube task, which engages the participant in a problem task without a known solution model. The task was developed within the framework of the ANR project CreaMaker. In this task, learners must combine four modular robotic cubes in such a way that the assembly can move from an initial point to an end point; learners must discover the affordances of the cubes needed to solve the task. The analysis of DT is a transposition of Guilford's operationalization of the AUT [1]. Fluidity in the CreaCube task is evaluated from the total number of configurations developed while solving the problem. Flexibility considers the different figures at the macro level (the shape of the figure) without considering changes at the micro level. Innovation considers the rareness of the configurations developed by the participants, corresponding to the figures which are rarer (
4 Methodology 4.1 Analysis of Divergent Thinking in Modular Robotics Through the CreaCube Task We analyze CPS using the CreaCube task, which engages the participant in a problem task without a known solution model. The task is developed within the framework of the ANR project CreaMaker. In this task, learners must combine four modular robotic cubes in such a way that they can move from an initial point to an end point. Learners must discover the affordances of the cubes needed to solve the task. The analysis of DT is a transposition of the operationalization of the AUT task by Guilford [1]. The fluidity of the CreaCube task is evaluated based on the total number of configurations developed for solving the problem. The flexibility considers the different figures at the macro level (the shape of the figure) without considering the changes at the micro level. The innovation considered the rareness of the configurations developed by the participants, corresponding to the figures which are rarer (