Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation (Intelligent Systems Reference Library, 196) 3030596079, 9783030596071

In a time of ongoing pandemic, when well-being is a priority, this volume presents the latest works across disciplines associated with technologies for inclusive well-being.


English · Pages 567 [551] · Year 2021


Table of contents :
Preface
Contents
About the Editors
1 Re – Reflecting on Recent Advances in Technologies of Inclusive Well-Being
1.1 Introduction
1.2 The Field
1.2.1 Editors and Concept Background in This Field
1.2.2 Current Volume
1.2.3 Contributions in This Book—See Table of Contents
1.2.4 Technology Adoption for Well-Being Intervention
1.2.5 Future Advancements
References
Part I Gaming, VR, and Immersive Technologies for Education/Training
2 Gaming, VR, and Immersive Technologies for Education/Training
2.1 Introduction
2.1.1 Experiential Training of Hand Hygiene Using Virtual Reality [1]
2.1.2 Useful, Usable and Used? Challenges and Opportunities for Virtual Reality Surgical Trainers [7]
2.1.3 Four-Component Instructional Design Applied to a Game for Emergency Medicine [8]
2.1.4 Enhanced Reality for Healthcare Simulation [15]
2.1.5 MaxSIMhealth: An Interconnected Collective of Manufacturing, Design, and Simulation Labs to Advance Medical Simulation Training [16]
2.1.6 Serious Games and Multiple Intelligences for Customized Learning: A Discussion [17]
2.1.7 Mobile Application for Convulsive and Automated External Defibrillator Practices [19]
2.1.8 Lessons Learned from Building a Virtual Patient Platform [21]
2.2 Conclusions
References
3 Experiential Training of Hand Hygiene Using Virtual Reality
3.1 Introduction
3.2 Hand Hygiene—Related Work
3.3 Virtual Reality for Experiential Training
3.3.1 Experiential Learning Theory
3.4 Summary and Future Work
References
4 Useful, Usable and Used?
4.1 Introduction
4.1.1 Improving Healthcare Delivery, Patient Outcomes and Training Opportunities
4.2 Design Drivers in Developing VR Surgical Trainers
4.2.1 Is It Useful, Usable and Used?
4.2.2 Establishing System Requirements
4.2.3 Factors Influencing Usefulness, Usability and Use
4.3 Conclusion
References
5 Four-Component Instructional Design Applied to a Game for Emergency Medicine
5.1 Background and Significance
5.2 Game-Based Learning and Four-Component Instructional Design
5.2.1 Learning in a Game Environment
5.2.2 Four Component Instructional Design
5.2.3 4C/ID in Educational Games
5.2.4 4C/ID in Medical Education
5.3 Redesigning a Game for Emergency Care Using 4C/ID
5.3.1 Learning Tasks and Task Classes
5.3.2 Support and Guidance
5.3.3 Supportive Information
5.3.4 Procedural Information
5.3.5 Part-Task Practice
5.3.6 Design Process and Challenges
5.3.7 Plans for Evaluation
5.4 Discussion and Lessons Learned
5.5 Conclusion
References
6 A Review of Virtual Reality-Based Eye Examination Simulators
6.1 Introduction
6.2 Ophthalmoscopy Examination
6.2.1 The Ophthalmoscope and Eye Fundus Examination
6.2.2 Ophthalmoscope Alternatives
6.3 Simulation and Medical Education
6.3.1 Standardised Patients
6.3.2 Computer-Based Simulation
6.3.3 Virtual/Augmented/Mixed Reality
6.3.4 Simulation in Ophthalmology
6.4 Direct Ophthalmoscopy Simulators
6.5 Discussion
References
7 Enhanced Reality for Healthcare Simulation
7.1 Enhanced Reality
7.2 Enhanced Hybrid Simulation in a Mixed Reality Setting, Both Face-to-Face and in Telepresence
7.3 e-REAL as a CAVE-Like Environment Enhanced by Augmented Reality and Interaction Tools
7.4 The Simulation’s Phases Enhanced by e-REAL and the Main Tools Made Available by the System
7.5 Visual Storytelling and Contextual Intelligence, Cognitive Aids, Apps and Tools to Enhance the Education Process in a Simulation Lab or In Situ
7.6 The Epistemological Pillars Supporting e-REAL
7.7 Case-Study: Teamwork and Crisis Resource Management for Labor and Delivery Clinicians
7.8 Conclusion
References
8 maxSIMhealth: An Interconnected Collective of Manufacturing, Design, and Simulation Labs to Advance Medical Simulation Training
8.1 Introduction
8.1.1 Immersive Technologies
8.2 maxSIMhealth Projects
8.2.1 Immersive Technology-Based Solutions
8.2.2 Gamification- (and Serious Gaming-) Based Solutions
8.2.3 The Gamified Educational Network (GEN)
8.2.4 3D Printing-Based Solutions
8.3 Discussion
8.4 Conclusions
References
9 Serious Games and Multiple Intelligences for Customized Learning: A Discussion
9.1 Introduction
9.2 Multiple Intelligences
9.3 Challenges to Educators
9.4 Technology Opportunities
9.5 Serious Games
9.6 Conclusion
References
10 A Virtual Patient Mobile Application for Convulsive and Automated External Defibrillator Practices
10.1 Introduction
10.2 Background Review
10.2.1 Early Simulation
10.2.2 Modern Simulation
10.3 Mobile Application Development
10.3.1 Automatic External Defibrillation
10.3.2 Convulsive Treatment
10.3.3 Design and Development
10.3.4 Game/Learning Mechanics
10.4 Preliminary Study
10.4.1 Participants
10.4.2 Pre- and Post-test
10.4.3 System Usability Scale
10.4.4 Game Engagement Questionnaire
10.5 Conclusion
References
11 Lessons Learned from Building a Virtual Patient Platform
11.1 Introduction: Simulation and Virtual Patients
11.2 Virtual Patient Platform Requirements
11.3 Obstacles and Challenges
11.4 Lessons Learned
11.5 A Way Forward
References
12 Engaging Learners in Presimulation Preparation Through Virtual Simulation Games
12.1 Background
12.1.1 Presimulation Preparation
12.1.2 Virtual Simulations
12.1.3 Virtual Simulation Games
12.1.4 Presimulation Preparation Using Virtual Simulation Games
12.2 Virtual Simulation Game Project
12.2.1 Rationale
12.2.2 Objective
12.2.3 Methods
12.2.4 Scenario Selection
12.2.5 Description of the Innovation
12.2.6 Usability Testing
12.2.7 Cost Utility and Learning Outcomes
12.3 Results
12.4 Discussion
12.4.1 Strengths and Limitations
12.5 Conclusions
References
Part II VR/Technologies for Rehabilitation
13 VR/Technologies for Rehabilitation
13.1 Introduction
13.1.1 Game-Based (Re)habilitation via Movement Tracking [2]
13.1.2 Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development [3]
13.1.3 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training [4]
13.1.4 Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi) [5]
13.2 Conclusions
References
14 Game-Based (Re)Habilitation via Movement Tracking
14.1 Introduction
14.1.1 Presence and Aesthetic Resonance: As a ‘Sense State’ Continuum
14.1.2 Play
14.1.3 Under Used Resource for Therapy
14.2 Gameplaying and Mastery
14.3 Method
14.3.1 Description of Material
14.3.2 Description of Procedure
14.3.3 Description of the Set up
14.3.4 Description of Analysis
14.4 Results
14.4.1 Tempo Spatial Movements
14.4.2 Interface and Activities
14.4.3 Resource for Therapy
14.5 Discussion
14.6 Conclusions
Appendix 1
Appendix 2
Appendix 3
Appendix 4
References
15 Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development
15.1 Introduction
15.2 Neurodiversity and Participatory Design
15.3 Ethical Considerations
15.4 Case Study Presentations
15.5 Case Study 1: Engaging Users in the Potential of Virtual Reality Opportunities for Learning in Schools
15.5.1 Brief Overview/Introduction
15.5.2 Aims and Objectives
15.5.3 Context/Setting
15.5.4 Case Study Group/Characteristics
15.5.5 Findings
15.6 Case Study 2: Participatory Design Approach to Co-Create Training Materials on a Daily Living Task for Young Adults with Intellectual Disabilities
15.6.1 Brief Overview/Introduction
15.6.2 Aims and Objectives
15.6.3 Context/Setting
15.6.4 Case Study Group/Characteristics
15.6.5 Findings
15.7 Overall Discussion and Conclusions
15.8 Implications for Practice and Further Work
References
16 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training
16.1 Preamble/Introduction
16.1.1 Simulation and Targeted End-Users/participants
16.1.2 PoC—Design Justification
16.1.3 Technology and End-Users
16.2 Technologies and Terminology: From Virtual Reality (VR) to Virtual Interactive Space (VIS)
16.3 Background and Concept—Fieldwork and Theoretical Framework
16.4 Fieldwork
16.5 Hydrotherapy (with Innate Multimedia-Driven Causal Cycles of Action-Interactions)
16.6 Aquatic and Virtual ‘Immersion’ (Pun Intended)
16.7 Set-Up of PoC
16.8 Software Examples for Non-Aquatic Movement Tracking-Environments (Typically Dance)
16.9 Techniques—for Example with EyesWeb and EyeCon Software
16.10 Lighting
16.11 Projected Image Versus HMD
16.12 Conclusions
16.13 Summary
16.14 Further Challenges, Critique, and Reflections Toward Future Research
16.15 Closing Summary
References
17 Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi)
17.1 Introduction
17.2 Biofeedback
17.3 Multisensory Stimulus: Sound, Sound Therapy, Music Therapy, Vibroacoustic Intervention
17.4 Soundbeam and Sound Therapy
17.5 Multisensory Stimulus: Visuals—Case Studies 1 and 2
17.6 Multisensory Stimulus: Tactile/Haptic = Vibroacoustic Therapeutic Intervention
17.7 VIBRAC and Review of the Field
17.8 Conclusion
17.9 Future Research in Interactive Vibroacoustic Therapeutic Intervention
17.10 Postscript
Bibliography
Part III Health and Well-Being
18 Health and Well-Being
18.1 Introduction
18.1.1 Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk [1]
18.1.2 Electrorganic Technology for Inclusive Well-being in Music Therapy [2]
18.1.3 Interactive Multimedia: A Take on Traditional Day of the Dead Altars [3]
18.1.4 Implementing Co-design Practices for the Development of a Museum Interface for Autistic Children [4]
18.1.5 Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism [10]
18.2 Conclusions
References
19 Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk
19.1 Introduction: Technology as Daily Routine
19.2 Benefits
19.2.1 Technology for Mainstreaming Assistive Device
19.2.2 Technology for Education and Employment
19.2.3 Technology for Service Delivery
19.2.4 Technology for Social Interaction and Recreation
19.3 Risk
19.3.1 Assistive Technology Being Abandoned
19.3.2 Technology as Ethical Concerns
19.3.3 Technology as Social Disincentive
19.4 Conclusion
References
20 Electrorganic Technology for Inclusive Well-being in Music Therapy
20.1 Introduction and Background
20.2 Music and Music Therapy
20.3 Technology Empowered Musical Expression in Therapeutic Settings
20.4 Alternative Musical Instruments and the aFrame in Music Therapy
20.5 Musicality and Nuances of Expression
20.6 ATV Electrorganic aFrame
20.7 Adaptive Timbre Technology
20.8 The Electrorganic aFrame in Use
20.9 European Music Therapy Conference (EMTC), Aalborg Denmark 2019 (See Brooks [3])
20.10 Proof of Concept and Feasibility Trials in Practice
20.10.1 Next Steps—A Speculation
20.11 Conclusion
References
21 Interactive Multimedia: A Take on Traditional Day of the Dead Altars
21.1 Introduction
21.2 Day of the Dead
21.3 Literature Review
21.3.1 Technology-Enhanced Exhibitions
21.3.2 Exhibitions, Interventions, and Mental Well-being
21.4 Method
21.4.1 Traditional Altars
21.4.2 Narrative Elements
21.4.3 Interactivity and User Experience
21.4.4 Altar Installation
21.5 Exhibition
21.6 Conclusion
References
22 Implementing Co-Design Practices for the Development of a Museum Interface for Autistic Children
22.1 Introduction
22.2 Literature Review
22.2.1 The Emergence of Interactive Technologies for Children with Autism
22.2.2 Research on Co-Design Technology for Autistic
22.3 Study Design
22.3.1 Design and Development
22.3.2 Stage 1 Discovery
22.3.3 Stage 2 Concept Development
22.3.4 Stage 3 User-Testing: Evaluating the Interface
22.3.5 Stage 4 Re-Design the Platform
22.4 Discussion
22.4.1 Engagement and Children's Input Based on Their Abilities
22.4.2 Building Rapport
22.4.3 Individuals
22.4.4 Suitable Environments
22.4.5 Creativity Potentials
22.4.6 Teacher’s Involvement
22.5 Conclusion
References
23 Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism
23.1 Introduction
23.2 State of the Art
23.3 Design
23.3.1 Space
23.3.2 Multiplayer
23.4 Recording Session
23.5 Evaluation
23.5.1 Setup
23.5.2 Target Group and Sampling
23.5.3 Evaluating the Children
23.5.4 Evaluating the Guardians
23.5.5 Microsoft Desirability Toolkit
23.6 Ethical Issues
23.7 Conclusion
References
Part IV Design and Development
24 Design and Development
24.1 Introduction
24.1.1 Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques [1]
24.1.2 Exploring Current Board Games’ Accessibility Efforts for Persons with Visual Impairment [6]
24.1.3 An Extensible Cloud-Based Avatar: Implementation and Evaluation [7]
24.1.4 Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition [8]
24.2 Conclusions
References
25 Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques
25.1 Introduction
25.1.1 Participatory Design
25.1.2 Participatory Design and Neurodevelopmental Disabilities
25.2 Transfer of Tacit Knowledge: Communicating the Lived Experience
25.3 Active Co-creation
25.4 Making Ideas Tangible: Prototyping
25.4.1 Prototyping Techniques
25.5 Empowerment Through Decision-Making
25.6 The Importance of Setting
25.7 Use of Proxies
25.8 Ownership
25.9 Conclusion
References
26 Exploring Current Board Games’ Accessibility Efforts for Persons with Visual Impairment
26.1 Introduction
26.2 Selection Classification
26.3 Accessible Digital Games
26.4 Accessible Board Games: Community and Industry Efforts
26.5 Game Accessibility Guidelines
26.6 Immersive Technologies (VR and AR) and Related
26.7 Conclusions
References
27 An Extensible Cloud-Based Avatar: Implementation and Evaluation
27.1 Introduction
27.2 Previous Work
27.3 Building the Avatar
27.3.1 Lip-Syncing Spoken Words
27.3.2 Building a Realistic Utterance State Transition
27.4 Rendering the Avatar
27.4.1 Distributed Rendering in the Cloud
27.5 User Study
27.5.1 Method
27.5.2 Results
27.5.3 Discussion
References
28 Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition
28.1 Introduction
28.1.1 Review Process
28.2 Novel Game Interaction Using EEG and Eye-Tracking
28.2.1 Brain-Computer Interfaces
28.2.2 EEG in Games
28.2.3 Eye-Tracking Technology
28.3 Limitations and Design Recommendations
28.3.1 Design Considerations for BCI and Eye Tracking in Games
28.4 Conclusion
References
Glossary and Acronyms

Intelligent Systems Reference Library 196

Anthony Lewis Brooks · Sheryl Brahnam · Bill Kapralos · Amy Nakajima · Jane Tyerman · Lakhmi C. Jain, Editors

Recent Advances in Technologies for Inclusive Well-Being Virtual Patients, Gamification and Simulation

Intelligent Systems Reference Library Volume 196

Series Editors Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK

The aim of this series is to publish a Reference Library, including novel advances and developments in all aspects of Intelligent Systems in an easily accessible and well-structured form. The series includes reference works, handbooks, compendia, textbooks, well-structured monographs, dictionaries, and encyclopedias. It contains well-integrated knowledge and current information in the field of Intelligent Systems. The series covers the theory, applications, and design methods of Intelligent Systems. Virtually all disciplines such as engineering, computer science, avionics, business, e-commerce, environment, healthcare, physics and life science are included.

The list of topics spans all the areas of modern intelligent systems such as: Ambient intelligence, Computational intelligence, Social intelligence, Computational neuroscience, Artificial life, Virtual society, Cognitive systems, DNA and immunity-based systems, e-Learning and teaching, Human-centred computing and Machine ethics, Intelligent control, Intelligent data analysis, Knowledge-based paradigms, Knowledge management, Intelligent agents, Intelligent decision making, Intelligent network security, Interactive entertainment, Learning paradigms, Recommender systems, Robotics and Mechatronics including human-machine teaming, Self-organizing and adaptive systems, Soft computing including Neural systems, Fuzzy systems, Evolutionary computing and the Fusion of these paradigms, Perception and Vision, Web intelligence and Multimedia.

Indexed by SCOPUS, DBLP, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

More information about this series at http://www.springer.com/series/8578

Anthony Lewis Brooks · Sheryl Brahnam · Bill Kapralos · Amy Nakajima · Jane Tyerman · Lakhmi C. Jain
Editors

Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation

Editors

Anthony Lewis Brooks, Aalborg University, Esbjerg, Denmark
Sheryl Brahnam, Computer Information Systems, Missouri State University, Springfield, MO, USA
Bill Kapralos, maxSIMhealth, Ontario Tech University, Oshawa, ON, Canada
Amy Nakajima, SIM Advancement & Innovation, Simulation Canada, Toronto, ON, Canada
Jane Tyerman, Trent/Fleming School of Nursing, Trent University, Peterborough, ON, Canada
Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK; University of Technology Sydney, Sydney, Australia

ISSN 1868-4394 · ISSN 1868-4408 (electronic)
Intelligent Systems Reference Library
ISBN 978-3-030-59607-1 · ISBN 978-3-030-59608-8 (eBook)
https://doi.org/10.1007/978-3-030-59608-8

© Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

As co-editors, we welcome you as readers of this volume, in anticipation that you will enjoy the contents and hopefully inform other potential readers of the scope of knowledge across fields associated with ‘Technologies for Inclusive Well-Being’ offered herein. We, the co-editors, are located in different corners of the globe, from Canada, Australia, UK, USA, and even little Denmark—now waving the flag for the European Union contribution after Brexit! While we work in different specific disciplines and industries, we have common ground in being involved in education, research, and practices associated with health care and technologies targeting human ‘Well-Being’. Thus aligned, we believe a richness of knowledge differences alongside motivational inspirations resides within the works presented in (and between) the pages in front of you, which we anticipate can inform and inspire, stimulate and even surprise—and together we are proud to be a part of producing this contribution to the field.

This book on ‘Technologies for Inclusive Well-Being’ follows on from associated publications, i.e. the 2014 [1] and 2017 [2] volumes, also edited by members of our current editing team. The decision to edit another volume came about through the amassed positive responses to the earlier publications, as indicated by the nearly 30,000 downloads at the time of writing (Spring 2020), and we anticipate similar numbers for this volume. Inclusive well-being would seem a hot and growing topic. Technologies associated with well-being continue to advance alongside their adoption in applied practices; as reflected by international conferences around the world, it was clear that the demand to expand should include the topics named in the title herein. In line with this, in this edition the team of co-editors has grown to six, and we are pleased to welcome Dr. Jane Tyerman and Dr. Amy Nakajima, both from Ottawa, Canada (please see the ‘About the Editors’ material for more details).
This Preface follows the 2017 volume in being titled Recent Advances in Technologies for Inclusive Well-Being. The 2020 sub-title informs on wider subjects of virtual patients, gamification, modelling, and simulation, thus building upon the earlier foci of ‘Wearables, Virtual Interactive Spaces (VIS)/Virtual Reality, Authoring tools, and Games (Serious/Gamification)’ in the 2017 volume that, in turn, built upon the 2014 foci of Serious Games, Alternative Realities, and Play Therapy. The vision behind realising incremental volumes was to achieve an ongoing, meaningful contribution for wide readerships across scholars and students; practitioners, administrators, and leaders; across industries and disciplines associated with digital wellness aligned to the evolution of the health industry [3].

In achieving such publications, it is acknowledged that they would not have been possible without the authors whose contributions have been shared to the best of our abilities as editors. This, of course, means that behind the scenes there are many people involved beyond those mentioned herein—from Springer staff who have supported and made tangible this and the other volumes, to the numerous international scholar peer reviewers who gave time to read, reflect, and critique submissions over a long period, offering their wise comments to support optimising each text.

This publication covers wide ground, as introduced in the first chapter. Authors covering a gamut of disciplines come together under the inclusive well-being theme, and it is anticipated that there is something for everyone, be they academics, students, or otherwise interested parties. The main aim of the book is to disseminate this growing field through a combined effort to inform, educate, evoke—or even provoke, at least in thought—responses and discussions. While not the sole purpose, the editors, along with the authors, believe it important to bring the work presented out from behind the walls of establishments into the public sphere, so as to have impact at a societal level. The challenge of bringing together a collection of seminal work relating to technology is that it is subject to encroachment—things move fast. We have been aware of this challenge and of the need to publish a contemporary volume within a schedule, considering the prerequisite for up-to-date(ness) of the presented research.
The initial timeline had to be extended due to the need to counterbalance the editors’ different time zones, work and family commitments, and the busy lives and distractions of the real world—for this delay, we apologise to authors. However, in stating this, we believe that the extension has resulted in an even stronger contribution, realised in a form to credit all involved. Acknowledgements are given to all authors for their submitted works and for their patience and understanding of the editorial team’s challenges in realising what is anticipated to be an impactful volume. We thank Springer’s publishing team for their input to realise the volume. The editors thank their own families, whose tolerance in supporting us in tackling such endeavours to publish is often tested; we are indebted for their support. The last acknowledgement is given to you, the reader, whom we thank for coming onboard from your specific individual perspective; in thanking you for your interest in the work, we anticipate your curiosity being stimulated by individual texts so as to read not only chapters labelled in line with your position, but also to stray and explore chapters not aligned to your discipline. In line with this latter statement, we offer no suggestions about how to read the book.

It is apt to mention at this time that this volume took longer than expected because of various delaying issues beyond our control, and accordingly, we apologise to the early-submitting authors who have been patient in their wait to see the realised publication. Also, the final stages of the volume’s completion happened at an unprecedented time in the world—after devastating fires in the Australian region, a wider invisible global threat to life and daily activities as we knew them rose out of China in the form of the COVID-19 pandemic. The authors of the contents of this volume mostly contributed prior to the pandemic. Few, upon hearing of the initial incursions in Asia, could have forecast its rampant impact, which has devastated societies, communities, and families across nations globally, with much loss of life, traumatic experiences, and irreparable damage to infrastructures and economies. Our hearts and thoughts go out to those affected in whatever situation they find themselves, and we wish the very best to all.

Many people at this time are comparing, wonderingly, their life before the onset of COVID-19 and their life as lived during this pandemic, and they are asking themselves and others what the world will be like following the cessation of restrictions, after effective vaccines and medicines are invented, as they must be. The future is in the balance as the latest news suggests the coming of a second wave, as deaths and cases again rise in some countries. Trepidation and anxiety are pervasive, as healthcare workers and those caring for the aged—doctors, nurses, carers, staff, and all others involved—engage daily at the front line, battling on behalf of the human race and each individual affected. These heroes should never be forgotten! We extend our thanks to all who are involved in Well-Being issues in this regard and alongside others. Humbly, the co-editors ask: What will human Well-Being entail following the pandemic? How will future societies govern for Well-Being? What form will future ‘Advances in Technologies for Inclusive Well-Being’ take? … and more.
For now, we pray that, to minimise impact, we all respect physical distancing as advised by experts, maintain the highest level of hygiene, and, if any signs are suspected, self-quarantine. In so doing, we all give respect, love, and support to others through this challenge for humankind. And for future generations ahead, there should be stories passed down of the heroes in health services worldwide who battled through this pandemic and continue to fight, saving lives and caring for others. Their sacrifices should not be forgotten, nor how they promoted inclusive Well-Being in whatever form and shape that may have taken.

In closing, we, the editors, extend our warmest regards and encourage you to explore the texts herein, whetting your appetite, and then to dive further into the body of work, possibly being stimulated to even visit the earlier volumes—enjoy! From us all, we wish you optimal well-being: stay safe and keep healthy.

Anthony Lewis Brooks, Esbjerg, Denmark
Sheryl Brahnam, Springfield, USA
Bill Kapralos, Oshawa, Canada
Amy Nakajima, Toronto, Canada
Jane Tyerman, Peterborough, Canada
Lakhmi C. Jain, Sydney, Australia

viii

Preface

References

1. Brooks, A.L., Brahnam, S., Jain, L.C. (eds.): Technologies of Inclusive Well-Being: Serious Games, Alternative Realities, and Play Therapy. Studies in Computational Intelligence. Springer. https://www.springer.com/gp/book/9783642454318 (2014)
2. Brooks, A.L., Brahnam, S., Kapralos, B., Jain, L.C. (eds.): Recent Advances in Technologies for Inclusive Well-Being: From Worn to Off-body Sensing, Virtual Worlds, and Games for Serious Applications. Intelligent Systems Reference Library. Springer. https://www.springer.com/gp/book/9783319498775 (2017)
3. Health 5.0: the emergence of digital wellness. https://medium.com/qut-cde/health-5-0-the-emergence-of-digital-wellness-b21fdff635b9

Contents

1 Re – Reflecting on Recent Advances in Technologies of Inclusive Well-Being
Anthony Lewis Brooks

Part I Gaming, VR, and Immersive Technologies for Education/Training

2 Gaming, VR, and Immersive Technologies for Education/Training
Anthony Lewis Brooks

3 Experiential Training of Hand Hygiene Using Virtual Reality
Lauren Clack, Christian Hirt, Andreas Kunz, and Hugo Sax

4 Useful, Usable and Used?
Chantal M. J. Trudel

5 Four-Component Instructional Design Applied to a Game for Emergency Medicine
Tjitske J. E. Faber, Mary E. W. Dankbaar, and Jeroen J. G. van Merriënboer

Contents

5.2.3 4C/ID in Educational Games . . . . . . . . . . . . . . 5.2.4 4C/ID in Medical Education . . . . . . . . . . . . . . 5.3 Redesigning a Game for Emergency Care Using 4C/ID . 5.3.1 Learning Tasks and Task Classes . . . . . . . . . . 5.3.2 Support and Guidance . . . . . . . . . . . . . . . . . . 5.3.3 Supportive Information . . . . . . . . . . . . . . . . . . 5.3.4 Procedural Information . . . . . . . . . . . . . . . . . . 5.3.5 Part-Task Practice . . . . . . . . . . . . . . . . . . . . . 5.3.6 Design Process and Challenges . . . . . . . . . . . . 5.3.7 Plans for Evaluation . . . . . . . . . . . . . . . . . . . . 5.4 Discussion and Lessons Learned . . . . . . . . . . . . . . . . . 5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

7

xi

. . . . . . . . . . . . .

. . . . . . . . . . . . .

. . . . . . . . . . . . .

. . . . . . . . . . . . .

A Review of Virtual Reality-Based Eye Examination Simulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michael Chan, Alvaro Uribe-Quevedo, Bill Kapralos, Michael Jenkin, Kamen Kanev, and Norman Jaimes 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Ophthalmoscopy Examination . . . . . . . . . . . . . . . . . . . . . . . 6.2.1 The Ophthalmoscope and Eye Fundus Examination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2.2 Ophthalmoscope Alternatives . . . . . . . . . . . . . . . . . 6.3 Simulation and Medical Education . . . . . . . . . . . . . . . . . . . . 6.3.1 Standardised Patients . . . . . . . . . . . . . . . . . . . . . . . 6.3.2 Computer-Based Simulation . . . . . . . . . . . . . . . . . . 6.3.3 Virtual/Augmented/Mixed Reality . . . . . . . . . . . . . . 6.3.4 Simulation in Ophthalmology . . . . . . . . . . . . . . . . . 6.4 Direct Ophthalmoscopy Simulators . . . . . . . . . . . . . . . . . . . . 6.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enhanced Reality for Healthcare Simulation . . . . . . . . . . . . . . Fernando Salvetti, Roxane Gardner, Rebecca D. Minehart, and Barbara Bertagni 7.1 Enhanced Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Enhanced Hybrid Simulation in a Mixed Reality Setting, Both Face-to-Face and in Telepresence . . . . . . . . . . . . . . . 7.3 e-REAL as a CAVE-Like Environment Enhanced by Augmented Reality and Interaction Tools . . . . . . . . . . 7.4 The Simulation’s Phases Enhanced by e-REAL and the Main Tools Made Available by the System . . . . . 7.5 Visual Storytelling and Contextual Intelligence, Cognitive Aids, Apps and Tools to Enhance the Education Process in a Simulation Lab or In Situ . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . .

71 72 73 73 73 75 75 76 76 79 79 80 81

.

83

. .

84 86

. 86 . 87 . 88 . 89 . 90 . 90 . 91 . 91 . 99 . 100

. . . 103

. . . 104 . . . 105 . . . 111 . . . 114

. . . 122

xii

Contents

7.6 7.7

The Epistemological Pillars Supporting e-REAL . . . . . . . Case-Study: Teamwork and Crisis Resource Management for Labor and Delivery Clinicians . . . . . . . . . . . . . . . . . 7.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

8

9

. . . . 127 . . . . 128 . . . . 133 . . . . 136

maxSIMhealth: An Interconnected Collective of Manufacturing, Design, and Simulation Labs to Advance Medical Simulation Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . maxSIMhealth Group 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.1.1 Immersive Technologies . . . . . . . . . . . . . . . . . . . . 8.2 maxSIMhealth Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2.1 Immersive Technology-Based Solutions . . . . . . . . . 8.2.2 Gamification- (and Serious Gaming-) Based Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2.3 The Gamified Educational Network (GEN) . . . . . . 8.2.4 3D Printing-Based Solutions . . . . . . . . . . . . . . . . . 8.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Serious Games and Multiple Intelligences for Customized Learning: A Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enilda Zea, Marco Valez-Balderas, and Alvaro Uribe-Quevedo 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Multiple Intelligences . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Challenges to Educators . . . . . . . . . . . . . . . . . . . . . . . 9.4 Technology Opportunities . . . . . . . . . . . . . . . . . . . . . . 9.5 Serious Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . 141 . . . .

. . . .

141 143 144 144

. . . . . .

. . . . . .

158 158 164 169 171 171

. . . . . 177 . . . . . . .

10 A Virtual Patient Mobile Application for Convulsive and Automated External Defibrillator Practices . . . . . . . . . . . Engie Ruge Vera, Mario Vargas Orjuela, Alvaro Uribe-Quevedo, Byron Perez-Gutierrez, and Norman Jaimes 10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Background Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.1 Early Simulation . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Modern Simulation . . . . . . . . . . . . . . . . . . . . . . 10.3 Mobile Application Development . . . . . . . . . . . . . . . . . . 10.3.1 Automatic External Defibrillation . . . . . . . . . . . 10.3.2 Convulsive Treatment . . . . . . . . . . . . . . . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

177 179 180 181 182 184 186

. . . . 191

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

192 193 194 195 196 196 197

Contents

10.3.3 Design and Development . . . . . . 10.3.4 Game/Learning Mechanics . . . . . 10.4 Preliminary Study . . . . . . . . . . . . . . . . . . 10.4.1 Participants . . . . . . . . . . . . . . . . 10.4.2 Pre and Post-test . . . . . . . . . . . . 10.4.3 System Usability Scale . . . . . . . . 10.4.4 Game Engagement Questionnaire 10.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

xiii

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

11 Lessons Learned from Building a Virtual Patient Platform Olivia Monton, Allister Smith, and Amy Nakajima 11.1 Introduction: Simulation and Virtual Patients . . . . . . . 11.2 Virtual Patient Platform Requirements . . . . . . . . . . . . 11.3 Obstacles and Challenges . . . . . . . . . . . . . . . . . . . . . 11.4 Lessons Learned . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.5 A Way Forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Engaging Learners in Presimulation Preparation Through Virtual Simulation Games . . . . . . . . . . . . . . . . . . . . . . . . . . Marian Luctkar-Flude, Jane Tyerman, Lily Chumbley, Laurie Peachey, Michelle Lalonde, and Deborah Tregunno 12.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.1.1 Presimulation Preparation . . . . . . . . . . . . . . . 12.1.2 Virtual Simulations . . . . . . . . . . . . . . . . . . . . 12.1.3 Virtual Simulation Games . . . . . . . . . . . . . . . 12.1.4 Presimulation Preparation Using Virtual Simulation Games . . . . . . . . . . . . . . . . . . . . 12.2 Virtual Simulation Game Project . . . . . . . . . . . . . . . . 12.2.1 Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2.2 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2.3 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.2.4 Scenario Selection . . . . . . . . . . . . . . . . . . . . 12.2.5 Description of the Innovation . . . . . . . . . . . . 12.2.6 Usability Testing . . . . . . . . . . . . . . . . . . . . . 12.2.7 Cost Utility and Learning Outcomes . . . . . . . 12.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12.4.1 Strengths and Limitations . . . . . . . . . . . . . . . 12.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

198 200 202 203 204 205 205 206 207

. . . . . . 211 . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

212 214 216 218 218 219

. . . . . . 223

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

225 225 226 227

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

. . . . . . . . . . . . . .

229 230 230 230 230 231 231 231 232 232 233 233 234 235

xiv

Part II

Contents

VR/Technologies for Rehabilitation

13 VR/Technologies for Rehabilitation . . . . . . . . . . . . . . . . . . . . . . . Anthony Lewis Brooks 13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1.1 Game-Based (Re)habilitation via Movement Tracking [2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1.2 Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development [3] . . . . . . . . . . . . . 13.1.3 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training [4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13.1.4 Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi) [5] . . . . . . . . . . . . . . . . . . 13.2 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Game-Based (Re)Habilitation via Movement Tracking Anthony Lewis Brooks and Eva Brooks 14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1.1 Presence and Aesthetic Resonance: As a ‘Sense State’ Continuum . . . . . . . . 14.1.2 Play . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.1.3 Under Used Resource for Therapy . . . . 14.2 Gameplaying and Mastery . . . . . . . . . . . . . . . . . 14.3 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.3.1 Description of Material . . . . . . . . . . . . . 14.3.2 Description of Procedure . . . . . . . . . . . 14.3.3 Description of the Set up . . . . . . . . . . . 14.3.4 Description of Analysis . . . . . . . . . . . . 14.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.4.1 Tempo Spatial Movements . . . . . . . . . . 14.4.2 Interface and Activities . . . . . . . . . . . . . 14.4.3 Resource for Therapy . . . . . . . . . . . . . . 14.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14.6 Conclusions . . . . . . 
. . . . . . . . . . . . . . . . . . . . . Appendix 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . 241 . . 241 . . 242

. . 244

. . 246 . . 249 . . 251 . . 251

. . . . . . . . . . 253 . . . . . . . . . . 253 . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . .

254 255 255 256 257 258 258 259 259 262 262 263 264 265 266 267 267 268 269 273

Contents

15 Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yurgos Politis, Nigel Newbutt, Nigel Robb, Bryan Boyle, Hung Jen Kuo, and Connie Sung 15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.2 Neurodiversity and Participatory Design . . . . . . . . . . . . . . . 15.3 Ethical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.4 Case Study Presentations . . . . . . . . . . . . . . . . . . . . . . . . . . 15.5 Case Study 1: Engaging Users in the Potential of Virtual Reality Opportunities for Learning in Schools . . . . . . . . . . . 15.5.1 Brief Overview/introduction . . . . . . . . . . . . . . . . . 15.5.2 Aims and Objectives . . . . . . . . . . . . . . . . . . . . . . 15.5.3 Context/Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.5.4 Case Study Group/Characteristics . . . . . . . . . . . . . 15.5.5 Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.6 Case Study 2: Participatory Design Approach to Co-Create Training Materials on a Daily Living Task for Young Adults with Intellectual Disabilities . . . . . . . . . . . . . . . . . . . . . . . . 15.6.1 Brief Overview/introduction . . . . . . . . . . . . . . . . . 15.6.2 Aims and Objectives . . . . . . . . . . . . . . . . . . . . . . 15.6.3 Context/Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.6.4 Case Study Group/Characteristics . . . . . . . . . . . . . 15.6.5 Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15.7 Overall Discussion and Conclusions . . . . . . . . . . . . . . . . . . 15.8 Implications for Practice and Further Work . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
16 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training . . . . . . . . . . . . . . . . . . . . . . . . . Anthony Lewis Brooks 16.1 Preamble/Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.1.1 Simulation and Targeted End-Users/participants . . . 16.1.2 PoC—Design Justification . . . . . . . . . . . . . . . . . . . 16.1.3 Technology and End-Users . . . . . . . . . . . . . . . . . . 16.2 Technologies and Terminology: From Virtual Reality (VR) to Virtual Interactive Space (VIS) . . . . . . . . . . . . . . . . . . . 16.3 Background and Concept—Fieldwork and Theoretical Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.4 Fieldwork . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.5 Hydrotherapy (with Innate Multimedia-Driven Causal Cycles of Action-Interactions) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.6 Aquatic and Virtual ‘Immersion’ (Pun Intended) . . . . . . . . . 16.7 Set-Up of PoC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

xv

. . 275

. . . .

. . . .

276 277 279 279

. . . . . .

. . . . . .

280 280 280 282 282 284

. . . . . . . . .

. . . . . . . . .

289 289 289 290 290 291 294 296 297

. . 299 . . . .

. . . .

299 300 301 301

. . 302 . . 305 . . 308 . . 308 . . 309 . . 310

xvi

Contents

16.8

Software Examples for Non-Aquatic Movement Tracking-Environments (Typically Dance) . . . . . . . . . 16.9 Techniques—for Example with EyesWeb and EyeCon Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.10 Lighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.11 Projected Image Versus HMD . . . . . . . . . . . . . . . . . . 16.12 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.13 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.14 Further Challenges, Critique, and Reflections Toward Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16.15 Closing Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . 310 . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

311 313 313 315 316

. . . . . . 317 . . . . . . 320 . . . . . . 321

17 Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anthony Lewis Brooks 17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.2 Biofeedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.3 Multisensory Stimulus: Sound, Sound Therapy, Music Therapy, Vibroacoustic Intervention . . . . . . . . . . . . . . . . 17.4 Soundbeam and Sound Therapy . . . . . . . . . . . . . . . . . . . 17.5 Multisensory Stimulus: Visuals—Case Studies 1 and 2 . . 17.6 Multisensory Stimulus: Tactile/Haptic = Vibroacoustic Therapeutic Intervention . . . . . . . . . . . . . . . . . . . . . . . . 17.7 VIBRAC and Review of the Field . . . . . . . . . . . . . . . . . 17.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.9 Future Research in Interactive Vibroacoustic Therapeutic Intervention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17.10 Postscript . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Part III

. . . . .

. . . . 325 . . . . 325 . . . . 326 . . . . 328 . . . . 329 . . . . 332 . . . . 333 . . . . 336 . . . . 337 . . . . 337 . . . . 338 . . . . 339

Health and Well-Being

18 Health and Well-Being . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anthony Lewis Brooks 18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.1.1 Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk [1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18.1.2 Electrorganic Technology for Inclusive Well-being in Music Therapy [2] . . . . . . . . . . . . . . . . . . . . . . 18.1.3 Interactive Multimedia: A Take on Traditional Day of the Dead Altars [3] . . . . . . . . . . . . . . . . . . . . . 18.1.4 Implementing Co-design Practices for the Development of a Museum Interface for Autistic Children [4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . 345 . . 345

. . 346 . . 346 . . 348

. . 349

Contents

xvii

18.1.5

Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism [10] . . . . . . . . . . . . . . . . . . 351 18.2 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 19 Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk . . . . . . . . . Hung Jen Kuo, Connie Sung, Nigel Newbutt, Yurgos Politis, and Nigel Robb 19.1 Introduction: Technology as Daily Routine . . . . . . . . . . . 19.2 Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.2.1 Technology for Mainstreaming Assistive Device 19.2.2 Technology for Education and Employment . . . . 19.2.3 Technology for Service Delivery . . . . . . . . . . . . 19.2.4 Technology for Social Interaction and Recreation . . . . . . . . . . . . . . . . . . . . . . . . . 19.3 Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19.3.1 Assistive Technology Being Abandoned . . . . . . 19.3.2 Technology as Ethical Concerns . . . . . . . . . . . . 19.3.3 Technology as Social Disincentive . . . . . . . . . . 19.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Electrorganic Technology for Inclusive Well-being in Music Therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anthony Lewis Brooks and Carl Boland 20.1 Introduction and Background . . . . . . . . . . . . . . . . . . . . 20.2 Music and Music Therapy . . . . . . . . . . . . . . . . . . . . . . 20.3 Technology Empowered Musical Expression in Therapeutic Settings . . . . . . . . . . . . . . . . . . . . . . . . 20.4 Alternative Musical Instruments and the aFrame in Music Therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
20.5 Musicality and Nuances of Expression . . . . . . . . . . . . . 20.6 ATV Electrorganic aFrame . . . . . . . . . . . . . . . . . . . . . 20.7 Adaptive Timbre Technology . . . . . . . . . . . . . . . . . . . . 20.8 The Electrorganic aFrame in Use . . . . . . . . . . . . . . . . . 20.9 European Music Therapy Conference (EMTC), Aalborg Denmark 2019 (See Brooks [3]) . . . . . . . . . . . 20.10 Proof of Concept and Feasibility Trials in Practice . . . . 20.10.1 Next Steps—A Speculation . . . . . . . . . . . . . . . 20.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . 353

. . . . .

. . . . .

. . . . .

. . . . .

354 355 355 356 357

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

357 360 360 361 363 365 366

. . . . . 373 . . . . . 373 . . . . . 374 . . . . . 375 . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

376 377 378 379 382

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

383 385 387 388 389

xviii

Contents

21 Interactive Multimedia: A Take on Traditional Day of the Dead Altars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ramón Iván Barraza Castillo, Alejandra Lucía De la Torre Rodríguez, Rogelio Baquier Orozco, Gloria Olivia Rodríguez Garay, Silvia Husted Ramos, and Martha Patricia Álvarez Chávez 21.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.2 Day of the Dead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.3 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.3.1 Technology-Enhanced Exhibitions . . . . . . . . . . . . . . 21.3.2 Exhibitions, Interventions, and Mental Well-being . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.4 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.4.1 Traditional Altars . . . . . . . . . . . . . . . . . . . . . . . . . . 21.4.2 Narrative Elements . . . . . . . . . . . . . . . . . . . . . . . . . 21.4.3 Interactivity and User Experience . . . . . . . . . . . . . . 21.4.4 Altar Installation . . . . . . . . . . . . . . . . . . . . . . . . . . 21.5 Exhibition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Implementing Co-Design Practices for the Development of a Museum Interface for Autistic Children . . . . . . . . . . . . . Dimitra Magkafa, Nigel Newbutt, and Mark Palmer 22.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.2.1 The Emergence of Interactive Technologies for Children with Autism . . . . . . . . . . . . . . . . . 22.2.2 Research on Co-Design Technology for Autistic 22.3 Study Design . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . 22.3.1 Design and Development . . . . . . . . . . . . . . . . . 22.3.2 Stage 1 Discovery . . . . . . . . . . . . . . . . . . . . . . 22.3.3 Stage 2 Concept Development . . . . . . . . . . . . . 22.3.4 Stage 3 User-Testing- Evaluating the Interface . . 22.3.5 Stage 4 Re-Design the Platform . . . . . . . . . . . . 22.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.4.1 Engagement and children’s Input Based on Their Abilities . . . . . . . . . . . . . . . . . . . . . . . 22.4.2 Building Rapport . . . . . . . . . . . . . . . . . . . . . . . 22.4.3 Individuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22.4.4 Suitable Environments . . . . . . . . . . . . . . . . . . . 22.4.5 Creativity Potentials . . . . . . . . . . . . . . . . . . . . . 22.4.6 Teacher’s Involvement . . . . . . . . . . . . . . . . . . . 22.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. 391

. . . .

392 393 394 394

. . . . . . . . .

395 396 397 399 401 405 412 416 417

. . . . 421 . . . . 421 . . . . 423 . . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

423 424 425 426 428 432 434 434 435

. . . . . . . .

. . . . . . . .

. . . . . . . .

. . . . . . . .

435 437 437 438 438 439 440 441

Contents

xix

23 Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism . . . . . . . Lars Andersen, Nicklas Andersen, Ali Adjorlu, and Stefania Serafin 23.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.2 State of the Art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.3 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.3.1 Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.3.2 Multiplayer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.4 Recording Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.5.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.5.2 Target Group and Sampling . . . . . . . . . . . . . . . . . 23.5.3 Evaluating the Children . . . . . . . . . . . . . . . . . . . . 23.5.4 Evaluating the Guardians . . . . . . . . . . . . . . . . . . . 23.5.5 Microsoft Desirability Toolkit . . . . . . . . . . . . . . . . 23.6 Ethical Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Part IV

. . 445 . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . .

445 446 447 448 450 451 451 451 452 453 454 455 456 456 457

Design and Development

24 Design and Development
Anthony Lewis Brooks
24.1 Introduction
24.1.1 Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques [1]
24.1.2 Exploring Current Board Games’ Accessibility Efforts for Persons with Visual Impairment [6]
24.1.3 An Extensible Cloud-Based Avatar: Implementation and Evaluation [7]
24.1.4 Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition [8]
24.2 Conclusions
References

25 Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques
Nigel Robb, Bryan Boyle, Yurgos Politis, Nigel Newbutt, Hung Jen Kuo, and Connie Sung
25.1 Introduction
25.1.1 Participatory Design
25.1.2 Participatory Design and Neurodevelopmental Disabilities
25.2 Transfer of Tacit Knowledge: Communicating the Lived Experience
25.3 Active Co-creation
25.4 Making Ideas Tangible: Prototyping
25.4.1 Prototyping Techniques
25.5 Empowerment Through Decision-Making
25.6 The Importance of Setting
25.7 Use of Proxies
25.8 Ownership
25.9 Conclusion
References

26 Exploring Current Board Games’ Accessibility Efforts for Persons with Visual Impairment
Frederico Da Rocha Tomé Filho, Bill Kapralos, and Pejman Mirza-Babaei
26.1 Introduction
26.2 Selection Classification
26.3 Accessible Digital Games
26.4 Accessible Board Games: Community and Industry Efforts
26.5 Game Accessibility Guidelines
26.6 Immersive Technologies (VR and AR) and Related
26.7 Conclusions
References

27 An Extensible Cloud Based Avatar: Implementation and Evaluation
Enas Altarawneh, Michael Jenkin, and I. Scott MacKenzie
27.1 Introduction
27.2 Previous Work
27.3 Building the Avatar
27.3.1 Lip-Syncing Spoken Words
27.3.2 Building a Realistic Utterance State Transition
27.4 Rendering the Avatar
27.4.1 Distributed Rendering in the Cloud
27.5 User Study
27.5.1 Method
27.5.2 Results
27.5.3 Discussion
References

28 Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition
Samantha N. Stahlke, Josh D. Bellyk, Owen R. Meier, Pejman Mirza-Babaei, and Bill Kapralos
28.1 Introduction
28.1.1 Review Process
28.2 Novel Game Interaction Using EEG and Eye-Tracking
28.2.1 Brain-Computer Interfaces
28.2.2 EEG in Games
28.2.3 Eye-Tracking Technology
28.3 Limitations and Design Recommendations
28.3.1 Design Considerations for BCI and Eye Tracking in Games
28.4 Conclusion
References

Glossary and Acronyms

About the Editors

Dr. Anthony Lewis Brooks is Associate Professor at Aalborg University, Denmark, where he was director/founder of the ‘SensoramaLab’, a complex investigating human behaviour, interactivity, interfaces, and technologies for inclusive well-being. He was a founding team member of the original Medialogy education, serving as section leader, lecturer, coordinator, supervisor, and study board member. Originating from Wales, prior to academia in the 1980s/90s, he created the concept Virtual Interactive Space (VIS), a flexible, modular, tailorable, and adaptable sensor-based conglomerate system for optimising patient experiences via unencumbered gesture control of media to stimulate interactions. Through this, and the resulting closure of the afferent-efferent neural feedback loop, human performance potentials are motivated and optimised in (re)habilitation training engagement and treatment programme compliance, a concept adopted widely in health care. He has international and national awards alongside approximately twenty plenary keynote credits at major international conferences. Brooks is the Danish representative for UNESCO’s International Federation for Information Processing (IFIP) Technical Committee (WG14). He achieved his Ph.D. under the University of Sunderland, Great Britain.


Dr. Sheryl Brahnam is the Director/Founder of Missouri State University’s infant COPE (Classification Of Pain Expressions) project. Her interests focus on face recognition, face synthesis, medical decision support systems, embodied conversational agents, computer abuse, and artificial intelligence. Dr. Brahnam has published articles related to medicine and culture in such journals as Artificial Intelligence in Medicine, Expert Systems with Applications, Journal of Theoretical Biology, Amino Acids, AI & Society, Bioinformatics, Pattern Recognition, Human Computer Interaction, Neural Computing and Applications, and Decision Support Systems.

Dr. Bill Kapralos is an Associate Professor at Ontario Tech University. He is also an Honourable Guest Professor at Shizuoka University (Hamamatsu, Japan), and the Technical Lead of the Collaborative Human Immersive Interaction Laboratory (CHISIL), Sunnybrook Health Sciences Centre. His current research interests include immersive technologies, serious gaming, multi-modal virtual environments/simulation/reality, the perception of auditory events, and 3D (spatial) sound generation. He has led several large interdisciplinary and international medical-based virtual simulation/serious gaming research projects with funding from a variety of government and industry sources. He was recently awarded a Greek Diaspora Fellowship (sponsored by the Stavros Niarchos Foundation) to conduct research in Greece. He is a past recipient of an Australian Government 2018 Endeavour Executive Fellowship to conduct research in Australia, a past recipient of a Natural Sciences and Engineering Research Council of Canada (NSERC) and Japan Society for the Promotion of Science (JSPS) Fellowship to conduct research in Japan, a past recipient of an IBM CAS Faculty Award, and a past co-recipient of a Google Faculty Award.


Dr. Amy Nakajima is an active clinician-teacher and Assistant Professor at the University of Ottawa, in the Faculty of Medicine, providing both didactic and simulation-based teaching at the undergraduate and postgraduate levels. She completed her residency training at the University of Saskatchewan and received her Royal College of Physicians and Surgeons of Canada (RCPSC) certification in Obstetrics and Gynecology in 2000, and more recently, a M.Sc. in Human Factors and Systems Safety at Lund University, Sweden. Her clinical focus involves working with patients who are traditionally considered vulnerable and marginalised. Dr. Nakajima is the Director, SIM Advancement & Innovation, Simulation Canada, and has developed and delivered Simulation Canada curricula, including their online Briefing, Debriefing and Facilitating Simulation: Practical Applications of Educational Frameworks course. She regularly provides faculty development, both locally at the University of Ottawa and nationally, through Simulation Canada, and at simulation and medical education conferences, including SIM Expo. Dr. Nakajima also contributes to patient safety and quality improvement teaching at the undergraduate, postgraduate, and faculty development levels. One of her interests is the teaching and assessment of patient safety competencies using simulation modalities. She has contributed to the development of national patient safety resources, including the Canadian Patient Safety Institute’s Patient Safety and Incident Management Toolkit, their Canadian Disclosure Guidelines: Being Open with patients and families and their Canadian Framework for Teamwork and Communication, and the RCPSC’s Handover Toolkit: a resource to help teach, assess and implement a handover improvement programme.


Dr. Jane Tyerman is an Assistant Professor at the University of Ottawa, School of Nursing, Faculty of Health Sciences. She comes to the programme with over 25 years of nursing experience in critical care, medical-surgical, and psychiatric clinical practice. Additionally, she has over 15 years’ experience involving clinical simulation design, facilitation, evaluation, and virtual simulation game development. Dr. Tyerman is a leader in nursing simulation. She co-founded the Canadian Alliance of Nurse Educators Using Simulation (CAN-Sim), which connects nurse educators and allied health partners from across Canada and internationally to share knowledge, resources, and expertise in areas of simulation research and education. She has developed and teaches various specialty simulation courses for the Canadian Association of Schools of Nursing (CASN), the Canadian Alliance of Nurse Educators Using Simulation (CAN-Sim), the International Nursing Association for Clinical Simulation and Learning (INACSL), and the Ontario Simulation Alliance (OSA). Dr. Tyerman is the 2019 recipient of the CASN Award for Excellence in Nursing Education based on her innovative use of simulation.

Dr. Lakhmi C. Jain, Ph.D., ME, BE (Hons), Fellow (Engineers Australia), is with the University of Technology Sydney, Australia, and Liverpool Hope University, UK. Professor Jain founded KES International to provide the professional community with opportunities for publication, knowledge exchange, cooperation, and teaming. Involving around 5,000 researchers drawn from universities and companies worldwide, KES facilitates international cooperation and generates synergy in teaching and research. KES regularly provides networking opportunities for the professional community through one of the largest conferences of its kind (http://www.kesinternational.org/organisation.php). His interests focus on artificial intelligence paradigms and their applications in complex systems, security, e-education, e-healthcare, unmanned air vehicles, and intelligent agents.

Chapter 1

Re – Reflecting on Recent Advances in Technologies of Inclusive Well-Being Anthony Lewis Brooks

Abstract This chapter reflects on ‘Technologies of Inclusive Well-Being’ in relation to the evolution of the health sector. The editorial team’s three volumes realised to date are overviewed and aligned with contemporary literature in the field. A reflection on Covid-19 precedes a closing consideration of the future of ‘Technologies of Inclusive Well-Being’ and the impact of digital wellness in healthcare.

Keywords Virtual patients · Health 5.0 · Digital wellness · Transforming healthcare

1.1 Introduction

Contextually, an opening chapter of a book on recent advances in technologies of inclusive well-being typically has three foci: offering an introduction to the field, the editors, and the reason behind their collaborative effort in realising the book; informing readers of what lies ahead in the volume’s pages; and closing with a summary that leads into the subsequent opening section. However, in writing the opening chapter of a third volume targeting introducing the field, presenting a position, and informing upon work in the field of ‘Technologies of Inclusive Well-Being’, artistic licence is invoked to expand beyond the typical and to reflect on what potentials may (speculatively) lie ahead beyond the abyss of the current global crises across health, economy, climate, poverty, and life itself. Thus, instead of wholly following a traditional structure, this chapter closes with a sharing on ‘Future Advances in Technologies of Inclusive Well-Being’ to promote and provoke readership debate and discussion beyond the covers of this contribution, relative to the transforming of healthcare.

A. L. Brooks (B) Aalborg University, Aalborg, Denmark e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_1


1.2 The Field

The field of Technologies of Inclusive Well-Being is posited as running parallel to demographic trends, as these trends of our time point to increased pressures and demands on healthcare service providers addressing growing needs. Health spending by the powers that be has, in some countries, correspondingly increased significantly, reflecting ageing populations, an epidemic of chronic disease, advances in biomedical knowledge, increased digital healthcare innovation, and higher public expectations that place additional demands on services.

Over the last decade, advances range from stem cell banking,1 robotic surgery,2 and 3D bio-printing of muscle tissues through to complete organs for transplant,3 to early cancer detection4 and more, including rehabilitation robots. Many advances utilise mobile computing aligned to wearable sensors for personal monitoring of health condition (e.g. ECG, SpO2, blood pressure, respiration, blood sugar/glucose, and temperature) and other body management aspects, amassing human data for fast and remote diagnostics. Business models are built upon the collection of DNA/genome data that can be analysed to predict future healthcare needs, targeting prevention rather than cure for individuals subscribing to the service. However, the current focus on advancement, at the time of writing (June 2020), relates to an immediate need in respect of the Covid-19/Corona virus pandemic, which has had an unprecedented impact and has prompted large corporations to work together on creating a safe vaccine as soon as possible. The bigger picture is more challenging still when one considers the availability of healthcare outside of developed countries, when even the ‘most’ developed struggle to contain the contagion.

This unprecedented, unforeseen enemy, Covid-19, is ‘invisibly’ challenging an already pressured industry and society itself, whereby the immediate need is for a survival strategy in which urgent medical treatment is provided so that those affected do not succumb. Alongside this, survivors face a longer-term situation in which extended rehabilitation, both physical and mental, is forecast to further pressure those providing such healthcare. Our series of publications on how such service providers may take advantage of Technologies of Inclusive Well-Being is thus timely, though not directly aligned to the recent outbreak. The next section exemplifies a stuttering technology adoption in the healthcare sector which, had it been consolidated, might have left healthcare authorities, if not better prepared, at least able to respond more swiftly to Covid-19.

1 https://futurehealthbiobank.com/.
2 https://www.intuitive.com/en-us.
3 https://organovo.com/.
4 https://grail.com/.


1.2.1 Editors and Concept Background in This Field

For readers to comprehend the story behind this work, a suggested first stop is the profile of each editor, where their backgrounds are shared (see the table of contents for the section). From there, a visit to the earlier volumes should indicate the commitment and dedication to the field of enquiry that is the core of this work, as exemplified in the following.

The first volume [3] emphasised how digital technologies play an increasing role in supplementing intervention practices and methods. Substantiating this claim is the rising awareness illustrated by major research funding activities directed at such work throughout developed countries. These activities have the mission of contributing to knowledge and of realising emerging enterprise and industrial developments in the area, as well as of encouraging and informing new educational programmes involving technology that proactively look to contribute to a societal “wealth through health” regime. The 2014 volume was the first single volume titled, and focused upon, ‘Technologies of Inclusive Well-Being’. It brought together the topics of serious games, alternative realities, and play therapy. The focus was on the use of digital media for the therapeutic benefit and well-being of a wide range of people, spanning those with special needs, the elderly, and entire urban neighbourhoods. Further, it brought these topics together to demonstrate the increasing trans/inter/multi-disciplinary initiatives apparent at that time in science, medicine, and academic research, interdisciplinary initiatives that had already profoundly impacted society. The content shared the latest research on emerging intelligent paradigms in the field of serious games, alternative realities, and play therapy. It introduced and described intelligent technologies offering therapy, rehabilitation, and more general well-being care, as written by leading experts in the fields.
The second volume [4] presented innovative, alternative, and creative approaches that challenged traditional mechanisms in and across disciplines and industries targeting societal impact, associated with ‘Recent Advances in Technologies for Inclusive Well-Being’, the title of the contribution. The title sub-heading suggested the content focus by listing ‘From Worn to Off-body Sensing, Virtual Worlds, and Games for Serious Applications’. A common thread throughout the book was human-centred, uni- and multi-modal strategies across the range of human technologies, including sensing and stimuli; extended virtual and augmented worlds; accessibility; digital ethics; and more. A determined focus was on engaging, meaningful, and motivating activities that at the same time offered systemic information on the human condition, performance, and progress.

The goal of the second book was to introduce and describe some of the latest technologies offering therapy, rehabilitation, and more general well-being care. Included along with the work of researchers from the serious games, alternative realities (incorporating artificial reality, virtual reality (VR), augmented reality, mixed reality, etc.), and play therapy disciplines were the writings of digital artists who are increasingly working alongside researchers and therapists to create playful and creative environments considered safe and adaptive, offering tailored interventions via apparatus, methods, and emergent models.

The chapters in the second book illustrated how complementary overlapping between topics has increasingly become an accepted norm. Such acceptance contributed to a readdressing and questioning of associated values, resulting in new themes and topics. Topics were selected to be wide in scope to offer academics opportunities to reflect on intersections in their work. For example, these were anticipated as being specific to the concepts of serious games, alternative reality, and play therapy, or to any number of related topics. The book contents highlighted how, unlike entertainment systems, the goals of alternative-realities therapy and serious play demand the addition of sophisticated feedback systems that monitor user progress. These systems must encourage progress and intelligently and progressively adapt to users’ individual needs within an environment that is challenging, engaging, and user friendly for patients and health care professionals. Such systems were presented by authors who kindly shared their research, illustrating how the field requires the evolution of new paradigms in test battery creation that take advantage of the controllable digital framework, embodied data feedback, and other opportunities uniquely offered by virtual interactive spaces. Such invention and adaptation of measurement in research practices in this field is anticipated to continue towards such interventions as presented elsewhere in this volume.

The earlier books reportedly made an impact by presenting how play therapy (and therapeutic play) typically focuses on interactions between a professional therapist and children, where the use of toys and other objects, i.e., physical artefacts, are expressive channels for communicating and interpreting a person’s condition.
In the second volume, additional related opportunities to supplement such traditional practices via the use of digital media were posited. Further highlighted was how serious games linked to games (and gameplay) are used toward a ‘serious outcome’ to solve a defined problem. In other words, chapters informed on how games can be used ‘seriously’ in alternative realities, i.e., in computer-generated environments that are interactive through embedded virtual artefacts. These computer-generated alternative realities are commonly referred to as extended reality (covering mixed, virtual, augmented, or artificial reality). Virtual reality in therapy and rehabilitation is not a new subject: many papers reporting research advancing the field, with transfer to activities of daily living, have appeared over the last decades. The second book contributed to the field by acknowledging the impact of digital media such as extended reality and by questioning the potentials offered in traditionally ‘non-digitised’ practices. For example, using digital media in therapy with aggressive participants may reduce destruction, breakages, and damage to physical artefacts. Instead, computer graphic environments are safe, adaptive, and interactive, providing a world where things can be “virtually broken” any number of times and repaired, offering clinicians both qualitative and quantitative aspects of evaluation alongside a flexibility for creating new tools for developing the clinical outcomes required by therapy and other medical and educational interventions.

An example of this in contemporary technologies is how using virtual reality interactive environments enables a tailoring of content based upon a patient’s experience (see [2]). This can be with or without HMD use; human data can be collected and correlated to activities, and content and interface designs can iteratively reflect user experiences. In practice, a patient’s virtual reality experience can be designed such that, during therapeutic intervention, what a patient looks at and their response to what they see is available (via physiological data, e.g. eye pupil dilation, galvanic skin response, breathing, face colour, heartbeat, etc.). Depending on such patient experience responses, changes can be made for a subsequent session so that content and interface align to the targeted therapeutic outcomes. This ‘tailoring and adaption technique’, implemented correctly and creatively when using technologies for inclusive well-being, can personalise and optimise the experience of a patient in rehabilitation and other therapeutic intervention within a medical treatment programme [2]. Such a bottom-up strategy, targeting optimal patient experience, can be thought to align with the ‘Health 4.0’ and ‘Health 5.0’ keywords of ‘Digitisation’ and ‘Personalisation’ respectively, as introduced in the contemporary stages of the evolution of the healthcare sector (see Fig. 1.1). However, the ‘Digitisation’ and ‘Personalisation’ discussed in Kowalkiewicz’s [9] text are wider in range and meaning, with a top-down perspective involving how the largest corporations are designing the future of the healthcare sector, such that the term Technologies for Inclusive Well-Being in future needs elaboration and segmentation into specifics aligning to digital wellness, as suggested later in this chapter. Thus, Fig. 1.1 is included to evoke reader discussion.
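The between-session ‘tailoring and adaption technique’ described above can be sketched as a simple feedback rule. This is an illustrative sketch only, not the system reported in [2]: the signal names, normalised ranges, and thresholds are hypothetical assumptions standing in for whatever physiological measures and clinical rules an actual deployment would define.

```python
# Illustrative sketch: between-session tailoring of VR therapy content from
# logged physiological responses. All names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class SessionReadings:
    """Averaged physiological responses logged during one VR therapy session."""
    pupil_dilation: float    # normalised 0..1 (arousal proxy)
    skin_conductance: float  # normalised 0..1 (stress proxy)
    task_completion: float   # fraction of therapeutic tasks completed, 0..1


def adapt_difficulty(current_level: int, readings: SessionReadings) -> int:
    """Suggest the next session's difficulty level (1..10).

    Heuristic: raise the challenge when the patient completes tasks with low
    stress; lower it when stress is high or completion is poor.
    """
    if readings.skin_conductance > 0.7 or readings.task_completion < 0.4:
        return max(1, current_level - 1)   # over-stressed: ease off
    if readings.task_completion > 0.8 and readings.skin_conductance < 0.4:
        return min(10, current_level + 1)  # engaged and comfortable: progress
    return current_level                   # stay within the motivational zone


# Example: a calm, successful session suggests increasing the challenge.
next_level = adapt_difficulty(5, SessionReadings(0.5, 0.3, 0.9))
```

A real system would of course fold in clinician judgement and far richer signals; the point of the sketch is only the loop structure: measure during the session, then adjust content and interface for the next one.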

Fig. 1.1 ‘Five stages of evolution of the health sector’ (used with permission—cf. Kowalkiewicz [9, 10] ©)


1.2.2 Current Volume

This third volume builds upon the earlier publications by expanding with content in the healthcare simulations area and beyond, with cutting-edge research reported by luminaries in their respective fields. As the titles of each volume might suggest, there was a defined focus of topics when calling for chapters, and often the chapters, once collected, offer a slightly different trajectory to follow because of what has been received. The books, and the coining of the common term Technologies of Inclusive Well-Being, were conceived to be a catalyst for debate on interactive computer technology and its associated apparatus and methods, used in a manner whereby, for instance, the creative industries, health care, human-computer interaction, and technology sectors are encouraged to communicate with each other, to use different lenses in seeing challenges, and further to stimulate thinking about the application design and intervention practices needed to supplement and satisfy the societal demographic service needs of the future.

Already posited in this chapter is the need for inter/trans-disciplinary education initiatives to support healthcare-sector physiotherapists in getting the most out of such technologies: stated here, with respect, as meaning beyond the word processors and printers used for administration, beyond the traditional therapy apparatus used in practice, and aligned to exploring, through education with emergent models, how to optimise motivated use. By offering a wider perspective, each volume targets the need for a series of core texts that can evoke and provoke, engage and demand, and stimulate and satisfy. Debate and discussion, alongside uptake and adoption, are targeted, and communication is welcomed should any researchers or students wish it.
By presenting recent advances in technologies for inclusive well-being, state-of-the-art research on emerging intelligent paradigms, and the application of intelligent paradigms in well-being, the field of Technologies of Inclusive Well-Being is considered well presented through various practical applications and case studies. Thankfully, reviewers of the earlier volumes tend to agree (cf. Springer sites):

“This book is a sophisticated study of how games, based on a trilogy of multi-disciplinary technologies, are used to benefit the ‘well-being’ of an extremely diverse population, including at-risk elderly, the disabled, autistic and other problematic children, surgical procedures education, and urban design and architecture projects. … the concepts described can benefit anyone interested in how serious games may be used for learning and change, regardless of application.” (Glenn, Computing Reviews, August 2014 [6])

“The content of this book lies at the intersection of three specialties: medicine, virtual world technology, and research. … This book would be of interest to the general reader who wants to see how these emerging virtual world technologies are being employed in therapeutic applications. It would also be of interest to experts in these technologies who wish to move beyond entertainment, therapists who wish to explore uses of these new technologies ….” (Artz, Computing Reviews, May 2014 [1])


1.2.3 Contributions in This Book—See Table of Contents

The chapters in this book are divided into four parts that reflect major themes currently at the intersection of the field: Part 1: Gaming, VR, and immersive technologies for education/training; Part 2: VR/technologies for rehabilitation; Part 3: Health and well-being; and Part 4: Design and development.

1.2.4 Technology Adoption for Well-Being Intervention

In the next paragraphs, a delimited focus is on therapists (occupational, though suggested to extend beyond, as earlier in this chapter), exemplifying how the need for technology adoption was clearly identified three decades ago. Technology uptake by those in power as leaders of educations and healthcare service providers is presumed to have been initiated during these decades, with reflections and constructive critique of how technology use was embedded and integrated into therapists’ metaphoric toolboxes. Thus, new treatment methods and technologies have emerged over this period to improve both diagnosis and therapy practices. Such advancement and adoption are ongoing and thus align with this series of volumes.

Aligned to technological developments in healthcare and well-being, rehabilitation robots are, in 2020, likely one of the biggest talking points, whereby their increased use is predicted especially as a result of the Covid-19 situation and the pressure on elderly care-home residents and staff who come face-to-face with those having the virus. How many ‘front-line health/care workers’ might have been saved if robotics had been adopted into the industry prior to the 2019 outbreak? Other technologies seem to be appearing every day, such as apps that can act as interactive therapy training systems, alongside the use of extended (virtual, augmented, mixed) reality for immersive experiences (with or without head-mounted displays), for example when training, or gameplay to alleviate physical and mental conditions; again reportedly impactful under Covid-19. Increased long-term rehabilitation is predicted for survivors, which again burdens the already stretched healthcare providers. Such recent developments have pushed the industry in new directions and led (some) therapists to update their applied in-practice approaches in a variety of ways, as evidence suggests potentials in healthcare and well-being.

Alongside stating this, however, it can be questioned whether this technological uptake should, or could, have come much earlier and even been led from within the industry. Behind this questioning we can reflect on three decades ago, when the message of need was made clear by Hammel and Smith [7], who opened their paper titled ‘The Development of Technology Competencies and Training Guidelines for Occupational Therapists’ with the sentence “The ability to use technology has become a survival skill in our society”, and who state:


Occupational therapists have been using and will continue to use technology as part of their functional approach to treatment [11]. Due to the lack of education in this area, however, many occupational therapists are not skilled in or aware of the role they can play in the application of technology, especially within an interdisciplinary service provision team. Additionally, other service providers are rapidly implementing technologies in their practices without an awareness of the potential roles for occupational therapists in this area. These trends demonstrate the pervasive influence of technology in society and the need for occupational therapists and all rehabilitation professionals to be knowledgeable in its application. Access to technology has become as critical a need for persons with disabilities as is access to the physical environment. Therapists must be aware of and competent in the evaluation, prescription, operation, and adaptation of these technologies in order to meet the changing need of persons with disabilities. [7, p. 971]

In reflecting on technology training efforts, Hammel and Smith [7] further explained with examples how several rehabilitation professions were developing technology training guidelines and certification competencies. They point out how The American Occupational Therapy Association (AOTA) stressed the development of technology competence among its members. In the 1989–1991 Strategic Plans (AOTA, 1989–1991), technology training and dissemination were identified as primary goals. However, what actual technologies were being discussed, trained and disseminated must be asked? Those being trained in their disciplines would also likely argue that they didn’t have time allocated in their employment to learn new technology and to implement into practices. The author’s experiences (see also [2]) from this period in the early 1990s highlights a distinct lack of knowledge in many (re)habilitation therapist practices about any technologies besides computers for administration (and recreation breaks playing games). Ever present from the institution employees were—(1) worries about technology replacing their jobs instead of supplementing; (2) the costs associated to the institution healthcare provider/economy; and (3) the associated learning curves associated to the adoption of technology and their need to develop aligned knowledge, skills, and competences. Acknowledged is that in the author’s case, there were not many that understood the technology anyhow as it entailed introducing (circa early-mid 1990s) of bespoke invisible sensor-based interfaces that were mapped to give digital auditory (later expanding to include multimedia as visuals [VR], games and robotics) feedback responses to movement as a supplement to traditional “nontechnology” intervention. This so a person, for example with acquired brain injury (ABI), could be trained to sense their proprioception beyond learnt kinesthetics via alternative channels of stimuli. 
A further challenge was that the author was neither a healthcare professional with a vocabulary fitting uptake/adoption contexts, nor a salesperson with an economic profile and related vocabulary. In other words, the concept may have been too complex, as its core idea was that if a patient had damaged or lost a means of sensing (for example in acquired brain injury, where the sense of balance can be affected), then he or she could instead ‘hear’ the various torso, limb, and associated balance positions, and this auditory channelling would, internally to the human through afferent–efferent neural feedback loop closure, ‘train’ the damaged proprioceptive and/or kinaesthetic mechanisms, linked to the brain’s plasticity to adapt and learn.
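The underlying mapping principle (movement position in, auditory parameter out) can be illustrated with a minimal sketch. The function and parameter names here are hypothetical, and this is not the actual SoundScapes implementation, which drove real-time synthesis from invisible sensors:

```python
def position_to_pitch(position: float, pitch_min: float = 220.0,
                      pitch_max: float = 880.0) -> float:
    """Map a normalised limb-position reading (0.0-1.0) to a pitch in Hz.

    Toy illustration of movement-to-sound mapping: as the limb moves
    through the sensed space the sounded pitch rises linearly, offering
    an auditory channel for proprioceptive feedback.
    """
    position = max(0.0, min(1.0, position))  # clamp to the sensor's range
    return pitch_min + position * (pitch_max - pitch_min)
```

A real system would stream sensor readings continuously and map them to several feedback dimensions (pitch, volume, timbre, visuals), but the core idea is this kind of direct movement-to-stimulus mapping.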

1 Re – Reflecting on Recent Advances in Technologies of Inclusive …

9

Suffice to say that, at the start of the second decade of the twenty-first century, many therapists are much more open to such technology supplementing their practices. However, with the affordability and availability of pervasive and ubiquitous digital technologies, the challenge today is to determine which digital technologies are meaningful to adopt (this including issues such as company relationships, trust, etc.), alongside deciding how to ensure the finest training for optimal use of the technologies. To this end, emergent models for optimising intervention and evaluation, such as Brooks’ [2] SoundScapes emergent model titled ‘Zone of Optimised Motivation’ (ZOOM), have developed from practice; these still need to be widely adopted and constructively critiqued in order to put in place an optimally structured and systematic training model, within therapeutic-intervention education, regarding the best use of technologies. Notably (and associated with this position), ‘The World Confederation for Physical Therapy’ (WCPT), which represents more than 625,000 physical therapists in 121 member organisations, has announced twelve confirmed focused symposia, featuring 55 speakers from around the world, for its congress planned to be hosted in Dubai in April 2021. One focused symposium is titled “Technology in physiotherapy education: Technology enhanced physiotherapy education—Global Perspectives”: this suggests that uptake of contemporary technology for inclusive well-being, and its education for therapists, is still in need. This despite the author presenting his research on Virtual Interactive Space (VIS), and the need for such educational frameworks, at WCPT over two decades ago, prior to the millennium,5 whereby transdisciplinary collaboration (e.g. between those who create and those who use) would optimise uptake.

1.2.5 Future Advancements

In the process of finalising this book, the global Covid-19 pandemic happened. Editing a volume titled Recent Advances in Technologies for Inclusive Well-Being prompted thought processes aligned to the pandemic, and a questioning of healthcare and of the meaning of well-being during and after the crisis. In focusing on this volume’s topic there is no intention to diminish other crises prevalent in the world at the time of writing, such as climate change/global warming, poverty, imbalances in global economies, homeland safety, sustainability, and more. The goal of this section is to share knowledge and insight into what may lie ahead, as well as to provoke readers to consider and discuss similarly. We are informed that each online search, each item stored in the cloud, and each e-mail exchange and social media posting uses energy resources. This analogy aligns with Lorenz’s butterfly effect, where a seemingly irrelevant flap of a tiny wing can have huge, staggering, life-changing consequences on the other side of the world. One can relate this story to a Chinese wet market (Wuhan) where, if reports are to be believed, such a butterfly effect (or rather a bat-wing effect) took place as a zoonotic disease instance around the cusp between 2019 and 2020, leading to the current Covid-19 pandemic in which, at the time of writing, there are no known vaccines nor specific antiviral treatments. This is stated with trepidation, given reports that U.S. Secretary of State Mike Pompeo has said there is “enormous evidence” that the virus originated in a lab in Wuhan, thus not from nature.6 One can question whether we will ever know anything for sure, with fake news and propaganda now a part of contemporary life.

Online activities are tracked, with data collected and farmed to inform corporates of one’s profile, likes and dislikes, interests and disinterests; we are then bombarded with adverts fitting the profile: marketing to buy this, informing us we need that, and so on. We surf the World Wide Web (with a smile, itself also considered a ‘wet market’ given the surfing pun) with cookies and data packets placed on the computers we use, cookies we must agree to if we wish to continue to surf and access where we wish… more data collected. We were assured of the safety and secure privacy of information collection after the Facebook–Cambridge Analytica (https://en.wikipedia.org/wiki/Cambridge_Analytica) data scandal, yet after bankruptcy and closure, related firms (notably Emerdata) still exist with the same staff, and rumours suggest involvement in political elections and in such society-changing actions as the United Kingdom–European Union split referred to as Brexit: data collection is big business. Cybercrime, we are informed, is rising, with people losing savings and more through data loss; one can see how one strategy fits the other here: more data collection, more cybercrime. In the old days one had to be careful about what one threw in the garbage, in case identifying information on paper was retrievable by someone with intent to cheat another human; these days it would seem an industry based upon key presses!

5 https://www.researchgate.net/publication/257536704.
Returning to Covid-19 (https://en.wikipedia.org/wiki/Coronavirus_disease_2019), and to give a more positive perspective on technology for well-being: in February 2020 a Chinese mobile application (app) was launched in which individuals enter their name and national identification number. Surveillance data built into the app can identify others in nearby proximity so as to flag potential risk of infection and recommend self-isolation/quarantine, whilst also alerting health authorities. Elsewhere in Asia, and in Israel, other data, such as mobile-phone data used for facial recognition, tracking, and artificial intelligence, are collected and analysed to trace those likely infected through similar human–human contact. One could speculate that this is a way to use such profile-data collection and analysis technologies towards a positive goal aligned to well-being (see also later in this chapter), rather than to sell to or steal from another human being, or for large corporates to insistently market products to an online user’s profile.

At the other end of the spectrum we have large corporations using ‘technology’ of a different scale and magnitude to destroy the planet, from extracting fossil fuels to eradicating rain forests and polluting seas, seas that are rising in level due to global warming and climate change, with knock-on effects including terminating coral reefs. Could Covid-19 be nature’s way of getting back at the human race, and at greedy governments that allow this to take place for money? Closer to the topic herein, the corporations Amazon, Apple, Google, Microsoft, and others are reportedly, via their subsidiaries (and ‘skunkworks’ initiatives) in healthcare (and more), increasing their already huge computer arrays to collect and analyse genome data associated with an individual’s DNA (deoxyribonucleic acid). In doing this they aim to predict a person’s likelihood of contracting a disease, and to offer a personal subscription medical service, based upon the genome-based prediction, towards improved future well-being. Prof. Ernst Hafen of the Eidgenössische Technische Hochschule (ETH) Zürich explains how precision medicine based upon such personal data collection can have impact, since an individual’s genes influence the way he or she may react to drugs. In his text titled “The key lies in the genes” he shares how such data collection can inform medical doctors on drugs and dosage:

By identifying the right drug, in the right dose, at the right time for each patient, precision medicine has the potential to make the healthcare system more efficient and to treat patients more effectively. Essentially, precision medicine combines a patient’s genetic data with clinical, environmental and lifestyle information to guide decisions for the optimal prevention, diagnosis, and treatment of conditions. Pharmacogenomics deals with the influence of a patient’s individual genome on the effect of drugs. Using appropriate tests, physicians are able to determine in advance for patients individually which drugs are likely to work for them. The tests analyse key genes in our body involved in metabolism, transport, and elimination of the drug. Since genetics doesn’t change over time, a pharmacogenomics report listing all the drugs likely to work for us is valid for our entire lifetime.

6 https://www.voanews.com/covid-19-pandemic/who-expert-believes-wuhan-wet-market-played-role-covid-outbreak.
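The pharmacogenomics report described in Hafen’s quote above can be caricatured as a lookup from gene variants to drug guidance. The sketch below uses two widely reported gene–drug interactions, but the table contents and function names are illustrative only; real reports draw on curated clinical knowledge bases:

```python
# Illustrative gene-drug guidance; real pharmacogenomics draws on curated
# clinical databases, not a hard-coded table like this.
PGX_GUIDANCE = {
    ("CYP2D6", "poor metabolizer"): {
        "codeine": "reduced conversion to active form; consider alternative",
    },
    ("CYP2C19", "poor metabolizer"): {
        "clopidogrel": "reduced activation; consider alternative",
    },
}

def drug_guidance(patient_variants, drug):
    """Collect guidance notes for one drug given a patient's gene variants."""
    notes = []
    for variant in patient_variants:
        note = PGX_GUIDANCE.get(variant, {}).get(drug)
        if note:
            notes.append(note)
    return notes
```

Since a person’s genotype does not change, such a table, once populated from the patient’s test results, can in principle be consulted for a lifetime, which is the point Hafen makes.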

Relatedly, Amazon, with impressive business insight and a market valuation of close to $1 trillion,7 is now licensed (as Amazon Pharmacy), through its acquisition of the mail-order online pharmacy PillPack (among others), to distribute prescription drugs. Aligned with the aforementioned initiatives and activities in predictive-healthcare subscriptions (e.g. genome/DNA analysis), some experts predict disruptive advancements in future technologies and strategies associated with inclusive well-being and consumer-centric healthcare. Indeed, at the end of 2019 the global online pharmacy industry alone was predicted to grow at a compound annual growth rate (CAGR) of 14.26% to reach $107.53 billion by 2025.8

There is no doubt that the tremendous advances in intelligent paradigms, and the cheap and easy availability of computing power, have generated enough interest among researchers to develop new tools in virtually every discipline, including healthcare. A sample of the research reported in books from 2001 and 2006 [8, 12] mentions applications of intelligent systems for the visually and hearing impaired, the use of virtual reality, digital talking books, gait-training systems for computer-aided rehabilitation, and so on. In the roughly two decades since, several further technological advances have taken place in the area of well-being. Some of these include cognitive assistants [5] such as DayGuide, Active@Home, CoME, DALIA, iGenda, M3W,

7 https://healthcareweekly.com/amazon-pharmacy/.
8 https://www.globenewswire.com/news-release/2019/06/18/1870266/0/en/.

MyGuardian, PersonAAL, and so on. Online communication technologies will play a greater role in many areas, such as business, education, and healthcare. We are already witnessing greater use of Zoom, Microsoft Teams, Google Hangouts, and the like in this Covid-19 pandemic. Further innovations in gaming technologies and in virtual and augmented reality for well-being are appearing at a rate that was not imagined two decades ago. It is true that one cannot fully predict the future, but we believe that technologists can create, and predict, the future to a certain extent. This chapter therefore closes with a sharing of current, and a prediction of future, advances in technologies for inclusive well-being, where increasingly we use online services for data sharing, collection, and analysis, and for the delivery of predictions, treatment requirements, and medicine.

As mentioned earlier, there are many products illustrating the growth and scope of contemporary health and wellness technologies that enable self-monitoring of one’s own health. These products come as wearables in many forms that collect data related to health, sleep, and exercise activities. Examples include: wrist-worn devices such as the Apple Watch, Omron HeartGuide, and FitBit; self-adhesive wireless lightweight patches such as the Philips Biosensor for monitoring vital signs and critical data; the FreeStyle Libre patch and reader/app by Abbott Diabetes Care, which measures glucose levels in the interstitial fluid between the cells right under the skin; and pain-relieving devices such as Omron’s transcutaneous electrical nerve stimulation (TENS) technology, which uses self-adhesive heat-application apparatus offering varying intensities and modes for different body parts, to help block pain messages, trigger the release of endorphins, and improve blood circulation.
Other wearable devices are available that communicate wirelessly with apps, such as the QardioArm blood-pressure arm-sleeve monitor, and, by Omron, the Complete™, Evolv®, and 3, 5, 7 and 10 Series® monitors [and more (https://omronhealthcare.com/blood-pressure/)]. Other non-wearable personalised home apparatus includes Omron nebulisers, which dispense medicine in pressurised air for respiratory treatment. The list is long: some of these systems/apparatus/devices link to apps and/or a specific API, some are clinically tested and approved (e.g. by the FDA in the USA), and many utilise cloud-based data collection and/or transfer, including via such smart-home communication devices as the Amazon Alexa.

The UN World Health Organization defines health in its constitution as a state of complete physical, mental and social well-being, and not merely the absence of disease or infirmity. Health is much more than just a biomedical condition. The Swiss Cause of Health Cohort (Swiss COHCOH), from which this clip of text is sourced (https://causeofhealth.ch), consists of a team of scientists from the Swiss Federal Institute of Technology in Zurich (ETHZ) and from the University of Zurich (UZH). At their site the team state that little is known about health because medical research mainly focuses on curing disease. The COHCOH initiative rests on “the premise that an individual’s health is determined by the Health Triangle, a complex interplay between a unique individual (genome), the environment, and the individual’s behaviour in the environment”. The team further state that standardized longitudinal sets of health data from millions of people are needed to form the basis of “precision health or P4 (personalized, predictive, preventive and participatory) health” (https://cause-of-health.citizenscience.ch/en/cause-of-health).
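Behind consumer self-monitoring devices such as the blood-pressure monitors mentioned above sits simple threshold logic over each reading. A toy sketch follows; the cut-offs mirror commonly published adult guideline categories, and the function name is illustrative only (not any vendor’s API, and not medical advice):

```python
def classify_blood_pressure(systolic: int, diastolic: int) -> str:
    """Classify a single reading (mmHg) into a guideline-style category.

    Thresholds follow commonly published adult categories; a real app
    would also track trends over time and prompt the user to consult a
    clinician rather than act on a single reading.
    """
    if systolic >= 140 or diastolic >= 90:
        return "hypertension stage 2"
    if systolic >= 130 or diastolic >= 80:
        return "hypertension stage 1"
    if systolic >= 120:
        return "elevated"
    return "normal"
```

The value of such devices lies less in any one classification than in the longitudinal data stream they feed into apps and cloud services, which is precisely the kind of data the P4 (personalized, predictive, preventive and participatory) health vision depends on.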

The products, approaches, and strategies in this opening chapter arguably align with Health 5.0 and with how future well-being will become increasingly personal and digital (including self-monitoring of data via devices). It should also become more affordable, even free of charge, as the corporate digital giants need us to stay healthy and productive; they will invest in us to ensure we are feeling well and earning enough money to spend on the products and services they offer and advertise. Kowalkiewicz’s [9] translation of ‘digital wellness’ is elaborated as “efforts made to increase, maintain or restore physical, mental, or emotional wellbeing, delivered at a global scale through the use of digital technologies, rather than by individuals working directly with patients.” The final statement on future advances in technologies for inclusive well-being is thus given to Kowalkiewicz [9, 10] who, in his insightful publication related to ‘digital wellness’ and the evolution of the healthcare sector (Fig. 1.1 herein), makes explicit:

… the titans of the technology industry are focussing on health as the next industry to transform. And if you’re not sure what this may mean, just think how it was to be a customer of some other industries before they were transformed. Remember travel and hospitality (trying to book that room in Iceland in 1998)? How about news and media (newspapers were updated only once every 24 h)? Photography changed as well (24 or 36 frames in your camera and “express” development in just one hour). The health industry is about to receive a significant innovation push. It will progress toward the fifth stage of the sector: Health 5.0. Kowalkiewicz [9].

References

1. Artz, J.M.: Computing reviews, ACM Digital Library. https://dl.acm.org/doi/book/10.5555/2591771#sec-reviews (2014)
2. Brooks, A.L.: SoundScapes: the evolution of a concept, apparatus and method where ludic engagement in virtual interactive space is a supplemental tool for therapeutic motivation. https://vbn.aau.dk/files/55871718/PhD.pdf (2011)
3. Brooks, A.L., Brahnam, S., Jain, L.: Technologies for Inclusive Well-Being. Springer (2014)
4. Brooks, A.L., Brahnam, S., Kapralos, B., Jain, L.: Recent Advances in Technologies for Inclusive Well-Being. Springer (2017)
5. Costa, A., Julian, V., Novais, P. (eds.): Personal Assistants: Emerging Computational Technologies. Springer (2018)
6. Glenn, B.T.: Computing reviews, ACM Digital Library. https://dl.acm.org/doi/book/10.5555/2591771#sec-reviews (2014)
7. Hammel, J.M., Smith, R.O.: The development of technology competencies and training guidelines for occupational therapists. Am. J. Occup. Ther. 47, 970–979 (1993). https://doi.org/10.5014/ajot.47.11.970
8. Ichalkaranje, N., Ichalkaranje, A., Jain, L.C. (eds.): Intelligent Paradigms for Assistive and Preventive Healthcare. Springer (2006)
9. Kowalkiewicz, M.: Health 5.0: the emergence of digital wellness. https://medium.com/qut-cde/health-5-0-the-emergence-of-digital-wellness-b21fdff635b9 (2017a)
10. Kowalkiewicz, M.: The transformational difference between digitisation and digitalisation. https://medium.com/qut-cde/digitise-or-digitalise-584c953e2d8 (2017b)

11. Pedretti et al. (1992), cited in Hammel, J.M., Smith, R.O.: The development of technology competencies and training guidelines for occupational therapists. Am. J. Occup. Ther. 47, 970–979 (1993). https://doi.org/10.5014/ajot.47.11.970
12. Teodorescu, H.N., Jain, L.C. (eds.): Intelligent Systems and Technologies in Rehabilitation Engineering. CRC Press (2001)

Part I

Gaming, VR, and Immersive Technologies for Education/Training

Chapter 2

Gaming, VR, and Immersive Technologies for Education/Training

Anthony Lewis Brooks

Abstract Future digital lives are predicted to extend beyond mobile smartphones, with devices appearing as standard eyeglasses having settings for Extended Reality (XR). What one really experiences and what is computer-generated will be so tightly mixed together that a person will not be able to distinguish between what is real and what is an illusion. Rather than sliding a finger across the touch screen of a smartphone, it will be possible to make things happen by moving one’s eyes or by brainwaves. Talking with someone or playing an online game will involve seeing that person in the same room and being able to touch and feel him or her via tactile technology. XR will be used in a variety of education situations, with head-mounted displays (HMDs) in classrooms for all children and also in home environments where those being educated have their own headset and system; medical students and surgeons will be educated in practical skills by using virtual humans rather than cadavers; oil-rig and wind-farm workers will understand how to handle maintenance, repairs, and emergencies before they ever leave the home office (abridged from the original call for chapters for this book). This chapter reviews the texts selected for this volume on gaming, VR, and immersive technologies for education and training. It begins with a brief introductory text speculating on impacts related to well-being.

Keywords Gaming · Virtual reality · Immersive technologies · Education · Training

2.1 Introduction

This chapter introduces the first part of the third volume in the Technologies of Inclusive Well-Being series. Authors from around the globe submitted texts from their work for inclusion, which necessitated multiple peer reviews of the amassed submissions; it was a long and arduous task that has resulted in the select few that reside between the covers of this book. Ten chapters are selected for this opening part, which includes over thirty-five contributing authors. The book contents overall are segmented into four parts, with chapters selected for each. Specifically: Part 1: Gaming, VR, and Immersive Technologies for Education/Training; Part 2: VR/Technologies for Rehabilitation; Part 3: Health and Well-Being; and Part 4: Design and Development. This chapter represents a focused, and sometimes extended, ‘minuscule review of the field’ by introducing the chapters for readers. Source texts are cited and referenced to acknowledge their use in creating these review snippets, which overview, introduce, and inform readership of each contribution in the volume.

In closing this introduction, a brief text shares a possible questioning of healthcare systems to which additional reader focus may be directed as and when appropriate. This considers associated infrastructures in which doctors benefit financially for promoting drug-company products whilst also benefitting financially as representative speakers. This questioning, which herein is considered related to technologies for inclusive well-being, begins at the World Health Organization (WHO). Sharing the following is inspired by reading the first chapter ([1]) in this opening part of the book, and relates to the current Covid-19 pandemic, in which so much is yet unknown. Instructions to reduce Covid-19 transmissions are primarily recommendations to: socially isolate by a distance (e.g. 1–2 m; no large gatherings); wash hands regularly and thoroughly; and not touch the face area without washing hands. The WHO text may enlighten readers on specific aspects of an issue that arguably links to the virus situation and the imposed recommendations. The topic is healthcare-associated infection (HAI), also called “nosocomial” or “hospital” infection.

A. L. Brooks (B) Aalborg University, Aalborg, Denmark
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_2
According to The World Health Organization, HAI refers to: …an infection occurring in a patient during the process of care in a hospital or other health care facility which was not present or incubating at the time of admission. HAI can affect patients in any type of setting where they receive care and can also appear after discharge. Furthermore, they include occupational infections among staff. HAI represents the most frequent adverse event during care delivery and no institution or country can claim to have solved the problem yet. Based on data from a number of countries, it can be estimated that each year, hundreds of millions of patients around the world are affected by HAI. The burden of HAI is several fold higher in low- and middle-income countries than in high-income ones. There is also now a worldwide consensus that urgent action is needed to prevent and control the spread of antibiotic resistant organisms and in health care effective infection prevention and control (IPC) is one solution. (WHO—https://www.who.int/gpsc/country_work/burden_hcai/en/).

A prior WHO review and meta-analysis [2] found that “The burden of health-care-associated infection in developing countries is high. Our findings indicate a need to improve surveillance and infection-control practices.” The first chapter, “Experiential Training of Hand Hygiene Using Virtual Reality”, reports on a technology (virtual reality) training initiative; such action is long overdue given the 2011 and 2017 WHO reports/calls for action [3, 4] (see the works cited in Chap. 6) that associate with the 2010 document (ibid), for example on how to thoroughly wash hands within such health environments in order to prevent or minimize transmissions. However, to be clear, this author is not an expert on such issues and only brings attention to positioning the first chapter, with its focus on hand hygiene, in the context of the current global pandemic situation, the rules the public has been given (i.e. social distancing and regular washing of hands), and the content of this volume.

Considered beyond the opening chapter, an aligned article by Sipherd [5] informs how “more than 250,000 people in the U.S. die every year from medical errors. Other reports claim the numbers to be as high as 440,000. Medical errors are the third-leading cause of death after heart disease and cancer.” Again, this relates to the content of this book, where simulation and technologies such as virtual reality are employed towards improving education and training. Also potentially related is an article at a website called ‘Dollars for Docs’ informing how drug companies pay doctors cash to “promote” (prescribe) their products1—which seems frightening when taken in the context of the previous ‘medical errors’ report. Thus, it seems that on one side we are indebted to healthcare staff for their ‘frontline’ efforts, especially in current times with respect to Covid-19. Yet, on the other side, should we not question whether the healthcare facilities (hospitals, etc., including aged-care institutions) in which nurses, doctors, and other medical staff work, and in which patients are treated, are appropriately modern, clean, and sanitized, with staff training of the highest level (even in hand cleaning)? Such questioning directs attention towards preventing, as far as possible, transmissions via HAI, and towards optimizing survival and recovery alongside the best conditions for staff health, so that staff do not contract disease/infection. This, together with other issues in healthcare, such as medical errors through potential lack of training (as related by chapters in this volume where contemporary technologies are used to improve education and training), and a system in which doctors benefit via payments from drug companies, begs questioning.
It is thus pertinent to comment that if this book series can contribute to questioning such healthcare issues related to technologies for inclusive well-being, then as editors we can feel that we have, in some small way, contributed, alongside the valued authors presented herein, towards increasing readers’ reflections on such important issues.

2.1.1 Experiential Training of Hand Hygiene Using Virtual Reality [1]

The first chapter in Part One is titled Experiential Training of Hand Hygiene Using Virtual Reality, by authors Lauren Clack, Christian Hirt, Andreas Kunz, and Hugo Sax. The chapter is written by two authors from the Human Factors Lab (humanlabz), under Infectious Diseases and Hospital Epidemiology at University Hospital Zurich, alongside two authors from the Innovation Center Virtual Reality, under the Swiss Federal Institute of Technology (ETH), both in Zürich, Switzerland. This contribution is timely given the current Covid-19 pandemic and the recommendations for social distancing and (possibly more importantly) regular handwashing ensuring a high quality of hygiene. Worrying is how the cited works inform that “Healthcare-associated infections (HAI) acquired during the course of treatment are the most frequent adverse events in healthcare delivery, affecting billions of patients worldwide” [3]. A call for action was subsequently announced, titled “Global infection prevention and control priorities 2018–22: a call for action” [4]. There is no mention of whether the call was actioned; then, in 2019, the Covid-19 (coronavirus) pandemic started its global journey. However, this is also not the subject of this text, rather a personal reflection given the widespread devastating impact on health, related economics, and society itself.

The text informs how the authors use experiential learning theory to guide the development of a virtual reality hand-hygiene trainer to impact healthcare-associated infections (HAI) acquired during treatment. The immersive virtual reality trainer environment gave visual feedback about microorganism transmission and infectious outcomes. The goal of the work was to enhance experiential learning and increase intrinsic motivation to perform hand hygiene. In line with this work, the mission statement of the humanlabz includes understanding and optimizing the interactions between healthcare providers and their working environments in order to facilitate safer behaviours and to ultimately impact patient safety [6]. Towards this endeavour, the lab team consider the physical, built environment as well as the cognitive and social environments. Taking a systems perspective on quality improvement, the lab personnel recognize that modifications in any area of the work domain will have repercussions in other areas, as they research to introduce evidence-based practice in order to effectively inform future quality-improvement efforts.

1 https://www.propublica.org/article/profiles-of-the-top-earners-in-dollar-for-docs.
The Innovation Center Virtual Reality (ICVR) team research and implement VR systems focusing upon the human user, in this case the healthcare worker. One can imagine, given the current Covid-19 pandemic situation and the need for improved hand hygiene, how this system could be disseminated publicly. For more, see the chapter text, the lab website (http://www.en.infektiologie.usz.ch/research/research_groups/pages/hugo-sax.aspx), and the center website (https://www.iwf.mavt.ethz.ch/research/virtual_reality/index_EN).

2.1.2 Useful, Usable and Used? Challenges and Opportunities for Virtual Reality Surgical Trainers [7]

The second chapter in this part is titled Useful, Usable and Used? Challenges and Opportunities for Virtual Reality Surgical Trainers, by Chantal Trudel of the School of Industrial Design, under the Faculty of Engineering and Design, Carleton University, Canada. The text discusses design considerations in the development of virtual reality surgical training simulators in reference to a variety of case studies. Improved healthcare delivery, patient outcomes, and training opportunities from the use of virtual reality are introduced, including how educational resources are challenged by simulator issues such as the scarcity of cadavers for practicing and conducting repetitive tasks, and the high costs associated with these models. Other challenges are discussed in the text. The chapter presents a preliminary framework outlining research priorities and areas that previous researchers have suggested to help focus the design development of virtual reality applications. Elements of this framework are discussed in reference to the case studies.

2.1.3 Four-Component Instructional Design Applied to a Game for Emergency Medicine [8]

The third chapter in Part One is authored by a trio from The Netherlands, namely Tjitske Faber and Mary Dankbaar from the University Medical Center Rotterdam, Institute for Medical Education Research Rotterdam, and Jeroen van Merriënboer from the School of Health Professions Education, Maastricht University, Maastricht. The text informs of the original design and redesign of the abcdeSIM game, which was developed in close collaboration between medical practitioners, game designers, and educationalists (see [9]). The game is typically used as preparation for courses on the ABCDE method, which is used internationally as a guideline for treating seriously ill patients by performing the complex skill of resuscitation, commonly trained in face-to-face courses. In the game, players treat patients in a virtual emergency department. The trio used van Merriënboer and Kirschner’s [10] Four-Component Instructional Design (4C/ID) theory to redesign the existing game. In this chapter, they explain why the game was redesigned and how the components of this instructional design theory can be applied to designing a serious game for medical education. Interestingly, James Paul Gee’s ‘system thinking’ and ‘cycles of expertise’ [11] are discussed alongside Hirumi et al.’s [12] concept of cycles of learning and mastery [13] to justify the game redesign. The conclusions report that, despite challenges, mainly the visual screen space available to display reminders and tool information, several theoretically sound support options were achieved through the redesign.

2.1.3.1 A Review of Virtual Reality-Based Eye Examination Simulators [24]

The fourth chapter in this first part of the book comes from a truly international group of authors: the first three, Michael Chan, Alvaro Uribe-Quevedo, and Bill Kapralos, are affiliated with Ontario Tech University in Canada; Michael Jenkin is affiliated with York University, Toronto, Canada; Kamen Kanev with Shizuoka University in Hamamatsu, Shizuoka, Japan; and Norman Jaimes with the Universidad Militar Nueva Granada, Bogota, Colombia. The text reports a review of direct ophthalmoscopy simulation models for medical training in which the co-authors highlight the characteristics, limitations, and advantages presented by modern simulation devices for eye examination. The history and challenges in the field of eye examinations are introduced, and simulators in the field are described. The authors attribute the limited adoption of simulators to the cost associated with the software and hardware. As society advances technologically, simulated clinical experiences become more functional and affordable, providing students with a wide variety of opportunities to learn new skills, practice team communication, and refine clinical competencies [14]. Such trends in medical training include SPs, models and part-task trainers, computer-based simulation, and virtual reality-based systems. This chapter closes by reviewing the remarkable advances that have occurred in the development of training and simulator systems. Relatedly, browsing online it is informative (for this author at least) to explore the complexity of such environments via, for example, a product titled 'EyeSim—Ophthalmology VR' (see https://eonreality.com/portfolio/online-medical-training/) and the challenges that exist, such as traditional 2D training tools being used to represent inherently 3D problems involving complex subject matter, such as visual pathways, that is poorly covered in standard teaching material and whose time to mastery could be shortened. Another challenge concerns 'hands-on' training and online medical training: students currently practice on real patients (actors) or dissect a cadaver. Both are suboptimal, as a cadaver does not function like a live subject and practicing on real patients can potentially compromise the safety of the patient. This limits the amount of practice students receive while in the classroom. A further challenge is the limited number of dysfunctions included in such simulators, whereby instructors are restricted in how dysfunctions and diseases can be demonstrated in classroom settings; these examples are often presented as case studies with limited hands-on exploration.

2.1.4 Enhanced Reality for Healthcare Simulation [15]

The fifth chapter, titled 'Enhanced Reality for Healthcare Simulation', links Italy, Switzerland, and North America. Authors Fernando Salvetti and Barbara Bertagni hold a double affiliation between Centro Studi Logos, Turin, Italy, and Logosnet, Lugano, Switzerland, and Houston, TX, USA. Author Roxane Gardner is multi-affiliated under The Center for Medical Simulation in Boston, and The Brigham and Women's Hospital/Children's Hospital/Massachusetts General Hospital and Harvard Medical School, Boston, USA. Rebecca Minehart is affiliated with The Massachusetts General Hospital, Harvard Medical School, and The Center for Medical Simulation, all in Boston, USA. The richly illustrated chapter describes enhanced reality for immersive simulation (e-REAL®), the merging of real and virtual worlds: a mixed reality environment for hybrid simulation where physical and digital objects co-exist and interact in real time, in a real place and not within a headset. The first part of this chapter discusses e-REAL: an advanced simulation within a multisensory scenario, based on challenging situations developed by visual storytelling techniques. The e-REAL immersive setting is fully interactive with both 2D and 3D visualizations, avatars, electronically writable surfaces and more: people can take notes, cluster key concepts, or fill in questionnaires directly on the projected surfaces. The second part of this chapter summarizes experiential coursework focused on learning and improving teamwork and event management during simulated obstetrical cases. Effective team management during a crisis is a core element of expert practice: for this purpose, e-REAL reproduces a variety of emergent situations, enabling learners to interact with multimedia scenarios and practice using a mnemonic called Name-Claim-Aim. Learners rapidly cycle between deliberate practice and direct feedback within a simulation scenario until mastery is achieved. Early findings show that interactive immersive visualization supports neural processes related to learning and behaviour change. An enhanced hybrid simulation in a mixed reality setting, both face-to-face and in telepresence, is shared. e-REAL is a futuristic solution based upon a mixed reality set-up at the Center for Medical Simulation in Boston, designed to be "global", "liquid", "networked" and "polycentric", as well as virtually augmented, mixed, digitalized and hyperrealistic. The keywords summarizing the main drivers that guided the design of this solution, and that are steering its further development, are presented as: Digital mindset; Visual thinking; Computer vision; Advanced simulation; Multimedia communication; Immersive and interactive learning; Augmented and virtual reality; Human and artificial intelligence cooperation; Cognitive psychology and neurosciences; Anthropology and sociology of culture; Hermeneutics; Narratology; Design thinking applied to andragogy and pedagogy; and Epistemology.

2.1.5 MaxSIMhealth: An Interconnected Collective of Manufacturing, Design, and Simulation Labs to Advance Medical Simulation Training [16]

The sixth chapter in Part One is a product of the maxSIMhealth laboratory at Ontario Tech University, Canada. At the time of writing this chapter, the maxSIMhealth (www.maxSIMhealth.com) group consisted of (in alphabetical order): Artur Arutiunian, Krystina M. Clarke, Quinn Daggett, Adam Dubrowski, Thomas (Tom) Gaudi, Brianna L. Grant, Bill Kapralos, Priya Kartick, Shawn Mathews, Pamela T. Mutombo, Guoxuan (Kurtis) Ning, Argyrios Perivolaris, Jackson Rushing, Robert Savaglio, Mohtasim Siddiqui, Andrei B. B. Torres, Samira Wahab, Zhujiang Wang, and Timothy Weber. maxSIMhealth is a multidisciplinary collaborative manufacturing, design, and simulation laboratory at Ontario Tech University in Oshawa, Canada, combining expertise in Health Sciences, Computer Science, Engineering, Business, and Information Technology and aiming at building community partnerships to advance simulation training. The team focuses on existing simulation gaps while providing innovative solutions that can change the status quo, leading to improved healthcare outcomes through cutting-edge training opportunities. maxSIMhealth utilizes disruptive technologies (e.g., 3D printing, gaming, and emerging technologies such as extended reality) as innovative solutions that deliver cost-effective, portable, and realistic simulation catering to the high variability of users and technologies, which is currently lacking. maxSIMhealth is a novel collaborative innovation that aims to develop future cohorts of scholars with strong interdisciplinary competencies, able to collaborate in new environments and to communicate professionally for successful medical-tech problem solving. The work being conducted within maxSIMhealth is predicted to transform the current health professional education landscape by providing novel, flexible, and inexpensive simulation experiences. In this chapter, a description of maxSIMhealth is provided along with an overview of several ongoing projects.

2.1.6 Serious Games and Multiple Intelligences for Customized Learning: A Discussion [17]

The authors of the seventh chapter in Part One are: Enilda Zea from Universidad de Carabobo, Carabobo, Venezuela; Marco Valez-Balderas from Laurier University, Waterloo, Ontario, Canada; and Alvaro Uribe-Quevedo, affiliated with Ontario Tech University, Oshawa, Ontario, Canada. Titled 'Serious games and multiple intelligences for customized learning: A discussion', this text states that teaching strategies need to respond swiftly to abrupt changes in delivery modes while providing engaging and effective experiences for learners. The introduction to this piece states, "Life in the twenty-first century requires radical changes in teaching models that correspond to current learners' behaviours due to the ubiquitous nature of current digital media" [18]. The current Covid-19 pandemic has made evident the lack of readiness of several academic sectors when moving from face-to-face to online learning. While research into understanding the use of technologies has been gaining momentum as innovative tools are introduced, it is important to devise strategies that lead to effective teaching tools. Recently, user experience (UX) has been influencing content development as it considers the uniqueness of users to avoid enforcing one-size-fits-all solutions. In this chapter, the authors discuss multiple intelligences in conjunction with serious games and technology to explore how a synergy between them can provide a solution capable of capturing qualitative and quantitative data to design engaging and effective experiences.


2.1.7 Mobile Application for Convulsive and Automated External Defibrillator Practices [19]

This contribution, the eighth chapter in Part One, has a strong representation from the Universidad Militar Nueva Granada in Bogota, Colombia, with authors Engie Ruge Vera, Mario Vargas Orjuela, Byron Perez-Gutierrez and Norman Jaimes; author Alvaro Uribe-Quevedo is affiliated with Ontario Tech University in Oshawa, Ontario, Canada. The text 'Mobile game for convulsive and automated external defibrillator practices' reports that the adoption of simulation in medical training aims at improving healthcare delivery: in the US alone, more than 400,000 deaths are caused each year by medical errors, making medical error the third leading cause of death, behind cardiovascular diseases and cancer [20]. The chapter describes how simulation has proven effective in reducing deaths caused by some medical errors and in different scenarios, towards realizing best practices. A timeline of manikins used in simulation training across different situations is presented, alongside an argument for how interdisciplinary teams further improve training platforms and trainee education. Extended (Virtual, Augmented, Mixed…) Reality technologies offer simulation developers new opportunities to improve simulation training. In this chapter, the authors present the development of two virtual manikin mobile applications, one for resuscitation employing a virtual automated external defibrillator and another for convulsive treatment training. The authors' goal is to provide a mobile virtual approach that facilitates complementary practice via handheld devices by reproducing the tasks involved in each situation through touch-screen and motion-based interactions. To increase user engagement, the authors have added game elements that add realism to the simulation training by incorporating goals and metrics used to assess performance and decision making.
To evaluate engagement and usability, they employed the System Usability Scale and the Game Engagement Questionnaire. A preliminary study shows that both apps are usable and engaging, and may help refresh knowledge of the procedures.
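As background for readers unfamiliar with the instrument, the System Usability Scale is scored with a fixed formula (Brooke's standard scoring for the ten-item, five-point questionnaire); the helper name below is illustrative and not taken from the chapter:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten responses
    on a 1-5 Likert scale, per the standard scoring: odd-numbered items
    contribute (response - 1), even-numbered items contribute
    (5 - response); the sum is scaled by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses, each from 1 to 5")
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

# A neutral answer (3) to every item yields the scale midpoint:
print(sus_score([3] * 10))  # 50.0
```

Note that the resulting score is not a percentage but a position on a 0-100 scale; questionnaires such as the Game Engagement Questionnaire are scored differently and are not sketched here.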

2.1.8 Lessons Learned from Building a Virtual Patient Platform [21]

The ninth chapter presents insightful reflections on 'Lessons Learned from Building a Virtual Patient Platform'. The authors, Olivia Monton and Allister Smith, are affiliated with McGill University in Montréal, and Amy Nakajima is affiliated with Simulation Canada in Toronto, as well as Bruyère Continuing Care, The Ottawa Hospital, the Wabano Centre for Aboriginal Health, and the University of Ottawa, in Ottawa, Ontario, Canada. In this work, Virtual Patients (VPs) and simulation-based medical education (SBME) are introduced, alongside the company that resulted from the team's initiative. These are acknowledged as an effective way to educate trainees at the provider, team, and systems level, addressing different learning needs and fulfilling a variety of functions [6, 22]. The authors explain how simulation at the individual level promotes knowledge acquisition and skill development for a healthcare provider, whereas systems-level simulation takes a broader view, exploring issues related to the components of healthcare: a complex, socio-technical system consisting of multiple and multiply interacting components, including the environment, the organization, the work itself, and persons, who include providers, patients and families. The text shares the journey to create the VP software platform, Affinity Learning, and a content-based VP company titled 'VPConnect'. The authors discuss their experiences of partnering, as medical students, with members of academia and research, clinicians, and industry to create a VP platform. Their insightful reflections specifically highlight: (a) the virtual environment as an effective, safe, and cost-efficient way to educate medical trainees; (b) the requirements behind a successful VP platform; (c) the obstacles and challenges faced in medical education innovation; and (d) future work.

2.1.8.1 Engaging Learners in Pre-simulation Preparation Through Virtual Simulation Games [23]

The final and tenth contribution in Part One is a chapter titled 'Engaging learners in pre-simulation preparation through virtual simulation games.' The authors are all based in Ontario, Canada: Marian Luctkar-Flude and Deborah Tregunno, affiliated with Queen's University, School of Nursing, in Kingston; Jane Tyerman and Michelle Lalonde, affiliated with the University of Ottawa, School of Nursing, in Ottawa; Lily Chumbley, affiliated with Trent University in Peterborough; and Laurie Peachey, affiliated with Nipissing University, School of Nursing, in North Bay. The text reports on the use of technology in the form of virtual simulation games (VSGs) in nursing education, where educators must carefully assess the learning outcomes associated with various components of clinical simulation. Pre-simulation preparation, the authors note, is a critical aspect of simulation education that has not been well studied; traditional preparation activities include readings, lectures, and quizzes, while non-traditional activities include video lectures, online modules, and self-assessments. However, in the authors' experience, learners may fail to prepare adequately for simulation, so there is a need for innovative approaches to optimize learning during the simulation. Whilst medical and nursing education has seen increased use of virtual simulation and gaming, this has not been the case for pre-simulation preparation activities for clinical simulation. The chapter describes how over 30 validated clinical simulation scenarios have previously been developed by nurse educators from across Ontario for senior nursing students to enhance their transition to clinical practice. Each scenario is implemented with self-regulated pre-simulation preparation guided by a scenario-specific learning outcomes assessment rubric.
The development of a series of VSGs aims to further enhance pre-simulation preparation for undergraduate nursing students participating in these scenarios. The authors propose that VSGs used for pre-simulation preparation will prove more engaging to learners, resulting in better preparation and improved performance during live simulations; in turn, the use of virtual simulation for pre-simulation preparation may translate to improved performance in real clinical settings, with a positive impact on patient safety and well-being. Conclusions in the text point to how virtual simulation games are considered an innovative pre-simulation preparation strategy that engages learners and provides them with immediate feedback on their clinical decision-making. By creating their own VSGs, the authors were able to provide content aligned with the intended learning outcomes and levelled to learners' experience, in order to better prepare trainees to participate in a live simulation where they could demonstrate their competence within a given clinical scenario. The authors anticipate that the advantages of using VSGs for pre-simulation preparation could include the promotion of self-regulated learning, enhanced knowledge, decreased anxiety, and enhanced preparation and performance during a live simulation scenario. Additionally, they anticipate that standardized pre-simulation preparation will reduce faculty preparation time and student assessment time and may decrease instructional time in the simulation laboratory. They predict that collaboration and sharing of VSGs across nursing schools will spread the development costs and result in long-term savings. In closing, they state that further research is needed to demonstrate the impact of VSGs on learning outcomes and transfer to practice.

2.2 Conclusions

This first part of the book has presented a brief introductory review so that readers can get an idea of each chapter and its author(s)' positioning; the reviewing has been done by extracting direct quotations from, or paraphrasing, the original works, as cited. 'Gaming, VR, and immersive technologies for education/training' (especially simulation use) are presented, and a disclaimer to this 'topic categorization' is stated: it was conducted to the best of the editor's abilities in order to segment the volume. It is anticipated that scholars and students will be inspired and motivated by these contributions to the field of Technologies of Inclusive Well-Being, inquiring further into the topics and, where appropriate, citing them in their own research and studies. Whilst the opening statement informed how education/training and medical errors (plus potentially greed) impact the well-being of society, there are also ongoing major advances in how to train staff and treat people, and how to target optimal patient experiences and outcomes across situations that include hospitals, private practices, and homes. The second part of this book, themed 'VR/technologies for rehabilitation', follows these ten chapters. Enjoy.

Acknowledgements Thanks are due to the authors of the chapters in this opening part of the book. Their contributions are cited in each review snippet and in the reference list to support reader cross-referencing of the cited works. However, the references are without page numbers as they

are not known at this time of writing. Further information will be available at the Springer site for the book/chapter [24].

References

1. Clack, L., Hirt, C., Kunz, A., Sax, H.: Experiential training of hand hygiene using virtual reality. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
2. Allegranzi, B., Bagheri Nejad, S., Combescure, C., Graafmans, W., Attar, H., Donaldson, L., Pittet, D.: Burden of endemic health-care-associated infection in developing countries: systematic review and meta-analysis. The Lancet (2010). https://doi.org/10.1016/S0140-6736(10)61458-4
3. WHO (World Health Organization): Report on the burden of endemic health care-associated infection worldwide (2011). https://www.who.int/infection-prevention/publications/burden_hcai/en/
4. WHO (World Health Organization): Global infection prevention and control priorities 2018–22: a call for action (2017). https://doi.org/10.1016/S2214-109X(17)30427-8
5. Sipherd, R.: The third-leading cause of death in US most doctors don't want you to know about (2018). https://www.cnbc.com/2018/02/22/medical-errors-third-leading-cause-of-death-in-america.html
6. Auerbach, M., Stone, K.P., Patterson, M.D.: The role of simulation in improving patient safety. In: Grant, V.J., Cheng, A. (eds.) Comprehensive Healthcare Simulation: Pediatrics. Springer (2016)
7. Trudel, C.: Useful, usable and used? Challenges and opportunities for virtual reality surgical trainers. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
8. Faber, T.J.E., Dankbaar, M., van Merriënboer, J.J.G.: Four-Component Instructional Design applied to a game for emergency medicine. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
9. Erasmus MC/VirtualMedSchool: abcdeSIM (2012). https://virtualmedschool.com/abcdesim
10. van Merriënboer, J.J.G., Kirschner, P.A.: Ten Steps to Complex Learning: A Systematic Approach to Four-Component Instructional Design, 3rd edn. Routledge (2017)
11. Gee, J.P.: Learning by design: good video games as learning machines. E-Learning and Digital Media 2, 5–16 (2005)
12. Hirumi, A., Appelman, B., Rieber, L., Van Eck, R.: Preparing instructional designers for game-based learning: Part 1. TechTrends 54, 27–37 (2010)
13. Ryan, R.M., Deci, E.L.: Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist 55, 68–78 (2000)
14. So, H.Y., Chen, P.P., Wong, G.K.C., Chan, T.T.N.: Simulation in medical education. Journal of the Royal College of Physicians of Edinburgh 49(1), 52–57 (2019)
15. Salvetti, F., Gardner, R., Minehart, R., Bertagni, B.: Enhanced reality for healthcare simulation. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
16. maxSIMhealth (lab/group submission): maxSIMhealth: an interconnected collective of manufacturing, design, and simulation labs to advance medical simulation training. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
17. Zea, E., Valez-Balderas, M., Uribe-Quevedo, A.: Serious games and multiple intelligences for customized learning: a discussion. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
18. Prior, D.D., Mazanov, J., Meacheam, D., Heaslip, G., Hanson, J.: Attitude, digital literacy and self-efficacy: flow-on effects for online learning behaviour. The Internet and Higher Education 29, 91–97 (2016)
19. Vera, E.R., Orjuela, M.V., Uribe-Quevedo, A., Perez-Gutierrez, B., Jaimes, N.: Mobile game for convulsive and automated external defibrillator practices. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
20. Jones, F., Passos-Neto, C.E., Braghiroli, O.F.M.: Simulation in medical education: brief history and methodology. Principles and Practice of Clinical Research 1(2) (2015)
21. Monton, O., Smith, A., Nakajima, A.: Lessons learned from building a virtual patient platform. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
22. Petrosoniak, A., Brydges, R., Nemoy, L., Campbell, D.M.: Adapting form to function: can simulation serve our healthcare system and educational needs? Advances in Simulation 3(8) (2018)
23. Luctkar-Flude, M., Tyerman, J., Chumbley, L., Peachey, L., Lalonde, M., Tregunno, D.: Engaging learners in presimulation preparation through virtual simulation games. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)
24. Chan, M., Uribe-Quevedo, A., Kapralos, B., Jenkin, M., Kanev, K., Jaimes, N.: A review of virtual reality-based eye examination simulators. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library 196 (2021)

Chapter 3

Experiential Training of Hand Hygiene Using Virtual Reality

Lauren Clack, Christian Hirt, Andreas Kunz, and Hugo Sax

Abstract Hand hygiene is widely recognized as an important measure to prevent the transmission of microorganisms that may be involved in healthcare-associated infections. Yet correct hand hygiene remains challenging and healthcare providers struggle to improve adherence. While seemingly straightforward, performing hand hygiene at the right indication using the right technique becomes difficult within the context of complex work processes. We believe that an important barrier to hand hygiene is the invisible nature of microorganisms and the delayed expression of healthcare-associated infections. This delayed or missing feedback makes it difficult to associate unsafe behaviors with their negative consequences. In this chapter, we describe an application of experiential learning theory to guide the development of a virtual reality hand hygiene trainer in which visual feedback about microorganism transmission and infectious outcomes is introduced in the virtual environment. With this immersive trainer, our aim is to enhance experiential learning and increase intrinsic motivation to perform hand hygiene.

Keywords Healthcare · Infection prevention · Hand hygiene · Training · Experiential learning · Virtual reality · Simulation

L. Clack (B) · H. Sax
Infectious Diseases and Hospital Epidemiology, University Hospital Zurich, Zurich, Switzerland

C. Hirt · A. Kunz
Innovation Center Virtual Reality, ETH Zurich, Zurich, Switzerland

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_3


3.1 Introduction

Healthcare-associated infections (HAI) acquired during the course of treatment are the most frequent adverse events in healthcare delivery, affecting billions of patients worldwide [35]. In the European Union, the burden of HAI has been estimated at 4.5 million infections, 37,000 attributable fatalities, and 16 million additional days of hospital stay annually [38]. Importantly, multiple systematic reviews and meta-analyses have demonstrated that a significant proportion of these infections, in the range of 35–55%, are preventable through the application of evidence-based infection prevention measures [9, 31, 33]. Hand hygiene, for example, is widely recognized as the single most important measure to limit transmission of microorganisms and prevent HAI [1]. Despite its proven efficacy, studies repeatedly report low healthcare provider (HCP) adherence to hand hygiene, often in the range of 40% [8]. Improving HCP adherence to hand hygiene is therefore an important priority to avoid preventable infections. Previous interventions to increase HCP adherence to hand hygiene through education and guideline dissemination have been limited in their success and sustainability [15]. Whereas traditional educational interventions seek to improve adherence by improving knowledge, research increasingly suggests that, rather than limited knowledge, low adherence is more likely due to challenging working conditions that make performing hand hygiene difficult. These conditions include high workload [22], frequent hand-to-surface contacts [5], lack of visual feedback about contamination [28], missing or poorly located hand hygiene resources [20], and delayed or non-existent feedback about the consequences of missed hand hygiene [28]. To be successful, interventions to improve hand hygiene compliance should be designed to target the actual barriers and support the enablers of hand hygiene while considering the context in which hand hygiene occurs [17].
Where such interventions involve training and education, these should be designed based on evidence and theory from education research and the social and cognitive sciences to increase their chances of success. The work presented in this chapter addresses two important challenges to hand hygiene adherence. The first challenge is that microorganisms are invisible. The invisible nature of microorganisms makes it impossible for HCPs to observe transmission dynamics or to know whether transfer of potentially harmful microorganisms to the patient has occurred as a result of missed hand hygiene opportunities. The second challenge is that the negative patient outcomes resulting from missed hand hygiene (i.e. patient colonization or infection) occur only much later in time, and are impossible to trace back to a single individual, let alone a single behavior. Both the invisible nature of microorganisms and the delayed presentation of patient outcomes prevent HCPs from associating negative patient outcomes with missed hand hygiene opportunities. These factors prevent the associative learning that would be necessary to increase motivation to perform hand hygiene. Based on these challenges, we believe that a promising approach to increasing hand hygiene adherence is to focus on increasing individual motivation through experiential learning [21]. We define motivation as the processes that energize and direct behavior, including both unconscious and automatic processes as well as conscious choices and intentions [18]. Experiential learning has been defined as the process whereby knowledge is created through the transformation of experience [12]. We propose that by reintroducing vivid visual and emotional feedback, that is, by making microorganisms visible and providing immediate feedback about infectious patient outcomes, we can foster experiential learning and support the non-volitional processes that drive habitual infection prevention behavior. In the following sections, we begin by reviewing previous work within the field of hand hygiene promotion and discussing the value of virtual reality for healthcare training in general, and for hand hygiene specifically. We then present specific design considerations to promote the effectiveness of hand hygiene training in a virtual environment (VE), using an exploratory learning model adapted from experiential learning theory as a guiding framework [7]. Finally, we demonstrate an application of these design considerations to the development of our experiential hand hygiene trainer.

3.2 Hand Hygiene—Related Work

The improvement of HCP adherence to hand hygiene has been established as a global priority, and continuous efforts are being made to improve hand hygiene across the world. Traditionally, many of these efforts involve HCP education about when and how hand hygiene should be conducted according to established rules such as the World Health Organization's (WHO) My Five Moments for Hand Hygiene [27]. Such educational interventions have traditionally taken the form of guideline dissemination, bedside education, and hands-on training. With some notable exceptions, these have often failed to achieve significant or sustained improvement in hand hygiene behaviors, likely because such rule-based interventions fail to elicit the motivation necessary to reliably change HCP behavior [17]. Rather than such rule-based approaches, motivation can be improved through associative learning that elicits strong feelings and supports habit formation. Initial applications using serious gaming [29] and virtual reality [2, 3] for hand hygiene education have also been reported. For example, Sax and Longtin [29] developed a serious game in which HCPs viewed filmed sequences of a physician caring for a patient and then decided whether hand hygiene was necessary. Similarly, Bertrand et al. [3] reported a virtual agent-based simulation trainer in which participating HCPs are first taught the rules (i.e. WHO indications for hand hygiene [27]), then assess hand hygiene performance of a virtual HCP as a percentage of correctly performed hand hygiene opportunities. While these efforts represent an innovation in the field of infection prevention, they have the same limitation as traditional forms of hand hygiene education in that they use a rule-based approach to increase HCP knowledge about when and where to perform hand hygiene, rather than an experiential approach in which new mental models about best practices are formed on the basis of vivid experience.


L. Clack et al.

In a parallel stream of research, Lane et al. have employed and patented electronic sensors to automatically measure and prompt hand hygiene behaviors [13]. For example, such sensors can detect whether hand hygiene has been performed upon entering or exiting a patient room and provide alerts reminding HCPs to perform hand hygiene [10, 16]. A major benefit of such systems is that they provide real-time behavior-shaping feedback. However, they are currently limited in their ability to detect actual behaviors, and therefore cannot provide feedback on all clinically relevant indications for hand hygiene (e.g. before aseptic tasks). In the current project, we build on the strengths and overcome the limitations of existing interventions by creating a VE in which HCP behavior can be observed and trained. Specifically, the VE we propose goes beyond previous interventions by providing vivid visual feedback about microorganism transmission and infectious outcomes to encourage experiential learning.

3.3 Virtual Reality for Experiential Training

Virtual reality has been increasingly recognized as a promising technology for healthcare training by allowing users to experience and train for situations that would otherwise be difficult to produce in the real world [11, 30]. Virtual reality provides a unique opportunity to train both technical and non-technical skills in a realistic, risk-free learning environment. Virtual learning environments can also overcome geographical limitations as training can be conducted outside the confines of a lab or classroom, for example using web-based platforms. Avatars representing colleagues and patients can be introduced to increase realism and practice meaningful interactions. The appearance and behavior of these avatars as well as the overall VE can also be modified in a controlled manner that would be difficult or impossible to achieve in real life settings. A major benefit of virtual reality for healthcare training is that it offers an immersive learning environment where rich experiential learning can occur [7]. For our purposes, virtual reality has the specific advantage of allowing users to experience visual feedback that would be impossible in reality. Given that a VE for experiential hand hygiene training has—to the best of our knowledge—yet to be established, several technical and behavioral research questions emerge. A multidisciplinary approach, bringing together expertise from infectious diseases and hospital epidemiology, psychology, and virtual reality, is indispensable.

3.3.1 Experiential Learning Theory

An 'exploratory learning model' that stems from Kolb's experiential learning model has been proposed to guide the development of experiential training using 3D applications [7, 12]. Both of these models are descriptive rather than analytical, as they describe an iterative process rather than causal links. Kolb's experiential learning model is composed of four stages that occur cyclically: novel experience, reflection, forming abstract concepts, and active experimentation in new situations [12]. The exploratory learning model builds on Kolb's experiential learning model in that it takes into consideration the exploratory processes that occur within 3D and virtual training environments, thereby adding an 'exploration' stage to the model (Fig. 3.1). Central to both of these models is that the learner plays an active rather than passive role in the dynamic learning process and should experience all stages of the cycle for effective learning to occur. To increase the chances that training in the VE will be effective in fostering experiential learning and sustainably changing HCP hand hygiene behaviors, we employ the exploratory learning model [7] as a guiding framework to inform the development of our virtual reality hand hygiene trainer. In the following sections, we describe how the design and functionalities of our trainer are intended to support each stage of the experiential learning cycle, referring to HCPs who participate in hand hygiene training as 'learners'. The work presented here is based on an advanced experimental prototype that is currently undergoing further refinement [4, 6].

Fig. 3.1 Stages of the exploratory learning cycle in immersive environments [7]

3.3.1.1 Novel Experience Stage

The experience stage of the learning cycle begins when a new experience is encountered. The learner is exposed to a novel situation with unfamiliar stimuli, in either


the real world or a virtual context, of which they must make sense. Given that the VE can be modified in ways that would not be possible in the real world, there exist a multitude of opportunities to introduce new experiences. For our hand hygiene trainer, we have introduced specific functionalities that are intended to expose HCP learners to novel experiences, including microorganism visualization and time-warping.

• Microorganism visualization: Due to the invisible nature of microorganisms in the real world, HCPs are largely unaware of the contamination status of objects in the healthcare environment and do not receive feedback about how their own manipulations may result in propagation of harmful microorganisms. In our hand hygiene trainer, we make virtual microorganisms visible in the VE so that HCPs can experience the transmission of microorganisms first-hand. Technically, this is achieved by tracking the physical location of the controllers and producing a colored overlay when they come into contact with objects and surfaces in the VE (Fig. 3.2).

• Time-warping: In the real world, healthcare-associated patient infections or colonization with multidrug-resistant microorganisms can only be clinically detected with a significant delay, and it is nearly impossible to trace these infectious outcomes back to a specific individual or behavior. For this reason, HCPs are rarely informed about the direct consequences of missed hand hygiene opportunities. In our hand hygiene trainer, we employ time-warping—that is, making time in the VE move faster than in reality—to allow HCPs to immediately experience the down-the-line consequences of missed hand hygiene opportunities (Fig. 3.2).

• Behavior and appearance of avatars: Many healthcare tasks are performed by teams or dyads (e.g. fully-trained HCP and trainee) caring for a patient.
The social interactions that occur, both among HCPs and between HCPs and patients, play an important role in forming experiences in real and virtual contexts. For example, the presence of a senior physician performing hand hygiene, or the belief that hand hygiene is important to one's colleagues, may increase the likelihood of performing hand hygiene [23]. Further, the behavior of the patient, who could remind HCPs to perform hand hygiene or react negatively to missed hand hygiene opportunities, is increasingly recognized for its potential to influence HCP hand hygiene behaviors [32]. This is important for our virtual hand hygiene trainer, where we can modify the behavior and appearance of avatars representing colleagues and patients to influence the HCP's learning experience.
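The chapter does not give implementation details for these mechanics, but their core logic can be sketched abstractly. The following minimal Python model is our illustration only, not the authors' engine code, and all names in it are hypothetical: it treats contamination as a state that propagates on contact, is cleared by hand hygiene, and plays out against a warped clock.

```python
# Illustrative sketch (not the trainer's actual implementation) of the two
# mechanics described above: contact-based transfer of visible contamination,
# and time-warping so delayed infectious outcomes can be shown immediately.

class Entity:
    """A hand, surface, or patient that can carry (virtual) microorganisms."""
    def __init__(self, name, contaminated=False):
        self.name = name
        self.contaminated = contaminated  # rendered as a colored overlay in the VE

def touch(a, b):
    """Contact propagates contamination in both directions, analogous to a
    tracked controller intersecting an object in the virtual environment."""
    if a.contaminated or b.contaminated:
        a.contaminated = b.contaminated = True

def perform_hand_hygiene(hand):
    """Hand hygiene clears the hand's contamination state (and its overlay)."""
    hand.contaminated = False

def warp_time(real_seconds, factor=3600):
    """Map real elapsed seconds to simulated seconds (here 1 s -> 1 h), so
    down-the-line outcomes such as colonization appear within the session."""
    return real_seconds * factor

# Example scenario: touch a contaminated bed rail, skip hand hygiene,
# then touch the patient -- contamination reaches the patient.
hand = Entity("HCP hand")
rail = Entity("bed rail", contaminated=True)
patient = Entity("patient")

touch(hand, rail)     # hand picks up contamination (overlay appears)
touch(hand, patient)  # missed hand hygiene opportunity -> transmission
print(patient.contaminated)  # True

sim_elapsed = warp_time(10)  # 10 real seconds ~ 10 simulated hours
```

In the actual trainer, the `touch` step would correspond to collision events between the tracked controllers and objects in the VE, with contaminated entities rendered using the colored overlay shown in Fig. 3.2.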

3.3.1.2 Exploration Stage

Within VEs, the learner is empowered to explore and push the boundaries of what they know while engaging with the environment [7]. The extent to which the training environment allows the learner to explore, as opposed to following a fixed script, is therefore an important design consideration. For our hand hygiene trainer, we have developed multiple care scenarios that serve as storylines to guide the training sessions. The participating HCPs are given a care task to complete that guides their


Fig. 3.2 Left: Microorganism visualization on patient bed rail. Right: Visual portrayal of time-warping

activity within the VE. Within the bounds of the care task, HCPs are free to explore and to complete the task as they see fit. To facilitate this exploration, our trainer is designed to allow for real walking within the VE.

• Real walking: Multiple researchers have shown that real walking, as opposed to teleportation using handheld controllers, improves the user's cognitive map of the VE and thus helps them understand the context of the training scenario [14, 25, 34]. Real walking is possible in our current hand hygiene trainer when the physical training space is larger than the VE; otherwise, teleporting must be used for navigation (Fig. 3.3). When the virtual world outsizes the available physical space, the VE must be compressed, for example using redirected walking [24], which introduces an unnoticeable mismatch between the user's movements in the real and the virtual environment. Future iterations of our trainer, building on our previous work [4, 6, 19, 36, 37], may include redirected walking to improve immersion for the learners.
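Redirected walking [24] works by injecting small, ideally imperceptible gains into the mapping from tracked motion to virtual motion. The sketch below illustrates the general idea with a rotation gain and a curvature offset; it is our illustration only, and the threshold values are assumptions for demonstration, not parameters of the trainer described in this chapter.

```python
import math

# Illustrative sketch of redirected walking gains. The numeric limits below
# are assumed placeholder values, not measured perceptual thresholds.
ROTATION_GAIN_LIMIT = 1.2   # max scale factor applied to real head rotations
CURVATURE_RADIUS_M = 7.5    # radius of the circle injected into straight walks

def apply_rotation_gain(real_rotation_deg, gain):
    """Virtual rotation = real rotation * gain, with the gain clamped to a
    range the user is unlikely to notice."""
    gain = max(1.0 / ROTATION_GAIN_LIMIT, min(gain, ROTATION_GAIN_LIMIT))
    return real_rotation_deg * gain

def curvature_offset_deg(step_length_m):
    """Heading offset (degrees) injected per step so the real-world path bends
    along a circle of radius CURVATURE_RADIUS_M while the virtual path stays
    straight."""
    return math.degrees(step_length_m / CURVATURE_RADIUS_M)

# A 90-degree real turn can be remapped to roughly 108 virtual degrees,
# letting a smaller physical space host a larger virtual one.
print(apply_rotation_gain(90, 1.2))
print(curvature_offset_deg(0.7))
```

Whether such gains remain below detection thresholds for a given user and scene is an empirical question, which is one reason the chapter defers redirected walking to future iterations of the trainer.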

3.3.1.3 Reflection Stage

During the reflection stage of the experiential learning cycle, any inconsistencies between the new experience and previous understanding are reflected upon [12]. Reflection is of particular importance for virtual training activities to facilitate the transfer between virtual and lived experiences [7]. Some have argued that the experience itself must be interrupted to initiate a phase of reflection [12]. To foster such reflection, our virtual hand hygiene training is embedded within a process that includes periods of debriefing. • Debriefing: The debriefing process, whereby individuals reflect on their own practice, is already well established within medical training, for example, following medical simulation [26]. Debriefing after a training scenario offers HCPs an opportunity to reflect on their performance and develop insights that can inform later


actions [26]. In the current project [4, 6], debriefing takes the form of a live, post-training session in which the HCP learner has an opportunity to reflect on the virtual training experience. During debriefing, the learner may review video excerpts from their training exercise and view descriptive data about the extent of contamination that resulted from missed hand hygiene opportunities in the VE. The goal of this debriefing is to help learners understand, analyze, and synthesize how their behavior influences microorganism transmission and infectious patient outcomes in order to improve future hand hygiene performance.
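The descriptive data shown during debriefing could, for instance, be derived from an event log captured during the session. The sketch below is purely illustrative; the event names are hypothetical and do not reflect the trainer's actual logging schema.

```python
from collections import Counter

# Hypothetical session log: (event_kind, object) pairs captured in the VE.
events = [
    ("touch", "bed rail"),
    ("hygiene_opportunity_missed", "before patient contact"),
    ("touch", "patient"),
    ("hand_hygiene", None),
    ("touch", "IV pump"),
    ("touch", "patient"),
]

# Simple summary statistics a debriefing view might present.
missed = sum(1 for kind, _ in events if kind == "hygiene_opportunity_missed")
touch_counts = Counter(obj for kind, obj in events if kind == "touch")

print("Missed hand hygiene opportunities:", missed)
print("Most-touched objects:", touch_counts.most_common(2))
```

Because every interaction in the VE can be logged this way, the same data supports both the debriefing discussion and the research use mentioned later in the chapter, without manual observation.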

3.3.1.4 Forming Abstract Concepts or Mental Models

The process of reflecting on new experiences inevitably gives rise to new ideas or leads to modification of one's existing 'mental models' or 'frames' [26, 28]. Mental models can be defined as internal mental images, gathered through experience and observation, that lead to abstract concepts and representations of how an individual understands the world around her [28]. These abstract concepts or 'models' are then projected onto future experiences and shape future behaviors. Mental models are particularly important in the field of infection prevention, where an individual's understanding of microorganism transmission and of the consequences of preventative measures like hand hygiene will influence how they behave to prevent infections.

3.3.1.5 Active Experimentation in Different Situations

During the active experimentation stage, also referred to as ‘testing in different situations’, the newly acquired reflections and mental models are tested as the learner

Fig. 3.3 Navigating the virtual environment with teleporting


is exposed to different situations. For our hand hygiene trainer, this means reentering the virtual training environment after having reflected and formed new mental models about transmission dynamics and the protective role of hand hygiene. Equipped with these new models, participating HCPs reenter the VE with heightened awareness about transmission dynamics. The ultimate assessment of our hand hygiene trainer and subject of future work is to evaluate the extent to which training in the VE translates to improved real life hand hygiene behaviors. We hypothesize that the reflections and mental models developed during virtual trainings and refined during debriefings will translate into an augmented awareness that will remain with HCPs once they return to their daily work, supporting improved hand hygiene performance in the real world.

3.4 Summary and Future Work

The exploratory learning model based on Kolb's experiential learning theory was useful for guiding the development of our virtual reality hand hygiene trainer [7, 12]. While using the virtual hand hygiene trainer, HCPs should pass through all stages of the learning cycle: (1) experiencing first-hand the visualized transmission of microorganisms and receiving immediate feedback about infectious patient outcomes, together with social feedback from virtual patients and colleagues; (2) exploring the virtual environment through real walking within the bounds of the given task scenario; (3) reflecting on the experience during debriefing; (4) forming new mental models and abstract concepts (e.g. about how HCP behaviors may lead to transmission of microorganisms and the protective role of hand hygiene in preventing infectious patient outcomes); and (5) testing these new mental models through further experiences in new situations. It is worth noting that, while it is important for learners to pass through all stages of the experiential learning cycle, these stages need not occur sequentially, and the learner need not be consciously aware of the learning process that is occurring. The formation of new mental models, for example, is a largely unconscious process that occurs in everyday life each time a new situation is encountered. We expect that the vivid experience of seeing microorganism transmission and infectious patient outcomes as a result of missed hand hygiene will itself have a strong impact on the learner's internal mental images. By introducing debriefing to the training process, we aim to bring this typically unconscious mental activity to conscious awareness and thereby strengthen the learning process. The experiential learning approach described in this chapter is consistent with previous research by Nicol et al.
[21], who found that vivid vicarious experience, such as personal exposure to an infectious outbreak or caring for a patient affected by healthcare-associated infection, was of greater importance than formal education in explaining HCP hand hygiene behaviors. They highlighted that the emotional impact of having been personally involved in such a situation heightened HCP awareness and resulted in sustained improvements in hand hygiene [21]. With our


virtual trainer, we aim to elicit the same heightened emotional reaction in a risk-free environment. An added value of training in virtual reality is that the behaviors and resulting transmission pathways of HCPs participating in training in the VE can be automatically captured and analyzed. This produces a rich data set about HCP behaviors without the need for costly and time-consuming direct observations. The virtual healthcare setting therefore represents a powerful tool for both training and studying HCP infection prevention behaviors. This chapter describes an innovative approach to improving HCP hand hygiene performance through vivid virtual reality training embedded within an experiential learning process. The trainer aims to improve hand hygiene by reintroducing the otherwise missing feedback about the consequences of unsafe behavior and thereby increase HCP motivation to perform hand hygiene. We believe this hand hygiene trainer has the potential to improve HCP hand hygiene performance in the real world, which will be the focus of future evaluation. Once validated, this system could be adapted for use in other clinical settings (e.g. surgical, ambulatory care) and extended to integrate hand hygiene training into a wide range of clinical processes.

Acknowledgements This project has been generously supported by a donation from Dr. Hans-Peter Wild to the University Hospital Zurich Foundation. We would further like to acknowledge Dirk Saleschus, project manager, and Marcel Wenger, innovation manager, for their contributions to this work.

References

1. Allegranzi, B., Pittet, D.: Role of hand hygiene in healthcare-associated infection prevention. J. Hospital Infect. 73(4), 305–315 (2009)
2. Bertrand, J., Babu, S.V., Gupta, M., Segre, A.M., Polgreen, P.: A 3D virtual reality hand hygiene compliance training simulator. In: Scientific Meeting of the Society for Healthcare Epidemiology of America (2011)
3. Bertrand, J., Babu, S.V., Polgreen, P., Segre, A.: Virtual agents based simulation for training healthcare workers in hand hygiene procedures. In: International Conference on Intelligent Virtual Agents, pp. 125–131 (2010)
4. Clack, L., Wenger, M., Sax, H.: Virtual reality enhanced behaviour-change training for healthcare-associated infection prevention. Front. Publ. Health 45 (2017)
5. Clack, L., Scotoni, M., Wolfensberger, A., Sax, H.: "First-person view" of pathogen transmission and hand hygiene—use of a new head-mounted video capture and coding tool. Antimicrob. Resist. Infect. Control 108(6) (2017)
6. Clack, L., Hirt, C., Wenger, M., Saleschus, D., Kunz, A., Sax, H.: VIRTUE—A virtual reality trainer for hand hygiene. In: 9th IEEE International Conference on Information, Intelligence, Systems and Applications (2018)
7. De Freitas, S., Neumann, T.: The use of 'exploratory learning' for supporting immersive learning in virtual environments. Comput. Educ. 52(2), 343–352 (2009)
8. Erasmus, V., Daha, T.J., Brug, H., Richardus, J.H.K., Behrendt, M.D., Vos, M.C., van Beeck, E.F.: Systematic review of studies on compliance with hand hygiene guidelines in hospital care. Infect. Control Hospital Epidemiol. 31(3), 283–294 (2010)
9. Harbarth, S., Sax, H., Gastmeier, P.: The preventable proportion of nosocomial infections: an overview of published reports. J. Hospital Infect. 54(4), 258–266 (2003)


10. Higgins, A., Hannan, M.M.: Improved hand hygiene technique and compliance in healthcare workers using gaming technology. J. Hospital Infect. 84(1), 32–37 (2013)
11. Hoffman, H., Vu, D.: Virtual reality: teaching tool of the twenty-first century? Acad. Med. 72(12), 1076–1081 (1997)
12. Kolb, D.A.: Experiential Learning. Englewood Cliffs (1984)
13. Lane, S., Strauss, K., Coyne, M.: Systems and methods for improving hand hygiene compliance. US Patent 6,975,231 (2005)
14. Larrue, F., Sauzeon, H., Wallet, G., Foloppe, D., Cazalets, J.R., Gross, C., N'Kaoua, B.: Influence of body-centered information on the transfer of spatial learning from a virtual to a real environment. J. Cogn. Psychol. 26(8), 906–918 (2014)
15. Larson, E.L., Quiros, D., Lin, S.X.: Dissemination of the CDC's hand hygiene guideline and impact on infection rates. Am. J. Infect. Control 35(10), 666–675 (2007)
16. Marra, A.R., Zinsly Sampaio Camargo, T., Magnus, T.P., Blaya, R., dos Santos, G.B., Guastelli, L.R., et al.: The use of real-time feedback via wireless technology to improve hand hygiene compliance. Am. J. Infect. Control 42(6), 608–611 (2014)
17. Michie, S., Van Stralen, M.M., West, R.: The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement. Sci. 6(1), 42 (2011)
18. Mook, D.G.: The Organization of Action. WW Norton (1987)
19. Nescher, T., Huang, Y.Y., Kunz, A.: Planning redirection techniques for optimal free walking experience using model predictive control. In: IEEE Symposium on 3D User Interfaces, pp. 111–118 (2014)
20. Nevo, I., Fitzpatrick, M., Thomas, R.-E., Gluck, P.A., Lenchus, J.D., Arheart, K.L., Birnbach, D.J.: The efficacy of visual cues to improve hand hygiene compliance. Simul. Healthcare 5(6), 325–331 (2010)
21. Nicol, P.W., Donovan, R.J., Wynaden, D., Cadwallader, H.: The power of vivid experience in hand hygiene compliance. J. Hospital Infect. 72(1), 36–42 (2009)
22. O'Boyle, C.A., Henly, S.J., Larson, E.: Understanding adherence to hand hygiene recommendations: the theory of planned behavior. Am. J. Infect. Control 29(6), 352–360 (2001)
23. Pittet, D., Simon, A., Hugonnet, S., Pessoa-Silva, C.L., Sauvan, V., Perneger, T.V.: Hand hygiene among physicians: performance, beliefs, and perceptions. Ann. Internal Med. 141(1), 1–8 (2004)
24. Razzaque, S., Kohn, Z., Whitton, M.C.: Redirected walking. In: Proceedings of EUROGRAPHICS, vol. 9, pp. 105–106 (2001)
25. Ruddle, R.A., Volkova, E., Bülthoff, H.H.: Walking improves your cognitive map in environments that are large-scale and large in extent. ACM Trans. Comput.-Hum. Interact. (TOCHI) 18(2), 10:1–10:20 (2011)
26. Rudolph, J.W., Simon, R., Raemer, D.B., Eppich, W.J.: Debriefing as formative assessment: closing performance gaps in medical education. Acad. Emerg. Med. 15, 1010–1016 (2008)
27. Sax, H., Allegranzi, B., Uckay, I., Larson, E., Boyce, J., Pittet, D.: My five moments for hand hygiene: a user-centred design approach to understand, train, monitor and report hand hygiene. J. Hospital Infect. 67(1), 9–21 (2007)
28. Sax, H., Clack, L.: Mental models: a basic concept for human factors design in infection prevention. J. Hospital Infect. 89(4), 335–339 (2015)
29. Sax, H., Longtin, Y.: Immersive hand hygiene trainer for physicians—a story-based serious game. In: BMC Proceedings, vol. 89, no. O31. BioMed Central (2011)
30. Saxena, N., Kyaw, B.M., Vseteckova, J., Dev, P., Paul, P., Lim, K.T.K., Kononowicz, A., Masiello, I., Tudor Car, L., Nikolaou, C.K., Zary, N., Car, J.: Virtual reality environments for health professional education (protocol). Cochrane Database Syst. Rev. 2 (2016)
31. Schreiber, P., Sax, H., Wolfensberger, A., Clack, L., Kuster, S.: Swissnoso: the preventable proportion of healthcare-associated infections 2005–2016: systematic review and meta-analysis. Infect. Control Hospital Epidemiol. (in press) (2018)
32. Stewardson, A.J., Sax, H., Gayet-Ageron, A., Touveneau, S., Longtin, Y., Zingg, W., Pittet, D.: Enhanced performance feedback and patient participation to improve hand hygiene compliance of health-care workers in the setting of established multimodal promotion: a single-centre, cluster randomised controlled trial. Lancet Infect. Dis. 16(12), 1345–1355 (2016)


33. Umscheid, C.A., Mitchell, M.D., Doshi, J.A., Agarwal, R., Williams, K., Brennan, P.J.: Estimating the proportion of healthcare-associated infections that are reasonably preventable and the related mortality and costs. Infect. Control Hospital Epidemiol. 32(2), 101–114 (2011)
34. Usoh, M., Arthur, K., Whitton, M.C., Bastos, R., Steed, A., Slater, M., Brooks Jr., F.P.: Walking > walking-in-place > flying, in virtual environments. In: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 359–364 (1999)
35. World Health Organization: Report on the Burden of Endemic Healthcare-Associated Infection Worldwide (2011)
36. Zank, M., Kunz, A.: Where are you going? Using human locomotion models for target estimation. Vis. Comput. Int. J. Comput. Graph. 32(10), 1323–1335 (2016)
37. Zank, M., Kunz, A.: Optimized graph extraction and locomotion prediction for redirected walking. In: IEEE Symposium on 3D User Interfaces, pp. 120–129 (2017)
38. Zarb, P., et al.: The European Centre for Disease Prevention and Control (ECDC) pilot point prevalence survey of healthcare-associated infections and antimicrobial use. Eurosurveillance 17(46), 20316 (2012)

Chapter 4

Useful, Usable and Used? Challenges and Opportunities for Virtual Reality Surgical Trainers

Chantal M. J. Trudel

Abstract This chapter discusses design considerations in the development of virtual reality surgical training simulators in reference to a variety of case studies. The chapter presents a preliminary framework outlining research priorities and areas that have been suggested by previous researchers to help focus the design development of virtual reality applications. Elements of this framework are discussed in reference to the case studies.

Keywords Virtual reality · Medicine · Surgical simulators · Surgical pedagogy · Surgical performance · Human factors · Design · Interaction design · Human–computer interaction · Design process · Usability · Presence · Teamwork

C. M. J. Trudel (B)
Faculty of Engineering and Design, School of Industrial Design, Carleton University, 1125 Colonel By Dr, Ottawa, ON K1S 5B6, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_4

4.1 Introduction

Over the past two decades, virtual reality (VR) systems (also referred to as virtual environments or VEs) have become increasingly sophisticated and widely researched and adopted in healthcare education, training and services. For example, in 2003 Riva found 951 papers referencing 'virtual reality' in MEDLINE (PubMed) and 708 results in PsycINFO [1], whereas a keyword search for the same term in January 2020 returned more than 10,000 results in PubMed and more than 12,000 results in PsycINFO, representing continued investment in VR research and development for healthcare. Considering the attention VR simulators have garnered in surgical applications and the high costs typically associated with such applications, it would seem reasonable to assume that significant advancements have been made in terms of the usefulness, usability and actual use or acceptance of such systems. However, a review of some early and more current surgical applications suggests that the medical field is focused almost solely on the proposed usefulness or effectiveness of these systems to support patient outcomes, which is understandable. But what appears



to be missing, or at least not emphasized, is a focus on usability issues as well as techniques to study factors related to the acceptance or actual use of such systems, factors which may not be readily obvious but may in fact help maximize usefulness. A review of several case studies suggests that researchers and clinicians working with virtual reality applications in surgery could be missing an opportunity to improve patient outcomes by studying and improving not just the effectiveness or usefulness of surgical VR applications, but also their usability, measures of acceptance and actual sustained use (or lack thereof).

4.1.1 Improving Healthcare Delivery, Patient Outcomes and Training Opportunities

Virtual reality applications in healthcare are driven by a number of objectives, which include: reducing the rate of error in patient care [2, 3]; increasing virtual training opportunities to supplement reductions in clinical practice time [2], specifically limited access to training inside operating rooms (ORs) [4]; and providing safer, controlled environments to facilitate training without compromising patient safety [1, 5, 6]. In surgery, the use of VR is facilitating the practice of basic and complex procedures both in the field and through simulated training in lab environments. Simulators using these technologies can address resource issues in training, such as the scarcity of cadavers for practicing and conducting repetitive tasks and the high costs associated with these models [6]. VR simulations have been used to address errors or inefficiencies related to: distractions and interruptions in surgical workflow [7]; poor equipment design, layout and/or malfunctions; poor communication and decision-making among multidisciplinary team members; infrequent exposure to unexpected scenarios such as surgical emergencies; and changes in hospital policies that may influence clinical practice [3, 8]. In order to properly address these deficiencies and pedagogical objectives, conceptual frameworks or taxonomies are required to match appropriate VR applications to specific types of surgery and contexts-of-use, levels of surgical skill desired and factors related to supporting optimal surgical performance [8, 9]. Windsor has cautioned that such frameworks "should be driven by educational imperatives and not by technological innovation". This requires evidence that the technology has a valuable contribution to learning and assessment and other proposed systemic advantages, but is also cost-effective to ensure the system's long-term viability and ability to sustain actual use [9].
In line with this view, Milburn and colleagues highlight the evidence supporting the equivalence of low- and high-fidelity simulation in teaching basic skills [8]. Based on this information, this group cautions against the adoption of more resource-intensive and costly high-fidelity simulations such as VR applications over more cost-effective lower-fidelity models that may suit the specified educational objectives and requirements.


Virtual reality simulations are becoming increasingly sophisticated and able to provide several types and levels of simulation, which can occur in a variety of locations including educational facilities, specialized simulation centres, on-line within a trainee's own home, or within the operating room environment itself [8]. The potential to access simulations in a variety of settings, and in some cases on-demand, also allows for varied and repeated exposure to specialized training. This offers a very different approach from traditional surgical training models based predominantly on apprenticeship in operating theatres or working in laboratories with animal specimens, models which pose challenges for human and site resources, consistency in instruction [10], reduced residency hours to participate in training and risks to patient safety in OR settings [11]. Simulations using VR technologies have the potential to create a more level playing field in surgical training by increasing access to more standardized educational opportunities in a variety of controlled environments and scenarios. VR simulations may also offer students with different learning strengths and abilities a more equitable opportunity to access resources when needed, and repeatedly if available, to improve their knowledge and refine their skills [10]. If appropriately integrated into medical curricula, surgical VR simulators can reduce technical learning curves, preparing surgical trainees and surgeons for practice in the field, with the potential to improve patient safety, outcomes and efficiencies in healthcare delivery [8]. Further, studies have demonstrated that simulation-based mastery learning, which goes beyond demonstrating baseline competencies, can significantly improve the skills of all participants, with evidence of skill retention up to one year after training [12].
But despite the enthusiasm for implementing VR simulation training within medical programs to support such initiatives, some experts are concerned that this type of training, in reality, occurs perhaps once or only infrequently [10]. This situation suggests VR applications need to include ‘access’ considerations in design development to support repetition within the context of different and increasingly difficult learning challenges [10]. This is particularly important considering the reality of OR environments and the stressful conditions surgical teams may experience. VR simulators should be able to train people to complete tasks safely while immersed in stressful situations and conditions. This approach is supported by situational learning models [13], which advocate that learning is best achieved by actually engaging in a process in its proper social context [7]. The issue of limited access to the reality of complex and stressful OR scenarios may be further compounded by factors such as the “interface familiarization curve”, which refers to the steep learning curve people may experience in VR simulators, requiring repeated exposure to learn the nuances of the system itself before the educational content can be properly absorbed. Difficulties in learning and using an interface may result from poor design, and learners may need time to overcome system limitations and master control and feedback before acquiring surgical competence that can be measured and predict performance in the field [14]. Systems should be intuitive and easy to use, not requiring long time commitments simply to learn the system before practicing the required skills, especially for students or trainees who commonly experience high workloads [5].

C. M. J. Trudel

Issues with access, learnability and usability undermine the movement towards ‘outcomes-based medical education,’ which is focused on improving the performance of learners, facilitating practice of specialized skills and achieving mastery of the required competencies [10]. In this model of education, the goal is for students to achieve mastery in learning, not just competence, with consistency across learners. Fundamental to achieving this level of mastery are the time and appropriate resources [15] to acquire knowledge and practice such skills, recognizing that the time and resources needed to achieve mastery may vary between individuals based on their characteristics and abilities [16]. Differences between learners, where some may require more or less training than others to achieve certain educational outcomes [16], are a factor that has been recognized by organizations studying the availability and role of simulation in surgical education and training. For example, Milburn et al. argue that maximizing the educational benefit of simulation requires a sensitivity towards equity: improving accessibility to these applications, awareness of these systems, and the standards of facilities that permit trainees and trainers to fully use these resources [8]. With this pedagogical context in mind, this review of early and more current research studies on VR surgical trainers discusses the concepts of usefulness and usability, as well as considerations to support actual use and acceptance of such systems. Although these cases vary in terms of surgical specialization, what they have in common is a primary focus on performance or effectiveness, with less emphasis placed on usability considerations and other factors related to adoption or actual use. This may be due to the high level of complexity involved in designing such studies and scoping research objectives.
What they demonstrate is the utility of developing a framework to assist in defining a scope of work and strategy that helps better position or map study objectives to factors that support usefulness, usability, acceptance, and actual or viable use. Such a framework could support research objectives that better serve stakeholder needs and requirements in achieving specific educational goals.

4.2 Design Drivers in Developing VR Surgical Trainers

4.2.1 Is It Useful, Usable and Used?

In discussing VR simulators in surgical applications, there are three basic domains differentiated in interaction design that can help frame our understanding of where the development and analysis of such systems are focused or situated. When we ask whether a product, environment, service or system is useful, usable or used, are we talking about similar concepts or are we in effect asking very different things? Some researchers and practitioners in interaction design have discussed the importance of differentiating between usefulness, usability and actual use in discussing the value of design [17, 18], or similarly the importance of not reducing a design’s assessed value to usability constructs and measures alone [17, 19].

Before discussing such differences, some basic definitions from three reputable dictionary sources may be helpful. Useful is defined as “capable of being put to use” or “serviceable for an end or purpose” [20]; “effective; helping you to do or achieve something” [21]; or “[a]ble to be used for a practical purpose or in several ways” or “[v]ery able or competent in a particular area” [22]. The definition of usable differs slightly: “capable of being used” or “convenient and practicable for use” [23]; “that can be used” or “able to be used for a purpose” [24]; and finally “[a]ble or fit to be used” [25]. Used is defined as “that has already been put to the purpose it was intended for; not new” or “accustomed, habituated” [26]; “employed in accomplishing something” or “that has endured use” [27]; and finally, “[h]aving already been used” [28]. In looking at these definitions, useful implies the provision of some level of effectiveness that allows people to do something or achieve a purpose (implied by the words “capable of being put” and “serviceable”). But it does not imply that this level of effectiveness is necessarily ‘convenient’ or a ‘good fit’ (with a person’s capabilities), as the word usable implies more directly. The term used, as described above, is perhaps the most interesting and distinct from the other terms in that, when we say something is used, versus usable or useful, we are confirming it is serving its intended purpose—the thing in question has been vetted effectively by virtue of being used, and therefore met the objectives it was designed to meet and succeeded in its development.
Based on the surgical educational objectives discussed earlier, one word seems missing in this review and appropriate to discuss here: the term accepted, defined as “regarded favorably”, “given approval or acceptance” or “generally approved or used” [29]; “generally agreed to be satisfactory or right” [30]; and lastly, “[g]enerally believed or recognized to be valid or correct” [31]. Studying acceptance in design is important, since the literature has long acknowledged that what designers and engineers perceive to be advances in the performance of systems have often been derailed by people’s unwillingness to accept and use the proposed systems [32, 33]. The Interaction Design Foundation discusses these terms within the context of various examples, noting that there may be a subjective quality to what one thinks is useful that may not involve usability in the evaluation. For example, I may find a work of art useful in supporting my mood, but it is not usable; or a product such as a door may be useful to enter and exit an environment, but not necessarily usable, perhaps due to poor decisions made in the execution of the design or lack of consideration for differences in people’s characteristics and abilities [18]. In turn, the usability of a product may impact its perceived level of usefulness or utility, and if a design is not perceived to be useful, it is unlikely to be accepted by individuals. Perhaps the most important thing the organization notes in its discussion of these terms is that evaluating whether a design is used or will be used (accepted) is critical to its proposal and development, since “[i]f a design is not used, it doesn’t matter how useful and usable it is” [18].
This message has long stood in the field of interaction design, with Davis noting in 1989 [32] the importance of predicting and explaining the actual use of systems, and calling for better measures that move beyond theoretical use and usability studies to study actual use, in order to bring better value to both individuals using the product and organizational stakeholders such as manufacturers and vendors. Other researchers see the value in focusing on better-defined subdimensions of user experience, such as efficiency, ease of use, satisfaction and enjoyment, to more fully understand the relations between such constructs relative to individual characteristics, contextual factors (e.g., tasks, culture), impact or results (e.g., attitudes, behavior) and design features [19].
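Davis’s distinction between perceived usefulness and perceived ease of use, the core constructs of his Technology Acceptance Model, can be illustrated with a minimal scoring sketch. The items, the 7-point scale and the responses below are hypothetical stand-ins, not Davis’s validated instrument:

```python
# Illustrative sketch only: Davis's Technology Acceptance Model (TAM) relates
# perceived usefulness (PU) and perceived ease of use (PEOU) to intention to use.
# The item wording and responses below are hypothetical, not the validated scale.

def scale_mean(responses):
    """Average a list of 1-7 Likert responses for one construct."""
    if not all(1 <= r <= 7 for r in responses):
        raise ValueError("responses must be on a 1-7 scale")
    return sum(responses) / len(responses)

# One trainee's (hypothetical) item responses for a VR surgical trainer
perceived_usefulness = [6, 7, 5, 6]   # e.g., "Using the trainer improves my skills"
perceived_ease_of_use = [3, 4, 2, 3]  # e.g., "The trainer is easy to operate"

pu = scale_mean(perceived_usefulness)     # 6.0
peou = scale_mean(perceived_ease_of_use)  # 3.0

# A high-PU, low-PEOU profile flags a system judged useful but not yet usable.
print(f"PU={pu:.1f}, PEOU={peou:.1f}")
```

A profile like this one would suggest the trainer is perceived as useful but hard to use, a combination Davis’s work associates with reduced actual use.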

4.2.2 Establishing System Requirements

Establishing project goals, analyzing individual and stakeholder requirements, and applying an iterative evaluation process are essential to guiding the design of interactive technologies [17], particularly with complex systems such as VR applications [34]. Methodologies that can help define issues and opportunities, and better understand and develop individual and organizational requirements, will depend on a variety of factors (e.g., the problem identified and related factors, the specific research question, available sample size, contextual constraints in studying the issue, budget constraints, etc.). Techniques to study VR systems may include rating scales, interviews, questionnaires, field or lab observation, focus groups and task analysis, to name just a few common methods [17, 35, 36]. In addition to these traditional research techniques, low-fidelity and high-fidelity prototyping and simulations of designs in lab or field settings are essential to evaluation [17]. But applying this evaluative process does not guarantee that a project will capture all the requirements necessary to make a design useful, usable, and accepted by individuals being introduced to the system, or later, actually used in a real-world context.
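One concrete example of the rating-scale techniques mentioned above is the System Usability Scale (SUS), a widely used 10-item usability questionnaire (it is not named in this chapter; the responses below are hypothetical). A minimal scoring sketch:

```python
# Sketch of scoring the System Usability Scale (SUS), a common 10-item
# questionnaire of the kind the evaluation methods above might employ.
# Odd-numbered items are positively worded, even-numbered items negatively
# worded; each is answered on a 1-5 scale. Responses here are hypothetical.

def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical responses from one VR-trainer session
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Because SUS yields a single 0–100 score, it is often used to compare iterations of a prototype, although it measures perceived usability only and says nothing about acceptance or actual use.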

4.2.3 Factors Influencing Usefulness, Usability and Use

The objectives driving VR development vary, with some studies focused on demonstrating the usefulness of a system (how a system might meet current gaps and needs), while others focus more on usability (how easy it is to actually use a system), the latter of which may help optimize the usefulness of a system as discussed earlier. Acceptance and the actual use of such systems are other facets to consider in the successful development of VR systems. Early work by Sharples and colleagues suggested key research areas that may help further advance the field of VR development which touch on these concepts, including studying the effects of use, the effectiveness of the VR system, usability, presence, collaboration and the impact of technology developments (Fig. 4.1), areas which are often interdependent [37].

Fig. 4.1 Human factors priorities for VR development. Adapted from Sharples et al. [37]

With regards to the interdependence of factors, it would be reasonable, for example, to hypothesize that the success of a collaborative technology could depend on its ability to seamlessly integrate a variety of technologies and enhance presence to help facilitate the completion of a group task. Similarly, a VR system’s ability to support individuals in completing a task or facilitating collaboration could be undermined by awkward body postures or other factors which may lead to physical discomfort. Physical and psychological effects from use have been studied (Fig. 4.2) by Nichols et al. and Cobb et al. [38, 39] and summarized in Sharples et al. [37]. Although the VR surgical studies presented in this review have focused primarily on the effectiveness of the applications, descriptions and photos from some of the studies suggest complex physical arrangements and demands. Depending on the amount of time the applications are used, the repetition of tasks and/or the cognitive effort required of tasks, physical or psychological impacts may be experienced by individuals using these systems (e.g., awkward posture demands, discomfort due to equipment design, etc.). Such factors may warrant more detailed evaluation.

Fig. 4.2 Physical and psychological effects of VR use. Adapted from Cobb et al. [60], Nichols et al. [61] and Sharples et al. [37]

Presence is another consideration in the design of VR applications, and there are a number of factors that may affect one’s sense of immersion in the experience (Fig. 4.3). Presence has been defined by Slater as having the ‘sense of being there’ [40]. It has also been described as a change in one’s situation awareness towards the VR environment, or the extent to which a person loses cognizance of their immediate, real environment and becomes convinced of their presence in the VR application [41]. Deficiencies in certain factors (e.g., seeing pixels due to resolution limits, bumping into walls in virtual environment systems) may ‘break’ one’s sense of presence [40, 42]. Witmer and Singer have identified the importance of providing a seamless experience capable of responding to our expectations in a meaningful way, stating that “presence depends on the ability to focus on one meaningful, coherent VE stimulus set” which “enables the focusing of attention” [43]. But how is this sense of cohesion achieved? Does it require realism or improved performance? Slater argued that a greater sense of realism does not necessarily result in a greater sense of presence, as variables within the application will have different marginal effects [44]. He also argues that good performance does not equate to stronger presence. An individual performing poorly in VR may perform poorly on the same task in real life, which may in fact enhance their sense of presence, or being there, since they experience comparable phenomena in real life [44]. Draper et al. [41] share a similar view, cautioning against the assertion in the engineering literature that presence improves human performance in VR applications.

Fig. 4.3 Factors that may influence one’s sense of presence. Adapted from Slater [40, 42, 44]

Fig. 4.4 Interaction design principles relevant to VR. Adapted from Rogers et al. [17]

The surgical VR examples discussed in this review are relevant here as they touch upon where realism may be important or unnecessary. But what is consistent in every example is their ability to measure performance. Aside from factors related to supporting immersion or presence, Rogers, Sharp & Preece summarize basic principles in interaction design (see Fig. 4.4) that may influence physical or psychological comfort, the effectiveness of interactive systems, usability of the interface and instrumentation, collaboration, and technology selection and development [17].

4.2.3.1 VR Examples in Surgical Education and Training

Surgical simulators are used for pre-operative planning and to train students and practitioners in complex procedures where there is little opportunity for hands-on practice [1, 5]. This review of studies on surgical simulators reveals a tendency among researchers to focus on measuring performance and task outcomes. But due to the complex nature of human–virtual environment interactions, measures are required to assess multiple criteria which may inadvertently influence human performance. Some common factors contributing to human performance in VR applications include navigational qualities, the degree of presence provided by the application, and feedback on individual performance from standardized tests [45]. Surgical simulators present various usability challenges that may make use difficult or unpleasant, and in turn, unintentionally influence the system’s effectiveness. This might include experiences such as inaccurate or awkward haptic feedback from using long-stemmed instruments, reduced depth perception due to loss of stereopsis, or poor hand–eye coordination resulting from limited degrees of freedom in motion for instrumentation [46].

One early VR simulation that focused on providing effective surgical training is the Limb Trauma Simulator, a system that was used in training for emergency combat care [47]. The system consisted of a computer model displayed on a monitor synchronized to stereo glasses, and an input device to simulate instrument control in a 3D software model of an anatomically correct healthy limb with simulated bullet wounds [47]. The simulator allowed trainees to see a 3D visual representation of the wound, as well as interact with controlled bleeding, debridement and hemostasis using input devices such as scalpels and forceps that provided force feedback [5]. One of the challenges with such early technologies was that the richness of the interactive properties (computational load) reduced the visual fidelity of the image, resulting in a low-resolution image [5]. Restrictions on the degrees of freedom to control the instruments, and on force feedback in use, reduced a trainee’s ability to feel the range of reaction torques that one would actually experience in the field [47], limitations which may have undermined the trainee’s sense of presence. Kaber and Zhang [48] have noted the importance of including multi-modal experience in VR trainers, since the ability to transfer the multi-modal aspects of such skills in the field will impact actual performance and patient safety. The characteristics of complex tasks may rely on multiple interacting modalities, such as a person’s perception of information through a variety of senses, as well as temporal relations between motor control and visual perception.
Studies comparing early laparoscopic simulators such as the MIST-VR (consisting of a computer, monitor, and laparoscopic tools without haptic feedback) and the ProMIS (consisting of a computer-enhanced videoscopic system, physical laparoscopic simulator, a full-scale model of an upper torso, computer, monitor, laparoscopic tools and interactive haptic feedback) have shown that even preliminary versions of these systems were able to train novice surgical trainees in complex laparoscopic skills [46, 49]. Kanmuri et al. [49] found that trainees performed similarly on these different systems provided the training was objectives-based and several opportunities were given to achieve the training objectives. In weighing the benefits of each system, the authors stressed the importance of easy access to automated performance metrics, which they noted was a feature of the MIST-VR but not the ProMIS [49]. Satava and Jones [5] support the need for usable performance metrics in order to objectively measure technical skills and provide meaningful analysis of performance outcomes. The authors of the laparoscopic study also highlighted the potential influence of factors external to the simulator design on one’s perception of system effectiveness. For example, they found that performance outcomes were influenced more by the quality of training and the clinical assessment methodology than by features of the simulator [49].

The Kanmuri et al. [49] study focused primarily on the ability to complete task objectives and measure performance, but how did possible presence-enhancing features such as haptic feedback fare in the comparison? The authors found the non-haptic user interface of the MIST-VR produced very similar training results to those achieved with the ProMIS system, which allowed for interaction with real physical objects. At the time, the haptic feature of the ProMIS may have been lacking in technological maturity, and therefore perceived value, with the authors noting that further development was required to achieve a cost-effective system to support the sensation of authentic force feedback [49]. The lack of haptic believability suggests that it did not contribute to, and may in fact have detracted from, a sense of presence. Other studies have shown that haptic feedback is of more value to skilled participants, which reinforces the importance of defining who the audience for the product will be and their specific requirements [17]. For example, a study by Panait et al. [50] showed haptics contributing to greater precision, fewer errors and faster task completion in more advanced surgical tasks, whereas the performance of more basic tasks did not benefit from haptic-enhanced simulation.

However, for some procedures such as hysteroscopic surgery, haptics may be a critical component for any skill level in reducing learning curves and improving performance. A study by Neis et al. [4] used the HystSim™ VR trainer by VirtaMed to assess the effectiveness of the tool to improve psychomotor skills used in hysteroscopic surgery. The study focused on evaluating the performance of inexperienced and experienced participants following a standardized HystSim™-VRT program, which involved three rounds of polyp and myoma resection. The authors highlight specific challenges involved in the design and implementation of operative hysteroscopy VR trainers compared to laparoscopy trainers. Hysteroscopy VR trainers require simulated fluid use, and the ability to guide a cauterizing instrument into the endometrial cavity and resect an object representing different pathologies—levels of technological complexity and integration that manufacturers are attempting to incorporate.
The participants in this study were divided into a basic group with minimal to moderate experience in diagnostic and operative hysteroscopy, and an advanced group with moderate to high experience. Participants were introduced to the system and handled the instruments for 5 min before using their dominant hand to operate the resectoscope and their left hand for fluid handling via inlet and outlet valves. Participants were asked to do three rounds of an easy task, which involved removing a polyp in a simulated uterus, and a moderately difficult task, resecting a type 1 myoma in the simulated uterus [4]. The authors selected and evaluated specific measures needed to support two key learning objectives, supporting patient safety and optimizing OR costs, an objectives/requirements-based approach critical to successfully designing complex interactives [17, 34]. This involved evaluating the amount of movement occurring in instrument use (a measure of ergonomic performance), fluid use (a measure of patient safety) and the amount of time to resect the pathologies (a measure of cost effectiveness). The study demonstrated the system’s potential value with regards to access to complex surgical training and repeated practice. Over the three rounds of evaluation, there was a significant increase in the median performance score for both groups, a significant decrease in the time needed to perform the task, less movement occurring in instrument use with each session, and a reduction in the use of fluid. Yet despite this evidence of the system’s usefulness and effectiveness, the authors felt haptic feedback could create a greater sense of presence by giving people the ability to feel a simulation of the uterine cavity and cervix. This would improve learning by helping clinicians develop a tactile/proprioceptive understanding of these anatomies [4]. Bajka et al. [51] have also discussed safety-critical limitations that haptics may help mitigate, such as not being able to distinguish between critical and non-critical contact with the uterine cavity wall. Other possible advantages of haptic feedback include the ability to distinguish the resistance of different pathologies. Currently, this type of training is occurring with plastic and animal models, with some novel pelvic models recently introduced by VirtaMed [4].

Supporting the learning of such unique surgical instrumentation, and the integration and assessment of such instrumentation, can pose challenges for comprehensive design development and assessment. Studies on virtual reality haptic (VRH) trainers for endonasal surgery have described the usefulness and possible effectiveness of using such technology [6]. Some of the proposed benefits include ergonomic training on unique instrumentation and methods to help visualize procedures to improve performance, as well as allowing time to practice and learn from errors without impacting patient safety [52–54]. But the evaluation of such trainers is sparse, making the design work, iterations and refinement needed to create a successful application difficult to achieve. Thawani et al. [6] conducted the first study evaluating the effectiveness of such simulators, using NeuroTouch’s VRH simulator in training endoscopic endonasal surgery [55–57]. During this procedure, the surgeon is looking at a screen showing a visual of the endoscopic procedure, which must be coordinated or mapped to the tactile and proprioceptive feedback received through endoscopic and drill tools manipulated by the surgeon’s hands, as well as foot pedals for irrigation and drilling.
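Motion-economy measures like the instrument-movement metric used in the HystSim™ study above are often computed as the total path length travelled by the instrument tip. A minimal sketch, with entirely hypothetical position data:

```python
# Sketch of a motion-economy metric of the kind used to assess instrument
# movement: total path length of the tip, computed from sampled 3D positions.
# Shorter paths for the same task generally indicate better ergonomic
# performance. All positions below are hypothetical.
import math

def path_length(samples):
    """Total distance travelled through a sequence of (x, y, z) tip positions (mm)."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

# Two made-up traces of an instrument tip during the same resection task
novice = [(0, 0, 0), (10, 0, 0), (10, 10, 0), (0, 10, 0), (0, 0, 0)]
expert = [(0, 0, 0), (5, 5, 0), (0, 10, 0), (0, 0, 0)]

print(path_length(novice))  # 40.0
print(path_length(expert))  # less total movement between the same endpoints
```

Tracked across repeated sessions, a declining path length for the same task is one way to quantify the improvement in instrument handling that the study reports.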
The study [6] compared the performance of an experimental and a control group, assigned based on their scores from a visual-analog scale (VAS) assessment. Participants with the lowest three VAS scores were assigned to the experimental group and would experience simulator training, while participants with the highest three VAS scores were assigned to the control group with no simulator training, the hypothesis being that training should demonstrate improvement in the experimental group while performance should remain the same in the untrained group. The authors [6] did not use the program’s software to evaluate improvements (e.g., to assess time, completion of tasks, tactile forces and excess force used) due to observed inconsistencies in the performance metrics. Although resource intensive, this deficiency was addressed by having performance evaluated by an expert in two simulated sessions (before and after training) and intra-operatively, using the VAS based on six independent measures. As anticipated, the study found reliable improvement in the simulated performance of the trained group, but not in the performance of the untrained group [6].

Participants were given the task of using the endoscope to identify the right sphenoid ostium, then withdrawing the tool from the nasal passage. They were then required to insert the drill and the endoscope in the passage to create an opening in the sphenoid sinus, followed by the final task of inserting the endoscope into the sphenoid sinus. Participants received instruction on how to position the pedals; how to hold, orient and maneuver the instruments; and how to introduce instruments simultaneously into the nasal passage. During the task, the individual holds the endoscope in their left hand and the drill in the right hand [6]. The screen used to view the procedure is described as adjustable, but in photos of the experiment it appears to be positioned outside the acceptable ergonomic visual display zone of 45° from the horizontal line of sight [58]. Although participants were allowed to practice in the session, they were not permitted to practice at other times, and two to three simulation training sessions were required to achieve the minimum expected performance score. The authors noted that although the built-in performance metrics were not reliable, and the study could have benefited from having more than one expert evaluator, they emphasized that the alternatives in training are sparse and that the system was useful for practice in holding, moving and positioning instruments [6]. What is interesting about this study is the description of the assigned task and the apparent difficulty of performing the task in and of itself. The authors [6] do not comment on the fidelity of the simulator compared to the real procedure, which would have been important to note to clarify whether the complexity in use is inherent to the procedure itself, with the simulator design accurately reflecting that complexity, whether it was a result of poor simulator design, or whether it was a combination of the two. As Slater [44] noted in discussing presence, it is important to understand whether the simulator captures the fidelity of the procedure in the specific context-of-use, to assess whether performance reflects the simulator’s design or what can be expected in performing the actual procedure.
As the trained participants improved in an intra-operative assessment, one could assume that the apparent lack of ease of use, and the learning required to work around conditions in the simulator (due to suboptimal equipment design and organization), may reflect the reality of the procedure. This poses interesting challenges from a design perspective, as one might have to design a not-so-user-friendly, or less than optimal, VR experience to create a sense of presence and effectively capture the reality of the experience.

The VR surgical simulators discussed thus far offered training and assessment of performance on specific tasks but lacked assessment of contextual factors. Contextual factors may include errors resulting from interactions with poorly designed equipment; poor communication and decision-making among surgical team members; or changes resulting from organizational policies or practice, factors that may inadvertently affect surgical performance and patient outcomes [3]. Scerbo et al.’s early experiments with virtual environment systems like the CAVE (Cave Automatic Virtual Environment), integrated with task-specific simulators, focused on the possible influence of contextual factors in performance by exposing trainees to work scenarios occurring with teams under stressful conditions [3]. Within the ‘Virtual Operating Room’, participants interacted with humans and avatars in a rendered CAVE environment, using real and virtual instruments and medical simulators to perform a procedure [3]. The process involved filming a real procedure, performing a cognitive task analysis on the procedure, and developing a step-by-step script for each actor to play in the scenario [3]. The personalities of the avatars were modeled using McCrae and Costa’s Five Factor Model combined with the Abridged Big-Five Circumplex model to define unique characters that could influence team dynamics and explore educational goals that depend on effective collaboration. Communication took place through wireless headsets designed to recognize speech commands and activate steps in the interactions. Performance was assessed by comparing deviations from the task timeline in the virtual event to those of the original script. The results showed the actors did exhibit some divergence from the timeline, which meant some of the simulation transitions had to be triggered manually, pointing to the need for flexible speech interactions to facilitate the desired workflow and ease of use. Speech recognition was used to cue interactions, which the authors claim provided a more intuitive interface [3]. Such design features may support critical requirements such as quick and intuitive system uptake and reduced learning curves [5].

Similar to Scerbo’s study [3], Sankaranarayanan et al. [7] were interested in studying the influence of inadvertent contextual factors such as distractions and interruptions on surgical performance, and the use of VR to train surgeons under such conditions. The authors describe the need to move beyond what they refer to as ‘Gen 1’ simulators, which focus primarily on the development of psychomotor skills (e.g., hand–eye coordination) as well as fine and gross motor skills for manipulating instruments, cutting and suturing, towards simulators that incorporate training in the less obvious, less tangible cognitive skills required for surgical excellence. Trainers should also address cognitive skills related to factors such as planning, communication, problem-solving, decision-making and mental workload, within a context of excessive noise, equipment access issues or malfunctions, staff changes in position, and phone and pager disruptions, to name just a few examples [7, 59].
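The timeline-deviation assessment used in the Virtual Operating Room study above can be sketched as a comparison of observed event times against the scripted times. The step names, times and flagging threshold below are hypothetical:

```python
# Sketch of a timeline-deviation assessment: compare the time each scripted
# step actually occurred in the simulation against the original script.
# Step names, times (seconds) and the 20 s threshold are hypothetical.

def timeline_deviations(script, observed):
    """Per-step signed deviation (seconds) of observed event times from the script."""
    return {step: observed[step] - t for step, t in script.items() if step in observed}

script = {"incision": 30, "retraction": 90, "suturing": 240}
observed = {"incision": 34, "retraction": 118, "suturing": 251}

devs = timeline_deviations(script, observed)
# Large deviations flag steps where transitions might need manual triggering
flagged = [step for step, d in devs.items() if abs(d) > 20]
print(devs)     # {'incision': 4, 'retraction': 28, 'suturing': 11}
print(flagged)  # ['retraction']
```

Flagging steps that drift beyond a tolerance is one simple way to identify where scripted speech-driven transitions would need the flexibility the study calls for.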
The authors [7] also criticized the fidelity of 'Gen 1' simulators used in laboratory settings, which fail to capture the complex conditions experienced in ORs. (Huber and colleagues [15], however, have pointed to the presence-related limitations of the HMD and VE used in this study, with the HMD being limited by a 45° field of view and the virtual scenario consisting of a plain computer-generated room.) Using the Gen1-VR and Gen2-VR systems, participants conducted a peg-transfer task, which has been demonstrated to replicate the fundamentals of laparoscopic surgery, on a Virtual Basic Laparoscopic Skill Trainer-peg transfer (VBLaST-PT©). In this experiment, participants were purposefully exposed to distractions and interruptions during one of the conditions to evaluate the effects of environmental factors on performance. The participants all experienced three randomized conditions: (1) using the original Gen1-VR system, where the participant interacts with a virtual patient and OR on a 2D monitor; (2) using the Gen2-VR©, where the participant interacts in the same scenario but using a head-mounted display (HMD) without exposure to distractions or interruptions; and (3) using the Gen2-VR© in the same scenario using the HMD while exposed to distractions and interruptions [7]. Performance during each condition was automatically calculated by the simulator, and additional information on the possible influence of distractions and interruptions was collected through a 5-point Likert scale questionnaire assessing the realism of the task, sense of immersion and the influence of distractions and interruptions. The distraction stimulus consisted of presenting intermittent music that participants had rated lowest for preference.
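The within-subjects design described above, in which every participant experiences all three conditions in a randomized order, can be sketched as follows. This is a generic illustration of randomized condition ordering; the condition labels are shorthand, not the authors' protocol or terminology.

```python
# Illustrative sketch of a within-subjects design: each participant
# experiences all three conditions in a randomized order. Condition
# names are shorthand, not the study's actual labels.
import random

CONDITIONS = ["Gen1_monitor", "Gen2_HMD", "Gen2_HMD_distractions"]

def assign_orders(participant_ids, seed=0):
    """Return a randomized condition order for each participant."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    orders = {}
    for pid in participant_ids:
        order = CONDITIONS[:]  # copy so the master list stays intact
        rng.shuffle(order)
        orders[pid] = order
    return orders

orders = assign_orders(["P01", "P02", "P03"])
# every participant sees each condition exactly once
assert all(sorted(o) == sorted(CONDITIONS) for o in orders.values())
```

Randomizing order per participant guards against order effects (e.g. practice on the peg-transfer task inflating scores in later conditions).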

4 Useful, Usable and Used?

57

Interruption was achieved by fogging the camera lens for a short period and randomly disabling the opening and closing of the left and right instrument tips. The study showed that performance on the peg-transfer task decreased with the introduction of distractions and interruptions. Feedback from the participants revealed they were not significantly affected by the introduction of music, but that equipment malfunctions significantly hindered their performance. These external factors bring the simulation closer, in fidelity and presence, to what might be experienced in a real procedure, and performance under such conditions may be more indicative of actual performance [7].

Using da Vinci® OR surgical laboratory trainers and robotic assistive systems, Hoogenes and colleagues [11] studied how well the skills obtained using such trainers transferred to performing a urethrovesical anastomosis (UVA) procedure on a novel 3D-printed bladder model. The dV-Trainer (dV-T) from Mimic Technologies and the da Vinci Surgical System (dVSSS) from Intuitive Surgical help train robotic skills, assess surgical performance and provide assessment feedback to trainees. The dV-T can be set up in a lab or any other convenient environment, as it consists of a portable desktop simulator with foot pedals and handheld controls, controls which are similar to the console design on the dVSSS. The dVSSS system is used to perform robot-assisted surgery in ORs. A simulator software package, which includes a 3D display, is integrated into the dVSSS console. Both systems use Mimic Technologies' VR software and performance scoring. The 3D-printed bladder model was developed through an iterative design process to try to achieve a high level of fidelity to a real human bladder. Unique polymers similar to the human bladder and urethra were used, which could more accurately represent the processes of cutting, incising, needle insertion and suturing.
To further augment the experience, a realistic silicone torso was developed to simulate a human pelvis and allow for the accurate representation of docking instruments relative to the body, with sight lines comparable to a real OR setting. But the authors are careful to note the limits of fidelity, namely that the model could not represent the multiple physiological factors occurring in a real UVA procedure and that the bodily working space did not entirely represent the constraints posed by a real pelvis [11]. Two groups were studied: medical students, junior trainees and residents were assigned to a basic group; experienced trainees, senior residents and fellows to an advanced group. Participants conducted a number of tasks on each simulator, and results differed depending on the task being performed. The performance benefits of training with the higher-fidelity dVSSS system were only significant for the basic group, with no significant difference for the advanced group, suggesting that higher-fidelity simulators might be of greater benefit to junior trainees. Participants' perception of the VR trainers was also assessed through a questionnaire, which revealed that both simulators were considered useful for building familiarity with the da Vinci robot controls, basic robotic skills and associated UVA tasks. However, participants found the dVSSS provided a more realistic representation of using the da Vinci robot. The realism of the dVSSS preferred by participants comes with compromises: since the system requires the use of the actual robotic console located in the OR, access to the system may be limited [11]. The study also commented on the
systems’ potential usefulness: the time needed to complete the UVA task was 2.5 h, evidence supporting the feasible integration of such training within a surgical curriculum, which can be subject to time and resource constraints [9–11].

The final case discussed here focused on developing a proof-of-concept for the technical feasibility of creating a highly immersive virtual reality operating suite. Huber and colleagues were concerned that surgical VR trainers focus on supporting psychomotor skills (e.g. hand–eye coordination, spatial orientation, tool use relative to the fulcrum effect) to the exclusion of VR simulation that could represent and enhance the realism of a more holistic surgical experience [15]. The authors argued that such realism, and increasing a person’s sense of presence in the experience, has to date only been possible in team training sessions, which require extensive resources such as time, infrastructure and human capital. The researchers combined a VR laparoscopy simulator (LapSim), a VR-HMD, VR game components and 360° video of a standard laparoscopy scenario taken from their department’s OR to create a ‘user-friendly’, familiar and highly immersive OR scenario [15]. The system consists of a VR laparoscopy simulator without haptic feedback (LapSim) which includes a monitor, keyboard, mouse, computer, Simball™ 4D Joysticks, and a double-foot switch. The components were mounted to a height-adjustable open chassis system to support access to components. Instrumentation consisted of left- and right-handed grasper instruments and a camera instrument located in the centre (which was not used in the experiment). Software from the manufacturer facilitated the performance of peg-transfer, pattern-cutting, cholecystectomy and appendectomy tasks and recorded task-specific performance measures to monitor improvement and provide evaluative feedback.
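The kind of per-task performance logging described above, where each attempt's measures are recorded so improvement can be monitored and fed back, might be sketched as below. The metric names, composite score, and class are invented for illustration; they are not LapSim's actual outputs or API.

```python
# Illustrative sketch (metrics and scoring invented, not LapSim's API):
# record per-attempt metrics for each training task and report whether
# a trainee is improving across sessions.
from collections import defaultdict

class PerformanceLog:
    def __init__(self):
        self.attempts = defaultdict(list)  # task name -> list of scores

    def record(self, task, time_s, errors, penalty_per_error=5.0):
        """Store a simple composite score: lower time and fewer errors are better."""
        score = max(0.0, 100.0 - time_s / 10.0 - errors * penalty_per_error)
        self.attempts[task].append(score)
        return score

    def improving(self, task):
        """True if the latest score beats the first recorded score."""
        scores = self.attempts[task]
        return len(scores) >= 2 and scores[-1] > scores[0]

log = PerformanceLog()
log.record("peg_transfer", time_s=240, errors=4)   # early attempt
log.record("peg_transfer", time_s=180, errors=1)   # later attempt
log.improving("peg_transfer")  # later attempt scored higher
```

A real trainer would of course use validated metrics (economy of motion, path length, error taxonomies) rather than this toy composite; the point is only the structure of attempt-level logging and trend feedback.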
A custom VR HMD was developed using the HTC Vive to combine head tracking and stereoscopic depth effects and better approximate human vision compared with LapSim’s 2D monitor display. Low latency in the HMD’s OLED display helped reduce simulator sickness, and the HTC Vive’s 110° field of view and 4.6 by 4.6 m tracking area supported an experience at room scale (approximately the size of a minor operating theatre). The virtual surroundings were created using Unity™, where the virtual monitor was controlled using the joysticks and performance was optimized by downsampling the texture prior to rendering the environment [15]. These design decisions were driven by continuous clinical feedback during the development process to improve usability and enhance fidelity to the real environment. For example, clinicians placed equipment, people and other artifacts to accurately represent the conditions and constraints, and identified that the joysticks should be visible in the HMD so they could be located for physical interaction in the environment. The foot pedal, by contrast, was not visualized in the HMD; clinicians did not consider this an issue, noting that it is often not visible in a real procedure due to sightlines and the use of surgical covers. The design was also driven by advancements in technology, as early-version HMDs posed issues with latency effects and ensuing motion sickness, as well as limited field of view, factors which were improved on in this study [15]. Clinical feedback on the design noted the technical ease of use with regard to set-up and mobility. The movement of virtual instruments was also considered
very accurate. However, concerns were noted about the limited display resolution of the HMD for precise tasks such as fine dissection, where blood vessels need to be differentiated. With regard to physical anthropometrics, the 500 g weight of the HMD became uncomfortable with extended use, and this discomfort may have undermined immersion. What is interesting to recognize here is that, ten years after the study by Scerbo et al. [3], Huber and his colleagues [15] note similar concerns with usability, presence and collaboration, highlighting the need to focus on supporting interactions using speech recognition to help trigger scenarios, improving immersion using multiple 360° videos, introducing authentic OR sounds and simulating stress to better support teamwork and situated learning [13].

4.3 Conclusion

There are numerous studies of virtual reality applications for healthcare, and this review has touched upon just a few examples from early and more current surgical applications, applications which demonstrate a wide variety of use contexts but all with the objective of improving and measuring performance in healthcare delivery. The effectiveness of the systems seems to be the baseline for adoption and future research, and effectiveness is measured primarily by accuracy and/or the transferability and comparability of outcomes to field applications. If a medical student or surgical trainee performs well on a simulator, there is evidence to suggest they will perform well in reality. If a system is deemed accurate enough in the field to meet regulated standards of performance and patient outcomes, then it will be considered for adoption. But what is hinted at in these studies is the need to study other relevant factors such as improved usability and, where deemed relevant, the desire to enhance a sense of presence or introduce features that can help capture complex individual and team experiences in the OR. Features capable of enhancing presence were shown to improve the effectiveness of the system for some groups, as the studies on haptic feedback in laparoscopic simulators demonstrated, for example. This review also revealed that questions of pedagogical value and strategies related to ‘accessing’ such technologies did not appear to be central to these studies. The concept of equity and access to VR systems in surgical curricula should not be underestimated in design development if we are to support a consistent organizational approach to improving patient outcomes. For example, a 2012 national surgical trainee survey of 1130 respondents conducted by the UK’s Association of Surgeons in Training found that “only 41.2% had access to skills simulator facilities”.
Furthermore, for those individuals with access, facilities were available outside of scheduled working hours in only 16.3% of cases, and only 54.0% had access in their current workplace [8]. Therefore, as design teams continue to develop such systems, measures which can help foster the ‘accessibility’ of such systems, whether at an individual or organizational level, should be considered critical project drivers.


In order to study these layers of factors and their interdependence in surgical VR development, a comprehensive framework could help researchers to better position their study objectives, while providing clarity on what value is being studied and what remains to be studied in order to add value. Such a framework would assist researchers in strategizing and better communicating where their system sits with regard to factors that support usefulness, usability, and actual use and sustainability within surgical curricula. A preliminary framework has been proposed here based on the work of human factors specialists and interaction and VR researchers; the studies discussed in this review touch on the priorities proposed in this framework to a greater or lesser degree. Studies may have focused on exploring the basic physiological or psychological effects of the system, basic effectiveness assessed through performance metrics, the usability of the system, how the application enhances presence, the system’s contribution to collaboration in the OR and/or the contribution of recent technology developments to support OR experience and education. Other novel factors that may contribute to pedagogical goals and associated user experience have yet to be identified and their value assessed in the development of surgical VR trainers. Some important factors that have been identified in the pedagogical literature on surgical VR trainers include features which support their viable inclusion in surgical training programs and improved access to the technology. Other factors that should be explored in the development of such frameworks include ethical considerations and other complex phenomena such as interactions related to infection prevention and control. The development of VR applications in healthcare is thriving, as evidenced by the volume of literature being produced.
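One purely illustrative way to make such a layered framework operational is as a structured checklist a study could score itself against. The dimension names below paraphrase the priorities discussed in this review; they are not a published instrument, and the coverage measure is invented.

```python
# Purely illustrative: the evaluation priorities discussed in this
# review, organized as a simple self-assessment checklist. Dimension
# names paraphrase the text; this is not a published instrument.
from dataclasses import dataclass, fields

@dataclass
class VRTrainerEvaluation:
    physiological_psychological_effects: bool = False  # e.g. simulator sickness
    performance_effectiveness: bool = False            # task metrics, skill transfer
    usability: bool = False                            # ease of use, learnability
    presence: bool = False                             # immersion, fidelity
    team_collaboration: bool = False                   # OR teamwork factors
    curricular_access: bool = False                    # availability, equity of access

    def coverage(self):
        """Fraction of framework dimensions a study addresses."""
        vals = [getattr(self, f.name) for f in fields(self)]
        return sum(vals) / len(vals)

# A hypothetical study addressing performance and usability only:
study = VRTrainerEvaluation(performance_effectiveness=True, usability=True)
study.coverage()  # 2 of 6 dimensions addressed
```

Even in this toy form, the checklist makes visible which layers (e.g. collaboration, access) a given evaluation leaves unexamined.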
But researchers caution that using virtual reality surgical trainers “is not a substitute for experience” and that this experience “can only be achieved by real-life surgery” [4]. Without a thorough analysis of factors that may enhance the usefulness, usability and acceptance of VR surgical trainers—factors which include user experience, physical and psychological effects, teamwork requirements and systems integration—such systems will continue to be limited in their potential.

References

1. Riva, G.: Applications of virtual environments in medicine. Methods Inf. Med. 42(5), 524–534 (2003)
2. Johnston, C.L., Whatley, D.: Pulse!!—A virtual learning space project. In: Haluck, R.S., Hoffman, H.M., Mogul, G.T., Phillips, R., Robb, R.A., Vosburgh, K.G., Westwood, J.D. (eds.) Medicine Meets Virtual Reality 14: Accelerating Change in Healthcare: Next Medical Toolkit, vol. 119. IOS Press, Amsterdam (2006)
3. Scerbo, M.W., Belfore, L.A., Garcia, H.M., Weireter, L.J., Jackson, M.W., Nalu, A., Baydogan, E., Bliss, J.P., Seevinck, J.: Virtual operating room for context-relevant training. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 51, no. 6, pp. 507–511 (2007)
4. Neis, F., Brucker, S., Henes, M., Taran, F.A., Hoffmann, S., Wallwiener, M., Schönfisch, B., Ziegler, N., Larbig, A., Leon De Wilde, R.: Evaluation of the HystSim™ virtual reality trainer: an essential additional tool to train hysteroscopic skills outside the operation theater. Surg. Endosc. 30(11), 4954–4961 (2016)
5. Satava, R.M., Jones, S.B.: In: Hale, K.S., Stanney, K.M. (eds.) Handbook of Virtual Environments: Design, Implementation, and Applications. Lawrence Erlbaum Associates, London (2002)
6. Thawani, J.P., Ramayya, A.G., Abdullah, K.G., Hudgins, E., Vaughan, K., Piazza, M., Madsen, P.J., Buch, V., Sean Grady, M.: Resident simulation training in endoscopic endonasal surgery utilizing haptic feedback technology. J. Clin. Neurosci. 34, 112–116 (2016)
7. Sankaranarayanan, G., Li, B., Manser, K., Jones, S.B., Jones, D.B., Schwaitzberg, S., Cao, C.G.L., Dea, S.: Face and construct validation of a next generation virtual reality (Gen2-VR©) surgical simulator. Surg. Endosc. 30(3), 979–985 (2016)
8. Milburn, J.A., Khera, G., Hornby, S.T., Malone, P.S.C., Fitzgerald, J.E.F.: Introduction, availability and role of simulation in surgical education and training: review of current evidence and recommendations from the Association of Surgeons in Training. Int. J. Surg. 10(8), 393–398 (2012)
9. Windsor, J.A.: Role of simulation in surgical education and training. ANZ J. Surg. 79(3), 127–132 (2009)
10. Motola, I., Devine, L.A., Chung, H.S., Sullivan, J.E., Issenberg, S.B.: Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82. Med. Teach. 35(10), e1511–e1530 (2013)
11. Hoogenes, J., Wong, N., Al-Harbi, B., Kim, K.S., Vij, S., Bolognone, E., Quantz, M., Guo, Y., Shayegan, B., Matsumoto, E.D.: A randomized comparison of 2 robotic virtual reality simulators and evaluation of trainees’ skills transfer to a simulated robotic urethrovesical anastomosis task. Urology 111, 110–115 (2018)
12. Barsuk, J.H., McGaghie, W.C., Cohen, E.R., Balachandran, J.S., Wayne, D.B.: Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J. Hosp. Med. 4(7), 397–403 (2009)
13. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, Cambridge (1991)
14. Chaudhry, A., Sutton, C., Wood, J., Stone, R., McCloy, R.: Learning rate for laparoscopic surgical skills on MIST VR, a virtual reality simulator: quality of human-computer interface. Ann. R. Coll. Surg. Engl. 81(4), 281–286 (1999)
15. Huber, T., Wunderling, T., Paschold, M., Lang, H., Kneist, W., Hansen, C.: Highly immersive virtual reality laparoscopy simulation: development and future aspects. Int. J. Comput. Assist. Radiol. Surg. 13(2), 281–290 (2018)
16. McGaghie, W.C., Issenberg, S.B., Petrusa, E.R., Scalese, R.J.: A critical review of simulation-based medical education research: 2003–2009. Med. Educ. 44(1), 50–63 (2010)
17. Rogers, Y., Sharp, H., Preece, J.: Interaction Design—Beyond Human Computer Interaction, 3rd edn. Wiley, West Sussex (2012)
18. Interaction Design Foundation: Useful, Usable, and Used: Why They Matter to Designers. Retrieved on June 5th 2018 from https://www.interaction-design.org/literature/article/useful-usable-and-used-why-they-matter-to-designers (2018)
19. Tractinsky, N.: The usability construct: a dead end? Hum.-Comput. Interact. 33(2), 131–177 (2018)
20. Merriam-Webster Dictionary: Useful. Retrieved on June 5th 2018 from https://www.merriam-webster.com/dictionary/useful (2018)
21. Cambridge Dictionary: Useful. Retrieved on June 5th 2018 from https://dictionary.cambridge.org/dictionary/english/useful (2018)
22. Oxford Dictionary: Useful. Retrieved on June 5th 2018 from https://en.oxforddictionaries.com/definition/useful (2018)
23. Merriam-Webster Dictionary: Usable. Retrieved on June 5th 2018 from https://www.merriam-webster.com/dictionary/usable (2018)
24. Cambridge Dictionary: Usable. Retrieved on June 5th 2018 from https://dictionary.cambridge.org/dictionary/english/usable (2018)
25. Oxford Dictionary: Usable. Retrieved on June 5th 2018 from https://en.oxforddictionaries.com/definition/usable (2018)
26. Merriam-Webster Dictionary: Used. Retrieved on June 5th 2018 from https://www.merriam-webster.com/dictionary/used (2018)
27. Cambridge Dictionary: Used. Retrieved on June 5th 2018 from https://dictionary.cambridge.org/dictionary/english/used (2018)
28. Oxford Dictionary: Used. Retrieved on June 5th 2018 from https://en.oxforddictionaries.com/definition/used (2018)
29. Merriam-Webster Dictionary: Accepted. Retrieved on June 5th 2018 from https://www.merriam-webster.com/dictionary/accepted (2018)
30. Cambridge Dictionary: Accepted. Retrieved on June 5th 2018 from https://dictionary.cambridge.org/dictionary/english/accepted (2018)
31. Oxford Dictionary: Accepted. Retrieved on June 5th 2018 from https://en.oxforddictionaries.com/definition/accepted (2018)
32. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13(3), 319–340 (1989)
33. Bowen, W.: The puny payoff from office computers. In: Forester, T. (ed.) Computers in the Human Context: Information Technology, Productivity, and People. MIT Press, Cambridge (1986)
34. Patel, H., Sharples, S., Letourneur, S., Johansson, E., Hoffmann, H., Lorissone, J., Salua, D., Stefanif, O.: Practical evaluations of real user company needs for visualization technologies. Int. J. Hum. Comput. Stud. 64(3), 267–279 (2006)
35. Sinclair, M.A.: Participative assessment. In: Wilson, J.R., Corlett, N.E. (eds.) Evaluation of Human Work. CRC Press, Taylor & Francis Group, Boca Raton (2005)
36. Shepherd, A., Stammers, R.B.: Task analysis. In: Wilson, J.R., Corlett, N.E. (eds.) Evaluation of Human Work. CRC Press, Taylor & Francis Group, Boca Raton (2005)
37. Sharples, S., Stemon, A.W., D’Cruz, M., Patel, H., Cobb, S., Yates, T., Saikayasit, R., Wilson, J.R.: Human factors of virtual reality—where are we now? In: Pikaar, R.N., Koningsveld, E.A.P., Settels, P.J.M. (eds.) Meeting Diversity in Ergonomics. Elsevier, Oxford (2007)
38. Nichols, S., Haldane, C., Wilson, J.R.: Measurement of presence and its consequences in virtual environments. Int. J. Hum. Comput. Stud. 52(3), 471–491 (2000)
39. Cobb, S.V.G., Nichols, S., Ramsey, A., Wilson, J.R.: Virtual reality-induced symptoms and effects (VRISE). Presence 8(2), 169–186 (1999)
40. Slater, M.: Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B 364, 3549–3557 (2009)
41. Draper, J.V., Kaber, D.B., Usher, J.M.: Speculations on the value of telepresence. CyberPsychol. Behav. 2(4), 349–362 (1999)
42. Slater, M. (n.d.): Notes on presence. Retrieved March 1, 2014 from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.3517&rep=rep1&type=pdf
43. Witmer, B.G., Singer, M.J.: Measuring presence in virtual environments: a presence questionnaire. Presence 7(3), 225–240 (1998)
44. Slater, M.: Measuring presence: a response to the Witmer and Singer presence questionnaire. Presence 8(5), 560–565 (1999)
45. Stanney, K.M., Mourant, R.R., Kennedy, R.S.: Human factors issues in virtual environments: a review of the literature. Presence: Teleoper. Virtual Environ. 7(4), 327–351 (1998)
46. Cao, C.G., Zhou, M., Jones, D.B., Schwaitzberg, S.D.: Can surgeons think and operate with haptics at the same time? J. Gastrointest. Surg. 11(11), 1564–1569 (2007)
47. Delp, S.L., Loan, P., Basdogan, C., Rosen, J.M.: Surgical simulation: an emerging technology for training in emergency medicine. Presence: Teleoper. Virtual Environ. 6(2), 147–159 (1997)
48. Kaber, D.B., Zhang, T.: Human factors in virtual reality system design for mobility and haptic task performance. Rev. Hum. Fact. Ergon. 7(1), 323–366 (2011)
49. Kanumuri, P., Ganai, S., Wohaibi, E.M., Bush, R.W., Grow, D.R., Seymour, N.E.: Virtual reality and computer-enhanced training devices equally improve laparoscopic surgical skill in novices. J. Soc. Laparoendosc. Surg. 12(3), 219–226 (2008)
50. Panait, L., Akkary, E., Bell, R.L., Roberts, K.E., Dudrick, S.J., Duffy, A.J.: The role of haptic feedback in laparoscopic simulation training. J. Surg. Res. 156(2), 312–316 (2009)
51. Bajka, M., Tuchschmid, S., Streich, M., Fink, D., Szekely, G., Harders, M.: Evaluation of a new virtual-reality training simulator for hysteroscopy. Surg. Endosc. 23(9), 2026–2033 (2009)
52. Lemole, G.M., Banerjee, P.P., Luciano, C.: Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery 61(1), 142–148 (2007)
53. Neubauer, A., Wolfsberger, S.: Virtual endoscopy in neurosurgery: a review. Neurosurgery 72(Suppl 1), 97–106 (2013)
54. Schulze, F., Bühler, K., Neubauer, A.: Intra-operative virtual endoscopy for image guided endonasal transsphenoidal pituitary surgery. Int. J. Comput. Assist. Radiol. Surg. 5(2), 143–154 (2010)
55. Choudhury, N., Gélinas-Phaneuf, N., Delorme, S.: Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills. World Neurosurg. 80(5), e9–e19 (2013)
56. Cohen, A.R., Lohani, S., Manjila, S., Natsupakpong, S., Brown, N., Cavusoglu, M.C.: Virtual reality simulation: basic concepts and use in endoscopic neurosurgery training. Child’s Nervous Syst. 29(8), 1235–1244 (2013)
57. Rosseau, G., Bailes, J., del Maestro, R., Cabral, A., Choudhury, N., Comas, O., Debergue, P., De Luca, G., Hovdebo, J., Jiang, D., Laroche, D., Neubauer, A., Pazos, V., Thibault, F., Diraddo, R.: The development of a virtual simulator for training neurosurgeons to perform and perfect endoscopic endonasal transsphenoidal surgery. Neurosurgery 73(Suppl 1), 85–93 (2013)
58. Pheasant, S., Haslegrave, M.: Bodyspace. CRC Press, Boca Raton (2006)
59. Zheng, B., Martinec, D.V., Cassera, M.A., Swanström, L.L.: A quantitative study of disruption in the operating room during laparoscopic antireflux surgery. Surg. Endosc. 22(10), 2171–2177 (2008)
60. Cobb, S.V.G., Nichols, S., Ramsey, A., Wilson, J.R.: Virtual reality-induced symptoms and effects (VRISE). Presence 8(2), 169–186 (1999). https://doi.org/10.1162/105474699566152
61. Nichols, S., Haldane, C., Wilson, J.R.: Measurement of presence and its consequences in virtual environments. Int. J. Hum. Comput. Stud. 52(3), 471–491 (2002). https://www.sciencedirect.com/science/article/abs/pii/S1071581999903439

Chapter 5

Four-Component Instructional Design Applied to a Game for Emergency Medicine

Tjitske J. E. Faber, Mary E. W. Dankbaar, and Jeroen J. G. van Merriënboer

Abstract The ABCDE method, used internationally to treat seriously ill patients, is a guideline for performing the complex skill of resuscitation that is commonly trained in face-to-face courses. In the abcdeSIM game, used as a preparation for these courses, players treat patients in a virtual emergency department. We used the Four-Component Instructional Design theory (4C/ID) to redesign the existing game. In this chapter, we explain why the game was redesigned and how the components of this instructional design theory can be applied to designing a serious game for medical education.

5.1 Background and Significance

Caring for acutely ill patients is a demanding task in which doctors have to combine medical knowledge with procedural skills, problem-solving skills, and communication skills. Proficiency in these skills is often directly related to patient safety. For the young doctor or medical student, the endeavor can be daunting when not sufficiently trained. The ABCDE method is used internationally to assess and treat acutely ill patients. By applying the principle of ‘treat first what kills first’, the health practitioner can structure the approach to the acutely or seriously ill patient using a simple mnemonic: Airway, Breathing, Circulation, Disability, Exposure (ABCDE). The ABCDE mnemonic is taught to medical students, physicians, and nurses in emergency medicine courses across different contexts, including trauma, medicine, obstetrics, and pediatrics. Although such face-to-face courses are generally effective, there is room for improvement. Costs are high, due to a high faculty-to-student ratio and being several days away from the hospital to ensure sufficient

T. J. E. Faber (B) · M. E. W. Dankbaar
Erasmus MC, University Medical Center Rotterdam, Institute of Medical Education Research Rotterdam, Rotterdam, The Netherlands
e-mail: [email protected]

J. J. G. van Merriënboer
School of Health Professions Education, Maastricht University, Maastricht, The Netherlands

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_5

65

66

T. J. E. Faber et al.

practice in simulation training. Another issue with training in a one-day course is that there is less opportunity for distributed practice [4]. E-learning may be used to supplement courses and potentially decrease costs because, once developed, e-learning is very cost-effective for teaching large groups. However, when comparing a one-day course supplemented with e-learning with a conventional two-day course in ALS (Advanced Life Support) training, the e-learning group scored lower pass rates on cardiac arrest simulation tests [25]. Results on other skill and knowledge tests were similar. This suggests that the blended course was non-inferior for knowledge and technical skills, but less suitable for teaching how to integrate knowledge, procedural and communication skills in a cardiac arrest simulation test. Games can be used in combination with face-to-face training and have the potential to teach complex cognitive skills in an engaging, flexible and patient-safe way [15]. The abcdeSIM (Erasmus MC/VirtualMedSchool 2012; more detailed description at https://virtualmedschool.com/abcdesim) is an educational simulation game developed to prepare residents for face-to-face emergency care training. The game was developed in close collaboration between medical practitioners, game designers, and educationalists. The player takes on the role of a physician presented with an acutely ill patient in a virtual emergency department. A virtual nurse provides a brief handover containing information on the patient’s condition. All the tools and information available in a real-life emergency department are available to the player. The player can perform a physical examination, start treatments (e.g. insert an IV cannula, start high-flow oxygen), administer medication and order additional diagnostic tests (e.g. laboratory testing or a chest X-ray). In fifteen minutes, the player must complete a full ABCDE assessment of the patient and start the necessary treatments.
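This chapter describes abcdeSIM's score as rewarding acceptable ABCDE decisions, penalizing unnecessary interventions, and crediting faster completion. A purely illustrative sketch of such a scheme follows; every weight, the time limit, and the function itself are invented for illustration and are not abcdeSIM's actual scoring.

```python
# Illustrative scoring sketch: points for acceptable decisions,
# penalties for unnecessary interventions, and a speed bonus.
# All weights and the time limit are invented, not abcdeSIM's values.
def game_score(acceptable_decisions, unnecessary_interventions,
               seconds_used, time_limit=900,
               points_per_decision=10, penalty=5, bonus_per_minute=2):
    score = acceptable_decisions * points_per_decision
    score -= unnecessary_interventions * penalty
    minutes_left = max(0, time_limit - seconds_used) // 60
    score += minutes_left * bonus_per_minute  # reward finishing early
    return max(0, score)  # never report a negative score

game_score(12, 2, 600)  # 12 good decisions, 2 unnecessary, 10 minutes used
```

The structure, not the numbers, is the point: a decision-quality term, an over-treatment penalty, and a time bonus together discourage the shotgun strategy of ordering every possible intervention.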
Several patient cases are available with different medical conditions and levels of sickness. Vital parameters and the condition of the patient are generated by a complex physiological model that is influenced by the player’s actions, which gives the scenario a realistic feel. After completing the assessment and starting treatment, the player can decide to move on to the ‘secondary survey’. Players are then provided with feedback on their actions and a game score, as well as a narrative on how the patient fared after their care. The game score depends on the number of acceptable decisions made according to the ABCDE approach; unnecessary interventions subtract points from the final score, and completing a case faster earns additional credit. An image of the game interface can be found in Fig. 5.1.

Family practice residents who played the abcdeSIM game before face-to-face training spent more time on self-study and showed improved clinical performance [7, 8]. However, fourth-year medical students playing the game did not perform better than students who studied the same cases in a text-based format without game elements. The students playing the game showed higher motivation and experienced higher cognitive load, but did not perform better on assessment of their ABCDE skills [7, 8]. Why do more experienced (acute care) residents show improved performance after using the game, while students, although very motivated to play, fail to benefit from it? An explanation can be found in the difference in prior knowledge between the two groups: learners who have little prior knowledge will learn less and require more instruction and support. Another explanation

5 Four-Component Instructional Design Applied to a Game …

67

Fig. 5.1 In abcdeSIM, the learner is presented with a virtual patient in an emergency department. All tools and options available in the real-life situation are represented in the virtual environment

could be the expertise reversal effect [16]: designs and techniques that are effective for low-knowledge individuals can lose their effectiveness and even have negative consequences for more proficient learners. The abcdeSIM is an open learning environment with little support, which makes it more challenging for novices than for experts to learn from the game. Perhaps the students playing abcdeSIM experience too little support and guidance in the game, not knowing where to start improving their skills. How can we offer more support and guidance in an educational game without detracting from the game experience?

Instructional design theories aim to help develop more effective instruction. The Four-Component Instructional Design (4C/ID) model is a whole-task learning approach based on cognitive psychology. It provides a framework for developing educational or training programs aimed at the acquisition of complex cognitive skills, or complex learning. Complex learning is "the integration of knowledge, skills, and attitudes; coordinating qualitatively different constituent skills; and transferring what was learned to daily life and work" [24]. The treatment of an acutely ill patient is an example of a complex skill. 4C/ID has been recommended for designing medical education [35] and for designing educational games [14], but we found no records of educational games for medical education based on 4C/ID. We therefore set out to redesign abcdeSIM using current insights from instructional design and game design. In this chapter, we describe the elements of 4C/ID and their theoretical and current applications in educational games and medical education. Next, we show how we redesigned the existing abcdeSIM game according to


T. J. E. Faber et al.

4C/ID, and present an outline for the evaluation process for the redesigned game. Finally, we will review recommendations for applying 4C/ID to similar projects.

5.2 Game-Based Learning and Four-Component Instructional Design

5.2.1 Learning in a Game Environment

Commercial games, even when created primarily to entertain, are commonly designed to be very effective instructional media; the goal of the instruction for the majority of commercially available games is 'to learn to play the game' (e.g. to navigate the environment, equip and use weaponry, outwit opponents and solve puzzles). They are usually immensely effective in reaching this goal, within the game context, without recourse to direct instruction outside the game [12, 13]. Teachers and instructional designers attempt to harness the power of games to achieve external learning goals. They do this either by creating games for learning, sometimes referred to as serious, educational or instructional games, or by applying gamification, the use of game elements in non-game settings [18], to conventional teaching programs. A detailed discussion of the mechanisms behind game-based learning and a thorough overview of these mechanisms and relevant design elements can be found elsewhere [1, 11–13]. In the following paragraphs, we highlight a number of mechanisms used in games to enhance learning in relation to the 4C/ID model.

5.2.2 Four-Component Instructional Design

The four components of instructional design as described by van Merriënboer and Kirschner [24] are learning tasks grouped in complexity levels, supportive information, procedural information and part-task practice. We will provide a brief overview of each component in turn, and highlight its applicability and relevance to the game context.

5.2.2.1 Learning Tasks and Task Classes

Learning tasks engage the learner in whole-task practice, confronting the learner with all or most of the constituent skills important for performing the complex skill, including their associated knowledge and attitudes. Learning tasks are authentic whole-task experiences based on real-life tasks. These experiences can include case studies, projects, problems, scenarios and so forth. The learning tasks stimulate learners to construct cognitive schemata, specifically mental models that allow for


reasoning within the domain, and cognitive strategies that guide problem-solving in the domain [24]. Focusing on whole tasks facilitates the transfer of skills learned in the educational setting to clinical practice. A similar mechanism is used in computer and video games, which James Paul Gee refers to as System Thinking [12]. Each element of a game fits into the overall system. This allows players to get a feel for the "rules of the game" – what works and what doesn't, in other words, to construct a mental model. For educational games, this means that if the rules and the game tasks are aligned clearly with the instructional goals, getting a learner to learn the rules of the game will support them in reaching the learning objectives.

However, if a task presented at the start of training is too complex, it causes cognitive overload for the learner, which impairs learning and performance. By grouping learning tasks into task class levels, the student starts with less complex tasks and progresses toward more complex tasks [24]. Task class levels reflect the concept of cycles of learning [13] or cycles of expertise [12], which are used on the one hand to engage players through the principle of mastery [28], and on the other hand to force the player to practice a skill set to an automatic level of mastery. The use of levels to gradually expose the player to increasingly complex tasks or environments is ubiquitous in entertainment games.

Within a task class, support and guidance are offered at a high level initially and decreased in subsequent tasks. After mastering one task class, the learner moves to the next, again starting with a high level of support and guidance. Task support provides the learner with assistance with the products involved in the training (product-oriented). This may include asking the learner to study a worked-out example or case study, or completing an intermediate solution [31].
Solution-process guidance assists learners with the processes inherent to solving the learning task (process-oriented). This may include a modeling example, in which an expert performs the task while simultaneously explaining why it is performed that way. While solving unfamiliar problems, experts use Systematic Approaches to Problem Solving (SAPs): general prescriptive plans that specify the goals and subgoals that must be reached when solving problems in a particular domain. The ABCDE approach is an example of an SAP used to solve the problem of assessing and stabilizing a critically ill patient.

The concept of first confronting a learner with less complex whole-task experiences and gradually increasing complexity aligns with the concept of fish tanks in games [12]. A fish tank can be described as a simplified ecosystem that clearly displays some critical variables and their interactions that might otherwise remain obscured in the highly complex real-world ecosystem. Fish tanks are often offered to players as tutorials or in early levels. In these scaled-down versions of a game, key elements and relationships are rendered salient, allowing the learner to take the first steps towards understanding the game as a whole system. Another relevant analogy is found in Lloyd Rieber's microworlds [27]. A microworld presents the learner with a simple case of the domain and must match the learner's cognitive and affective state. In fact, a learner should require little or no training to begin using a microworld, much like a child does not require training to use a sandbox.

5.2.2.2 Supportive Information

Supportive information is domain-general: it is not task-specific but task-class specific. It explains to learners how a learning domain is organized and how to approach problems in that domain, supporting the learner in developing general schemata and problem-solving approaches. Typically, supportive information is presented before learners work on a new task class and kept available for reference while they work on it. Supportive information is important for learning non-recurrent constituent skills, that is, skills that cannot be automated but can be performed based on cognitive schemata, such as problem-solving or reasoning.

Huang and Johnson [14] recommend carefully designing supportive information and determining the amount of supportive information available by analyzing learners' performance on prior tasks. They caution that providing all supportive information without performance analysis may reduce the level of challenge perceived by learners. They suggest using supportive information to provide cognitive feedback, that is, the return of some measure of the output of the learner's cognitive processes. This feedback assists learners in reflecting on the quality of their problem-solving processes and solutions, allowing them to construct more effective cognitive schemata to improve future performance. Also, if the game takes place in an unrealistic context, supportive information may help learners operate effectively in the fantasy world [14]. An example of supportive information available to players in commercial games can be found in online user communities, where players share information with each other.

5.2.2.3 Procedural Information

Procedural information supports the learner in performing recurrent aspects of a skill, that is, aspects that are always performed in the same way and can become routines. This information, intended to support rule automation, should be presented in a just-in-time (JIT) manner: making the information available during task performance allows it to be embedded in cognitive rules via rule formation [24]. Well-designed games also provide procedural information "just in time" and "on demand", so that the user can start playing without referring to a manual. During gameplay, procedural information can, for example, be provided by the system (a new tool is highlighted when it becomes available for use), a non-playable character (an NPC tells you where a valuable item may be found), or objects in the game (you push a door and it does not open) [13]. Soon after obtaining new information, the player should be given an opportunity to use it [12].

5.2.2.4 Part-Task Practice

Finally, part-task practice can be necessary for recurrent aspects of a task for which a high level of automaticity is required after the training. It is typically applied for recurrent constituent skills that are critical in terms of safety (e.g. recognizing an obstructed airway), skills that enable the performance of other skills (e.g. measuring blood pressure), or skills that have to be performed simultaneously with other skills (e.g. being able to work under sterile conditions). Part-task practice can only begin after the aspects of the task have been introduced in a meaningful learning task [24]. Huang and Johnson [14] suggest part-task practice can help achieve rule automation in games. Gee [12] emphasizes that practicing skills in games is most effective and appealing to players when the skill is part of accomplishing things they need and want to accomplish. He states that in well-designed games, skills are posited as a strategy for accomplishing a goal, and only secondarily as a set of discrete skills. This aligns with the well-established notion of deliberate practice: when the value of automating a skill is clear, the learner can choose to engage in focused, repetitive practice of a well-defined task at an appropriate level of difficulty, while meticulously monitoring their performance through feedback from educational sources [22]. These four components are the building blocks for training programs designed according to the 4C/ID model.

5.2.3 4C/ID in Educational Games

Huang and Johnson [14] have proposed an educational game design framework based on 4C/ID and cognitive load theory to enhance the learner experience and the development of transferable cognitive schemata. They state that the 4C/ID model is suitable for designing and researching educational games because of its affordability for designing complex learning environments; its flexibility for nonlinear and compact design sequences; its scalability for design projects in various groups; the validity and reliability with which it measures learning outcomes; and its emphasis on performance transfer. Subsequently, 4C/ID has been used in designing games for learning in several fields.

For a vocational mechatronics training programme, Lukosch et al. [20, 21] designed a 4C/ID-based educational game, combining a simulated workplace containing assignments with a sandbox environment in which students could freely experiment and display their skills. They argue that the clear structure provided by 4C/ID, with recognizable tasks, steps, and actions, should lead to fast acceptance of a game amongst teachers, even those unfamiliar with educational gaming.

Enfield [10] used 4C/ID to redesign the Diffusion Simulation Game, a game for teaching the application of strategies for diffusion of an innovation, in a curriculum related to change management. In six iterative cycles, they redesigned an existing game found ineffective in meeting its intended learning objectives. The redesign involved educationalists as well as instructional design experts, and the content was


validated by subject matter experts. The designers intended to reduce cognitive load and increase learning. Post-test scores provided evidence that learners who played the redesigned game successfully met most learning objectives. The authors found that 4C/ID, while providing guidance on when to present information, did not provide specific guidance on how to present information within a digital environment. They also experienced a lack of guidance related to concepts that learners find difficult to accept: players were unwilling to use strategies that did not align with their own beliefs. Both issues were identified and solved in consecutive design cycles. The authors conclude that using 4C/ID within an interactive system design process provides fundamental guidance in redesigning an educational game. They suggest providing an initial learning task prior to any supportive information. This learning task may serve to familiarize the learner with the gameplay, increase the appeal of the game, and provide an experience to reflect back on when supportive information is introduced.

The CHERMUG (Continuing Higher Education in Research Methods Using Games) project aimed to develop a game to support students in developing an understanding of research methods and statistics [33, 34]. The authors performed a cognitive task analysis and used their findings in the design of a set of mini-games applying 4C/ID. They found the 4C/ID method a welcome aid in designing the games, but caution that more research is required to determine to what extent 4C/ID is suitable for various kinds of educational games, including more complex games.

5.2.4 4C/ID in Medical Education

For medical education, a systematic review of instructional design features in simulation-based interventions identified the following best practices: range of difficulty, repetitive practice, distributed practice, cognitive interactivity, multiple learning strategies, individualized learning, mastery learning, feedback, longer time spent learning, and clinical variation [4]. These features are all present in a 4C/ID-based instructional format and align with the principles of game-based learning discussed above. Using the principles of 4C/ID to develop whole-task centered medical education is recommended to increase transfer from the curriculum to the workplace, and to meet the call for more explicit use of evidence-based principles in the practice of medical education [35].

4C/ID has been investigated in medical education research in various contexts, including clinical reasoning for undergraduate dental students [26], evidence-based medicine in classroom and clinical settings [23], communication skills for nursing students [29], and case presentation for medical students [6]. Tjiam et al. [30, 32] describe the design of non-game simulator-based skills training using a cognitive task analysis integrated with 4C/ID, analogous to the blueprint for educational games by Huang and Johnson described above. Pertaining to emergency care, a training format for post-partum hemorrhage simulation training


based on 4C/ID increased the speed and number of executed tasks when compared with the best practice training format [9]. To our knowledge, no educational games for medical education or emergency care have been designed according to 4C/ID. Thus, we seek to answer the following research question: How can 4C/ID theory be applied to redesign an existing educational game for training complex skills in emergency care? We hypothesize that to apply the 4C/ID model to an existing game while preserving the game experience, we will require input from various perspectives including game designers. Not all elements may be as easily translated to the game context.

5.3 Redesigning a Game for Emergency Care Using 4C/ID

To improve the instructional value of abcdeSIM using 4C/ID, we investigated and redesigned the game, looking closely at each component in turn.

5.3.1 Learning Tasks and Task Classes

The original abcdeSIM game contains five scenarios featuring six different patient cases. The patients were presented in a fixed order that was, true to the nature of an emergency department, not organized according to complexity or disease. Players could only access the next case after completing the previous one. The fifth scenario involved two patients and was intended to stress the importance of a quick assessment to determine which patient requires the most urgent care. To improve the learning tasks, a content expert classified the available patient cases into three complexity levels based on the level of physiological disturbance and the interventions required to successfully complete the case. Two additional cases were added to ensure adequate distribution of cases across the complexity levels. Table 5.1 outlines the cases and complexity levels.
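As an illustration, the grouping of cases into ordered task classes could be represented with a simple data structure. The field names and the abbreviated case list below are our own sketch, not part of the game's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatientCase:
    description: str   # brief clinical presentation
    diagnosis: str
    disturbance: str   # physiological disturbance: Mild / Moderate / Severe
    phases: tuple      # ABCDE phases requiring intervention
    complexity: str    # task class: Low / Moderate / High

# Three of the cases, abbreviated for illustration
CASES = [
    PatientCase("painful swollen leg", "Deep venous thrombosis",
                "Mild", (), "Low"),
    PatientCase("severe shortness of breath", "Acute exacerbation of COPD",
                "Moderate", ("B",), "Low"),
    PatientCase("shortness of breath after wasp sting", "Anaphylactic shock",
                "Severe", ("A", "B", "C"), "High"),
]

def task_classes(cases):
    """Group cases into ordered task classes, low to high complexity."""
    return {level: [c for c in cases if c.complexity == level]
            for level in ("Low", "Moderate", "High")}
```

Ordering the dictionary from low to high complexity mirrors the 4C/ID progression through task classes.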

5.3.2 Support and Guidance

As described above, experts solving unfamiliar problems use Systematic Approaches to Problem Solving (SAPs), and the ABCDE approach is the SAP for assessing and stabilizing a critically ill patient. We created time-based reminders on essential case-specific interventions to help the learner stay on track and follow the ABCDE approach. An example is shown in Fig. 5.2. Learners start the game by completing a click-through tutorial to familiarize themselves with the controls. Next, they watch a modeling example: a screencast of an expert performing the in-game assessment of a low-complexity case with a simultaneous explanation. They then play the low-complexity cases, each with fewer reminders than the previous one. The final case in a task class is played without reminders.

Table 5.1 An overview of cases in the redesigned abcdeSIM game

| Case description | Diagnosis | Physiological disturbance | Phases requiring intervention | Complexity level |
|---|---|---|---|---|
| 46-year-old female presenting with a painful swollen leg | Deep venous thrombosis | Mild | None | Low |
| 56-year-old male presenting with severe shortness of breath | Acute exacerbation of COPD | Moderate | B | Low |
| 56-year-old male, returning after several weeks with fever and shortness of breath | Acute exacerbation of COPD triggered by bacterial pneumonia | Moderate | B | Moderate |
| 56-year-old female presenting with upper abdominal pain | Acute myocardial infarction | Moderate | B, C | Moderate |
| 32-year-old male presenting with hematemesis (vomiting blood) and dizziness | Hemorrhagic shock due to gastrointestinal bleeding | Severe | B, C | Moderate |
| 35-year-old male presenting with fever and shortness of breath | Pneumosepsis | Severe | B, C | High |
| 46-year-old female presenting with seizures and a partially obstructed airway | Subarachnoid hemorrhage | Severe | A, D | High |
| 39-year-old female presenting with shortness of breath after being stung by a wasp | Anaphylactic shock | Severe | A, B, C | High |


Fig. 5.2 Time-based reminders (top right) help the learner stay on track. The reminder is displayed only when the learner has failed to perform the required intervention at the specified time
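The reminder mechanism can be sketched as a simple check run each game tick: a reminder fires only when its deadline has passed and the learner has not yet performed the required intervention. The schedule entries and intervention names below are hypothetical, not taken from the game:

```python
def due_reminders(elapsed_s, schedule, performed):
    """Return reminder messages whose deadline has passed while the
    required intervention has not yet been performed by the learner."""
    return [message for deadline, intervention, message in schedule
            if elapsed_s >= deadline and intervention not in performed]

# Hypothetical reminder schedule for one case (seconds into the scenario)
SCHEDULE = [
    (60, "high_flow_oxygen", "Consider starting high-flow oxygen (B)."),
    (180, "iv_cannula", "The patient still has no IV access (C)."),
]
```

Decreasing the number of entries in the schedule from case to case would implement the fading support described above.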

5.3.3 Supportive Information

Supportive information is offered in two ways. An e-module with detailed information on the ABCDE method was already provided alongside the game tutorial and could be accessed between cases. To give players the opportunity to use this supportive information during play, we created a pause button, which prevents the patient from deteriorating (as would normally happen in the absence of interventions). This button may also serve as a way for the learner to deal with cognitive overload, or to draw on other types of help, such as peer or faculty support, depending on how the game is embedded in the curriculum. As additional support for schema construction, we added a transfer form to the cases, prompting the learner to describe the patient's condition according to the ABCDE structure. The transfer form is shown in Fig. 5.3.

5.3.4 Procedural Information

Some procedural information was already embedded in abcdeSIM. For example, the virtual nurse provided immediate corrective feedback if the player supplied too little oxygen for the chosen oxygenation device, or selected inappropriate invasive treatments such as a chest tube or neck brace. We designed three additional procedural supports. First, when selecting a tool, just-in-time information about using


Fig. 5.3 A transfer form prompts the learner to describe the patient's condition according to the ABCDE structure

the tool appears, to enable automation of routine aspects of the task (Fig. 5.4). Second, to facilitate tool use, we enabled hit areas to be shown when selecting an instrument (Fig. 5.5). Finally, on-demand feedback during the case from a virtual supervisor enables the learner to see where they can improve their performance (Fig. 5.6).
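The just-in-time tool information with its "do not show again" checkbox (Fig. 5.4) amounts to a small amount of per-tool state. A minimal sketch, with hypothetical tool names and tip text:

```python
class ToolTips:
    """Just-in-time tool information: a tip is shown when a tool is
    selected, unless the learner has ticked 'do not show again'."""

    def __init__(self, tips):
        self.tips = tips          # tool name -> usage text
        self.suppressed = set()   # tools the learner has muted

    def on_select(self, tool, dont_show_again=False):
        """Return the tip to display, or None if suppressed or unknown."""
        tip = None if tool in self.suppressed else self.tips.get(tool)
        if dont_show_again:
            self.suppressed.add(tool)
        return tip
```

The tip is still returned on the selection where the learner ticks the box; only subsequent selections are silent.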

5.3.5 Part-Task Practice

We did not identify opportunities for part-task practice within the game. However, the ABCDE approach as a whole contains several skills that would benefit from part-task practice, such as placing an IV cannula or intraosseous needle, performing a physical examination, or interpreting vital signs. This practice could be implemented alongside the game, in a blended (online and offline) curriculum.

5.3.6 Design Process and Challenges

All support options were designed in a collaborative effort between game developers, content experts, and educationalists. For example, when designing procedural support, the educationalist observed that students would select tools but were unable to quickly apply them in the correct way. The game developer looked for technical


Fig. 5.4 When selecting a tool, just-in-time information about using the tool appears. The learner can check a box to prevent the information from appearing the next time the tool is selected

Fig. 5.5 When a tool is selected, in this case a non-rebreathing mask, the hit area appears to facilitate application of the tool


Fig. 5.6 On-demand feedback during the case from a virtual supervisor enables the learner to see where they can improve their performance

options to provide more information to the students within the existing software and came up with several options. In a group discussion, the tool information pop-up windows were selected, and the content expert then wrote the tool descriptions. We implemented the support options so that any combination can be active at a time; this way, the support a player receives can decrease over the course of gameplay. We designed a standard game flow according to 4C/ID principles, in which players start with low-complexity tasks and high levels of support, work towards low complexity without support, then continue with moderate complexity and high support, et cetera.

During the redesign process, we encountered some issues. Working in an existing game meant there were limited technical options for adding functionality. A specific challenge was the amount of available visual space: the original game was designed to use almost all visual space in the game screen, so additional information provided in-game would stack with visuals necessary for the primary gameplay. We solved this by creating pop-up information windows for the transfer form, tool information, and supervisor feedback. While a pop-up window is visible, the simulation keeps running in the background and sounds keep playing. For the reminders, we felt this was not an appropriate interface design, since these messages come from the system instead of being requested (more or less consciously) by the learner. Therefore, the reminders appear in the top right corner, partly obscuring the crash cart. To remove a reminder, the learner must click an 'X' in its top right corner (see Fig. 5.2). We believe this will increase the chance that


they notice the message. However, this display mechanism may frustrate a player who wants to take something out of the crash cart quickly and is first confronted with a system message. Another option would be audio messages; these would have to be short and clear, for example, "IV fluids have run out." User testing will be necessary to evaluate whether the mechanisms we created support the learner as intended.

Second, increasing the number of patient cases proved challenging. There was limited display space available for adding patient information, so the cases could not be arbitrarily complex. Furthermore, as the virtual patients in abcdeSIM consist of over 200 separate photographs, adding a single patient requires creating a whole new stack of images. We solved this in two ways: by using cases from another version of abcdeSIM, negating the need for new images but requiring a substantial amount of visual design work to make the images fit the emergency department environment; and by allowing one patient to occur twice with a different condition. For new games, 3D graphics could be used instead of photographs to increase design flexibility, with the cautionary note that this will decrease visual fidelity.
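The standard game flow described in this section, in which support fades within each task class before the next, more complex class restarts with full support, can be sketched directly. The level and support labels are illustrative, not the game's own identifiers:

```python
from itertools import product

def standard_game_flow(task_classes=("low", "moderate", "high"),
                       support_levels=("high", "reduced", "none")):
    """4C/ID game flow: within each task class, support fades to none;
    the next, more complex task class then restarts with full support."""
    return list(product(task_classes, support_levels))
```

Because `itertools.product` varies the rightmost argument fastest, the sequence steps through all support levels of one task class before advancing to the next class.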

5.3.7 Plans for Evaluation

To assess the effect of the various support options and the viability of the chosen game flow, medical students will be invited to test the redesigned game in a mixed-method study. In the first phase of evaluation, we will combine cued retrospective interviewing with direct observation of gameplay to explore which support options are subjectively experienced as helpful and which cause frustration. We will triangulate these data using the System Usability Scale (SUS) [2] and the Questionnaire for User Interaction Satisfaction (QUIS) [3] to measure user experience, and a ten-item cognitive load questionnaire [19] to quantify cognitive load. Findings will be consolidated into points for further improvement and implemented. This will be the first assessment phase in a process of iterative improvements. In the second phase, we will investigate the learning curve by measuring performance improvements over time using trace data, that is, a log of all actions performed in the game by the user. We expect this will provide insight into how to implement which support options, to eventually improve the learning effect of the game.
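The SUS [2] has a fixed scoring rule that can be computed directly from the ten Likert responses; a minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten Likert items (1-5).
    Odd-numbered items contribute (r - 1), even-numbered items (5 - r);
    the sum of contributions is multiplied by 2.5 (Brooke, 1996)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i is 0-based, so i % 2 == 0
                for i, r in enumerate(responses))  # marks odd-numbered items
    return total * 2.5
```

A fully neutral questionnaire (all 3s) yields the midpoint score of 50.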

5.4 Discussion and Lessons Learned

In this project, we developed several support options for a serious game for emergency care skills, consistent with the 4C/ID model. Potential issues remain regarding the impact of these changes on the learning and especially the gaming experience. First, lowering the complexity of the initial levels may reduce challenge. Motivation for playing often arises from experiencing a state of flow, when an optimal


balance exists between challenge and required skill [5]. To maintain motivation, it is important to augment the perceived value of the task. This might be achieved by first exposing the student to a case without support, as suggested by Enfield [10]; however, too much challenge in relation to the player's abilities will decrease motivation. For optimal gameplay, a delicate balance must be sought between challenge and support. We will pursue this balance through rigorous testing.

A second concern is the added cognitive load resulting from the support mechanisms themselves. The increased load may be considered germane to the learning task, but if the support does not match the player's needs it may instead increase extraneous load, that is, the cognitive load caused by processes that do not contribute to learning but burden available working memory [17]. Offering 'just enough' support for a particular learner is challenging, as there are few indications of learning needs during play.

Finally, the need for support can differ for each player. Very specific supports, such as the time-based and intervention-based reminders, may not align with the player's needs: they may arrive too soon or too late, causing cognitive overload or frustration. By adapting the support to the actions and achievements in previous cases, it may be possible to tailor the support to the level of the player. This adaptivity could further increase effective learning in the game context.

In summary, this study demonstrates the application of the Four-Component Instructional Design model to a serious game for emergency care. Input from various perspectives, including educationalists, game designers, and content experts, was essential in implementing additional support and improving the learning tasks. The Four-Component Instructional Design model provided a structured approach for improving the instructional design of a serious game for emergency care medicine.
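One illustrative way to implement such adaptivity, entirely our own sketch rather than a feature of abcdeSIM, is to fade the support one step once the learner's recent case scores exceed a mastery threshold:

```python
SUPPORT_LADDER = ["high", "reduced", "minimal", "none"]

def next_support(current, recent_scores, threshold=0.8):
    """Fade support one step down the ladder when the mean of recent
    normalized case scores (0-1) reaches the mastery threshold;
    otherwise keep the current level. The ladder, scores, and
    threshold are illustrative assumptions."""
    if recent_scores and sum(recent_scores) / len(recent_scores) >= threshold:
        i = SUPPORT_LADDER.index(current)
        return SUPPORT_LADDER[min(i + 1, len(SUPPORT_LADDER) - 1)]
    return current
```

Tying the fade to demonstrated performance, rather than to a fixed case count, is one way to avoid reminders arriving too soon or too late for an individual player.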

5.5 Conclusion

The Four-Component Instructional Design model provides a structured approach for design choices in a serious game. Working in an existing game, we encountered some challenges, such as scarce visual space to display the reminders and tool information. Still, by following the 4C/ID principles and closely collaborating with a team of educationalists, content experts, and game designers, we were able to create several theoretically sound support options.

Acknowledgements We wish to acknowledge IJsfontein, a game design company in Amsterdam, the Netherlands, for providing input on suitable ways to implement support and for making the required technical adjustments to the game.


References

1. Arnab, S., Lim, T., Carvalho, M.B., et al.: Mapping learning and game mechanics for serious games analysis. Br. J. Educ. Technol. 46, 391–411 (2015)
2. Brooke, J.: SUS—A quick and dirty usability scale. Usability Eval. Ind. (1996)
3. Chin, J.P., Diehl, V.A., Norman, K.L.: Development of an instrument measuring user satisfaction of the human-computer interface. In: CHI '88: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 213–218 (1988)
4. Cook, D.A., Hamstra, S.J., Brydges, R., Zendejas, B., Szostek, J.H., Wang, A.T., Erwin, P.J., Hatala, R.: Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med. Teach. 35, e867–e898 (2012)
5. Csikszentmihalyi, M.: Flow: The Psychology of Optimal Experience. HarperPerennial (1991)
6. Daniel, M., Stojan, J., Wolff, M., et al.: Applying four-component instructional design to develop a case presentation curriculum. Perspect. Med. Educ. 7, 276–280 (2018)
7. Dankbaar, M.E.W., Alsma, J., Jansen, E.E.H., et al.: An experimental study on the effects of a simulation game on students' clinical cognitive skills and motivation. Adv. Health Sci. Educ. Theory Pract. 21, 505–521 (2016)
8. Dankbaar, M.E.W., Roozeboom, M.B., Oprins, E.A.P.B., et al.: Preparing residents effectively in emergency skills training with a serious game. Simul. Healthc. 12, 9–16 (2016)
9. de Melo, B.C.P., Falbo, A.R., Muijtjens, A.M.M., et al.: The use of instructional design guidelines to increase effectiveness of postpartum hemorrhage simulation training. Int. J. Gynecol. Obstet. 137, 99–105 (2017)
10. Enfield, J.: Designing an Educational Game with Ten Steps to Complex Learning. Doctoral dissertation. ProQuest LLC, Ann Arbor, MI (2012)
11. De Freitas, S., Van Staalduinen, J.-P.: A game-based learning framework linking game design and learning outcomes. In: Learning to Play: Exploring the Future of Education with Video Games (2009)
12. Gee, J.P.: Learning by design: good video games as learning machines. E-Learn. Digit. Media 2, 5–16 (2005)
13. Hirumi, A., Appelman, B., Rieber, L., Van Eck, R.: Preparing instructional designers for game-based learning: Part 1. TechTrends 54, 27–37 (2010)
14. Huang, W.D., Johnson, T.: Instructional game design using cognitive load theory. In: Ferdig, R.E. (ed.) Handbook of Research on Effective Electronic Gaming in Education, pp. 1143–1165 (2009)
15. Kalkman, C.J.: Serious play in the virtual world: can we use games to train young doctors? J. Grad. Med. Educ. 4, 11–13 (2012)
16. Kalyuga, S.: Expertise reversal effect and its implications for learner-tailored instruction. Educ. Psychol. Rev. 19, 509–539 (2007)
17. Kalyuga, S., Plass, J.L.: Evaluating and managing cognitive load in games. In: Ferdig, R.E. (ed.) Handbook of Research on Effective Electronic Gaming in Education. IGI Global (2009)
18. Kapp, K.M.: The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. Pfeiffer (2012)
19. Leppink, J., Paas, F., Van der Vleuten, C.P.M., et al.: Development of an instrument for measuring different types of cognitive load. Behav. Res. Methods 45, 1058–1072 (2013)
20. Lukosch, H., van Bussel, R., Meijer, S.: A game design framework for vocational education. Int. J. Soc. Behav. Educ. Econ. Bus. Ind. Eng. 6, 770–774 (2012)
21. Lukosch, H., van Bussel, R., Meijer, S.A.: A serious game design combining simulation and sandbox approaches, pp. 52–59. Springer, Cham (2014)
22. Macnamara, B.N., Hambrick, D.Z., Oswald, F.L.: Deliberate practice and performance in music, games, sports, education, and professions: a meta-analysis. Psychol. Sci. 25, 1608–1618 (2014)


T. J. E. Faber et al.


Chapter 6

A Review of Virtual Reality-Based Eye Examination Simulators

Michael Chan, Alvaro Uribe-Quevedo, Bill Kapralos, Michael Jenkin, Kamen Kanev, and Norman Jaimes

Abstract Eye fundus examination requires extensive practice to enable adequate interpretation of the anatomy observed as a flat image seen through the ophthalmoscope, a handheld device that allows for the non-invasive examination of the back of the eye. Mastering eye examination with an ophthalmoscope is difficult due to the intricate volumetric anatomy of the eye when seen as a two-dimensional image through the lens of the device. The lack of eye examination skills among medical practitioners is a cause for concern in today's medical practice, as misdiagnosis can result in improper or delayed treatment of life-threatening conditions such as glaucoma, high blood pressure, or diabetes, amongst others. Past and current solutions to the problem of ophthalmoscopy education have used pictures, illustrations, videos, cadavers, patients, and volunteers. More recently, simulation has provided a higher-end instrument to safely expose trainees to conditions that would otherwise be impossible to present for learning purposes. However, the costs associated with purchasing and maintaining modern simulators have led to complications related to their acquisition and availability. These shortcomings in eye examination simulation have led to research focusing on cost-effective tools using a breadth of solutions involving physical and digital simulators, ranging from mobile applications to virtual and augmented reality, to makerspace and practical eye models. In this chapter, we review direct ophthalmoscopy simulation models for medical training. We highlight the characteristics, limitations, and advantages presented by modern simulation devices.

Keywords Eye examination · Simulation · Virtual reality · Augmented reality

M. Chan · A. Uribe-Quevedo (B) Ontario Tech University, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada
e-mail: [email protected]
M. Chan e-mail: [email protected]
B. Kapralos maxSIMhealth, Ontario Tech University, Oshawa, ON, Canada e-mail: [email protected]
M. Jenkin York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada e-mail: [email protected]
K. Kanev Shizuoka University, 3 Chome-5-1 Johoku, Naka Ward, Hamamatsu, Shizuoka 432-8011, Japan e-mail: [email protected]
N. Jaimes Universidad Militar Nueva Granada, Cra11N101-80, Bogota, Colombia e-mail: [email protected]

© Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_6

6.1 Introduction

Despite doctors possessing reasonably sound knowledge of medicine, a number of studies, including the work of Fischer et al., have found doctors to be deficient with regard to clinical skill performance, problem-solving, and the application of knowledge to patient care [1]. These deficiencies are found across all aspects of medicine and have led to renewed interest in the way doctors are regulated and in the mechanisms used to train them. Of particular interest here is that the increased demand for patient safety has driven the adoption of simulation as a mechanism to reduce medical error, which is estimated to be the third leading cause of death in the United States [2].

The history of simulation in training and education spans many centuries and is widespread throughout various fields of human endeavour. Games such as chess represent perhaps the earliest attempts at wargaming, and the sport of jousting enabled knights to practice and hone their skills. The oldest description of simulation in health care education can be found in the Sushruta Samhita, a collection of medical texts written in approximately 500 CE. These writings describe 1100 illnesses, including their management, and approximately 300 surgical procedures. The collection also contains sections dedicated to the production and use of simulators. The simulators described in these passages are primarily made of natural materials, such as holes in moth-eaten wood to represent wounds for probing. A majority of these simulators would be categorised as part-task trainers today, although a whole-body patient simulator for skills practice is also described [3].

Signs of simulation use in medical and medical-related education have also been noted in ancient China, where the practice of acupuncture was taught through the use of life-sized, wax-coated bronze figures filled with water, invented by Wang Weiye, the court physician of Emperor Song, in 1023 CE [4]. The skills of the user were evaluated based on whether or not water leaked from the acupuncture point after needling. Alongside the simulators, a manual was printed and distributed by the central government health bureaucracy for students to use as a reference. However, the acupuncture channels described in Wang's manual were not aligned to any body structure because the study of anatomy was non-existent and dissection was forbidden [5].

Although simulation has been a prevalent component of medical education throughout human history, the systematic and sustained use of simulation in health
care education is more recent, dating to the start of the 18th century [6]. It was during this time period that the Chamberlen family, responsible for the invention of the obstetric forceps, lost their monopoly on instrumental deliveries, and more men (later known as men-midwives) expressed interest in attending births. Simulators were used to educate midwives and men-midwives on baby delivery and on how to manage more complicated births. The use of simulation increased throughout the following two hundred years, along with the recognition that appropriate education and training would lead to applications in other fields as technology advanced.

In order to help allay issues related to medical performance deficiency, medical education has shifted towards a system-based core curriculum that allows for the development of skills targeting patient safety [7]. One of the primary goals of simulation-based medical education (SBME) is a focus on learners obtaining and honing clinical psychomotor skills within the cognitive domain, in addition to developing skills within the affective domain (such as communication training with simulated patients) [2]. Although simulation does not guarantee learning, when used in the proper environment it can prove instrumental in the education and training of adult learners through experiential learning [2].

Photographs are a crucial training aid in ophthalmoscope training. They present a number of advantages over direct eye fundus examination: they allow the instructor and the trainee to visualise the same image, which enables better guidance and assessment than with an ophthalmoscope, where the trainee receives descriptions and directions, and reports orally on these, while having the sole view of the fundus [8]. In a retention study, Kelly et al.
[8] found that trainees prefer digital fundus photographs over direct ophthalmoscopy, with 20% of the trainees citing discouragement by clinical preceptors as a primary reason for not performing the full examination during training exercises. Trainee enthusiasm for the clinical usage of ocular fundus photography suggests that more widespread availability of non-mydriatic fundus photography could allow for more frequent and accurate examinations within the clinical setting [9]. Student preference for images led Kelly et al. to conclude that trainees preferred them because of their higher resolution, their larger size, and the absence of the patient and ophthalmoscope interactions that can complicate the examination assessment. Building on photography-based training, multimedia tools often include interactive mechanics that allow the instructor and student to share the same view of the eye fundus, with the objective of providing better guidance, feedback, and a full examination training experience [10].

In this chapter, we present a review of direct ophthalmoscopy (DO) simulators. The direct ophthalmoscopy (fundoscopy) examination procedure involves interpreting the intricate anatomy of the eye when viewed through the lens of an ophthalmoscope. DO is a difficult procedure to master, as it requires extensive practice to properly interpret the intricate anatomy of the eye [11]. van Velden et al. [12] proposed a series of three factors for ophthalmoscopy training: formal instruction, adequate practice time, and refresher training. Benbassat et al. [13] suggested that although different medical associations have varying expectations concerning what medical trainees should know, all students should be able to identify the red reflex and the optic disk, and to recognise signs of clinical emergencies and retinopathies. Our focus on DO
stems from our research in developing an eye examination training tool employing emerging technologies. The goal of this review is to highlight advances and trends in the field of eye examination, as current consumer-level technologies and makerspaces are becoming disruptive in terms of the availability of learning and training tools. Moreover, the eye examination is a routine procedure that is still relevant today [14].

6.2 Ophthalmoscopy Examination

Traditionally, education in the eye fundus examination begins with an introduction to the concepts associated with the semiology of the eye and the various pathologies related to the visual apparatus. Following this knowledge acquisition phase, a practical component takes place, where trainees learn and apply their knowledge towards identifying various fundus conditions by way of practice with classmates or through the use of digital photographs [15]. Digital photographs are regarded as a standard method of fundus examination practice, as trainees are able to analyse a variety of common and rare afflictions otherwise challenging to observe in real-life practice as a result of limited patient availability [16]. Photographs also enable the trainee and instructor to confirm specific aspects of the ophthalmoscope view, as both can visualise the same structures. In addition, students are expected to utilise direct ophthalmoscopes in real-life examination practice [10].

Yusuf et al. [11] identify limited practice as a concern in ophthalmoscope training. Inadequate levels of competency and proficiency in ophthalmoscope operation have been attributed to the limited time dedicated to eye examination training, which, when coupled with the inherent complexity of interpreting 2D eye fundus images and patient interactions, can lead to a challenging training experience [17]. Moreover, these deficiencies in examiner aptitude are not limited to novice trainees; they also encompass experienced doctors in the field of ophthalmology [18].

6.2.1 The Ophthalmoscope and Eye Fundus Examination

Direct ophthalmoscopy devices produce an upright, un-reversed image of approximately 15 times magnification, which is reviewed by the medical practitioner. The procedure involves ensuring that the patient is comfortable (in order to limit involuntary head movement); the room lights should be dimmed, and the doctor should have support to maintain a proper viewing angle while approaching the patient. Pupil dilation may vary depending on the patient's age. It is worth noting that a combination of direct and indirect viewing techniques may take place to obtain a better view of the eye fundus [19].
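The "approximately 15 times" figure is not derived in this chapter, but it agrees with a standard optics estimate: the emmetropic eye has a refractive power of roughly 60 D, and simple-magnifier magnification is conventionally referenced to the 25 cm near point, i.e. a power of 1/0.25 m = 4 D:

```latex
M \;=\; \frac{P_{\text{eye}}}{P_{\text{near}}}
  \;=\; \frac{60\,\mathrm{D}}{4\,\mathrm{D}}
  \;=\; 15
```

That is, the fundus appears magnified roughly fifteen-fold when viewed through the direct ophthalmoscope.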


In order to operate the ophthalmoscope, the medical practitioner is required to adjust the ophthalmoscope diopter setting and the size of the incident light beam to approximate the size of the targeted pupil (light reflection causing glare and dazzle can result if the light beam is larger than the pupil), along with its intensity (maximum intensity can be discomforting to the patient). Once the eye fundus can be visualised, the main landmark within the eye is the optic disk, since it is situated in the same location for every person and is easily identified. After the optic disk is located, it is easier to adjust the ophthalmoscope and look for other eye structures [19].

The examination requires the medical practitioner to direct the light at the pupil at an angle of 15 to 20 degrees to the patient's line of sight, starting about 25 cm away. As the practitioner moves closer to the patient, a red reflex (i.e., the red-orange reflection of light from the back of the eye) should appear under normal conditions; this provides a guide for continuing the approach until the ophthalmoscope is about 2 cm away from the eye. It is worth noting that only practice can yield the necessary expertise in finding the proper angle and distance of approach to the patient. Other challenges when performing the examination include avoiding prolonged viewing (no more than 15 s at a time, with a 10-s break when needed), as prolonged examination can cause fatigue and discomfort to the patient [19].
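The numeric guidance above (approach angle, distances, and timing) is exactly the kind of rule a training simulator can check automatically. The sketch below is purely illustrative: the constants restate the guideline values quoted from [19], and all names are hypothetical rather than part of any existing training software.

```python
from dataclasses import dataclass

# Guideline values restated from the text: approach at 15-20 degrees to the
# line of sight, begin about 25 cm away, examine from about 2 cm, and keep
# continuous viewing under 15 s, with 10-s breaks as needed.
ANGLE_RANGE_DEG = (15.0, 20.0)
START_DISTANCE_CM = 25.0
FINAL_DISTANCE_CM = 2.0
MAX_CONTINUOUS_VIEW_S = 15.0
REST_BREAK_S = 10.0


@dataclass
class ExamAttempt:
    angle_deg: float       # angle between light beam and patient's line of sight
    distance_cm: float     # current ophthalmoscope-to-eye distance
    viewing_time_s: float  # continuous viewing time so far


def guideline_issues(attempt: ExamAttempt) -> list[str]:
    """Return a list of guideline violations for one simulated attempt."""
    issues = []
    lo, hi = ANGLE_RANGE_DEG
    if not (lo <= attempt.angle_deg <= hi):
        issues.append(f"approach angle {attempt.angle_deg} deg outside {lo}-{hi} deg")
    if attempt.distance_cm < FINAL_DISTANCE_CM:
        issues.append("closer than about 2 cm to the patient's eye")
    if attempt.viewing_time_s > MAX_CONTINUOUS_VIEW_S:
        issues.append(f"continuous viewing beyond {MAX_CONTINUOUS_VIEW_S} s; "
                      f"pause for a {REST_BREAK_S} s break")
    return issues
```

A compliant attempt such as `ExamAttempt(angle_deg=17.0, distance_cm=2.0, viewing_time_s=10.0)` produces no issues; a VR trainer could surface the returned messages as real-time feedback.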

6.2.2 Ophthalmoscope Alternatives

New methods are continually being developed to enhance eye fundus teaching and practice. For example, the Arclight Ophthalmoscope is a solar-powered, multipurpose ophthalmic device for retinal examination and otoscopy (examination of the ear) that has been compared to traditional handheld ophthalmoscopes in terms of effectiveness and ease of use, with no significant differences found between the two [20]. Furthermore, the Arclight device has been gaining momentum due to its low cost, providing a more affordable examination device that can help improve training by increasing access to it [21].

Commodity display and computational platforms have also been applied to this problem. Advances in mobile computing have produced digital cameras with higher resolutions that can be added to smartphones. For example, the D-EYE mobile phone add-on places a magnifying device on top of the camera and, through an application, enables the examiner to point at and photograph the eye fundus. A study comparing its performance against a direct ophthalmoscope revealed that those using the D-EYE provided more accurate diagnoses [22].


6.3 Simulation and Medical Education

In the context of medical education and training, simulation is defined as the artificial, yet faithful, imitation and representation of clinical situations through the use of suitable analogue and digital apparatuses [7]. Simulation has been established as a training tool that can be used standalone or as a complement to training, as learners are able to practice delicate procedures and equipment handling without exposure to hazardous conditions and life-threatening repercussions [23]. This method of learning also facilitates the transition from the traditional motto of medical education, "See one, do one, teach one", to the more contemporary and successful "See one, practice many, do one" [24]. By way of simulation-based training, learners are provided with the opportunity to practice cognitive, psychomotor, executive, and interpersonal functions [7].

Prior to the use of computer-based simulators in modern-day training, physical models were used as educational tools for anatomy and disease, along with literature and theatre representations of various medical signs and symptoms [25]. In addition to these techniques, cadavers and live practice with students and patients have been used to help further develop cognitive and psychomotor skills in future doctors [26]. Medical education has evolved considerably since the 1900s, from the apprenticeship model of learning, where students see, learn, and do, to demanding precise objectives to measure competency in medical knowledge, skills, and behaviours [27]. The apprenticeship model, long a cornerstone of medical training, has limitations associated with the variability and reproducibility of the conditions required to train competent health professionals, quantitative assessment of the training received, and feedback on the efficacy of the training [28].

A simulation is typically comprised of two components: (1) the scenario, and (2) the simulator. The scenario describes the simulation and includes the goals, objectives, feedback or debriefing points, a narrative description of the clinical situation, staff requirements, simulation room set-up, simulators, props, simulator operation, and instructions for standardised patients (SPs) [29]. Simulators can include manikins, cadavers, animals, devices, technologies, computer programs and virtual spaces, scenarios, SPs, and a host of other methods of imitating real-world systems [29]. Debriefing sessions following such simulation-based training enable learners to reflect on their actions and make connections to real events, which further facilitates learning, abstraction, and conceptualisation [30].

Technological advancements have led to the resurgence and development of more sophisticated simulators in medical training, particularly in ophthalmoscopic training [31]. One example of a more modern, sophisticated medical simulator is the Resusci Anne, a simulation manikin developed for practising ventilation during cardiopulmonary resuscitation (CPR) in the 1960s by Norwegian toy manufacturer Asmund Laerdal [32]. Although the model lacked any computer components, it presented an airway capable of obstruction, where trainees were able to realistically hyper-extend the neck and tilt the chin to open the airway for sufficient inflation [33]. An even more advanced simulator named Sim One was developed in 1967 by Dr. Abrahamson, an engineer, and Dr. Judson, a physician, both from the University of Southern
California School of Medicine [34, 35]. Documented as the first computer-controlled manikin capable of visible chest rising and falling during breathing, Sim One included a synchronised heartbeat, blood pressure, coordinated temporal and carotid pulses, and a movable jaw and eyes. Sim One was used to teach anaesthesia residents endotracheal intubation in a safe environment, and could also provide physiological responses to four intravenously-administered drugs and two gases delivered through a mask or intubation tube. An analytic comparison was conducted between five medical residents using the simulator and a control group consisting of another five medical residents. The residents who used the simulator yielded better performance ratings and required fewer trials to reach success than those in the control group [34]. Despite the effectiveness of the simulator, adoption was limited due to the cost associated with the software and hardware.

As society advances technologically, simulated clinical experiences become more functional and affordable, providing students with a wide variety of opportunities to learn new skills, practice team communication, and hone clinical competencies [31]. Such trends in medical training include SPs, models and part-task trainers, computer-based simulation, and virtual reality-based systems. These advances are reviewed below.
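Before turning to the individual trends, the scenario/simulator split described earlier in this section can be made concrete as a minimal data model. The sketch is purely illustrative: the class and field names are ours, mirroring the components listed in [29], and are not taken from any simulation framework.

```python
from dataclasses import dataclass, field


@dataclass
class Scenario:
    """What is simulated: goals, narrative, set-up, and SP instructions."""
    goals: list[str]
    objectives: list[str]
    debriefing_points: list[str]
    clinical_narrative: str
    room_setup: str = ""
    sp_instructions: str = ""   # briefing for standardised patients, if any


@dataclass
class Simulator:
    """What does the simulating: manikin, part-task trainer, SP, VR system..."""
    kind: str                   # e.g. "manikin", "computer program", "SP"
    fidelity: str = "low"       # e.g. "low" or "high"


@dataclass
class Simulation:
    """A complete simulation pairs one scenario with one or more simulators."""
    scenario: Scenario
    simulators: list[Simulator] = field(default_factory=list)
```

A CPR session with a Resusci Anne-style manikin, for instance, would pair a ventilation scenario with a single low-fidelity manikin simulator.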

6.3.1 Standardised Patients

The concept of SPs originated in 1963 with a neurologist from the University of Southern California and revolved around using real people acting as patients as a method of training. These 'patients' are carefully trained actors who are taught to utilise specific verbal and physical triggers to portray various patient conditions accurately. As such, these actors are also knowledgeable about the context of the simulation [36]. SPs are used to realistically imitate healthcare environments in order to engage medical education learners and to enhance the suspension of disbelief [25]. Although the first experience was formally reported in 1964, the method of training was not very popular initially, as it was regarded as too expensive and unscientific [27].

SPs can be considered a desirable alternative to medical education with real patients for a number of reasons. The first advantage of SPs lies in the readiness and availability of the simulator, as students are able to practice procedures at times and locations suitable for the specific training, instead of relying on real patients at a hospital or clinic [31]. Students are also able to experience multiple scenarios with SPs, rather than a single encounter with a live patient. SPs are also able to modify their behaviour to replicate patient behaviour during the period of consultation and treatment, which allows learners to become familiar with continuous care within a reasonable amount of time. Lastly, SPs are considered a more ethical method of medical education, as they are not real patients with real medical conditions or emergency scenarios [37]. The use of SPs also presents a few disadvantages. The overall reliability of SPs in consistently recreating the same simulation experience for all learners has been called into question [36], and the amount of time required
for adequate training is limited. Nonetheless, Barrows argues that SPs are not meant to replace traditional methods of training; rather, they are meant to act as supplements that enrich the overall learning experience and provide more practice for learners while working with live patients [31].

6.3.2 Computer-Based Simulation

Computer-based simulators in medical training began with the introduction of mathematical models for simulating the physiological and pharmacologic effects of anaesthetic drugs [38]. Simulators such as SLEEPER and the Anesthesia Simulator Recorder were developed for anaesthesia training, allowing trainees to practice the procedure through repetition and feedback, and have been praised for their realism and affordability [39]. Despite their convenience, computer-based models may lack key experiential and kinesthetic elements provided by higher-fidelity training mechanisms (i.e., realism) that are critical for the development of the psychomotor proficiency and dexterity used in clinical skills [31]. A comparison study conducted by Beal et al. concluded that although higher-fidelity simulation was more effective than low-fidelity simulation in terms of skills acquisition, there were no significant differences with other teaching approaches [31].

6.3.3 Virtual/Augmented/Mixed Reality

Virtual reality (VR) is defined as the replication of an environment that simulates the physical presence of places in the real or a virtual world, allowing users to interact with that world [40]. Through the use of specialised hardware and software, environmental replication is achieved by stimulating a number of the human senses, such as sight, hearing, and touch [41]. For instance, tactile and kinesthetic perception can be replicated through the use of haptic systems such as controllers with vibration feedback, while visual and audio cues can be provided through appropriate computer displays and speaker systems. A common concern with virtual reality systems is that their goal is to completely replace normal perceptual cues with those from some alternate (virtual) reality [42]. Virtual environments are typically isolating, so other team members and instructors must be simulated in the environment as well, if they are required, as medical tasks typically demand social skills [43].

Augmented reality (AR) can be defined as a technology that projects virtual elements, such as menus and objects, onto the real world [44]. AR was first introduced as a method of training for airline and Air Force pilots in the 1990s and is widely used as a tool for education in the present day. As with VR, AR initially required expensive hardware and sophisticated equipment; however, augmented reality applications can now be developed for consumer-friendly devices such as mobile
phones and computers. As a result, augmented reality can be used within classrooms from kindergarten to university [45]. AR has been shown to be a beneficial learning tool in education [46]. For example, AR allows students to engage in authentic explorations of the real world: by overlaying virtual elements, such as menus, onto real-world objects, users are able to make more detailed observations that would otherwise be missed by the naked eye [44]. In 2009, Dunleavy, Dede, and Mitchell observed that AR's greatest advantage lay in its unique ability to create immersive hybrid learning environments that combine digital and physical objects, thereby facilitating the development of processing skills such as critical thinking, problem-solving, and communicating through interdependent collaborative exercises [47].

6.3.4 Simulation in Ophthalmology

With respect to simulation in ophthalmology, the need to improve eye training has led to the development of various simulators, including those employing interchangeable images (e.g., printed or digital pictures) examined through sockets simulating the eye in a manikin head [48]. From the earliest days of ophthalmoscope training, educational resources included the use of imagery (sketches, photographs) to guide students through the training process. Pictures, illustrations, multimedia, 3D models, cadavers, videos, lectures, and live demonstrations provided complementary media to enable learners to explore content further.

6.4 Direct Ophthalmoscopy Simulators

The development of direct ophthalmoscopy (DO) simulators has focused on overcoming the limitations of traditional ophthalmology training by enhancing different aspects of the simulation task. This section reviews low- to high-end eye fundus examination simulators, including both physical and computer-simulated tools. Ophthalmoscopy training can be conducted using different didactic tools (e.g., pictures, illustrations, multimedia, 3D models, cadavers, videos, lectures, and live demonstrations). The procedure follows appropriate steps and techniques that are generally taught for a successful examination [49].

The Plastic Canister Model, described by Chung and Watzke in 2004, is a training model for direct ophthalmoscopy that simulates a mydriatic pupil. Through the use of a plastic canister with an 8-mm hole in the centre of one end, users are able to view a 37-mm photograph of a normal retina with a traditional direct ophthalmoscope, as shown in Fig. 6.1 [50].

Fig. 6.1 Depiction of the eye examination plastic canister. Interchangeable circular eye fundus photographs are placed at the back of the canister and then covered with the lid that has a hole mimicking the pupil aperture. Image created based on the plastic canister description found in [50]

Results with this simulator have been mixed. A review by Ricci and Ferraz [51] highlighted common problems with the device, including low photograph quality, intense light reflection, and a loss of spatial perception by the examiners. A study performed by Kelly et al. [52] examined first-year medical students' preferences for eye examination learning, and assessed their accuracy, using three different modalities: human volunteers, the plastic canister model simulator, and photographs of the ocular fundus [10]. Post-test results showed that 71% of students preferred human volunteers to simulators with regard to learning how to use the direct ophthalmoscope. Furthermore, 77% of the students preferred fundus photographs over simulators for ocular anatomy education. The students were also more accurate at identifying ocular fundus features through the use of fundus photographs, with 70% preferring photographs over direct ophthalmoscopy. Despite this, Ricci and Ferraz describe how enhancements to the model, such as the use of high-quality photos, matte printing paper, and an indication of where the patient's nose would be, yielded a more favourable outcome regarding student efficiency for the initial practice of ophthalmoscopy [48].

One problem with the use of a plastic canister to provide a simulated display is its lack of a simulation of the patient's head. This limits training of the approach to the patient and proper alignment of the ophthalmoscope with the eye itself. In 2007, the Human Eye Learning Model Assistant (THELMA) addressed this issue by including a Styrofoam head in the system. THELMA employed two different types of equipment to simulate the ocular fundus: the Slide Method and the Plug Method [48]. The Slide Method consists of fundus photographs projected into a device similar to the Plastic Canister Model, and the Plug Method utilises an apparatus similar to an eyeball, with a diameter of 17 mm to allow for a field of view of 60° when viewed with a direct ophthalmoscope. Real-sized photographs of the fundus are placed within the device to increase realism. However, the amount of light required to view the photos depends on the ophthalmoscope, as well as the quality of the printing paper.
In the following years, the EYE Exam Simulator (developed by Kyoto Kagaku Co.,

6 A Review of Virtual Reality-Based Eye Examination Simulators

93

Kyoto, Japan) and the Eye Retinopathy Trainer (developed by Adam Rouilly Co., Sittingbourne, UK) were released, building upon THELMA's core features [53]. McCarthy in 2009 [54] used a modified EYE Exam Simulator to assess its feasibility for testing fundoscopic skills. During the test, a group of 11 ophthalmology students and 467 emergency medicine (EM) residents were instructed to make visual contact with the ocular fundus using a handheld ophthalmoscope. Participants drew everything that could be visualised and recorded any pathology seen. Analysis of the drawings revealed that many participants failed to produce any visual representation, and those that were produced were usually of low quality. Participant feedback on the simulator was "neutral", with no indication of support for training with the model, although the EM residents did express interest in future simulation training. Possible explanations for the unfavourable results include the small group size of participants, the use of dark pictures with low illumination, and the eccentric placement of visual markers. Despite these results, Larsen et al. [55] conducted a similar study in 2014 using the same simulator, but with an instructor present during training to assist students. At the end of each session, the students were asked to identify what was seen in the simulation with a photograph. The study concluded that even a high-quality simulation had a lesser impact on students without guidance [55].

VRmagic, a company based in Mannheim, Germany, developed the EYEsi Direct Ophthalmoscope Simulator (EYEsi DOS) to offer a more realistic training experience for students.
Ricci and Ferraz described the simulator as a complex and highly sophisticated piece of equipment, featuring a touch-screen interface attached to an artificial human face, allowing for evaluation of a normal or pathological fundus with a handheld ophthalmoscope [48]. As an enhancement to the teaching of the diagnostic skills required for direct ophthalmoscopy, the simulator's ability to provide feedback based on the user's view, and its control of technical and physiological elements (e.g., light, blood vessel colour, and pathological spots), provide a distinct advantage over other traditional simulators [50]. Its biggest drawbacks are its cost, the need for trained staff, and the lack of comparative studies to prove its efficacy [51].

In 2017 a virtual reality ophthalmoscope trainer was developed at Birmingham City University [56]. This device was designed to engage students in learning complex ophthalmoscopic skills by combining VR and gamification techniques (i.e., the use of game mechanics in routine activities to increase engagement, adherence and participation). This VR-based learning application contained five sections: an interaction tutorial, red reflex location, retinal navigation, pathology identification, and a final quiz. Within the tutorial level, users were taught how to use the application, including head-based movement for locating objects and using the VR headset's triggers to interact with them. The red reflex component of the application focused on teaching the user how to locate the red reflex of the eye by shining the virtual ophthalmoscope into the patient's eye at a certain angle and zooming in and out with the lens settings. After the red reflex tutorial is completed, users are provided with background information on retinal examinations before being guided

94

M. Chan et al.

through a series of procedure steps to help navigate the anatomic landmarks of the virtual eye. Users are then instructed on how to follow the four main blood vessels out from the optic disc, and then to navigate the four quadrants of the retina, guided by audio-visual commentary and feedback. Upon completion of each section, users are presented with a set of eight different images of the eye and are tasked with identifying the conditions of the eye using the skills they obtained previously. The application applies standard gamification strategies and makes use of virtual rewards, such as badges, that are given to users as a method of recognising task progress. Rewards and reward tiers are granted based on metrics such as accuracy and task completion time, to indicate the user's level of achievement when learning ophthalmology skills with the simulator.

The application was tested with a group of fifteen undergraduate medical students to evaluate its efficacy as a learning tool for ophthalmoscopy [56]. Students were asked if the application improved their understanding of the processes involved in the examination procedure. They were also asked whether or not they were able to recognise the anatomic landmarks of the eye and any physical abnormalities. Questions assessing ease of control of the application, user confidence, and the effectiveness of the teaching method were included in the evaluation. Students reported an increase in their overall understanding of eye anatomy, their ability to identify anatomic landmarks, and their ability to identify physical abnormalities within the eye. An increase in confidence with the ophthalmic examination was also noted amongst participants, and they felt that the application was easy and enjoyable to use [56]. Given the nature of the ophthalmoscope examination and the ophthalmoscope itself, it can be difficult to provide training in a group setting.
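The reward-tier logic described above can be sketched as a small scoring rubric. The tier names, thresholds, and the `award_badge` helper below are illustrative assumptions; [56] does not publish its exact metrics.

```python
# Hypothetical badge rubric for an ophthalmoscopy trainer: tiers are granted
# from task accuracy and completion time, in the spirit of the VR trainer
# described in the text. All thresholds are invented for illustration.

def award_badge(accuracy: float, completion_s: float) -> str:
    """Map task metrics (accuracy in [0, 1], time in seconds) to a tier."""
    if accuracy >= 0.9 and completion_s <= 60:
        return "gold"
    if accuracy >= 0.75 and completion_s <= 120:
        return "silver"
    if accuracy >= 0.5:
        return "bronze"
    return "none"
```

Under such a rubric, a pathology-identification task finished quickly and accurately earns the top tier, while a slow but mostly correct attempt is still recognised with a lower one.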
Tangible user interfaces can provide an effective approach to overcoming this problem. Codd-Downey et al. [57] describe an AR-based approach that utilizes a tangible user interface to enable multiple trainees to interact with a common eye simulation. Figure 6.2 shows the tabletop structure used in their system with AR markers positioned at its corners. Individual users can use their own AR device, a tablet-based interface in the figure, to obtain personalized per-user overlays on the common shared training experience. The integration of both technologies presents an active learning experience that could be used to engage all learners in a common educational activity. The system described in [57] leverages commodity cell phone and tablet devices to provide tracked visual displays to each user. The exploitation of such commodity hardware provides a cost-effective mechanism for integrating tangible devices into ophthalmoscope training.

Soto et al. [58] describe a cellphone-powered VR-based system that combines a mobile VR headset having interpupillary adjustable lenses with a Bluetooth game controller. Figure 6.3 presents the stereoscopic visualisation of the eye fundus and the external eye anatomy available with this system. The process requires trainees to locate the red reflex while rotating the eyeball. Once located, the scenario changes to an internal view of the eye where the optic nerve, macula and various blood vessels can be identified. User interactions to navigate the examination are based on a first-person shooter setting: the left joystick moved the camera towards the eye, the right joystick rotated the camera, and actions were confirmed with a button. A user study revealed


Fig. 6.2 Multiple users interact with a shared tabletop display while being presented with personalized AR views through commodity hardware (here through an Android tablet) [57]

that although stereoscopy was well received by participants, interactions employing a game controller were challenging because of unfamiliarity with such a device; participants expressed that they were not experienced in playing video games [58]. A later refinement of the eye fundus examination trainer by Acosta et al. [49] focused on overcoming the challenges identified when using a game controller as a user input device. AR was employed as the underlying technology, and interactions were modelled using touch gestures, as shown in Fig. 6.4. In this iteration, the learner employs a printed marker that serves as a reference for rendering the virtual head to perform the eye examination. The marker can be placed on any flat surface. Figure 6.4 shows the application flow from start to examination. Here, the interactions were more natural due to users' familiarity with touch screens, but the model visualisation was challenging due to the limitations of the AR technology used. For example, lighting, the quality of the marker, and how the marker is held can negatively affect the experience. One of the main challenges associated with the AR interactions shown in Fig. 6.4 is pointing the mobile device at the marker and holding the marker steady so that the information is properly visualised. Holding the marker and the device can lead to arm strain over prolonged periods. To remove this interaction problem and to facilitate the interactions and marker manipulation, a Styrofoam head was added to the system as a tangible reference for the user, as shown in Fig. 6.5 [49]. The use of the head and


Fig. 6.3 Mobile VR eye fundus examination. Upper panel shows the user wearing the HMD and interacting with the simulation using a wireless Bluetooth control. Lower panels show simulated stereo imagery presented to the user. The lower left panel shows the external eye view while the lower right shows the simulated ophthalmoscope display [58]

the marker attached in the position of the eye required learners to employ both hands, operating the virtual ophthalmoscope within the mobile application and the Styrofoam head at the same time. As a consequence, the interactions were difficult to master, as the smartphone had to be kept as still as possible to ensure good tracking and AR rendering.

A further refinement to this work saw the inclusion of a 3D printed ophthalmoscope replica to use in conjunction with AR [49]. The objective was to improve the interactions and facilitate the virtual examination while using a device mimicking the basic operations of a real ophthalmoscope. This approach provides both a physical cue to the ophthalmoscope's location and a more accurate model of its input controls. The device includes an Arduino Micro, a Bluetooth module, and a potentiometer for operating the magnification of the lens. A flat surface attached to the simulated ophthalmoscope handle provides a place for a tracking target. When the marker is within the field of view of the smartphone camera, the virtual eye is rendered for examination, as shown in Fig. 6.5.

The previously described AR and VR approaches provide only a localized simulation of the examination environment. More sophisticated and large-scale simulations have also been developed. Nguyen et al. [23] describe a VR ophthalmoscope simulation for replicating the direct ophthalmoscopy procedure on a simulated patient (Fig. 6.6). The virtual ophthalmoscope controls are mapped to an HTC Vive controller and allow users to adjust lens zoom and light intensity. The system includes a number of visual aids within the virtual environment for aiding the user in conducting the procedure, as well as for allowing the instructor to evaluate trainee progress. For instance, a separate window appears on the wall behind the virtual patient allowing both the


Fig. 6.4 AR eye fundus examination flow. The numbering indicates a sample order: (a) is the main menu where users can start the examination or view their history. (b) Provides a list of scenarios for practising. (c) Shows information about a chosen condition and allows the user to start the examination. (d) Informs the user to point the phone at the marker to start the training. (e) Shows the virtual patient head overlaid on top of the marker. Finally, (f) shows the touch controls for light intensity and lens magnification [49]
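Marker-based AR of the kind used in this flow hinges on estimating the planar homography that maps the printed marker's known corner layout to its detected position in the camera image; the virtual head is then drawn consistently with that mapping. The sketch below is a generic direct linear transform (DLT), not the tracking code used in [49].

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H with dst[i] ~ H @ src[i] from four or
    more 2D point pairs, via the direct linear transform solved with SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)  # null-space vector of the DLT system
    return H / H[2, 2]        # normalise so H[2, 2] == 1

def project(H, pt):
    """Apply H to a 2D point, with homogeneous normalisation."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Given the four marker corners in marker coordinates and their detected pixel positions, `homography` recovers the transform exactly (eight equations for eight degrees of freedom), and `project` places any virtual anchor point, such as the centre of the rendered eye, into the camera frame.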


Fig. 6.5 Two mobile augmented reality modes are presented. On the left, a Styrofoam head with an eye-shaped printed marker serves as the anchor over which a virtual head is overlaid for conducting the fundus examination. On the right, a mobile VR headset is used in conjunction with a 3D printed ophthalmoscope replica holding a printed marker where the eye fundus is projected for examination [49]
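The 3D printed replica reads its lens control through a potentiometer, so the raw analog reading has to be quantised into discrete lens settings. The sketch below assumes a 10-bit reading such as an Arduino `analogRead()` returns; the diopter values in `LENS_WHEEL` are plausible placeholders, not the firmware values from [49].

```python
# Map a 10-bit potentiometer reading (0-1023) to a discrete diopter setting,
# mimicking the lens wheel of a direct ophthalmoscope. The wheel values are
# illustrative assumptions.

LENS_WHEEL = [-20, -15, -10, -5, -3, -1, 0, 1, 3, 5, 10, 15, 20]  # diopters

def adc_to_lens(adc: int) -> int:
    """Quantise an ADC value into one of the wheel's diopter settings."""
    if not 0 <= adc <= 1023:
        raise ValueError("expected a 10-bit ADC reading")
    return LENS_WHEEL[adc * len(LENS_WHEEL) // 1024]
```

Quantising the reading like this keeps the microcontroller side trivial: it can simply stream raw values over Bluetooth while the mobile application decides which lens is selected.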

Fig. 6.6 User employing the HTC Vive VR headset and an HTC Vive controller to perform an eye fundus examination on a virtual patient. The image on the left shows the virtual patient and a view taken from the headset showing the virtual ophthalmoscope [23]

instructor and the user to share the examination view. Other visual aids include a heads-up display that contains anatomic landmark information to aid trainees in diagnosing the patient's physical condition. The virtual simulation features two navigation modes: one for training, and the other for evaluation. While both modes enable the user to conduct a full ophthalmoscopic examination, the training mode includes the visual aids, as well as a set of tasks that are meant to debrief the user following patient diagnosis. After completing the training mode, users can begin the evaluation mode, where the full examination is conducted with metric evaluation. Users are assigned scores in both modes: cognitive tasks are evaluated in questionnaire form, and skills are evaluated based on the user's performance during the examination. Factors that are considered during the skills evaluation process include maintaining fundus visibility, keeping


the examination duration to 35 s or less, identification of the anatomic landmarks, and the proper procedure approach and patient treatment. Formal testing was conducted to gauge the efficacy of the simulator, involving nine medical students who possessed a basic understanding of human eye anatomy [23]. The participants were tasked with approaching the virtual patient, adjusting the lens and light settings of the ophthalmoscope, and establishing a visualisation of the fundus. Although each person completed the tasks within a five-minute time frame, four of the participants were not able to see the fundus correctly as a result of not moving close enough to the virtual patient. All participants reported difficulty in operating the HTC Vive controller; it is hypothesised that this is a result of the controller having a different button and dial layout than a real-life ophthalmoscope. Nevertheless, the participants expressed interest in seeing similar software developed for other medical procedures.
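The skills factors listed above lend themselves to a weighted score. The weights, the linear penalty past the 35 s budget, and the 0-100 scale in this sketch are assumptions for illustration, not the rubric actually used in [23].

```python
# Illustrative skills score built from the evaluation factors named in the
# text: fundus visibility, examination duration (35 s budget), landmark
# identification, and correct approach. All weights are invented.

def skills_score(visibility_frac, duration_s, landmarks_found,
                 landmarks_total, approach_ok):
    """Return a 0-100 score from the four skill metrics."""
    # Full time credit at <= 35 s, decaying linearly to zero at 70 s.
    time_credit = 1.0 if duration_s <= 35 else max(0.0, 1 - (duration_s - 35) / 35)
    landmark_frac = landmarks_found / landmarks_total
    score = (0.3 * visibility_frac + 0.2 * time_credit +
             0.3 * landmark_frac + 0.2 * (1.0 if approach_ok else 0.0))
    return round(100 * score, 1)
```

A trainee who keeps the fundus visible 80% of the time, finishes in 30 s, finds three of four landmarks, and approaches correctly would score 86.5 under these assumed weights.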

6.5 Discussion

The DO eye fundus examination is a procedure that allows medical practitioners to observe the back of the eye as a method of assessing a patient's physical health. Although fundus examinations are regarded as a critical component of a full-body diagnosis, the skills necessary to perform the examination are regarded as difficult to teach and require a considerable amount of time to practice and master. From perhaps the earliest days of DO, instruction has adopted a range of training tools and simulators to supplement the training received by direct examination of patients and patient stand-ins. Unlike many medical procedures, the view through an ophthalmoscope has, until very recently, been restricted to the operator of the device. This makes training extremely challenging, as it is difficult for the student and the instructor to share a common representation of the task. One can easily imagine the teacher asking the student if they see a particular feature, and the student, not wishing to appear foolish, answering, "Of course", even though they do not. Beyond the unique nature of DO in restricting the shared experience of instructor and pupil, the use of patients is not an ideal solution for training. Patients may present many wonderful examples of normal conditions, but on-schedule presentation of disease/damage cannot be guaranteed. Simulation, even as simple as the use of photographs, helps to provide the trainee with a broader range of disease/damage than is otherwise likely to be available during training. Given the difficulties associated with direct ophthalmoscopes, alternative methods of practice have been implemented as supplements to traditional forms of exercise such as peer-to-peer practice with an ophthalmoscope. Through the use of simulation, students and trainees are able to practice medical procedures that would otherwise be constrained by limited allotted time and supervision.
Although a number of techniques have been used for instruction, many modern-day training methods still cannot serve as easily accessible forms of practice, and lack an accurate means of evaluating user progress. This chapter has reviewed the remarkable advances that


have occurred in the development of training and simulator systems for DO. Even relatively low-cost technology now exists that can provide high-quality simulation for DO training. These devices, when coupled with proper supervision and training, can provide a highly effective training regime for medical professionals. One can only hope that simulation systems for other medical procedures and tests will advance as well. As technology continues to evolve, simulated clinical experiences allow delicate procedures to be practiced with greater accuracy and variety than traditional learning methods permit.

References

1. Fisher, J., Viscusi, R., Ratesic, A., Johnstone, C., Kelley, R., Tegethoff, A.M., Bates, J., Situ-LaCasse, E.H., Adamas-Rappaport, W.J., Amini, R.: Clinical skills temporal degradation assessment in undergraduate medical education. J. Adv. Med. Educ. Prof. 6(1), 1–5 (2018)
2. Cook, D.A., Andersen, D.K., Combes, J.R., Feldman, D.L., Sachdeva, A.K.: The value proposition of simulation-based education. Surgery 163(4), 944–949 (2018)
3. Shah, S.: Ophthalmology in ancient time—The Sushruta Samhita. J. Clin. Ophthalmol. Res. 6(3), 117–120 (2018)
4. Ma, K.W.: Acupuncture: its place in the history of Chinese medicine. Acupuncture Med. 18(2), 88–99 (2000)
5. White, A., Ernst, E.: A brief history of acupuncture. Rheumatology 43(5), 662–663 (2004)
6. Owen, H.: Simulation in Healthcare Education: An Extensive History. Springer, Cham, Switzerland (2016)
7. Pavlovic, A., Kalezic, N., Trpkovic, S., Videnovic, N., Sulovic, L.: The application of simulation in medical education—Our experience "From Improvisation to Simulation". Srpski arhiv za celokupno lekarstvo 146(5–6), 338–344 (2017)
8. Kelly, L.P., Garza, P.S., Bruce, B.B., Graubart, E.B., Newman, N.J., Biousse, V.: Teaching ophthalmoscopy to medical students (the TOTeMS study). Am. J. Ophthalmol. 156(5), 1056–1061 (2013)
9. Bruce, B.B., Thulasi, P., Fraser, C.L., Keadey, M.T., Ward, A., Heilpern, K.L., Wright, D.W., Newman, N.J., Biousse, V.: Diagnostic accuracy and use of nonmydriatic ocular fundus photography by emergency physicians: phase II of the FOTO-ED study. Ann. Emerg. Med. 62(1), 28–33 (2013)
10. Schulz, C., Moore, J., Tamsett, E., Smith, C.: Addressing the 'Forgotten Art of Fundoscopy': evaluation of a novel teaching ophthalmoscope. Eye 30(3), 375–384 (2015)
11. Yusuf, I., Salmon, J., Patel, C.: Direct ophthalmoscopy should be taught to undergraduate medical students—Yes. Eye 29(8), 987 (2015)
12. van Velden, J.S., Cook, C., du Toit, N., Myer, L.: Primary health eye care: evaluation of the competence of medical students in performing fundoscopy with the direct ophthalmoscope. S. Afr. Fam. Pract. 52(4), 341–343 (2010)
13. Benbassat, J., Polak, B.C., Javitt, J.C.: Objectives of teaching direct ophthalmoscopy to medical students. Acta Ophthalmologica 90(6), 503–507 (2012)
14. Nguyen, M., Quevedo-Uribe, A., Kapralos, B., Jenkin, M., Kanev, K., Jaimes, N.: An experimental training support framework for eye fundus examination skill development. Comput. Methods Biomech. Biomed. Eng. Imag. Vis. 7(1), 26–36 (2019)
15. Bruce, B.B., Bidot, S., Hage, R., Clough, L.C., Fajoles-Vasseneix, C., Melomed, M., Keadey, M.T., Wright, D.W., Newman, N.J., Biousse, V.: Fundus photography versus ophthalmoscopy outcomes in the emergency department (FOTO-ED) phase III: web-based, in-service training of emergency providers. Neuro-Ophthalmology 42(5), 269–274 (2018)


16. Lamirel, C., Bruce, B.B., Wright, D.W., Delaney, K.P.: Quality of nonmydriatic digital fundus photography obtained by nurse practitioners in the emergency department: the FOTO-ED study. Ophthalmology 119(3), 617–624 (2011)
17. Imonikhe, R.J., Finer, N., Gallagher, K., Plant, G., Bremner, F.D., Acheson, J.F.: Direct ophthalmoscopy should be taught to undergraduate medical students—Yes. Eye (Basingstoke) 30(3), 497 (2016)
18. Stainer, M.J., Anderson, A.J., Denniss, J.: Examination strategies of experienced and novice clinicians viewing the retina. Ophthal. Physiol. Opt. 35(4), 424–432 (2015)
19. Roux, P.: Ophthalmoscopy for the general practitioner. S. Afr. Fam. Pract. 46(5), 10–14 (2004)
20. Parthasarathy, M.K., Faruq, I., Arthurs, E., Lakshminarayanan, V.: Comparison between the Arclight ophthalmoscope and a standard handheld direct ophthalmoscope: a clinical study. In: Current Developments in Lens Design and Optical Engineering XIX, vol. 10745, p. 107450V. International Society for Optics and Photonics, San Diego, CA, USA (2018)
21. Lowe, J., Cleland, C.R., Mgaya, E., Furahini, G., Gilbert, C.E., Burton, M.J., Philippin, H.: The Arclight ophthalmoscope: a reliable low-cost alternative to the standard direct ophthalmoscope. J. Ophthalmol. 2015, 1–6 (2015)
22. Mamtora, S., Sandinha, M.T., Ajith, A., Song, A., Steel, D.H.: Smart phone ophthalmoscopy: a potential replacement for the direct ophthalmoscope. Eye 32(11), 1766 (2018)
23. Nguyen, M., Quevedo-Uribe, A., Kapralos, B., Jenkin, M., Kanev, K., Jaimes, N.: An experimental training support framework for eye fundus examination skill development. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 7(1), 26–36 (2017)
24. Datta, R., Upadhyay, K.K., Jaideep, C.N.: Simulation and its role in medical education. Med. J. Armed Forces India 68(2), 167–172 (2012)
25. Bradley, P.: The history of simulation in medical education and possible future directions. Med. Educ. 40(3), 254–262 (2006)
26. Sharma, M., Horgan, A.: Comparison of fresh-frozen cadaver and high-fidelity virtual reality simulator as methods of laparoscopic training. World J. Surg. 36(8), 1732–1737 (2012)
27. Rosen, K.R.: The history of medical simulation. J. Crit. Care 23(2), 157–166 (2008)
28. Ott, T., Schmidtmann, I., Limbach, T., Gottschling, P., Buggenhagen, H., Kurz, S., Pestel, G.: Simulation-based training and OR apprenticeship for medical students: a prospective, randomized, single-blind study of clinical skills. Der Anaesthesist 65(11), 822–831 (2016)
29. Alinier, G.: Developing high-fidelity health care simulation scenarios: a guide for educators and professionals. Simul. Gaming 42(1), 9–26 (2011)
30. So, H.Y., Chen, P.P., Wong, G.K.C., Chan, T.T.N.: Simulation in medical education. J. R. Coll. Phys. Edinburgh 49(1), 52–57 (2019)
31. Beal, M.D., Kinnear, J., Anderson, C.R., Martin, T.D., Wamboldt, R., Hooper, L.: The effectiveness of medical simulation in teaching medical students critical care medicine: a systematic review and meta-analysis. Simul. Healthcare 12(2), 104–116 (2017)
32. Lind, B.: The birth of the resuscitation mannequin, Resusci Anne, and the teaching of mouth-to-mouth ventilation. Acta Anaesthesiologica Scandinavica 51(8), 1051–1053 (2007)
33. Jones, F., Passos-Neto, C.E., Braghiroli, O.F.M.: Simulation in medical education: brief history and methodology. Principles Pract. Clin. Res. 1(2), 1–8 (2015)
34. Abrahamson, S., Denson, J.S., Wolf, R.: Effectiveness of a simulator in training anesthesiology residents. BMJ Qual. Saf. 13(5), 395–397 (2004)
35. Fritz, P.Z., Gray, T., Flanagan, B.: Review of mannequin-based high-fidelity simulation in emergency medicine. Emerg. Med. Austral. 20(1), 1–9 (2008)
36. Dotger, B.H., Dotger, S.C., Maher, M.J.: From medicine to teaching: the evolution of the simulated interaction model. Innov. Higher Educ. 35(3), 129–141 (2010)
37. Barrows, H.: An overview of the uses of standardized patients for teaching and evaluating clinical skills. Acad. Med. 68, 443–443 (1993)
38. Cooper, J.B., Taqueti, V.R.: A brief history of the development of mannequin simulators for clinical education and training. Postgrad. Med. J. 84(997), 563–570 (2008)
39. Maran, N.J., Glavin, R.J.: Low- to high-fidelity simulation—a continuum of medical education? Med. Educ. Suppl. 37(1), 22–28 (2003)


40. Perry, S., Burrow, M., Leung, W., Bridges, S.: Simulation and curriculum design: a global survey in dental education. Austr. Dental J. 62(4), 453–463 (2017)
41. Scalese, R.J., Obeso, V.T., Issenberg, S.B.: Simulation technology for skills training and competency assessment in medical education. J. Gen. Internal Med. 23(1), 46–49 (2008)
42. Munshi, F., Lababidi, H., Alyousef, S.: Low- versus high-fidelity simulations in teaching and assessing clinical skills. J. Taibah Univ. Med. Sci. 10(1), 12–15 (2015)
43. Howard, M.C., Gutworth, M.B.: A meta-analysis of virtual reality training programs for social skill development. Comput. Educ. 144, 103707 (2020)
44. Akçayir, M., Akçayir, G.: Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11 (2017)
45. Chiang, T.H., Yang, S.J., Hwang, G.J.: Students' online interactive patterns in augmented reality-based inquiry activities. Comput. Educ. 78, 97–108 (2014)
46. Cheng, K.H., Tsai, C.C.: Affordances of augmented reality in science learning: suggestions for future research. J. Sci. Educ. Technol. 22(4), 449–462 (2013)
47. Dunleavy, M., Dede, C., Mitchell, R.: Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. J. Sci. Educ. Technol. 18(1), 7–22 (2009)
48. Ricci, L.H., Ferraz, C.A.: Ophthalmoscopy simulation: advances in training and practice for medical students and young ophthalmologists. Adv. Med. Educ. Pract. 8, 435 (2017)
49. Acosta, D., Gu, D., Uribe-Quevedo, A., Kanev, K., Jenkin, M., Kapralos, B., Jaimes, N.: Mobile e-training tools for augmented reality eye fundus examination. In: Interactive Mobile Communication, Technologies and Learning, pp. 83–92. Springer, Hamilton, ON, Canada (2018)
50. Chung, K.D., Watzke, R.C.: A simple device for teaching direct ophthalmoscopy to primary care practitioners. Am. J. Ophthalmol. 138(3), 501–502 (2004)
51. Ricci, L.H., Ferraz, C.A.: Simulation models applied to practical learning and skill enhancement in direct and indirect ophthalmoscopy: a review. Arquivos Brasileiros de Oftalmologia 77(5), 334–338 (2014)
52. Kelly, L.P., MacKay, D.D., Garza, P.S., Bruce, B.B., Bidot, S., Graubart, E.B., Newman, N.J., Biousse, V.: Teaching ophthalmoscopy to medical students (TOTeMS) II: a one-year retention study. Am. J. Ophthalmol. 157(3), 747–749 (2014)
53. Androwiki, J.E., Scravoni, I.A., Ricci, L.H., Fagundes, D.J., Ferraz, C.A.: Evaluation of a simulation tool in ophthalmology: application in teaching funduscopy. Arquivos Brasileiros de Oftalmologia 78(1), 36–39 (2015)
54. McCarthy, D.M., Leonard, H.R., Vozenilek, J.A.: A new tool for testing and training ophthalmoscopic skills. J. Grad. Med. Educ. 4(1), 92–96 (2012)
55. Larsen, P., Stoddart, H., Griess, M.: Ophthalmoscopy using an eye simulator model. Clin. Teacher 11(2), 99–103 (2014)
56. Wilson, A.S., O'Connor, J., Taylor, L., Carruthers, D.: A 3D virtual reality ophthalmoscopy trainer. Clin. Teacher 14(6), 427–431 (2017)
57. Codd-Downey, R., Shewaga, R., Uribe-Quevedo, A., Kapralos, B., Kanev, K., Jenkin, M.: A novel tabletop and tablet-based display system to support learner-centric ophthalmic anatomy education. In: International Conference on Augmented Reality, Virtual Reality and Computer Graphics, pp. 3–12. Springer, Otranto, Italy (2016)
58. Soto, C., Vargas, M., Uribe-Quevedo, A., Jaimes, N., Kapralos, B.: AR stereoscopic 3D human eye examination app. In: 2015 International Conference on Interactive Mobile Communication Technologies and Learning (IMCL), pp. 236–238. IEEE, Thessaloniki, Greece (2015)

Chapter 7

Enhanced Reality for Healthcare Simulation

Fernando Salvetti, Roxane Gardner, Rebecca D. Minehart, and Barbara Bertagni

F. Salvetti (B) · B. Bertagni
Centro Studi Logos, Turin, Italy
e-mail: [email protected]
B. Bertagni
e-mail: [email protected]
Logosnet, Lugano, Switzerland
Logosnet, Houston, TX, USA
R. Gardner · R. D. Minehart
Center for Medical Simulation, Boston, MA, USA
e-mail: [email protected]
R. D. Minehart
e-mail: [email protected]
R. Gardner
Brigham and Women's Hospital, Boston, MA, USA
Children's Hospital, Boston, MA, USA
R. Gardner · R. D. Minehart
Massachusetts General Hospital, Boston, MA, USA
Harvard Medical School, Boston, MA, USA
© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_7

Abstract Enhanced reality for immersive simulation (e-REAL®) is the merging of real and virtual worlds: a mixed reality environment for hybrid simulation where physical and digital objects co-exist and interact in real time, in a real place and not within a headset. The first part of this chapter discusses e-REAL: an advanced simulation within a multisensory scenario, based on challenging situations developed by visual storytelling techniques. The e-REAL immersive setting is fully interactive with both 2D and 3D visualizations, avatars, electronically writable surfaces and more: people can take notes, cluster key-concepts or fill questionnaires directly on the projected surfaces. The second part of this chapter summarizes an experiential coursework focused on learning and improving teamwork and event management during simulated obstetrical cases. Effective team management during a crisis is a core element of expert practice: for this purpose, e-REAL reproduces a variety of


different emergent situations, enabling learners to interact with multimedia scenarios and practice using a mnemonic called Name-Claim-Aim. Learners rapidly cycle between deliberate practice and direct feedback within a simulation scenario until mastery is achieved. Early findings show that interactive immersive visualization allows for better neural processes related to learning and behavior change.

Keywords Enhanced reality · Virtual · Augmented and mixed reality · Virtual worlds · Hybrid simulation · Teamwork · Mnemonics · Name-Claim-Aim

7.1 Enhanced Reality

Enhanced reality for immersive simulation (e-REAL®) is the merging of real and virtual worlds: a mixed reality (MR) environment for hybrid simulation where physical and digital objects co-exist and are available for tactile interaction, in a real learning setting—and not within a headset [67, 69]. e-REAL integrates tools and objects from the real world onto one or more walls, embedded with proximity sensors enabling tactile or vocal interaction with the virtual objects.

Examples of physical objects include:

• Ultrasound and sonography simulators
• Pulmonary ventilators
• Defibrillators.

Examples of digital objects include:

• Realistic avatars and medical imagery (Fig. 7.1)
• Human organs and systems (Figs. 7.2 and 7.3)
• Overlay of electronic information and images onto projected surfaces (Figs. 7.3, 7.4 and 7.5).

Figure 7.1 illustrates a real medical tool operated on a patient simulator by learners who are in dialogue with an avatar (that is, a virtualized colleague displayed on the wall), with medical imagery displayed both on a monitor and on the walls. Figures 7.2, 7.3 and 7.4 illustrate 2D, 2.5D and 3D images visible without special glasses, which can be manipulated by hand without special joysticks (active pens). Through hand gestures, learners are able to virtually take notes, highlight, erase, zoom inside/outside or rotate virtual organs and other displayed objects 360°; to cluster concepts by grouping them within boxes or by uploading additional medical imagery to gain a better understanding of what they are analyzing; to take screenshots and share them; to complete questionnaires; and so on. Figure 7.5 exemplifies the mirroring of a perioperative environment which can be overlaid with digital computer-generated information pertaining to a simulated patient: ultrasound images, ECG traces, and outputs from medical exams. The overlay of information enhances the user experience.
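At the rendering level, overlaying electronic information onto a projected surface amounts to compositing a computer-generated layer over the scene with a per-pixel opacity. The e-REAL pipeline itself is not detailed in this chapter, so the following numpy sketch only illustrates the generic operation:

```python
import numpy as np

def overlay(frame, layer, alpha):
    """Alpha-composite a generated layer over a background frame.
    frame, layer: HxWx3 float arrays in [0, 1]; alpha: HxW opacities in [0, 1].
    Where alpha is 0 the frame shows through; where it is 1 the layer wins."""
    a = alpha[..., None]  # broadcast per-pixel opacity over the RGB channels
    return (1.0 - a) * frame + a * layer
```

Per-pixel opacity is what lets labels, ECG traces, or anatomical annotations sit over the projected scene without occluding it entirely.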

7 Enhanced Reality for Healthcare Simulation


Fig. 7.1 Courtesy of the Red Cross Simulation Center “Gusmeroli” and Accurate Solutions S.r.l., Bologna (Italy): learning medical procedures on a skill trainer during a hybrid simulation within an e-REAL environment, with interactive medical imagery displayed on both walls and on a monitor, and a highly realistic avatar (left wall) looking at the learners and verbally interacting with them

In a nutshell, the e-REAL system enables a multilayer vision: the many levels of the situation are made available simultaneously, by overlaying multisource info—e.g. words, numbers, images, etc.—as within an augmented reality display, but without needing to wear special glasses. By visualizing relations between topics, contextual factors, cognitive maps and dynamic cognitive aids, e-REAL improves the learners’ cognitive retention [29, 30, 33, 63, 65].

7.2 Enhanced Hybrid Simulation in a Mixed Reality Setting, Both Face-to-Face and in Telepresence

e-REAL is a synthesis of virtual reality (VR) and augmented reality (AR) within a real setting: a one-of-a-kind mixed reality (MR) solution based on immersive interaction. In a nutshell, AR alters one’s ongoing perception of a real-world environment, whereas VR replaces (usually completely) the user’s real-world environment with a simulated one.


F. Salvetti et al.

Fig. 7.2 Courtesy of the Environmental Design and Multisensory Experience Lab at the Polytechnic School of Milan (Italy): learners within an e-REAL lab facing 2D, 2.5D and 3D images (left wall) and a beating heart (right wall) that can be rotated 360° and also analyzed internally with a multilayer approach (via zoom), with an overlay of digital information on both walls

VR is a communication medium that makes virtual experiences feel highly realistic. The term ‘virtual reality’ has been widely used, and often creatively exaggerated, by Hollywood producers and science-fiction writers for decades; consequently, there are many misconceptions and expectations about the nature of the technology [5]. We define ‘virtual reality’ as synthetic sensory information that leads to the perception of environments and their content as if they were not synthetic [11]. Since the 1960s, VR has been used by the military and in medicine for training and simulation, but it has also become fertile ground for evaluating social and psychological dynamics in academic settings [4]. For example, journalists use virtual reality to situate their readers within stories, educators use virtual technologies for experiential learning, and psychiatrists leverage virtual reality to mitigate the negative effects of psychological traumas [46].

AR is a general term applied to a variety of display technologies capable of overlaying or combining alphanumeric, symbolic, or graphical information with a user’s view of the real world [4]. We define ‘AR’ as an interactive experience of a real-world environment where the objects that reside in the real world are augmented by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory. Overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment [61].

MR takes place not only in the physical world or in the virtual world, but is a mix of the real and the virtual [23, 49]. We define ‘mixed reality’ as a hybrid reality: the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time. There are many mixed-reality applications that help students learn through interaction with virtual objects; for example, teachers can instruct students remotely by using 3D projections within a head-mounted display. In e-REAL, digital and physical objects co-exist in the real world, not within a headset, which currently makes e-REAL unique. As a MR environment for hybrid simulation, e-REAL can be a stand-alone solution or networked between multiple places, linked by a special videoconferencing system optimized to process operations with minimal delay (technically, low latency). This connectivity allows not only sharing of virtual objects (medical imagery, infographics, etc.) in real time, but also remote cooperation by co-sketching and co-writing (Fig. 7.6).

Fig. 7.3 Courtesy of the Red Cross Simulation Center “Gusmeroli” and the University of Bologna (Italy): a learner facing an e-REAL wall and manipulating a 2D image of a brain cancer, divided into 8 pieces, during an experiment aimed at determining whether cognitive retention improves when a visualization is first broken into multiple smaller fragments and then recomposed to form the big picture
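The remote co-sketching mentioned above implies some form of event relay between sites. As a hedged illustration only (the actual e-REAL videoconferencing layer is proprietary, and all names here are hypothetical), a minimal publish/subscribe model might look like this, with a shared log so a site joining late can replay the sketch:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Stroke:
    author: str
    points: List[Tuple[float, float]]  # (x, y) in shared wall coordinates

class SketchHub:
    """In-memory stand-in for a low-latency relay between e-REAL sites."""
    def __init__(self) -> None:
        self.log: List[Stroke] = []
        self.subscribers: List[Callable[[Stroke], None]] = []

    def subscribe(self, on_stroke: Callable[[Stroke], None]) -> None:
        self.subscribers.append(on_stroke)
        for stroke in self.log:   # replay history for late joiners
            on_stroke(stroke)

    def publish(self, stroke: Stroke) -> None:
        self.log.append(stroke)
        for on_stroke in self.subscribers:
            on_stroke(stroke)

hub = SketchHub()
site_a: List[Stroke] = []
hub.subscribe(site_a.append)
hub.publish(Stroke("instructor", [(0.1, 0.2), (0.3, 0.4)]))
site_b: List[Stroke] = []
hub.subscribe(site_b.append)  # joins late, still receives the first stroke
```

In production this role would be played by the videoconferencing link itself; keeping per-stroke messages small is what makes the low-latency requirement achievable.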


Fig. 7.4 Courtesy of the Center for Medical Simulation in Boston (MA, USA): overlay of information manually added on the e-REAL wall using a tracking system that allows electronic writing, thanks to proximity sensors tracking the writer’s fingernails

e-REAL is a futuristic solution, designed to be “glocal” [64], “liquid” [7], “networked” [16] and “polycentric” [80], as well as virtually augmented, mixed, digitalized and hyper-realistic [67]. The key words characterizing the main drivers that guided the design of this solution, and that are leading its further development, include:

• Digital mindset: not merely the ability to use technology, but a set of attitudes and behaviors that enable people and organizations to foresee possibilities related to social media, big data, mobility, cloud, artificial intelligence, and robotics.
• Visual thinking, which according to Rudolf Arnheim implies that all thinking, not just thinking related to art, is basically perceptual in nature, and that the dichotomy between seeing and thinking, or perceiving and reasoning, is misleading.
• Computer vision: an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos; from an engineering perspective, it seeks to understand and automate tasks performed by the human visual system.
• Advanced simulation: a highly realistic imitation of a real-world object, process or system.
• Multimedia communication: a system of relaying information or entertainment that combines many different forms of communication, for example video, audio clips, and still photographs.


Fig. 7.5 Courtesy of the Centre de Simulation Médicale CARE at the University of Liège (Belgium): mirroring of a perioperative environment, to be overlaid with e-REAL digital computer-generated information regarding a simulated patient (ultrasound images, ECG traces, outputs from medical exams) in order to enhance the user experience and support a deep, multisource understanding of the situation

• Immersive and interactive learning, which encourages students to learn by doing and allows learners to cross conceptual and theoretical boundaries with the help of simulation or game-based tools. It is one of the most promising methods in the history of learning, immersing students or professionals in an interactive learning environment in order to teach them a particular skill or technique [3, 67].
• Augmented and virtual reality within a hybrid environment, which allows learners to experience abstract concepts in three-dimensional space, transforming passive learning into technology-assisted immersive learning.
• Human and artificial intelligence cooperation, which does not rely on sheer computational power but on intuition, pre-evolved dispositions toward cooperation, and common-sense mechanisms that are very challenging to encode in machines.
• Cognitive psychology and neuroscience: distinct domains that overlap in the area of the neural substrates of mental processes and their behavioral manifestations.


Fig. 7.6 e-REAL Multimedia Design Labs at Fondazione Piazza dei Mestieri in Turin (Italy): Interactive teleconferencing system enhanced with the speech analysis app developed by Centro Studi Logos jointly with the Tiny Bull Studios and the Polytechnic School of Turin (Italy)

• Anthropology and sociology of culture, whose viewpoints are inspired by observing cross-cultural differences in social institutions, cultural beliefs and communication styles.
• Hermeneutics, which refers to the interpretation of a given text, speech, or symbolic expression such as art. It also fosters a multi-layer approach by opening the meta-level concerned with the conditions under which such interpretation is possible. Consequently, hermeneutics fosters learners’ metacognition by activating thinking about thinking and knowing about knowing, which contributes to higher-order thinking skills.
• Narratology: the study of narrative strategies and structures, as well as the ways these affect human perception.
• Design thinking applied to andragogy and pedagogy, which revolves around a deep interest in understanding the learners for whom educational content is designed: for example, questioning and re-framing problems in learner-centric ways, questioning assumptions and implications, and helping to develop empathy with the target users.
• Epistemology: the study of knowledge, justification and the rationality of belief, which addresses such questions as: What makes beliefs justified? What does it mean to say that we know something? And, fundamentally: How do we know that we know?


In our opinion, all these domains must be related through a systemic and interdisciplinary approach: the approach at the core of the research guidelines developed by Centro Studi Logos in Turin (Italy) since 1996.

7.3 e-REAL as a CAVE-Like Environment Enhanced by Augmented Reality and Interaction Tools

e-REAL uses ultra-short-throw projectors and touch-tracking cameras to turn blank walls and empty spaces into immersive and interactive environments. It is designed as an easy, user-centered and cost-effective alternative to the old CAVE environments, which are rigid, difficult to manage, and expensive. A CAVE (cave automatic virtual environment) is an immersive VR and AR environment in which projectors are directed at between three and six walls of a room-sized cube; usually the projected images change as the user walks around and moves his or her head. The name is also a reference to the allegory of the cave in Plato’s Republic, in which a philosopher contemplates perception, reality and illusion [4, 20].

As shown in Fig. 7.7, these systems come in a variety of geometries and sizes, including rear-projection or flat-panel displays and single- or multi-projector hemispherical surfaces, each typically displaying field-sequential stereo imagery. Most are designed to accommodate multiple users, each of whom wears LCD shutter glasses controlled by a timing signal that alternately blocks the left- and right-eye views in synchronization with the display’s refresh rate. Most systems incorporate some method of tracking the position and orientation of the lead user’s head, in order to account for movement and adjust the viewpoint accordingly. In such multi-user scenarios, all other participants experience the simulation in 3D, but passively.

Fig. 7.7 Courtesy of Centro Studi Logos, Turin (Italy): representative CAVE environments

Fig. 7.8 Courtesy of the University of Eastern Piedmont, Simnova Center, Novara (Italy): e-REAL portable pop-up designed as an immersive and interactive setting for the “SimCup” of Italy

There are a number of critical reasons to develop e-REAL as an alternative to the CAVE for immersive simulation in education and training. Allowing users to work without special glasses is an important one. Avoiding joysticks or other devices (usually haptic gloves) for interacting with the visual content is another. Further reasons include a higher degree of realism and the opportunity to have all the users, not just one person at a time, interact with the content. e-REAL is an innovative solution: it is very easy to use and 10–12 times less expensive than a CAVE. Available in both permanent and portable fixtures, it is so simple that two buttons are enough to manage it all, from a control room or remotely via the TeamViewer™ software, without the need for 3D glasses or joysticks to interact with the virtual objects (see Figs. 7.8 and 7.9).

e-REAL offers a unique user experience: a combination of visual communication and direct interaction with the content, by gesture or spoken commands, immersing people in an entirely interactive ecosystem. Figures 7.10, 7.11, 7.12, 7.13, 7.14, 7.15, 7.16, 7.17, 7.18 and 7.19 provide a visual explanation of the system’s main features. Each e-REAL lab comes packed with a starter kit that enables countless activities using simple gesture and spoken commands. A number of apps and contents are available off the shelf, and many others can be quickly tailored. Each e-REAL can be customized with a number of multimedia contents and MR tools:

• Multimedia libraries;
• Interactive tutorials;
• Holographic visualizations;
• Real-time and live holograms;
• Podcasts and apps;


Fig. 7.9 Courtesy of Centro Studi Logos, Turin (Italy), Logos Knowledge Network, Lugano (Switzerland), LKN, Berlin (Germany), and Logosnet, Houston (TX, USA). Representative e-REAL permanent installation: (1) a regular control room; (2) a briefing and debriefing room with an entire common wall transformed into a large electronic and interactive whiteboard, replacing a conventional projection or a standardized electronic whiteboard; (3) the immersive and interactive room for both the simulation and a first rapid onsite debriefing, enriched by the contextual factors displayed on the walls. This setting is also very useful for simulations that can be enhanced by pausing and adding further visualizations and notes

• Task (or skill) trainers, healthcare tools, and wearable devices such as glasses, headsets, watches and gloves.

Summarizing, the main technical features are:

• VR and AR that happen in the real world (MR for hybrid simulation), using 2D, 2.5D and 3D projections on the walls, not within special glasses;
• Visualization that is interactive, immersive and often augmented;
• Speech recognition, which may be part of the experience as well;
• No special glasses, gloves or head-mounted displays required;
• Very easy to use: only two buttons are needed (one to start and stop the server, and another, on a remote controller, to switch the projectors on and off);
• A number of pre-loaded scenarios available;
• Easy import and display of existing content (images, videos);
• Easy creation and editing of new content, with tailored multimedia editors;
• Both permanent and portable fixtures available.

The following link provides a more detailed description of the settings and the available tools: https://www.youtube.com/watch?v=RZn3fdZNp3w&feature=youtu.be (courtesy of the Center for Medical Simulation in Boston, MA, USA, and the Polytechnic School of Milan, Italy).

Fig. 7.10 Courtesy of the Red Cross of Italy in Bologna (Italy). Representative e-REAL setting based on multimedia animated visual storytelling, made interactive by touch sensors tracking fingers or by vocal commands. Ultra-short-throw projectors work on common walls (level 4 or 5 finish), transformed into a large touchable screen by the proximity sensors
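The head tracking that CAVE-like systems use (Sect. 7.3) ultimately feeds an asymmetric, off-axis projection per wall. As a hedged illustration, unrelated to any e-REAL internals, the standard generalized perspective projection computes the near-plane frustum extents from the wall corners and the tracked eye position:

```python
import math

def _sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def _unit(a):
    n = math.sqrt(_dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def wall_frustum(pa, pb, pc, eye, near):
    """Frustum extents (left, right, bottom, top) at the near plane for one
    projection wall: pa, pb, pc are the wall's lower-left, lower-right and
    upper-left corners; eye is the tracked head position."""
    vr = _unit(_sub(pb, pa))           # wall right axis
    vu = _unit(_sub(pc, pa))           # wall up axis
    vn = _unit(_cross(vr, vu))         # wall normal, pointing toward the eye
    d = -_dot(vn, _sub(pa, eye))       # eye-to-wall distance
    left = _dot(vr, _sub(pa, eye)) * near / d
    right = _dot(vr, _sub(pb, eye)) * near / d
    bottom = _dot(vu, _sub(pa, eye)) * near / d
    top = _dot(vu, _sub(pc, eye)) * near / d
    return left, right, bottom, top

# Head centered in front of a 2 m x 2 m wall: symmetric frustum.
centered = wall_frustum((-1, -1, 0), (1, -1, 0), (-1, 1, 0), (0, 0, 1), 0.1)
# Head shifted right: the frustum becomes asymmetric so imagery stays correct.
shifted = wall_frustum((-1, -1, 0), (1, -1, 0), (-1, 1, 0), (0.5, 0, 1), 0.1)
```

Moving the eye to the right widens the left extent and shrinks the right one, which is exactly the correction that keeps projected imagery perspective-correct for the tracked lead user.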

7.4 The Simulation’s Phases Enhanced by e-REAL and the Main Tools Made Available by the System

1. Briefing and debriefing phases

Briefing and debriefing phases are strongly enhanced by e-REAL through the facilitation of cooperative learning and of systems thinking fostered by dynamic visualization: an approach aimed at building a shared understanding of the non-linear behavior of complex systems (e.g. communication within a working team, car crashes, internal feedback loops or flows), based on representations that go beyond traditional static forms such as sketches, animations, or real-time graphics [1, 9, 37, 42, 43, 53, 58]. Systems thinking focuses on the way that a system’s constituent parts interrelate and on how systems work over time and within the context of larger systems, in contrast with traditional analysis, which studies systems by breaking them down into their separate elements.

With e-REAL, briefing and debriefing phases are performed by:

• Representing or summarizing a case with visual storytelling.
• Showing a video during which relevant key words can be written, details highlighted, and related multimedia content added to the screen to enrich the cognitive map.
• Clustering relevant concepts and key words on an electronic whiteboard.
• Moving content from one wall to another.

2. Use of the interactive wall with the smart interactive whiteboard tool for briefing and debriefing phases

The e-REAL touch-walls (or e-Walls) work both as virtualized electronic whiteboards and as interactive scenarios. This is a virtualized model, developed without the limitations of conventional electronic whiteboards, and the system is commonly operated using simple gesture or spoken commands. The system is an interactive surface designed to: (1) enhance briefing and debriefing sessions; (2) dynamically visualize on a large surface; (3) cluster concepts and notes; (4) physically touch and grasp ideas and multiple perspectives; (5) make the intangible tangible; (6) facilitate cooperative learning; and (7) encourage systems thinking.

• A number of writing and annotation functions make it possible to write, draw, highlight, pick colors, erase and delete on any background (e.g., movie, scenario, written text).
• A snapshot function allows users to save a screenshot (in PNG format) into a predefined folder, securing all the annotations. If mailing lists are available, screenshots may be sent directly.
• A multimedia gallery is available to store content (videos, audio, images, PDF files) that can be uploaded by the instructor and projected on the wall simply by tapping on it, making briefings and debriefings easier to run.
• Another tool allows users to move, rotate and scale content. It is also possible to create a group in order to cluster concepts and elements (words, images, etc.), which can be packed into boxes and unpacked, moved, rotated and scaled all together.
• Puzzles can be created and played on the wall as a multi-perspective exercise. Puzzle pieces can be moved and rotated; when a piece fits, it is “magnetically” stitched to the other(s), and when the puzzle is complete it is converted to an Image Widget (so that it can be moved, rotated, etc.).

Fig. 7.11 Courtesy of Logos Knowledge Network, Lugano (Switzerland). e-REAL setting with medical imagery on the side walls and, on the central wall, a 3D beating heart (top-right corner) that can be rotated 360° and overlaid with annotations, plus visualization of medical exams (top-left corner)

Fig. 7.12 Courtesy of the Red Cross of Italy in Bologna (Italy). e-REAL setting with a perioperative environment mirrored on a wall displaying interactive procedural guidelines. A second small surface (right), made from a simple curtain, is used to project visual mnemonics and checklists that can be commanded vocally or by a flick of the hand

Fig. 7.13 Courtesy of the Red Cross of Italy in Bologna (Italy). e-REAL setting designed for crisis resource management, enhanced by visualization of the available therapeutic alternatives, with a tracking system that keeps track of all the decisions taken. By clicking a virtual button, learners may pop up medical imagery to achieve a deeper understanding of the situation they are dealing with

Fig. 7.14 Courtesy of the Red Cross of Italy in Bologna (Italy). The simulation’s closing phase, designed to allow the instructors to provide a rapid first debriefing on both the therapeutic decisions taken by the learners and the verbal communication with the patient. Guidelines are available for rapid search on the side curtain (right)

Fig. 7.15 Courtesy of the Red Cross of Italy in Bologna (Italy). Wall mirroring with personal tablets and smartphones within an e-REAL setting

Fig. 7.16 Courtesy of Logosnet, Houston (TX, USA). In situ simulation setting for residents and interns, enhanced with an e-REAL system displaying interactive checklists and mnemonics, 2D-3D images and videos on two common walls

Fig. 7.17 Courtesy of Logosnet, Houston (TX, USA). Setting for in situ simulation enhanced with an e-REAL system displaying 2D-3D images and videos on a common wall

Fig. 7.18 Overlaying of notes on a brain cancer displayed on the CMS e-REAL wall. Courtesy of the Center for Medical Simulation in Boston (CMS, MA, USA) and, from left to right, of Robert Simon (Principal Consultant at CMS), Roxane Gardner (Senior Director of Clinical Programs and Director of the Visiting Scholars and Fellowship Program at CMS), Sarah Janssens (Director of Clinical Simulation at Mater Education in Brisbane, AUS), David Gaba (Associate Dean for Immersive and Simulation-based Learning and Director of the Center for Immersive and Simulation-based Learning at Stanford University School of Medicine, CA), and Stephanie Barwick (Head of Partnerships, Programs and Innovation at Mater Education in Brisbane, AUS)

Fig. 7.19 Courtesy of the Center for Medical Simulation in Boston (MA, USA). Use of behavioral and cognitive key performance indicators during a debriefing: the e-REAL features allow writing and annotating, highlighting, erasing, moving, clustering, and packing into boxes or unpacking again two or more tags

3. Simulation phase

The simulation phase is strongly enhanced by the e-REAL system through multisensory scenarios embedding virtual and augmented reality elements and tools. Digital content can coexist with tools from the real world, such as a patient simulator and/or a medical trolley, which can be on stage during a briefing, debriefing or simulation phase within a virtualized environment. Learners usually work on and around one or more patient simulators (i.e. mannequins with lifelike features and, usually, responsive physiology), with simulated patients such as actors or avatars, or in a disaster scenario.
They are asked to make critical decisions, complete physical exams, recognize situations requiring rapid intervention, practice technical skills, communicate with the patient and the health care team, and interpret test results. Learners are also trained to manage unforeseen events, either through parallel processing (that is, more than one task at a time) or by performing one task at a time in a sequence, taking into consideration critical contextual factors such as lack of time, scarcity of resources and tools, and previous impacting factors. As when immersed in a videogame, learners are challenged to face cases within multifaceted medical scenarios that present a “more than real” wealth of information. This is augmented reality in a hybrid environment, which contributes to individual cognitive maps by enabling a multilayer view and making the invisible visible: the anatomy under the skin of the patient simulator can be enlarged, turned or rotated to appreciate how structures are interrelated.
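The pack/unpack grouping behaviour of the interactive whiteboard tools described earlier can be modelled as a tiny state machine over widgets. This is an illustrative sketch only, with hypothetical names, not e-REAL's actual code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Widget:
    """A tag, image or note on the e-Wall (hypothetical model)."""
    name: str
    x: float = 0.0
    y: float = 0.0

@dataclass
class Group:
    """A box that packs widgets so they move, rotate and scale as one."""
    members: List[Widget] = field(default_factory=list)

    def pack(self, widget: Widget) -> None:
        self.members.append(widget)

    def move(self, dx: float, dy: float) -> None:
        for w in self.members:   # the whole cluster moves together
            w.x += dx
            w.y += dy

    def unpack(self) -> List[Widget]:
        released, self.members = self.members, []
        return released

box = Group()
box.pack(Widget("airway"))
box.pack(Widget("breathing", x=1.0))
box.move(2.0, 3.0)
released = box.unpack()
```

The design choice is the usual composite pattern: operations applied to the box fan out to every member, so clustered concepts behave as a single object until unpacked.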


4. Virtual patients and other avatars, real or virtual tools and devices

Within the e-REAL simulation setting, medical tools and devices can be real or virtual; when virtual, they are usually high-fidelity models. It is also possible to replace physical simulation mannequins with custom-made 3D virtual patients (avatars). e-REAL enables the reproduction of patients who are obese, pregnant, young, old, vomiting, missing limbs, bleeding, or expressing any number of other physical signs and symptoms (in a number of places, such as the Simnova Center for Medical Simulation of the University of Eastern Piedmont in Novara, Italy).

5. Augmented reality (AR) displays

AR displays can be easily embedded within the e-REAL setting. Using AR, for example, a procedure can be performed partly in the real world and partly in the AR environment, or an entire procedure can be performed via “telemedicine” by an operator wearing special glasses and guided by an expert, who tracks and keeps a record of the information captured by the AR displays. AR allows knowledge sharing and cooperation among persons and teams: learners can cooperate by sharing a virtualized common scenario, displayed on the e-REAL wall, even when they are performing in different physical environments. They can talk to each other and watch their own avatars acting in the same virtualized scenario, thanks to special sensors capturing the body’s dynamics.

6. Holograms

Holograms may be part of the e-REAL setting, utilizing wearable augments such as special glasses (Microsoft HoloLens™, Epson Moverio™, etc.): https://youtu.be/nrzdKzvKbIw (courtesy of the Polytechnic School of Turin, the University of Eastern Piedmont, Simnova Center in Novara, and Centro Studi Logos, Turin, Italy). Human-sized holograms can also be reproduced within the e-REAL setting; those holograms may be pre-recorded or even live, talking and interacting dialogically with the learners.
https://youtu.be/E2awcWvfgNA (courtesy of Logosnet, Houston, TX, USA).

7. Speech analysis as a further option for the debriefing phase

Speech analysis is a powerful training tool that tracks, individually, both the tone of voice and the spoken words of the learners, providing a semantic and pragmatic overview of interpersonal communication. According to the Polytechnic School of Turin and to ISTI-CNR (a branch of the Italian Research Council), the fidelity of a speech recording and transcription is approximately 94%. An operator, such as a simulation engineer, may amend and modify the transcript so that it achieves 100% semantic accuracy [19, 38]. Functions and visual outputs include the following:


• An integral transcript of the dialogue can be visualized.
• Audio clips, automatically divided phrase by phrase, are also available.
• A word counter shows the number of spoken words per minute.
• An internal search engine enables keyword search, highlighting the words in the transcript.
• A word cloud tool visually summarizes the most frequently spoken words.
• A voice analysis tool measures and visualizes the waveform, perceived loudness (decibels) and pitch (hertz).

Some of these features are visible in Fig. 7.6. A video introduction is available via the following URL: https://youtu.be/3-hOdSYOmwg (courtesy of Centro Studi Logos, Turin, Italy).
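Several of these outputs (words per minute, keyword search, word-cloud counts) are simple text metrics. As a sketch of how such measures could be computed, unrelated to the actual e-REAL implementation and with all function names hypothetical:

```python
from collections import Counter
import re

def _words(transcript: str):
    """Lowercased word tokens of a transcript."""
    return re.findall(r"[\w']+", transcript.lower())

def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Spoken-word rate, as the word counter described above might report it."""
    return len(_words(transcript)) / (duration_seconds / 60.0)

def keyword_positions(transcript: str, keyword: str):
    """Word indices of every occurrence of a keyword, for highlighting."""
    return [i for i, w in enumerate(_words(transcript)) if w == keyword.lower()]

def top_words(transcript: str, n: int = 3):
    """Most frequent words: the raw counts behind a word cloud."""
    return Counter(_words(transcript)).most_common(n)

transcript = "check the airway check breathing check circulation"
wpm = words_per_minute(transcript, duration_seconds=10.0)
hits = keyword_positions(transcript, "check")
common = top_words(transcript, n=1)
```

Waveform and pitch analysis would of course require signal processing on the audio itself; the point here is only that the debriefing-side metrics reduce to cheap operations on the corrected transcript.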

7.5 Visual Storytelling and Contextual Intelligence, Cognitive Aids, Apps and Tools to Enhance the Education Process in a Simulation Lab or In Situ

Visual storytelling techniques are part of the simulation scene, representing a realistic context in which learners are proactively involved in analyzing scenarios and events, facing technical issues, and solving problems. The most effective learning occurs when immersed in a context: the realistic experience is lived and perceived as a focal point and a crossroad [33]. Effective visualization is the key to untangling complexity: the visualization of information enables learners to gain insight and understanding quickly and efficiently [24, 78, 79]. Examples of such visual formats include sketches, diagrams, images, objects, interactive visualizations, information visualization applications and imaginary visualizations, such as in stories and as shown in Figs. 7.20 and 7.21.

Visualizations within e-REAL show relationships between topics, activate involvement, generate questions that learners had not thought of before, and facilitate memory retention. Visualizations act as concept maps that help organize and represent knowledge on a subject in an effective way [17, 24, 78, 79]. Half of the human brain is devoted directly or indirectly to vision, and images easily capture our attention [81]. Human beings process images very quickly: average people process visuals 60,000 times faster than text [56]. Humans are confronted with an immense amount of images and visual representations every day: digital screens, advertisements, messages, information charts, maps, signs, video, progress bars, diagrams, illustrations, etc. [1, 18, 26, 27, 31, 34, 59, 66, 78, 79, 85]. The use of symbols and images is extremely effective for warning people, as they communicate faster than words and can be understood by audiences of different ages, cultures and languages [35].
Images are powerful: people tend to remember about 10% of what they hear, about 20% of what they read and about 80% of what they see and do [39].


Fig. 7.20 Courtesy of Centro Studi Logos, Turin, Quadrifor, Rome, and the Polytechnic School of Milan (Italy): e-REAL simulation lab to deliver the training program “Big Data for Beginners” (designed and taught by Fernando Salvetti) aimed at growing a digital mindset and skills related to big data visualization

Fig. 7.21 Courtesy of Centro Studi Logos, Turin, and Simnova, Novara (Italy): e-REAL simulation lab at the Simnova Center of the University of Eastern Piedmont in Novara (Italy) during the SimCup of Italy 2019, open to students of all Italian medical and nursing schools and attended by hundreds of learners cooperating in teams of 4 members each

Contextual factors are key to learning [33]. In e-REAL, learners practice handling realistic situations, rather than learning facts or techniques out of context. Context refers to the circumstances that form the setting for an event, statement, or idea. Context-related factors can be influential and even disruptive: for example, a loud background noise within a virtually recreated operating room in e-REAL impacts negatively on the surgical team’s ability to communicate and may consequently contribute to their committing an error. The most effective learning occurs through being immersed in context, which requires the ability to understand the limits of our knowledge and action, and to adapt that knowledge to an environment different from the one in which it was developed [33, 36].

A context-related experience within an e-REAL setting is similar to being immersed, with our entire bodies, within a videogame. Characteristics of games that facilitate immersion can be grouped into two general categories: those that create a rich mental model of the game environment and those that create consistency between the things in that environment [12, 83, 84]. The richness of the mental model relates to the completeness of multiple channels of sensory information: the more those senses work in alignment, the better. A bird flying overhead is good; hearing it screech is better. The richness also depends on having a cognitively demanding environment and a strong and interesting narrative. Cognitively demanding environments, in which players must focus on what is going on in the game, occupy mental resources. This is good for immersion, because if brain power is allocated to understanding or navigating the world, it is not free to notice all of the problems or shortcomings that would otherwise remind players that they are playing a game. Finally, good stories, with interesting narratives that are credible because they are as intrinsically congruent as possible [6, 13, 22, 32, 45, 54], attract attention to the game and make the world seem more believable; they also tie up those mental resources [84]. Turning to game traits related to consistency, believable scenarios and behaviors in the game world mean that virtual characters, objects, and other creatures behave in the way in which learners expect [12, 84].
Usually game developers strive for congruence among all the elements. Learners are challenged both cognitively and behaviorally in a fully immersive, multitasking learning environment, within interactive scenarios that usually also present a wealth of information. The many levels of the situation are made available simultaneously by overlaying multisource information—words, numbers, images, etc.—within an environment designed with AR techniques [4] (Fig. 7.22). e-REAL submerges learners in an immersive reality where the challenge at hand is created by sophisticated, interactive computer animation. Importantly, the system includes live, real-time interaction with peers, instructors, tutors, facilitators and mentors; it thus adds a very important social component that enhances learning outputs, skills, and cognitive and metacognitive processes. The process of learning by doing within an immersive setting, based on knowledge visualization using interactive surfaces, leaves the learners with a memorable experience [70] (Fig. 7.23). From an educational perspective, learners are not assumed to be passive recipients and repeaters of information, but individuals who take responsibility for their own learning. The trainer functions not as the sole source of wisdom and knowledge, but as a coach or mentor whose task is to help learners acquire the desired knowledge and skills.

7 Enhanced Reality for Healthcare Simulation


Fig. 7.22 Courtesy of Logos Knowledge Network, Lugano, Prof. Martin Eppler and the Institute for Media and Communication Management at the University of St. Gallen (Switzerland): overlay of multisource information within the e-REAL classroom at the Red Cross Training Center “Gusmeroli” in Bologna (Italy)

Fig. 7.23 Courtesy of the Red Cross Training Center “Gusmeroli” and Accurate Solutions Srl in Bologna, Prof. Michele La Rosa and CIDOSPEL from the University of Bologna, Centro Studi Logos, Turin (Italy). Overlay of digital information from different sources within the e-REAL classroom at the Red Cross Training Center

A significant trend in education in the nineteenth and twentieth centuries was standardization. In contrast, relevant trends in the twenty-first century are visualization, interaction, customization, gamification and flipped learning [63]. In a regular flipped learning process, students are exposed to video lectures, collaborate in online discussions, or carry out research on their own time, while engaging with concepts in the classroom under the guidance of a mentor. Critics argue that the flipped learning model has some drawbacks for both learners and trainers [63]. Many of these criticisms focus on the fact that flipped learning relies mainly on video lectures, which, much like didactic face-to-face lectures within a traditional classroom, may foster a passive and uncritical attitude towards learning without encouraging dialogue and questioning [8, 48, 72, 74, 76, 77]. The e-REAL setting is a further evolution of the flipped classroom, based on a constructivist approach. Constructivism is not a specific pedagogy but a psychological paradigm which suggests that humans construct knowledge and meaning from their experiences. From our constructivist point of view, knowledge is mainly the product of personal and interpersonal exchange [10, 41, 52, 55, 60, 63, 64]. Knowledge is constructed within the context of a person's actions, so it is “situated” [52]: it develops in dialogic and interpersonal terms, through forms of collaboration and social negotiation. Significant knowledge—and know-how—results from the link between abstraction and concrete behavior. Knowledge and action can be considered as one: facts, information, descriptions, skills, know-how and competence, acquired through experience, education and training [10, 60]. Knowledge is a multifaceted asset: implicit, explicit, informal, systematic, practical, theoretical, theory-laden, partial, situated, scientific, based on experience and experiments, personal, shared, repeatable, adaptable, compliant with socio-professional and epistemic principles, observable, metaphorical, linguistically mediated [52]. Knowledge is a fluid notion and a dynamic process, involving complex cognitive and emotional elements in both its acquisition and use: perception, communication, association and reasoning. In the end, knowledge derives from minds at work.
Knowledge is socially constructed, so learning is a process of social action and engagement involving ways of thinking, doing and communicating [67, 68, 82]. In tests performed by the applied research team at the Environmental Design and Multisensory Experience Lab of the Polytechnic School of Milan (Italy), learners using e-REAL showed a 43% performance gain over a traditional approach based on didactic lessons (in terms of increased speed and ease of learning, as reported by students), and 88% of learners also reported increased engagement and enjoyment. These results have been accepted for presentation at ICELW 2020 (Columbia University, New York) and are briefly discussed in a research paper [15]. Moreover, given the decreased cost of the e-REAL immersive room compared to CAVE-like environments, e-REAL's added value is even more evident. The e-REAL environment, experienced in a natural way without special glasses, is expected to reduce the load on the brain's working memory, which is overloaded both by traditional lectures [21, 47, 73] and by the conversion of 2D images into 3D representations that is usually required during traditional teaching [66]. Tests and experiments are in progress at the Polytechnic School of Milan and at the Center for Medical Simulation in Boston to explore educational outputs related to cognitive aids, displayed as VR objects, usually on a wall, sometimes within AR glasses, or by indoor micro-projection mapping directly on the mannequins or on the other tools used as skill trainers [15].


Throughout the simulation process (briefing, performance, debriefing), whether in a simulation lab or in situ, learners can interact with the content using spoken commands or natural gestures, without the constraint of wearing glasses, gloves or headsets, or of handling joysticks (when they wish, they can use active pens instead of their fingers). No screens are needed: e-REAL sensors turn any surface into a touch screen.

7.6 The Epistemological Pillars Supporting e-REAL

The e-REAL learning approach is designed to have the learner working on tasks that simulate an aspect of expert reasoning and problem-solving, while receiving timely and specific feedback from fellow students and the trainer. These elements of deliberate practice [25] and feedback are general requirements for developing expertise at all levels and in all disciplines, and are absent from lectures [44, 62, 71]. During an e-REAL session, both clinical and behavioral aspects of performance are addressed. A number of technical and non-technical (behavioral, cognitive and meta-cognitive) skills and competencies are challenged: on one side, technical knowledge and know-how; on the other, behavioral, cognitive and metacognitive skills, leadership and followership, team-work facilitation, team spirit and effectiveness, knowledge circulation, effective communication, relationships and power distance, fixation-error management and metacognitive flexibility. Feedback is provided throughout sessions with a focus on key performance indicators. The e-REAL system allows trainers to give feedback on key aspects of performance using different tracking options. The system also allows multi-source feedback during the simulation-based session, combining self-assessment, feedback from the other participants and feedback from the trainer. This activity improves the learners' awareness of their own competencies. Summarizing, e-REAL is a set of innovative solutions aimed at enhancing learning with a systemic, multilayer and multi-perspective approach. Tools such as speech analysis, visual communication and conceptual clustering are part of the solution. Integrating—and enhancing—technical skills with those related to the behavioral, cognitive and metacognitive domains is a major aim.
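As a purely illustrative aside, multi-source feedback of this kind can be sketched as a simple aggregation across raters. The 1–5 scale, the indicator names and the aggregate_feedback helper below are hypothetical, not a description of the e-REAL system:

```python
def aggregate_feedback(scores_by_source):
    """Combine self-, peer- and trainer-assessments into per-indicator means.
    scores_by_source maps a rater to {indicator: score on a 1-5 scale}."""
    indicators = {}
    for ratings in scores_by_source.values():
        for indicator, score in ratings.items():
            indicators.setdefault(indicator, []).append(score)
    # Mean per key performance indicator, rounded for readability.
    return {ind: round(sum(vals) / len(vals), 2)
            for ind, vals in indicators.items()}

# Toy session: three sources rating two hypothetical indicators.
session = {
    "self":    {"communication": 3, "role clarity": 4},
    "peers":   {"communication": 4, "role clarity": 4},
    "trainer": {"communication": 2, "role clarity": 5},
}
summary = aggregate_feedback(session)
```

A gap between the self-assessed score and the peer or trainer scores is exactly the kind of signal said to improve learners' awareness of their own competencies.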
Innovations based on visual thinking and immersive learning (such as e-REAL, other augmented reality tools, advances in tablet technology and mobile applications, wearable devices and multimedia libraries) are successful because they upgrade people's knowledge, skills and abilities. The main goal within e-REAL is to allow a multi-perspective mindset during a simulation session. Visualizing the “invisible” by overlaying information that focuses on both technical and behavioral aspects of a performance, and that merges the virtual and the real, creates a multilayer and therefore augmented, multi-perspective, systemic simulation that contributes to a better understanding. Nothing is revolutionary within a simple VR headset, but if VR content and scenarios are “actualized” [40]—or enhanced—within a real simulation setting, the merging of the real and virtual world adds value to the learning process. In such a way, e-REAL becomes more than real!

7.7 Case-Study: Teamwork and Crisis Resource Management for Labor and Delivery Clinicians

1. The program and a key cognitive aid: Name-Claim-Aim

Teamwork and Crisis Resource Management for Labor and Delivery Clinicians (Introductory and Advanced Levels) is experiential coursework focused on learning and improving teamwork and event management during simulated obstetrical cases. It is an interprofessional program based on advanced simulation, delivered many times per year in Boston (MA, USA) at the Center for Medical Simulation (CMS) in a realistic clinical setting [51, 69]. Each case is immediately followed by a facilitated debriefing led by experienced instructors and faculty members of CMS. Participants include obstetricians, obstetrical nurses, midwives and obstetrical anesthesiologists. e-REAL is integrated into this program and used to deepen learning and to enhance cognitive retention of the main mnemonic used during the program [14]. Effective team management during a crisis is a core element of expert practice. Medical simulation can contribute enormously to enhancing teamwork during a crisis [28], fostering situational awareness and contextual intelligence [36] (the ability to apply knowledge to real-world scenarios and situations), as well as cognitive retention of the essential steps and procedures to be performed during an ongoing crisis. A crisis-management organizational approach based on a mnemonic called Name-Claim-Aim is used to facilitate crisis management and decision-making: knowledge and skills are essential components of the decisions made and the actions performed during crises, but they are not sufficient to manage the entire situation, which includes the environment, the equipment and the patient care team.
After several decades' worth of dedicated simulation education for anesthesiologists and labor and delivery teams, teamwork experts at the CMS have found that these teams still struggle to routinely organize themselves in crises during simulation courses, let alone in the clinical environment [50, 51]. Stories from course participants of all professions indicate that it is genuinely challenging to focus on the clinical picture while also applying organizational principles to the team, and more often than not the organization within the team is under-prioritized. Part of this may be due to the intense cognitive load experienced by those who are managing a stressful clinical crisis. It can be difficult to also remember the eleven crisis resource management (CRM) principles introduced by Gaba et al. [28] and apply them routinely while actively managing a resuscitation (Fig. 7.24). Appreciating the impact of stress on high-level thinking [2], faculty at CMS collapsed these 11 key points into 5 key CRM concepts: role clarity, effective communication, effective use of personnel, effective management of resources and global assessment (Fig. 7.25).

Fig. 7.24 Crisis resource management (CRM) key points

The role of “Event Manager,” rather than “Team Leader,” is expressly promoted at CMS to facilitate distributed leadership in crises. This distinction has proven effective in teams of expert practitioners because it deliberately seeks to flatten hierarchies that may inhibit speaking-up behavior from team members, and such speaking up may successfully counteract failures of perception [57]. The Event Manager coordinates the communication and the team's efforts, overseeing the organization and application of CRM principles, in addition to actively soliciting input and decision-making regarding medical care, if necessary. Moreover, the Event Manager acts to facilitate role designation and to orchestrate and coordinate team function. Based on these challenges, the mnemonic “Name-Claim-Aim” was developed at CMS to incorporate 10 of the 11 CRM principles in an easy-to-remember and easily applied framework (Fig. 7.26). Cognitive aids were developed to help facilitate learning of this mnemonic, and an “Event Manager Checklist” was created to facilitate effective role designation. Participants are given this cognitive aid, designed as an ID-badge-sized card, for easy access during their simulation course. In addition, the “Name-Claim-Aim” and “Event Manager Checklist” have been adopted by the Massachusetts General Hospital (MGH) (Boston, MA, USA) for inclusion in the latest version of their Emergency Manuals (Figs. 7.27 and 7.28).


Fig. 7.25 Courtesy of the Center for Medical Simulation (CMS), Boston, MA. Five key crisis resource management (CRM) concepts by the CMS ©

Fig. 7.26 Courtesy of the Center for Medical Simulation (CMS), Boston, MA. Application of Name-Claim-Aim ©

2. Interactive videos and rapid debriefing

Rapid Cycle Deliberate Practice (RCDP) is a novel simulation-based education model that is currently attracting interest and being implemented, explored and researched. In RCDP, learners rapidly cycle between deliberate practice and directed feedback within the simulation scenario until mastery is achieved [75]. Common RCDP implementation strategies include splitting simulation cases into segments, micro-debriefing in the form of “pause, debrief, rewind and try again,” and providing progressively more challenging scenarios.

Fig. 7.27 Courtesy of the Center for Medical Simulation (CMS) (Boston, MA). Name-Claim-Aim mnemonic aid ©

Fig. 7.28 Courtesy of the Massachusetts General Hospital (MGH) (Boston, MA): Name-Claim-Aim in their Emergency Manuals ©

During the Labor and Delivery program, clinicians are shown short dynamic videos: they are challenged to recognize a situation requiring rapid intervention, communication, knowledge sharing, decision-making and management of unforeseen events, while taking into consideration critical contextual factors such as lack of time, scarcity of resources and tools, and a multitude of additional impactful factors. e-REAL enables learners to interact with multimedia scenarios recreating very different situations [69]. Learners are asked to comply with the Name-Claim-Aim mnemonic to manage the crisis by coordinating the team roles and efforts. The interactive videos feature unexpected clinical or non-clinical emergent scenarios, including extreme, dangerous environmental threats (Figs. 7.29, 7.30 and 7.31).
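As a purely illustrative aside, the RCDP cycle described above (segmented cases, micro-debriefing in the form of pause, debrief, rewind and try again, repeated until mastery) can be sketched as a simple loop. The Segment type and the mastered/micro_debrief callbacks are hypothetical placeholders, not part of any e-REAL or CMS software:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Segment:
    """One slice of a simulation case (illustrative only)."""
    name: str
    attempts: int = 0
    mastered: bool = False

def run_rcdp(segments: List[Segment],
             mastered: Callable[[Segment], bool],
             micro_debrief: Callable[[Segment], None],
             max_cycles: int = 5) -> List[str]:
    """Cycle each segment through 'pause, debrief, rewind and try again'
    until mastery is reached or a cycle cap is hit."""
    log = []
    for seg in segments:
        for _ in range(max_cycles):
            seg.attempts += 1
            if mastered(seg):              # directed assessment of the attempt
                seg.mastered = True
                log.append(f"{seg.name}: mastered after {seg.attempts} attempt(s)")
                break
            micro_debrief(seg)             # pause, debrief, rewind
    return log

# Toy run: each segment is mastered on the second attempt.
segs = [Segment("Recognize the crisis"), Segment("Apply the mnemonic")]
log = run_rcdp(segs, mastered=lambda s: s.attempts >= 2,
               micro_debrief=lambda s: None)
```

The cycle cap mirrors the practical limit on how many times a segment can be rerun within a session; progressively harder scenarios would simply be later entries in the segment list.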

Fig. 7.29 Courtesy of the Center for Medical Simulation (CMS) (Boston, MA), the Polytechnic School of Milan (Italy) and Logosnet (Houston, TX): Interactive e-REAL wall with a range of tailored multimedia content ©

Fig. 7.30 Courtesy of the Center for Medical Simulation (CMS) (Boston, MA), the Polytechnic School of Milan (Italy), and Logosnet (Houston, TX): Alpine environment with photorealistic avatars involved in a sports trauma scenario ©


Fig. 7.31 Courtesy of the Center for Medical Simulation (CMS) (Boston, MA), the Polytechnic School of Milan (Italy), and Logosnet (Houston, TX): Fire accident scenario in a final stage of execution, already overlaid with the Name-Claim-Aim mnemonic ©

3. Multilayer vision for an enhanced use of neural processes: key questions

The e-REAL system enables a multilayer vision: the many levels of the situation are made available simultaneously by overlaying multisource information such as words, numbers and images. Visualizations show relationships between topics, activate involvement, generate questions that learners had not thought of before, and facilitate memory retention. Visualizations function as concept maps that help organize and represent knowledge on a subject in an effective way. By visualizing relations between topics, contextual factors, cognitive maps and dynamic cognitive aids [27], e-REAL allows more effective learning and storing of information in memory, grounded in experience and practice. At the same time, e-REAL helps instructors immediately identify the trainees' errors and difficulties, facilitating an effective debriefing (Figs. 7.32, 7.33 and 7.34).

7.8 Conclusion

As Pierre Lévy used to say, reality in the digital age is becoming more and more virtual [40]. In healthcare simulation, the dematerialization of the learning environment is enabled by new technologies that offer options to improve the usability of traditional e-learning methods. Combining the latest trends in digitalization and virtualization, neuroscience, artificial intelligence, and advanced simulation allows us to establish a new paradigm for education and training. So far, the ongoing exploratory projects within the e-REAL setup at the Center for Medical Simulation in Boston are:

1. The further use of e-REAL visualizations in Labor and Delivery programs and in Anesthesia programs.
2. The design of distance-based simulations addressing COVID-19-related situations: logistics, team safety, relationships with patients and families.
3. The introduction of online learning modules in which different types of virtual objects co-exist: artificial but realistic avatars and real actors performing as standardized patients, family members or colleagues; photorealistic 3D tools; indoor or outdoor scenarios.
4. The development of self-learning solutions to improve results related to critical conversations, debriefing sessions, video-interviews and video-conferences.
5. The use of AR head-mounted displays to provide guidance during remote on-site simulations.
6. The visualization of checklists and mnemonics (virtualized and displayed on screens or walls) to foster team performance.

Fig. 7.32 Courtesy of the Center for Medical Simulation (CMS) (Boston, MA), Logosnet (Houston, TX) and Demian Szyld, Senior Director, Institute for Medical Simulation and Faculty Development Program at CMS: Car accident overlaid by a decision tree designed to foster clinical observation and inquiry, verbal communication skills and active listening


Fig. 7.33 Courtesy of the George Washington University School of Nursing (Ashburn, VA), and Logosnet (Houston, TX): Outdoor scenario designed to allow learners to visually detect difficulties and risks related to an emergency situation

Fig. 7.34 Courtesy of the George Washington University School of Nursing (Ashburn, VA), and Logosnet (Houston, TX): A detail from an outdoor scenario designed to allow learners to visually detect difficulties and risks related to an emergency situation


Acknowledgements The authors wish to thank Ms. Starly Santos, Research Analyst at Logosnet (Switzerland and USA) and student at Columbia University in New York (NY, USA): her cooperation was of the greatest value. A great thank-you also to Prof. Bill Kapralos, Ontario Tech University (Oshawa, CA), and to Dr. Amy Nakajima, University of Ottawa (CA), who read the first release of this chapter and made a great number of valuable corrections and suggestions.

References

1. Arnheim, R.: Visual Thinking. University of California Press, Berkeley and Los Angeles, CA (1969)
2. Arnsten, A.F.: Catecholamine modulation of prefrontal cortical cognitive function. Trends Cogn. Sci. 2(11), 436–447 (1998)
3. Auer, M., Guralnick, D., Uhomoibhi, J. (eds.): Interactive collaborative learning. In: Proceedings of the 19th ICL Conference, vol. 1. Springer, Cham, CH (2017)
4. Aukstakalnis, S.: Practical Augmented Reality. A Guide to the Technologies, Applications, and Human Factors for AR and VR. Addison-Wesley, Boston (2017)
5. Bailenson, J.N., Blascovich, J., Beall, A.C., Noveck, B.: Courtroom applications of virtual environments, immersive virtual environments, and collaborative virtual environments. Law Policy 28(2), 249–270 (2006)
6. Batini, F., Fontana, A.: Storytelling kit. 99 esercizi per il pronto intervento narrative. Rizzoli, Milano (2010)
7. Bauman, Z.: Liquid Modernity. Polity Press, Cambridge, UK (2008)
8. Bergmann, J., Sams, A.: Flip Your Classroom. Reach Every Student in Every Class Every Day (2012)
9. Bergstrom, B.: Essentials of Visual Communication. King Publishing, London, UK (2008)
10. Bertagni, B., La Rosa, M., Salvetti, F. (eds.): Learn How to Learn! Knowledge Society, Education and Training. Franco Angeli, Milan (2010)
11. Blascovich, J., Loomis, J., Beall, A., Swinth, K., Hoyt, C., Bailenson, J.N.: Immersive virtual environment technology as a methodological tool for social psychology. Psychol. Inq. 13, 103–124 (2002)
12. Blazer, L.: Animated Storytelling. Simple Steps for Creating Animation and Motion Graphics. Pearson, London, UK (2016)
13. Bremond, C.: Logique du récit. Éditions du Seuil, Paris (1973)
14. Buttimer, M.: Name/Claim/Aim Around the World. https://harvardmedsim.org/blog/nameclaim-aim-around-the-world/ (2020)
15. Calabi, D., Bisson, M., Venica, C.: Design and medical training experimental hypotheses for training in immersive environments. In: 3rd International Conference on Environmental Design, pp. 527–532. Polytechnic of Milan, Italy (2019)
16. Castells, M.: The Rise of the Networked Society. Wiley-Blackwell, Hoboken, NJ (2009)
17. Ciuccarelli, P., Valsecchi, R.: Ethnographic approach to design knowledge. Dialogue and participation as discovery tools within complex knowledge contexts. In: IASDR 2009—Rigor and Relevance in Design (2009)
18. Collins, S.: Neuroscience for Learning and Development. How to Apply Neuroscience & Psychology for Improved Learning & Training. Kogan Page, London (2015)
19. Coro, G.: Valutazione del software e-REAL Speech Analysis. ISTI-CNR, Pisa (2019)
20. Cruz-Neira, C., Sandin, D.J., De Fanti, T.A., Kenyon, R.V., Hart, J.C.: The CAVE: audio visual experience automatic virtual environment. Commun. ACM 35, 64–72 (1992)
21. De Leeuw, K.E., Mayer, R.E.: A comparison of three measures of cognitive load: evidence for separable measures of intrinsic, extraneous, and germane load. J. Educ. Psychol. 100(1), 223–234 (2008)


22. De Rossi, M., Petrucco, C.: Le narrazioni digitali per l'educazione e la formazione. Carocci, Roma (2013)
23. De Souza e Silva, A., Sutko, D.M.: Digital Cityscapes: Merging Digital and Urban Playspaces. Peter Lang Publishing, Inc., New York, NY (2009)
24. Eppler, M., Burkhard, R.: Knowledge Visualization. Towards a New Discipline and Its Field of Application. Research Paper 07-02. University of the Italian Switzerland, Lugano (2004)
25. Ericsson, A., Krampe, R., Tesch-Römer, C.: The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 100(3), 363–406 (1993)
26. Fields, H.L., Hjelmstad, G.O., Margolis, E.B., Nicola, S.M.: Ventral tegmental area neurons in learned appetitive behavior and positive reinforcement. Annu. Rev. Neurosci. 30, 289–316 (2007)
27. Friedlander, M.J., Andrews, L., Armstrong, E.G., Aschenbrenner, C., Kass, J.S., Ogden, P., Schwartzstein, G., Viggiano, T.R.: What can medical education learn from the neurobiology of learning? Acad. Med. 86(04), 415–420 (2011)
28. Gaba, D.M., Fish, K.J., Howard, S.K.: Crisis Management in Anesthesiology. Churchill Livingstone, Philadelphia, PA (1994); Walker, J.D., Spencer, P.J., Walzer, T.B., Cooper, J.B.: Simulation in cardiac surgery. In: Cohn, L.H., Adams, D.H. (eds.) Cardiac Surgery in the Adult. McGraw Hill, Columbus, OH (2018)
29. Gardner, R.: Medical Simulation Week 2018. Center for Medical Simulation. https://e-real.net/wp-content/uploads/videos/[email protected] (2018) (ver. 15.04.2019)
30. Gardner, R., Salvetti, F.: Improving Teamwork and Crisis Resource Management for Labor and Delivery Clinicians: Educational Strategies Based on Dynamic Visualization to Enhance Situational Awareness, Contextual Intelligence and Cognitive Retention. Research Abstract. IMSH 2019, San Antonio, TX (2019)
31. Gazzaniga, M.S. (ed.): The Cognitive Neurosciences. MIT Press, Boston, MA (2009)
32. Genette, G.: Figures III. Éditions du Seuil, Paris (1972)
33. Guralnick, D.: Re-envisioning online learning. In: Salvetti, F., Bertagni, B. (eds.) Learning 4.0. Advanced Simulation, Immersive Experiences and Artificial Intelligence, Flipped Classrooms, Mentoring and Coaching. Franco Angeli, Milan (2018)
34. Kandel, E.R., Schwartz, J.H., Jessell, T.M., Siegelbaum, S.A., Hudspeth, A.J.: Principles of Neural Science. McGraw Hill, New York, NY (2013)
35. Kernbach, S., Eppler, M., Bresciani, S.: The use of visualization in the communication of business strategy: an experimental evaluation. Int. J. Bus. Commun. 1–24 (2014)
36. Khanna, T.: Contextual intelligence. Harv. Bus. Rev. 9 (2014)
37. Knight, C., Glaser, J.: Diagrams. Innovative Solutions for Graphic Designers. Tables. Graphs. Charts. Forms. Maps. Signs. Instructions. RotoVision, Mies (2009)
38. Lamberti, F., Pratticò, G.: E-REAL Speech Analysis. User Manual V1.0.4. Polytechnic School of Turin, Turin (2018)
39. Lester, P.M.: Visual Communication: Images with Messages. Thomson Wadsworth, Belmont, CA (2006)
40. Lévy, P.: Qu'est-ce que le virtuel? La Découverte, Paris (1998)
41. Licci, G.: Immagini di conoscenza giuridica. Cedam, Padova (2011)
42. Lira, M., Gardner, S.M.: Leveraging multiple analytic frameworks to assess the stability of students' knowledge in physiology. CBE Life Sci. Educ. 19 (2020)
43. Lowe, R., Ploetzner, R. (eds.): Learning from Dynamic Visualization. Innovations in Research and Application. Springer, Berlin (2017)
44. Lyons, R., Lazzara, E., Benishek, L., Zajac, S., Gregory, M., Sonesh, S., Salas, E.: Enhancing the effectiveness of team debriefings in medical simulation: more best practices. Jt. Comm. J. Qual. Patient Saf. 41(3), 115–123 (2015)
45. Marchese, A.: L'officina del racconto. Mondadori, Milano (1983)
46. Markowitz, M., Bailenson, J.: Virtual Reality and Communication. Oxford Bibliographies. https://vhil.stanford.edu/mm/2019/02/markowitz-oxford-vr-communication.pdf (2019) (ver. 15.04.2019)


47. Mayer, R.E., Moreno, R.: Nine ways to reduce cognitive load in multimedia learning. Educ. Psychol. 38(1), 43–52 (2003)
48. Mazur, E.: Peer Instruction: A User's Manual. Prentice Hall Series in Educational Innovation, Upper Saddle River, NJ (1997)
49. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. E77-D(12), 1321–1329 (1994)
50. Minehart, R., Pian-Smith, M., Walzer, T., Gardner, R., Rudolph, J., Simon, R., Raemer, D.: Speaking across the drapes: communication strategies of anesthesiologists and obstetricians during a simulated maternal crisis. Simul. Healthc. 7, 166–170 (2012)
51. Minehart, R., Rudolph, J., Nadelberg, R., Clinton, E., Gardner, R.: Name/claim/aim for obstetric crises: a new paradigm in crisis resource management. Poster communication, SOAP 51st Annual Meeting, Phoenix, AZ (2019)
52. Morin, E.: La connaissance de la connaissance. Le Seuil, Paris (1986)
53. Murray, S.: Interactive Data Visualization for the Web. O'Reilly Media, Sebastopol, CA (2013)
54. Parisi Presicce, P.: Impalcature. Teorie e pratiche della narratività. Mimesis, Milano (2017)
55. Popper, K.R.: Objective Knowledge: An Evolutionary Approach. Oxford University Press, Oxford (1972)
56. Potter, M.C., Wyble, B., Hagmann, C.E., McCourt, E.S.: Detecting meaning in RSVP at 13 ms per picture. Atten. Percept. Psychophys. 76, 270–279 (2014)
57. Raemer, D.B., Kolbe, M., Minehart, R.D., Rudolph, J.W., Pian-Smith, M.C.: Improving anesthesiologists' ability to speak up in the operating room: a randomized controlled experiment of a simulation-based intervention and a qualitative analysis of hurdles and enablers. Acad. Med. 91(4), 530–539 (2016)
58. Ridgway, J., Nicholson, J., Campos, P., Teixeira, S.: Dynamic Visualisation Tools: A Review. ProCivicStat Project. https://IASE-web.org/ISLP/PCS (2018)
59. Rizzolatti, G., Sinigaglia, C.: Mirrors in the Brain: How Our Minds Share Actions and Emotions. Oxford University Press, Oxford and New York, NY (2008)
60. Robilant, E.: Conoscenza: forme, prospettive e valutazioni. La traduzione della conoscenza nell'operatività. Lessons at the University of Turin 1990–1991. Turin: manuscript (1991)
61. Rosenberg, L.B.: The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator Performance in Remote Environments. Technical Report AL-TR-0089. USAF Armstrong Laboratory, Wright-Patterson AFB, OH (1992)
62. Rudolph, J., Simon, R., Raemer, D., Eppich, W.: Debriefing as a formative assessment: closing performance gaps in medical education. Acad. Emerg. Med. 15, 1010–1016 (2008)
63. Salvetti, F.: Rethinking learning and people development in the 21st century: the enhanced reality lab—e-REAL—as a cornerstone in between employability and self-empowerment. In: Salvetti, F., La Rosa, M., Bertagni, B. (eds.) Employability: Knowledge, Skills and Abilities for the Glocal World. Franco Angeli, Milan (2015)
64. Salvetti, F., Bertagni, B.: Anthropology and epistemology for “glocal” managers: understanding the worlds in which we live and work. In: Bertagni, B., La Rosa, M., Salvetti, F. (eds.) “Glocal” Working. Living and Working Across the World with Cultural Intelligence. Franco Angeli, Milan (2010)
65. Salvetti, F., Bertagni, B.: e-REAL: enhanced reality lab. Int. J. Adv. Corp. Learn. 7(3), 41–49 (2014)
66. Salvetti, F., Bertagni, B.: Interactive tutorials and live holograms in continuing medical education: case studies from the e-REAL experience. In: Proceedings of the ICELW Conference, Columbia University, New York, NY, pp. 1–8 (2016)
67. Salvetti, F., Bertagni, B. (eds.): Learning 4.0. Advanced Simulation, Immersive Experiences and Artificial Intelligence, Flipped Classrooms, Mentoring and Coaching. Franco Angeli, Milan (2018a)
68. Salvetti, F., Bertagni, B.: Reimagining STEM education and training with e-REAL: 3D and holographic visualization, immersive and interactive learning for an effective flipped classroom. In: Salvetti, F., Bertagni, B. (eds.) Learning 4.0. Advanced Simulation, Immersive Experiences and Artificial Intelligence, Flipped Classrooms, Mentoring and Coaching. Franco Angeli, Milan (2018b)


69. Salvetti, F., Gardner, R., Minehart, R., Bertagni, B.: Teamwork and Crisis Resource Management for Labor and Delivery Clinicians: Interactive Visualization to Enhance Teamwork, Situational Awareness, Contextual Intelligence and Cognitive Retention in Medical Simulation. Research Paper. ICELW 2019, Columbia University, New York, NY (2019a)
70. Salvetti, F., Bertagni, B.: Virtual worlds and augmented reality: the enhanced reality lab as a best practice for advanced simulation and immersive learning. Form@re 19(1) (2019b)
71. Shapiro, M., Gardner, R., Godwin, S., Jay, G., Lindquist, D., Salisbury, M., Salas, E.: Defining team performance for simulation-based training: methodology, metrics, and opportunities for emergency medicine. Acad. Emerg. Med. 15, 1088–1097 (2008)
72. Strayer, J.F.: The effects of the classroom flip on the learning environment: a comparison of learning activity in a traditional classroom and a flip classroom that used an intelligent tutoring system. Dissertation Abstracts International, Section A, 68 (2008)
73. Sweller, J., Ayres, P., Kalyuga, S.: Cognitive Load Theory. Springer, New York, NY (2017)
74. Szparagowski, R.: The Effectiveness of the Flipped Classroom. Honors Projects 127. https://scholarworks.bgsu.edu/honorsprojects/127 (2014)
75. Taras, J., Everett, T.: Rapid cycle deliberate practice in medical education—a systematic review. Cureus 9(4), 1180 (2017)
76. Toto, R., Nguyen, H.: Flipping the work design in an industrial engineering course. Paper presented at the ASEE/IEEE Frontiers in Education Conference, San Antonio, TX (2009)
77. Tucker, B.: The flipped classroom. Educ. Next 12(1) (2012)
78. Tufte, E.: Visual Explanations: Images and Quantities, Evidence and Narrative. Graphics Press, Cheshire, CT (1997)
79. Tufte, E.: The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT (2001)
80. Vasconcelos, A. (ed.): Global Trends 2030. Citizens in an Interconnected and Polycentric World. European Institute for Security Studies, Brussels (2011)
81. Vogel, D., Dickson, G., Lehman, J.: Persuasion and the Role of Visual Presentation Support. Research Paper. Management Information Systems Research Center, University of Minnesota School of Management, Minneapolis, MN (1986)
82. Wieman, C.: STEM education: active learning or traditional lecturing? In: Salvetti, F., Bertagni, B. (eds.) Learning 4.0. Advanced Simulation, Immersive Experiences and Artificial Intelligence, Flipped Classrooms, Mentoring and Coaching. Franco Angeli, Milan (2018)
83. Wirth, W., Hartmann, T., Bocking, S., Vorderer, P., Klimmt, C., Holger, S., Saari, T., Laarni, J., Ravaja, N., Gouveia, F., Biocca, F., Sacau, A., Jancke, L., Baumgartner, T., Jancke, P.: A process model for the formation of spatial presence experiences. Media Psychol. 9, 493–525 (2007)
84. Wissmath, B., Weibel, D., Groner, R.: Dubbing or subtitling? Effects on spatial presence, transportation, flow, and enjoyment. J. Media Psychol. 21(3), 114–125 (2009)
85. Yeo, J., Gilbert, J.K.: The role of representations in students' explanations of four phenomena in physics: dynamics, thermal physics, electromagnetic induction and superposition. In: Treagust, D.F., Reinders, D., Fischer, H.E. (eds.) Multiple Representations in Physics Education. Springer, Cham (2017)

Fernando Salvetti (J.D., P.P.E., M.Phil., Ph.D.—[email protected]), Founder of Centro Studi Logos in Turin and Logosnet in Lugano, Berlin and Houston, is an epistemologist, an anthropologist and a lawyer who co-designed e-REAL, the enhanced reality lab where virtual and real worlds are merging within an advanced simulation environment. He is committed to exploring virtual and augmented reality, cognitive aids by artificial intelligence, visual thinking, interactive and immersive learning, emerging scenarios and trends, and cross-cultural intelligence.


Roxane Gardner (M.D., M.H.P.E., M.P.H., Ph.D.—[email protected]), Senior Director for Clinical Programs and Director of the Visiting Scholars and Fellowship Program at the Center for Medical Simulation in Boston (CMS), has been a principal faculty member of CMS since 2002 and Co-Director of its Labor and Delivery Teamwork and Crisis Management program since its inception in 2003. In addition to her roles at CMS, Dr. Gardner is an Assistant Professor of Obstetrics, Gynecology and Reproductive Biology at Harvard Medical School and holds appointments in Boston at Brigham and Women's Hospital, Boston Children's Hospital, and Massachusetts General Hospital.

Rebecca D. Minehart (M.D., M.S.H.P.Ed.—[email protected]), Director for Anesthesia Clinical Courses at the Center for Medical Simulation in Boston (CMS), is an obstetric anesthesiologist at Massachusetts General Hospital (MGH), an Assistant Professor of Anesthesia at Harvard Medical School, and the Program Director for the MGH Obstetric Anesthesia Fellowship Program. She is an ardent education and patient safety advocate who has been involved in international efforts to both research and promote best teamwork and communication practices, especially involving speaking up and giving feedback. She is a recognized expert in educational techniques utilizing simulation and is a core teaching faculty member at both CMS and the MGH Learning Laboratory, where she serves as the Operating Room Simulation Officer.

Barbara Bertagni (B.Sc., B.A., M.A., M.Phil., Ph.D., Clin.Psy.D.—[email protected]), Founder of Centro Studi Logos in Turin and Logosnet in Lugano, Berlin and Houston, as well as e-REAL co-designer, is a clinical psychologist, an anthropologist and a practical philosopher particularly involved with personal and professional development, coaching and mentoring, immersive learning and advanced simulation. She works as a sparring partner, a coach and a mentor advising people and organizations across the globe.

Chapter 8
maxSIMhealth: An Interconnected Collective of Manufacturing, Design, and Simulation Labs to Advance Medical Simulation Training

maxSIMhealth Group

Abstract maxSIMhealth is a multidisciplinary collaborative manufacturing, design, and simulation laboratory at Ontario Tech University in Oshawa, Canada, combining expertise in Health Sciences, Computer Science, Engineering, and Business and Information Technology, and aiming to build community partnerships that advance simulation training. It focuses on existing simulation gaps while providing innovative solutions that can change the status quo, leading to improved healthcare outcomes and cutting-edge training opportunities. maxSIMhealth utilizes disruptive technologies (e.g., 3D printing, gaming, and emerging technologies such as extended reality) as innovative solutions that deliver cost-effective, portable, and realistic simulation catering to the high variability of users and technologies, something that is currently lacking. maxSIMhealth is a novel collaborative innovation that aims to develop future cohorts of scholars with strong interdisciplinary competencies, able to collaborate in new environments and to communicate professionally for successful medical-tech problem solving. The work being conducted within maxSIMhealth will transform the current health professional education landscape by providing novel, flexible, and inexpensive simulation experiences. In this chapter, a description of maxSIMhealth is provided along with an overview of several ongoing projects.

Keywords Medical simulation · 3D printing · Immersive technologies · Serious gaming · Gamification

8.1 Introduction

Simulation "allow[s] persons to experience a representation of a real event for the purpose of practice, learning, evaluation, testing, or to gain an understanding of systems or human actions" [57]. Simulation has been positively disrupting the traditional education model in healthcare. Evidence in support of this change is solid for learning outcomes, patient outcomes, and safety [13]. Simulation provides a viable alternative to practicing with actual patients, giving medical trainees the opportunity to train until they reach a specific competency level. One of the prevailing arguments for using simulation in the learning process is its ability to engage trainees in the active accumulation of knowledge by doing, through deliberate practice, while also allowing careful matching of the complexity of the learning encounter to the trainee's current level of advancement [41]. Further, several studies indicate that 'hybrid' clinical placement curricula, whereby part of the time is spent in simulation and part in the clinical setting following a preceptorship model, are as effective as traditional clinical placement curricula while reducing the resource strain on clinical placement sites (see, for example, [1]). However, despite these proven advantages, simulation faces limitations in several training programs for allied health professionals, and in rural and remote settings, due to commercial unavailability, high development costs [17, 19], and the inability to address many of the competencies requiring specialized facilities. We believe that disruptive technologies will help us create and establish novel simulation solutions that provide an alternative model to better train future cohorts of healthcare professionals [7], and to equip practicing professionals with the tools and knowledge required to function within their complex and rapidly changing work environments [6]. To this end, we have recently established maxSIMhealth, a synergistic, multidisciplinary collaborative (laboratory) in which multiple professions work together to address simulation challenges in training and education.

maxSIMhealth Group (B)
Ontario Tech University, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_8
This is made possible given that maxSIMhealth is an academic-public-for-profit collaborative based at Ontario Tech University, where access to several different manufacturing, design, and simulation labs is leveraged. A blended funding model supports maxSIMhealth, combining institutional support for labs and infrastructure, the Canada Foundation for Innovation, a Canada Research Chair in Healthcare Simulation (through the Canadian Institutes of Health Research), and the Natural Sciences and Engineering and Social Sciences and Humanities Research Councils (NSERC and SSHRC, respectively). maxSIMhealth combines expertise from faculties across the university, including Health Sciences, Business and Information Technology (computer science, game development, etc.), Engineering and Applied Sciences, Education, and Social Sciences. Furthermore, the collaborative builds upon existing and new community partnerships: Lakeridge Health Hospital, the Durham Region Department of Health, the Canadian Society for Medical Laboratory Science, the Collaborative Human Immersive Interaction Laboratory (CHISIL), and Simulation Canada. In addition, maxSIMhealth actively seeks and establishes research partnerships with not-for-profit and for-profit organizations as commercial channel partners and stakeholders in order to advance simulation training globally. Finally, maxSIMhealth acts as an idea-seeding mechanism for the local startup incubator and experiential learning hub, Brilliant Catalyst.

This multi-sectoral collaborative allows for the connection and cross-pollination of multiple professions and areas of expertise, discovering existing simulation gaps and providing innovative solutions that change systems and lead to improved healthcare outcomes. Specifically, maxSIMhealth utilizes disruptive technologies including, but not limited to, 3D printing, gamification (including serious gaming), and emerging technologies such as extended reality (XR) as innovative solutions that allow for cost-effective, portable, and realistic simulation. Thus, it provides health professionals with innovative, consumer-level, flexible, and highly adaptable simulation solutions that work in tandem with, and augment, the preceptorship model (something currently lacking in medical simulation education), while equipping each member of the healthcare team in every point-of-care setting.

With the increasing popularity and effectiveness of simulation, it is now imperative to integrate simulation throughout entire curricula (e.g., nursing, surgery) [22]. Programs can no longer rely on the 'add-on' notion, since simulation serves as a replacement for traditional, and often rare, clinical experiences, and allows learners to develop skills, clinical reasoning, and care competency [26]. With its foundation in technology, the sciences, and professional practice, maxSIMhealth thrives on this growing acceptance of and enthusiasm for simulation in medical professions education. In doing so, it is able to fulfill its vision of advancing the discovery and application of knowledge that revolutionizes health by providing innovative solutions for simulation training and clinical application. In the following section (Sect. 8.2), we provide a brief overview of several ongoing maxSIMhealth research projects aimed at solving specific medical education needs and problems. In Sect. 8.3 we discuss the "Ideate. Create. Disseminate." approach we follow in maxSIMhealth, and concluding remarks are provided in Sect. 8.4.

8.1.1 Immersive Technologies

The technologies of video games, virtual worlds, and social networks have become collectively known as immersive technologies because of their ability to engage users of all ages, driving massive investment into technologies to attract, capture, and retain our attention [94]. The continuous increase in computational processing power, together with the accompanying decrease in the size of electronic components, has led to the decreasing cost and rising availability of consumer-level immersive technologies, which have helped advance the adoption of virtual simulation in recent years. Definitions of the various immersive technologies follow below.

• Virtual reality: An interactive computer simulation that senses the user's state and operation and replaces or augments sensory feedback to one or more senses, such that the user obtains a sense of being immersed in the simulation (virtual environment).
• Augmented reality: The addition of computer-generated objects to the real physical space to augment the elements comprising it.
• Mixed reality: The seamless integration of computer-generated graphics and real objects.
• Extended reality: A term referring to the synergy of virtual, augmented, and mixed reality technologies, in conjunction with motion capture, user data acquisition, and maker spaces; it has been gaining momentum thanks to recent technological advances in electronics miniaturization, image processing, and motion capture systems [24].
• Serious games: Video games whose primary purpose is education, training, advertising, or simulation, as opposed to entertainment.
• Gamification: The application of "game-based elements, aesthetics, and game thinking to engage learners, motivate action, promote learning, and solve problems" [50].

8.2 maxSIMhealth Projects

In this section, an overview of several research projects and initiatives currently underway (or beginning shortly) within the maxSIMhealth lab is provided. All of the projects are interdisciplinary and, at the very least, consist of at least one content expert (medical/health sciences professional and/or trainee), at least one technology expert (engineer/computer scientist and/or trainee), and access to experts in medical education. The experts may be academics or practicing professionals from the various maxSIMhealth partner institutions. Our current work is focused on projects whose solutions fall broadly within three major areas: (i) immersive technologies, (ii) gamification and serious gaming, and (iii) 3D printing. Within the immersive technologies domain, a large focus is the development, testing, and implementation of a novel medical laboratory technologist (MLT) professional development tool in the form of a Game-Based Education Multi-Technology Platform (GEM-Tech Platform), supported by a combination of serious games coupled with virtual, augmented, and mixed realities (VR, AR, and MR, respectively) and physical simulators. We envision the GEM-Tech Platform being used in a number of training applications, including several described below.

8.2.1 Immersive Technology-Based Solutions

In this subsection, we describe projects whose solutions focus on immersive technologies, and virtual reality in particular.

8.2.1.1 Phlebo Sim: A Novel Virtual Simulation for Teaching Professional Medical Laboratory Technologist Skills

Medical laboratory technologists (MLTs) perform a range of services in the patient sample-testing environment. For example, MLTs are responsible for performing phlebotomy procedures, which consist of taking blood from patients and organizing the samples into proper containers for testing. However, there is a health human resource shortage in the profession, due to higher rates of retiring MLTs compared to MLTs entering the workforce [16]. Although the issue was raised in 2010, there has not been an increase in new MLT graduates to offset the shortage [16]. This shortage is further complicated by the lack of available teachers (who are often working in the field concurrently) to mentor students to a position of confidence [16]. Professional standards have been developed to establish a minimum level of competency for a new entry-level MLT, yet pre-analytical errors still account for up to 75% of laboratory errors [44]. These pre-analytical errors, such as lost or incorrect sample request forms, occur before the sample is tested [44]. These errors are costly and contribute to over one million injuries and approximately 44,000–98,000 deaths in the United States annually [40], making medical errors the eighth leading cause of death in North America, ahead of AIDS, motor vehicle accidents, and breast cancer [40]. One previously identified solution to increase the efficiency of training and MLTs' proficiency is simulation [44]. While recent studies have shown the potential of simulation, it has yet to be fully integrated into MLT training programs [10]. To further contribute to this work in MLT simulation, an interdisciplinary collaboration between experts in medicine/health sciences, computer science/engineering, and medical education at Ontario Tech University has resulted in the development of a novel, interactive, and engaging virtual phlebotomy (blood-drawing) simulation prototype (known as "Phlebo Sim"), in an attempt to increase the efficiency of MLT training.
Currently, Phlebo Sim focuses on the cognitive aspects of MLT training, following guidelines set out by the World Health Organization [93] and the profession's accrediting body. Sample screenshots are provided in Fig. 8.1.

Fig. 8.1 Phlebo Sim sample screenshots: a the player prepping required materials into a tray, and b dialogue to guide the player through the procedure

According to the Canadian Society for Medical Laboratory Science, the credentialing body for MLTs, a graduating MLT student must achieve 95 competencies [17]. The current version of the simulation (i.e., Phlebo Sim) addresses 12 of the 95 competencies expected of an entry-level MLT, although Phlebo Sim could cover as many as 93 of the 95 with further development. We are porting Phlebo Sim to the GEM-Tech Platform while expanding its scope to address 93 of the 95 competencies set out by the Canadian Society for Medical Laboratory Science (see Fig. 8.2).

Fig. 8.2 Required competencies that will be addressed in a future version of Phlebo Sim

This will be achieved through the synergy between researchers and research laboratories at Ontario Tech University, including the Gamer Lab, HealthTech Lab, Materials Research Laboratory, undergraduate-level MLT teaching laboratories, and our Lakeridge Hospital TechEd Living Lab, which together form the maxSIMhealth collaborative. In addition, we will leverage the expertise of our community, and more specifically, the Canadian Society for Medical Laboratory Science and Simulation Canada, amongst others. We anticipate that our updated GEM-Tech Platform-based Phlebo Sim will fulfill at least 98% of the competencies expected of a practicing MLT; serve as a training tool for any new competencies or shifts in the scope of practice; and, consequently, improve the supply of MLT professionals by strengthening education and training pathways and promoting efficient and effective learning. We anticipate that our updated Phlebo Sim will improve the quality and accuracy of laboratory tests and further our understanding of simulation interventions implemented in MLT training.
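The competency-coverage bookkeeping described above (12 of 95 competencies now, 93 of 95 planned) can be sketched as a small tracking routine. This is our illustration only; the data structures and function names are assumptions, not the actual GEM-Tech Platform API, and the competency IDs are placeholders for the credentialing body's real list.

```python
# Hypothetical sketch: tracking what fraction of a profession's
# accreditation competencies a simulation addresses.
TOTAL_COMPETENCIES = 95

def coverage(addressed_ids, total=TOTAL_COMPETENCIES):
    """Return (count, fraction) of competencies addressed."""
    unique = set(addressed_ids)
    if not unique <= set(range(1, total + 1)):
        raise ValueError("unknown competency id")
    return len(unique), len(unique) / total

# Current prototype: 12 of 95 competencies.
count, frac = coverage(range(1, 13))
print(f"{count}/{TOTAL_COMPETENCIES} = {frac:.1%}")   # 12/95 = 12.6%

# Planned expansion: 93 of 95, i.e., ~98%.
count, frac = coverage(range(1, 94))
print(f"{count}/{TOTAL_COMPETENCIES} = {frac:.1%}")   # 93/95 = 97.9%
```

The 93/95 figure matches the "at least 98%" target quoted above once rounded.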

8.2.1.2 The Anesthesia Crisis Scenario Builder (ACSB): Development of an Anesthesia Crisis Scenario Builder for Virtual Reality Training

An anesthesiologist is a medical professional who practices within the anesthesia field. The job includes perioperative care, developing anesthetic plans, administering pain-relieving medication during surgical procedures, and monitoring the patient's vitals [47]. Anesthesiologists go through multiple training and education programs to obtain the required knowledge and psychomotor skills. They are required to maintain appropriate knowledge of complications during operations, typically by attending lectures or passively learning from journals or textbooks with no feedback [86]. Although knowledge is best retained by actively doing rather than passively learning [4], there is a lack of active methods that are easily accessible to anesthesia trainees [62].

Here, we describe the anesthesia crisis scenario builder (ACSB), developed in an interdisciplinary collaboration between anesthesiologists, computer scientists, engineers, and game developers, following an iterative development cycle in which a prototype was developed, evaluated, and modified accordingly. The ACSB allows for the creation of multiple anesthesia crisis scenarios and the modification of existing scenarios based on the Anesthetic Crisis Manual (ACM) [12]. The manual covers 22 life-threatening crises and provides concise, clear, and simple systematic instructions that can be used by any health professional who is leading or assisting in an anesthesia crisis management situation [12]. The ACM has been referenced in recent papers on the evaluation of resident competencies in anesthesia crisis management simulation [25]. The ACM lists each step for the life-threatening crises, and these steps have been turned into individual modules, which can then be used to build the corresponding crisis scenarios. The goal of the ACSB is to take these systematic instructions and turn them into modules, allowing medical educators to create their own custom scenarios from each module, which trainees can then practice freely within a safe environment. Currently, the ACSB is in the prototyping phase and does not include all the scenarios found within the ACM. Scenarios are created within the scenario builder portion of the project, shown in Fig. 8.3. The scenario builder can add modules, save and load scenarios, and further customize modules that contain multiple options. Each module contains a description of the task and an overall idea of what needs to be done to achieve it.

Fig. 8.3 The scenario builder interface
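The scenario-builder behavior just described (ordered modules, per-module options, saving and loading scenarios) can be sketched as a small data model. This is an illustrative sketch under our own naming assumptions, not the actual ACSB implementation; the class names, fields, and example module texts are hypothetical.

```python
# Illustrative data model for a crisis scenario builder: a scenario is an
# ordered list of modules (each derived from a step in the crisis manual),
# serializable to JSON so educators can save, share, and reload scenarios.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Module:
    name: str                                    # e.g. "Call Help"
    interaction: str                             # module "type": dialogue, equipment, ...
    description: str                             # what the trainee must do
    options: dict = field(default_factory=dict)  # per-module customization

@dataclass
class Scenario:
    title: str
    modules: list

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"title": self.title,
                       "modules": [asdict(m) for m in self.modules]}, f)

    @classmethod
    def load(cls, path):
        with open(path) as f:
            data = json.load(f)
        return cls(data["title"], [Module(**m) for m in data["modules"]])

# Assemble and round-trip a (hypothetical) anaphylaxis scenario.
anaphylaxis = Scenario("Anaphylaxis", [
    Module("Call Help", "dialogue",
           "Ask the nurse to call for help and the surgeon to pause."),
    Module("Remove Trigger", "equipment",
           "Stop administering the suspected agent."),
])
anaphylaxis.save("anaphylaxis.json")
assert Scenario.load("anaphylaxis.json").modules[0].name == "Call Help"
```

Keeping scenarios as plain JSON is one way to meet the stated goal of letting educators with low technological literacy author and exchange scenarios across devices.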


Fig. 8.4 Virtual operating room modeled after an actual operating room at Sunnybrook Health Sciences Centre in Toronto, Canada

The trainee is "placed into" a virtual operating room (modeled after an actual operating room at Sunnybrook Health Sciences Centre in Toronto, Canada) to immerse the user in a realistic operating room (see Fig. 8.4). Once in virtual reality, they can go through the modules that the medical educator or the user has selected. Each module has a different "type", corresponding to the type of interaction the trainee must perform to complete the task. For example, the "Call Help" module requires the trainee to inform the nurse to call for help and the surgeon to stop the operation. This is accomplished using a dialogue wheel (see Fig. 8.5), which requires the user to have the nurse within view in order to trigger the nurse to call for help and the surgeon to stop the operation. Currently, one module has been implemented (the anaphylaxis module was chosen as it is the most common cause of complications during anesthesia [85]), although the goal is to develop modules for each life-threatening crisis described in the ACM. The final goal is to allow medical educators to create rare anesthesia scenarios with low levels of technological literacy, and to have those scenarios run on multiple hardware devices for accessibility at a low cost. This will allow trainees and current anesthesiologists to actively train for common and rare scenarios.
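The "nurse within view" gate on the dialogue wheel can be sketched with a standard field-of-view test: the target counts as in view when the angle between the headset's forward vector and the direction to the target is within half the field of view. This is our illustration of the general technique, not the ACSB's code; the function name and the 60° field of view are assumptions.

```python
# Minimal sketch of a field-of-view gate for VR interactions: a target is
# "in view" when the angle between the viewer's forward vector and the
# direction to the target is at most half the field of view.
import math

def in_view(head_pos, forward, target_pos, fov_deg=60.0):
    dx, dy, dz = (t - h for t, h in zip(target_pos, head_pos))
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    if dist == 0:
        return True                       # target coincides with viewer
    fx, fy, fz = forward
    fnorm = math.sqrt(fx*fx + fy*fy + fz*fz)
    cos_angle = (dx*fx + dy*fy + dz*fz) / (dist * fnorm)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= fov_deg / 2

# Nurse straight ahead of the trainee: in view; nurse behind: not.
print(in_view((0, 0, 0), (0, 0, 1), (0, 0, 3)))    # True
print(in_view((0, 0, 0), (0, 0, 1), (0, 0, -3)))   # False
```

In an engine such as Unity this reduces to a dot product between the camera's forward vector and the normalized direction to the NPC; the explicit angle form above is just easier to read.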

Fig. 8.5 Interactions are accomplished using a dialogue wheel

8.2.1.3 Development of a Simulation-Based Solution to Work-Related Musculoskeletal Disorders (WRMSDs) Amongst Canadian Sonographers

Sonography, "the stethoscope of the future," is a diagnostic ultrasound technique that noninvasively and effectively allows for patient diagnoses through the creation of images of bodily structures using high-frequency sound waves [59]. The field of ergonomics suggests that any multi-joint movement requiring awkward body positions and the application of forces (common in sonography) may result in work-related musculoskeletal disorders (WRMSDs). Therefore, it is not surprising that recent studies found that 84% of respondent sonographers suffer from pain associated with their ultrasound practice, and that one in five workers in the province of Quebec experienced a nontraumatic WRMSD in at least one body region over a one-year period [2]. Another recent study validates this issue: out of 567 sonographers, 99.3% reported WRMSD symptoms within the last year [96]. Statistics predict that in just nine years, 18,000 additional diagnostic medical sonographers will be needed in the U.S. alone, exceeding the average growth of all occupations [88]. However, it will be challenging to meet this demand while maintaining skill within the workforce given these exceptionally high injury rates. Due to their chronic nature, individuals can live with painful and debilitating WRMSDs for years, requiring rehabilitation and mitigation efforts. These disorders, along with the resulting physical inactivity, pose risks for developing other illnesses and for increasing long-term health issues [49].

Not only is the health of most sonographers compromised, but the costs of these WRMSDs are also increasingly high as injuries arise. WRMSDs are estimated to cost the Canadian economy $15 billion per year, in addition to the $22 billion cost it faces from musculoskeletal diseases alone [49]. The Ontario Workplace Safety and Insurance Board reports WRMSDs as its number one type of lost-time work injury in the province of Ontario. In addition to the direct physical costs of pain and suffering and the economic costs of absence and lost productivity, there are also indirect costs that should be noted. When an employee suffers from a WRMSD, their employer also faces many economic burdens, including overtime or replacement wages, workstation and equipment alterations, administration, employee replacement training, lost productivity, and lowered quality [68]. The expected increase in the prevalence of WRMSDs will bring a high expense to the Canadian economy as well as a reduction in Canadians' overall quality of life.

The most significant causative factors of WRMSDs in sonographers include force; repetition; sustained, awkward, or poor positioning; grip and pressure; stress; and workload [96]. A study conducted by Zhang and Huang [96] demonstrated the percentage of musculoskeletal symptoms at four different times in 15 body regions; the most concerning regions include the neck, right shoulder, right hand, and back. Most recommendations and solutions that exist today to reduce the risks of these WRMSDs involve:

1. Guidelines: Following posture guidelines and ergonomic techniques.
2. Improved equipment: Despite significant improvements, not all worksites are equipped with state-of-the-art equipment, since more exams are being done at the patient's bedside [65].
3. Workload: Workforce shortages caused by WRMSDs lead to less coverage and insufficient break periods, an essential risk-reducing strategy that allows muscles and tendons time to recover [45, 65].
4. Assistive devices: Recent developments involve alterations or re-imaginations of the ultrasound machinery itself, since most scanning environments do not promote proper ergonomic techniques. The three main approaches to this challenge include: (i) autonomous robotic imaging that does not use a human operator, (ii) remotely operated sonography, or telesonography, and (iii) human-robot cooperation with a human physically present [82].

However, a study conducted by Al-Rammah et al. [2] revealed that, amongst a group of 100 sonographers, there were low levels of awareness regarding best practices and safety measures. Thus, despite guidelines and preventive measures, WRMSDs are still occurring and affecting a vast majority of sonographers.

In summary, there are several causes of WRMSDs in sonographers, and several solutions have been proposed. Unfortunately, the problem persists despite these solutions. The field of simulation-based education may provide another effective method of decreasing the risks of WRMSDs in sonographers. The short-term goal of this research is to examine WRMSDs in sonographers through a "simulation-based education" lens. Specifically, risks will be evaluated with the goal of using simulation-based education as an adjunct to other currently deployed solutions. To achieve this, several observational (in a clinical setting), think-out-loud (in a simulated setting), and ergonomic (in a laboratory) investigations and evaluations will be utilized to determine the preventable causes of these injuries. The long-term goal of this work is to develop and test solutions that utilize simulation to drastically reduce the incidence of WRMSDs in sonographers in Canada. It is anticipated that the completion of this WRMSD study in sonographers will reveal:

• The underlying causes of WRMSDs amongst sonographers.
• Sonographers' awareness of faulty/risky posture and movements, if any.
• The reasoning(s) for failing to follow strict guidelines.
• A reduction of WRMSD incidence amongst sonographers through the utilization and implementation of simulation.

8.2.1.4 Cultural Competency Training for Long-Term Care Professionals

Currently, Canada's population is reported at 37.6 million people (as of July 1, 2019) [81], with an expected growth to 52.6 million by 2061 [80]. However, Canada has an aging population; in 2019, the population of Canadian seniors (aged 65 and over) was 6,592,611 (approximately 18%). More locally, the Durham Region in Ontario (where the proposed work/study will take place) is reported to be one of the fastest growing regions in the world [33]. In 2016, Durham's population was 673,000, and in 2018 it reached 683,600 [27] (immigration is a significant factor in this increase). As the baby boomer generation ages, the seniors' population is estimated to reach 100,976 (approximately 15% of the total Durham Region population) [27]. The city of Oshawa is the biggest municipality in Durham Region, with a total population of 169,509, of which 16.7% (or 28,385 people) are seniors (aged 65 and older).

A growing number of seniors live in their community, assisted living, or group homes. As their abilities to care for themselves independently deteriorate, and home care services become insufficient to meet their needs in their own homes, seniors move into Long-Term Care (LTC) homes. LTC homes, also known as nursing homes, are defined by the Ontario Ministry of Health and Long-Term Care as places where adults live and receive all aspects of personal care, nursing supervision, and assistance with activities of daily living [20]. LTC settings operate on a "24/7" schedule, and LTC homes are considered the residents' space and home. In 2014, Ontario had 627 LTC homes across the province, with 78,000 beds in total; yet there were 26,495 seniors on a waitlist to enter a home [67]. Accordingly, the demand for LTC beds surpasses the supply [66]. In 2018, the government of Ontario allocated 5,000 new LTC beds across the province, with 270 beds allocated to the Durham Region [32].
This was part of the Ontario government's initiative to build 15,000 new beds and redevelop 15,000 existing beds over five years [20]. However, additional beds will require similar growth in trained professionals to ensure that seniors are well taken care of, yet there is a pre-existing shortage of healthcare professionals, nurses, and personal support workers (PSWs) in LTC. This is further compounded by the fact that our current training of healthcare professionals does not meet the growing demands. Two solutions may address this gap: (a) training of qualified immigrants, and (b) accelerated retraining of workers from other lines of work.

Objective: Given the increasing senior population, the increasing ethnic diversity of the Durham Region, and the fact that the healthcare system is moving towards a person-centered model of care, it is critical that seniors receive appropriate, culturally centered care and services that better meet their needs and increase their satisfaction and health outcomes. In other words, developing and enhancing cultural competencies in LTC workers is urgently needed to meet the cultural care needs of a growing, diverse population. Although retraining programs exist for both internationally trained professionals and local individuals who require retraining, we aim to provide extended teaching and learning opportunities to develop or enhance cultural competencies in LTC workers. Adapted from the CanMEDS framework, the cultural competencies will include communication, collaboration, professionalism, and health advocacy.

Purpose: We propose a solution whereby newly retrained LTC workers will develop cultural competencies using serious gaming, and more specifically, our proposed Senior's Cultural Competency Game (SCCG).

Research: Once implemented and incorporated into professional practice, the SCCG will allow providers to be culturally competent when providing care to seniors. The training will be computer-based, freely available, and accessible at any given time to all healthcare providers. Using a previously developed cultural competency game framework [51], we propose to build the SCCG for LTC workers.
Initially, the SCCG will be part of the orientation process for new staff and staff returning from an extended leave of absence. Furthermore, the SCCG will be added to the yearly mandatory education for all staff in the facility. Once implemented as part of the orientation process, in the next phases the SCCG will be augmented to include remediation and education modules, where providers who fail the competency threshold will have a chance to acquire these competencies in a safe and flexible learning environment. The research and development process will be completed in five phases:

• Phase 1 (Scenario Development): Four scenarios will be developed by a content expert to address four cultural competencies: communication, collaboration, professionalism, and health advocacy.
• Phase 2 (Face and Content Validity): Using expert consensus-building methods (e.g., the Delphi method), experts will assess the face (realism) and content (appropriateness) validity of the SCCG.
• Phase 3 (Embedding the Scenarios into the Game Framework): This will be completed by computer scientists and serious game developers.

8 maxSIMhealth: An Interconnected Collective of Manufacturing …

153

• Phase 4 (Implementation): An initial installment of the game will take place during an orientation session for new staff at a single institution (Sunnycrest Nursing Home in Durham Region, Canada).
• Phase 5 (Evaluation): Novice and experienced staff at the Sunnycrest Nursing Home will be asked to participate in a study to examine the usability and effectiveness of the SCCG. It is expected that the experienced staff will have higher scores than the novices (construct validity evidence). At the same time, the user experiences will be assessed using previously validated metrics.

8.2.1.5 Integrating Immersive Technology and Neurophysiological Techniques to Evaluate Optimal Learning Environments in Medical Simulation Training

Medical errors are the third leading cause of death in the United States, following cardiovascular disease and cancer [58]. Although there is no single solution to this problem, simulation is one approach [9]. Educators generally favor more realistic simulations, based on the assumption that they are more representative of the real world and therefore more effective in training. However, high-fidelity simulators are expensive, and research suggests these simulations are not more effective than low-fidelity options [64]. Additionally, evidence suggests that it is not only the realism of the simulator, but also the level of immersion within the environment, that leads to improved learning outcomes and skill development [15, 29, 64, 91]. In VR, a form of simulation, one way to enhance the perception of immersion is to provide multiple sensory cues (such as vision, audition, and haptic sensations). Currently, the field of healthcare simulation has not addressed what immersion means from the neurophysiological point of view, the impact of immersion on learning, or how multiple sensory inputs influence the level of immersion.

Using blended research paradigms from neurophysiology [36] and the behavioural sciences [14], this work aims to understand whether more realistic simulators are more immersive and more effective, compared to lower-fidelity training environments. More specifically, we will examine how different forms of sound and touch feedback influence a trainee’s perception of a drilling task, and whether these sensations promote motor learning and skill transfer. By integrating neurophysiological techniques, such as electroencephalography (EEG), we will examine brain activity during a drilling task with progressive levels of immersion. Using frequency analysis and source localization, we will also seek evidence of a neural signature of optimal immersion in a training environment.
Utilizing other physiological measures, including surface electromyography (sEMG) and heart rate variables, we will assess trainee responses to different immersive stimuli, and potentially identify physiological differences in top- and bottom-performers. Our prior work has established that low-fidelity haptic force-feedback combined with realistic audio input can enhance subjective realism and accuracy in a simulated drilling task, compared to audio alone [39]. We will continue this work by examining how auditory and haptic sensations affect motor learning, skill transfer, and associated

brain activity during a simulated task. Separate groups of volunteers will participate in the simulated drilling task with either (a) no audio or haptic sensations, (b) audio sensations, (c) haptic sensations, or (d) audio-haptic sensations, followed by a transfer session 24 hours later. With this between-group design, we hypothesize that the participants in the audio-haptic group will learn the drilling task more efficiently and perform better during the transfer test, compared to the other groups.
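The frequency analysis mentioned above can be illustrated with a minimal sketch. The `band_power` helper, the band boundaries, and the synthetic signal below are our own assumptions for illustration; they are not the study's actual EEG pipeline.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` within [f_lo, f_hi] Hz,
    estimated from the magnitude-squared FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Synthetic 10 Hz "alpha" oscillation sampled at 256 Hz for 4 s, plus noise.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)   # the 10 Hz component falls in this band
beta = band_power(eeg, fs, 13, 30)   # only noise falls in this band
print(alpha > beta)  # True
```

Comparing such band powers across immersion conditions is one simple way a "neural signature" could be sought, although the actual analysis would involve artifact rejection and source localization beyond this sketch.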

8.2.1.6 Customization of Pick and Place Tool for Cardiac Auscultation Tasks in Virtual Reality Employing User Ergonomics

Virtual reality (VR) applications in medical training allow for the reproduction of realistic scenarios depicting procedures for developing cognitive and psycho-motor skills, performed in seated, standing, and room-scale settings [79]. However, given the recent widespread adoption of commodity VR, one-size-fits-all solutions lack support for the variability of users in terms of their ergonomics (e.g., height, reach, mobility), which can lead to usability issues [60]. For example, susceptibility to VR sickness can be affected by speed and height changes within the scene that mismatch the user’s own [90]. In this section we present the development of a tool for customizing pick-and-place tasks within a virtual cardiac auscultation scenario employing user ergonomics.

Cardiac auscultation is a routine examination that allows diagnosing heart conditions to determine proper care and treatment if needed [8]. With respect to cardiac auscultation training, practices are moving away from using the cost-effective stethoscope toward employing multimedia resources, manikins, and various diagnostic tools such as echocardiography [8]. Although simulation has gained popularity, this shift is raising concerns regarding the loss of cardiac auscultation skills using the stethoscope [8]. This scenario, in conjunction with current consumer-level virtual simulation, is leading to the development of complementary training tools to address this problem [69].

The VR-based auscultation training tool is being developed using the Unity game engine and SteamVR. The tool combines tracking scripts attached to all in-game objects that record the user’s actions in order to obtain ergonomic measures. This allows us to define the best placement and scene scale, with the goal of overcoming the limitations of one-size-fits-all default interactions in VR software development kits.
Before examining the virtual patient, the user is required to perform pick-and-place tasks by interacting with objects on a table to obtain the user’s ergonomics (see Fig. 8.6a). Once completed, the user can examine the patient by placing the virtual stethoscope on the mitral, tricuspid, aortic, and pulmonic areas. To provide quantifiable feedback to the trainee, metrics from the interactions, including completion time, number of attempts, and motion paths, are gathered during the virtual examination and displayed at the end of the simulation. In addition, the framework allows instructors to review the sessions within the scene to evaluate performance and identify areas where the trainee had trouble (e.g., auscultation areas, completion time, examination responses, and gaze areas). The recorded data can be reproduced to conduct debriefing sessions with the trainees to discuss their decision-making. The framework also provides


Fig. 8.6 Auscultation scenario view. a User calibration. b Virtual cardiac auscultation

an additional view on a monitor for instructors and other trainees to spectate the examination being performed with the HMD. Data obtained from preliminary testing of the Vive controller and the Vive Tracker over 30 examinations allowed us to observe that the trackers can be easily occluded by the trainee’s body during the interactions, resulting in 25 faulty interactions between the stethoscope and the auscultated area, affecting the examination and the metrics being recorded. Figure 8.6b shows the virtual examination with the Vive Tracker, the spectator’s view, and auscultation with the Vive controller.

In this section, we have briefly presented our ongoing development of a VR framework that adapts the scene based on anthropometric measures captured within the virtual examination for cardiac auscultation. The preliminary assessment of its use across numerous examinations allowed us to identify problems with the Vive Trackers and their reliability for the developed training tool. Moreover, during the development of this project, a SteamVR update introduced inconsistency to our system, provoking continuous tracking disconnection; the problem was later solved by an update from the developer. Future work will study the effects of ergonomic measures on usability, presence, and performance within the virtual auscultation.
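The per-session metrics described above (completion time, number of attempts, motion paths) can be derived from tracked position samples with straightforward arithmetic. The helper names and sample data below are illustrative assumptions, not the tool's actual Unity code:

```python
import numpy as np

def path_length(samples):
    """Total distance travelled along a motion path given as an
    (N, 3) array of position samples."""
    samples = np.asarray(samples, dtype=float)
    return float(np.linalg.norm(np.diff(samples, axis=0), axis=1).sum())

def session_metrics(timestamps, positions, attempts):
    """Summarize one virtual examination: completion time, motion
    path length, and number of attempts."""
    return {
        "completion_time_s": timestamps[-1] - timestamps[0],
        "path_length_m": round(path_length(positions), 6),
        "attempts": attempts,
    }

# Stethoscope moved 0.3 m along x, then 0.4 m along y (total 0.7 m).
m = session_metrics(
    timestamps=[0.0, 2.5, 6.0],
    positions=[(0, 0, 0), (0.3, 0, 0), (0.3, 0.4, 0)],
    attempts=2,
)
print(m)  # {'completion_time_s': 6.0, 'path_length_m': 0.7, 'attempts': 2}
```

Replaying the stored position samples is also what enables the instructor debriefing view described above.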

8.2.1.7 Guiding User Vision in Virtual Reality Environments

With the increase in demand for graphical fidelity, as well as the increase in display resolution and refresh rate, graphics performance is once again a concern for developers. This problem is most apparent in the field of Virtual Reality (VR), where framerates and response times must be kept high to avoid motion sickness and other unwanted effects, such as delay when using techniques like foveated rendering [3]. This is made worse by the nature of current virtual reality hardware, which requires rendering a display for each eye, doubling the graphics compute cost of VR games. The current trend in VR hardware is also leaning towards standalone systems, with reduced Graphics Processing Unit (GPU) compute capability compared to their

desktop counterparts. Many methodologies and techniques have been developed for optimizing rendering performance in VR, such as Multiview outputs and foveation. Many of these are also relevant to the traditional rendering pipeline (where the vertex stream is rasterized to a single display), especially when combined with newer technology like high-performance eye tracking. One of the current leading areas of research in this field is perception-based rendering, where GPU compute resources are allocated to areas that have a higher impact on user perception, such as areas of high contrast or the foveal region (the area of highest visual acuity in the human eye) when eye-tracking hardware is used.

Perception-Based Rendering: Perception-based rendering has been a goal of graphics researchers for many years due to its ability to efficiently allocate rendering resources [70]. It refers to a set of methodologies and techniques that aim to reduce the computational cost of rendering by leveraging the limitations of the human visual system [37]. This field of research borrows heavily from research into the psychophysical aspects of the human visual system, with some simplifications and generalizations made, such as using discrete foveal regions based on averages of human foveal regions. The spatial resolution of the human visual field can be categorized into three main regions: the foveal, inter-foveal, and peripheral regions [89]. The foveal region has a high density of color-receiving cones and a lower number of rods (contrast-sensitive photoreceptors), which leads to the fovea excelling at visual acuity and color accuracy. The inter-foveal region is marked by a sharp decrease in cone density, with a large increase in rod density. These two regions constitute ‘central vision’ and are responsible for the majority of visual acuity. Beyond these regions there are no cones, and rod density falls off steeply; this is referred to as the peripheral region [89].
It is also worth noting that the periphery shows no decrease in the ability to detect motion, which may have applications when attempting to guide user attention. Perception-based rendering aims to leverage these attributes and limitations of the human visual system to provide shortcuts for rendering techniques. The most promising of these approaches is foveation, where rendering resources are allocated mainly in the foveal region, but there have also been promising results using contrast to guide rendering resources to higher-contrast areas [30].

Driving User Attention: Our area of research is in leveraging the findings of perception-based rendering research to drive user attention in a way that does not impact immersion. Foveation uses gaze information to guide rendering resources to the user’s area of focus, but little research has been done on reversing this and having the simulation guide the user’s attention. Work has been done on guiding user vision in VR and Augmented Reality (AR) using more traditional approaches, such as arrows, object highlighting, and halos [72]. Our research will focus on whether it is possible to leverage contrast, movement, and aliasing in a way that guides user vision without impacting immersion in the virtual scene. Our current approach will be to artificially introduce aliasing or contrast in the inter-foveal region to induce a saccade response, and to move the artificially salient region until the user’s gaze aligns with the point of interest.
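The guidance loop just described can be sketched in a few lines. The region boundaries, the flat-angle eccentricity approximation, and the assumption that each induced saccade lands on the probe are all our own illustrative simplifications, not the project's actual parameters:

```python
import math

FOVEAL_DEG = 5.0         # assumed foveal radius (eccentricity, degrees)
INTER_FOVEAL_DEG = 30.0  # assumed outer edge of the inter-foveal region

def eccentricity(gaze, point):
    """Angular distance (degrees) between gaze and a point, both given
    as (azimuth, elevation) in degrees; flat-angle approximation."""
    return math.hypot(point[0] - gaze[0], point[1] - gaze[1])

def region(gaze, point):
    e = eccentricity(gaze, point)
    if e <= FOVEAL_DEG:
        return "foveal"
    return "inter-foveal" if e <= INTER_FOVEAL_DEG else "peripheral"

def next_probe(gaze, target):
    """Place the artificially salient probe on the line from the gaze
    toward the target, kept inside central vision so the induced
    saccade steps the gaze toward the point of interest."""
    e = eccentricity(gaze, target)
    if e <= FOVEAL_DEG:
        return None  # gaze already on the point of interest
    d = min(INTER_FOVEAL_DEG, e)
    return (gaze[0] + (target[0] - gaze[0]) * d / e,
            gaze[1] + (target[1] - gaze[1]) * d / e)

gaze, target = (0.0, 0.0), (60.0, 0.0)  # target starts in the periphery
hops = 0
while (p := next_probe(gaze, target)) is not None:
    gaze = p  # assume the induced saccade lands on the probe
    hops += 1
print(hops, region(gaze, target))  # 2 foveal
```

Two induced saccades walk the gaze from 60° eccentricity onto the target; a real implementation would re-read the eye tracker each frame rather than assume the saccade lands exactly on the probe.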


Fig. 8.7 Sample screenshot of our current cardiac auscultation application and timeline of the different objects where the user focused

Current Applications: This could provide guidance to the user in high-complexity virtual environments in video games and training simulations and may have a positive effect on immersion. Our current research involves guiding user vision in a full-immersion VR environment for cardiac auscultation training, as well as guiding vision on a more traditional display in reminiscence therapy for patients with dementia (see Fig. 8.7 for a sample screenshot of our current application). We also plan to examine how guiding vision impacts skill transfer to non-guided and real-world tasks.

8.2.1.8 Force Feedback for Precision Tool Grasping

Although virtual simulation is being applied across a wide range of medical training applications, the majority of these applications currently focus only on the cognitive aspects of a procedure, typically ignoring the technical components given the complexities associated with generating the haptic cues required to simulate them. The aim of this project is to improve medical training simulation by providing haptic feedback in virtual medical skills training. We are focusing on simulating the grasping and manipulation of precision tools (e.g., a scalpel), similar to commonly available haptic gloves (see Fig. 8.8).

We determined the force required to lift an object of known mass at rest based on Newton’s laws (the object’s weight, F = mg). Grasping and manipulating precision tools such as a scalpel involves the thumb and the index finger. According to Nataraj [54], the difference between the magnitude of the forces applied by the thumb and index finger is negligible. Taking this into consideration, we distributed the forces equally between the thumb and the index finger. The haptic device then goes through a series of conversions of these forces to provide an equal and opposing force. The user must then overcome this force in order to lift the virtual object. The motion of the object

Fig. 8.8 Haptic device for precision tools grasping. Note some of the components used in the figure were adapted from [38]

is detected using a Leap Motion sensor connected to a Unity application, which communicates with the driver board through an Arduino Uno microcontroller.
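The force computation described above is simple enough to sketch. The scalpel mass and the helper below are illustrative assumptions; the device's actual conversion to motor commands is not shown:

```python
G = 9.81  # m/s^2, standard gravity

def finger_forces(mass_kg):
    """Opposing force the haptic device must render so the user has to
    overcome the virtual tool's weight (F = m * g), split equally
    between thumb and index finger, following Nataraj's observation
    that their force magnitudes are nearly equal."""
    total_n = mass_kg * G
    return {"thumb_N": total_n / 2, "index_N": total_n / 2}

# A hypothetical 22 g scalpel: each finger must resist about 0.108 N.
forces = finger_forces(0.022)
print(round(forces["thumb_N"], 3))  # 0.108
```

The equal split is the design choice taken from [54]; a heavier tool or an asymmetric grip model would simply change the per-finger distribution.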

8.2.2 Gamification- (and Serious Gaming-) Based Solutions

In this subsection, we provide a description of projects whose solutions focus on gamification and serious gaming.

8.2.3 The Gamified Educational Network (GEN)

Interacting online daily using social networks has become ubiquitous, while “educational networking” (the use of social networking technologies for educational purposes [46, 71]) has also gained popularity. A prominent strategy is the adoption of gamification concepts to motivate, engage, and enhance the participant’s experience, thus positively impacting their academic achievement and social connectivity [95]. Gamification refers to the process of applying game elements (such as levels or points) to non-game contexts to stimulate learners to engage in collaboration, friendly competition with peers, and the achievement of positive learning outcomes [95]. Online educational platforms, including massive online open courses (MOOCs), have applied gamification to entice participation and engagement by exploring the learners’ intrinsic motivation (e.g., socializers want to interact with others, killers

are engaged by competition and challenges with others) [61]. Within this context, the Gamified Educational Network (GEN) was born to explore the application of gamification concepts to an educational network, using game elements as intrinsic motivators and aiming to engage and motivate learners and to promote a collaborative learning process.

The GEN builds upon the Observational Practice and Educational Network (OPEN), which was initially designed to support health professions education by allowing a community of trainees to access educational and instructional content, communicate with peers and subject-matter experts, and provide/receive feedback asynchronously [76]. OPEN was previously used to study the role of Internet-based learning in clinical skill acquisition and medical-based cultural competence training for novice health professional trainees [21, 52]. More specifically, medical trainees were video-recorded practicing suturing and knot-tying techniques [21] and interacting with a virtual simulation of an elderly patient who does not feel comfortable with her doctor [52]. The resulting videos were uploaded to OPEN, where other users (e.g., peers or experts) provided constructive feedback by commenting on these videos. Furthermore, Khan et al. [52] demonstrated that the use of an Internet-based educational platform could encourage trainees to prepare for learning sessions, and video-based activities provided a fun and engaging experience. GEN aims to offset the low engagement identified in OPEN by applying gaming elements, and it has been designed from the beginning to be usable by any field of study, not only health education.
The game elements employed in the GEN were determined after a series of formal focus group sessions conducted with an equal mix of 15 participants (game developers, game designers, and medical trainees), recruited from the Game Development and Entrepreneurship program at Ontario Tech University and from the Faculty of Medicine of the University of Toronto, who interacted with the original OPEN platform [75]. That work identified three game elements that were implemented in the GEN and that, along with badges, are the most used in education [95]:

• Point-based system: implemented in a manner similar to the “Reddit” entertainment, social networking, and news website, which supports peer-based assessment whereby peers rate the quality of other comments or interactions.
• Leaderboard: this social comparative feedback component provides learners with information regarding how well they are doing with respect to their peers. Such comparative information is provided both individually and in a general context by showing the learner’s position on a private individual leaderboard (e.g., ‘Forum likes: #2’), ensuring that learners do not have access to the scores of their peers, avoiding comparisons that could be a detriment to motivation. Learners also get access to how many points they received in each course section through an individual scoreboard.
• Module division: implemented as a segmented progress bar that allows learners to track their progress in each course and each course component.

A preliminary between-subjects study with 10 participants was conducted using the QUIS (Questionnaire for User Interaction Satisfaction) and the SUS (System

Usability Scale) questionnaires, plus four open-ended questions requesting general feedback, to examine the usability and satisfaction perception of the GEN in two versions: with and without gamification elements [84]. Both versions achieved a SUS score above 80, which indicates a highly usable system, and the QUIS results also imply that the GEN interface is extremely easy to use, although not very stimulating. Concerning the open-ended questions, users provided constructive feedback regarding both versions of GEN. Here are two answers given when asked about the gamified version, “Do you feel that GEN fosters a collaborative experience?”:

• “GEN has the potential for user collaboration, but I think areas like the comments section could use more functionality (i.e., up-voting, direct replies, etc.).” (Anonymous participant).
• “I think it is possible, I noticed a couple social motivators on the comments for example, however I am unsure if collaboration can be better encouraged by the system somehow or if it falls primarily on a course instructor to direct.” (Anonymous participant).

Furthermore, given the preliminary nature of the data, definitive conclusions cannot be drawn regarding the superiority of one version over the other. For future work, based on the open-ended feedback, we will improve the comments functionality, quiz collaboration, and integration with social media to allow users to share their accomplishments, and we will also study more methods to improve peer-to-peer interaction. Additional testing will be conducted to examine the engagement and motivation elicited by both versions, in addition to their educational effectiveness (knowledge transfer and retention).
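The point-based system and private leaderboard described earlier reduce to a small amount of logic: peers' votes accumulate into scores, and each learner sees only their own rank. The sketch below is our own illustration of that design, not GEN's actual implementation:

```python
def rank(scores, user):
    """Private leaderboard position: the user sees only their own rank
    (e.g., 'Forum likes: #2'), never their peers' scores."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(user) + 1

# Reddit-style peer assessment: each upvote on a learner's comment adds a point.
likes = {"ana": 7, "ben": 12, "carla": 5}
print(f"Forum likes: #{rank(likes, 'ana')}")  # Forum likes: #2
```

Exposing only the rank, not the score table, is what prevents the demotivating peer comparisons the design deliberately avoids.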

8.2.3.1 Assisting Medical Lab Technicians Using a Modified Objective Structured Assessment of Technical Skills (OSATS) Tool to Test Content Validity on the Microtomy Procedure

Histological techniques are a highly valued skill in the medical laboratory sciences (MLSc) program because they are the basis for all microscopic examination of tissues [83]. A microtome is a tool that is used to cut paraffin wax blocks to create tissue samples. Microtomy involves the use of a sharp knife for tissue cutting, along with several safety precautions that students must be aware of. The most commonly used microtomes in histology are rotary microtomes (see Fig. 8.9). The device has a rotary motion that is part of the cutting process. The blade is usually fixed in a horizontal position and the tissue section is placed above the blade. In many microtomes the rotary wheel can be operated manually, but they are generally automated or semi-automated. Automated instruments reduce repetitive movements, which can minimize the risk of developing musculoskeletal disorders. There is a series of steps that must be completed in sequential order, along with cautious handling of the instrument’s safety features. Rolls [77] states ten general examples of what is and is not appropriate during tissue processing and fixation.


Fig. 8.9 A semi-manual rotary microtome that is used for cutting tissue blocks

Although the literature illustrates the microtomy procedure using several steps, there is still a lack of a tool that defines the most essential steps of the procedure. The Objective Structured Assessment of Technical Skills (OSATS) is a tool that will be used to validate the microtomy procedure. It was initially used by the University of Toronto in the 1990s to examine surgical residents’ skill competence. The checklist identifies tasks that must be performed correctly. The global rating scale consists of seven general competencies, and the examiner rates the level of each competency on a five-point Likert scale anchored with a behavioural description [5]. The OSATS tool has been implemented in evaluating the surgical skills of residents, and the reliability and validity of the assessment have been examined. The assessment identifies residents who may need additional training and provides a mechanism to ensure the competency of surgical skills. Thus, OSATS is a reliable and valid tool for assessing technical skills such as the microtomy procedure.

The current learning methods consist of students following a stepwise procedure from a lab manual or a procedure written by the professor. These manuals all vary in the sequence of steps, along with which steps are included in the procedure [77]. The biggest issue is that when students are not aware of all the safety features, accidents such as cuts can occur. A lack of adequate time also prevents students from confidently developing the technical skills of the microtomy procedure. Simulation provides an alternative method for health care professionals, such as medical laboratory technicians, and future health care professionals, such as students, to develop their skills until they reach a specific competency level [28].
Therefore, a game-based simulation will be designed to improve the learning outcomes of medical lab students, as it will allow students to practice the technical skills outside of the lab.
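The OSATS structure described above (a task checklist plus a global rating scale of five-point Likert items) can be scored mechanically. The pass thresholds below are illustrative assumptions, not values from this chapter:

```python
def osats_score(checklist, global_ratings, pass_fraction=0.8, pass_rating=3.0):
    """Score one modified-OSATS assessment: a task checklist
    (True = step performed correctly) plus a global rating scale of
    five-point Likert items. Thresholds are hypothetical."""
    done = sum(checklist) / len(checklist)
    mean_rating = sum(global_ratings) / len(global_ratings)
    return {
        "checklist_fraction": done,
        "mean_global_rating": mean_rating,
        "pass": done >= pass_fraction and mean_rating >= pass_rating,
    }

# Ten-step microtomy checklist with one missed step; seven competencies
# rated on the five-point global rating scale.
result = osats_score([True] * 9 + [False], [4, 3, 4, 5, 3, 4, 4])
print(result["pass"])  # True
```

Separating the checklist (did each step happen) from the global rating (how well overall) mirrors how OSATS combines objective and judgment-based evidence.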


Purpose: Use a modified version of the OSATS tool to (a) develop the stepwise procedure and (b) provide evidence of content and construct validity of the microtomy procedure.

Research: The research questions we will examine include: (i) Can we break down the skills of microtomy into component skills using task decomposition methods and expert opinions? (ii) Does the modified OSATS tool show evidence of content and construct validity for the 10-step microtomy procedure?

For the phases outlined below we will use a modified Delphi method, as outlined in [42], by generating an initial concept document. Experts are recruited and consensus is built over separate rounds. Each expert will complete the questionnaire and provide comments on each topic. At the end, a consensus will be reached using the data provided by the experts. This differs from a full Delphi method, which starts with preparing the concept with an expert; we have selected the modified version because we have access to a local expert who can prepare the initial concept document.

Phase 1, “Development of the instrument”, will consist of the use of the OSATS tool to validate the content of the 10 key steps for successful microtomy completion. Data collection will be obtained using expert consensus methods (e.g., the snowballing method), and experts will assess the stepwise procedure to provide content validation. Phase 2, “Assessment of content validity”, will require the MLSc content experts to complete a questionnaire regarding the OSATS tool and provide feedback on each of its dimensions. Phase 3, “Embedding the Safety Module into the Gamified Educational Network”, consists of an online safety module that students must complete prior to starting the microtomy procedure in the simulation-game. Students must pass the module to demonstrate an understanding of the safety component of the microtomy procedure.
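A common way to operationalize one round of such expert consensus-building is an item-level content validity index (CVI): the fraction of experts rating a step as relevant. The step names, ratings, and the 0.78 cut-off below are illustrative assumptions, not values from this chapter:

```python
def item_cvi(ratings, relevant=(4, 5)):
    """Item-level content validity index: the fraction of experts who
    rate a step as relevant (4 or 5 on a five-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def consensus_round(step_ratings, cutoff=0.78):
    """One modified-Delphi round: keep steps that reach consensus and
    return the rest to the experts for another round."""
    keep = {s: cvi for s, r in step_ratings.items()
            if (cvi := item_cvi(r)) >= cutoff}
    revisit = [s for s in step_ratings if s not in keep]
    return keep, revisit

# Five hypothetical experts rate three candidate microtomy steps.
ratings = {
    "orient block": [5, 4, 5, 4, 5],          # CVI 1.0  -> keep
    "set section thickness": [4, 5, 4, 4, 3],  # CVI 0.8  -> keep
    "lock handwheel": [3, 2, 4, 3, 5],         # CVI 0.4  -> revisit
}
kept, again = consensus_round(ratings)
print(sorted(kept), again)  # ['orient block', 'set section thickness'] ['lock handwheel']
```

Steps that miss the cut-off go back to the panel with the aggregated comments, which is exactly the round-by-round convergence the modified Delphi method relies on.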
The long-term objective is to enhance the learning outcomes of MLSc students in microtomy techniques using simulation. The key deliverable will be a working simulation-game (beta version) consisting of a virtual microtome, a pre-game safety module, a pre-game description of the skill, in-game information and feedback, in-game scoring, and post-game feedback to the learners about their performance, while balancing educational and fun practices. The collaboration between experts in health sciences and computer science/engineering at Ontario Tech University will allow for the development of the simulation-game. We anticipate that this game will:

• Improve the knowledge of all safety components of the microtome.
• Improve MLT confidence.
• Increase safe practices to ensure individuals do not cut themselves.
• Improve sample slide tissues for more accurate microscopic examination.

8.2.3.2 The Autism Serious Game Framework (ASGF)

Autism spectrum disorder (ASD) has a variety of causes, and its clinical expression is generally associated with substantial disability throughout the lifespan of the affected individual. It is characterized by impaired social communication and interaction, and by restricted, repetitive interests and behaviors [97]. Mentalizing, which involves the ability of a person to attribute beliefs, thoughts, feelings, plans, and intentions to themselves and others, can be a struggle for those with ASD [73]. The mean clinical age of diagnosis is 4–5 years, despite advances in knowledge regarding early signs of the disorder [97].

Serious games (SGs) are games that do not have entertainment, enjoyment, or fun as their primary purpose [23]. Serious games are often designed and developed to address one specific problem/scenario that cannot be easily modified; changes to scenarios require the serious game’s source code to be modified, which is a difficult and time-consuming process. Within the scope of this project, working collaboratively with ASD experts, we developed an Autism Serious Game Framework (ASGF) that allows therapists with limited, if any, programming experience to create new (or modify existing) serious games intended to assist children with autism (we are targeting children between 3 and 7 years of age). The ASGF provides a more usable, flexible structure than traditional gaming engines, as it allows a non-programmer to develop serious games and modify their parameters while overcoming the inherent single-scenario problem. The ASGF includes a graphical user interface (GUI) that follows a WYSIWYG (what you see is what you get) approach, whereby users (therapists) are able to drag and drop (and assemble) the components associated with each game. The framework allows users to import 2D and 3D graphical assets (with or without animations), as well as sound assets, to configure the different types of games.
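The key idea, a game defined as data that a non-programmer can edit rather than as code, can be sketched as follows. The schema, asset names, and `play` helper are our own invention for illustration; the ASGF's actual format is not published in this section:

```python
# A therapist-authored game described purely as data: swapping assets or
# feedback text changes the game without touching any source code.
game_config = {
    "type": "matching",
    "title": "Find the happy face",
    "prompt_asset": "faces/happy.png",
    "choices": ["faces/happy.png", "faces/sad.png", "faces/angry.png"],
    "feedback": {"correct": "Great job!", "wrong": "Try again."},
}

def play(config, chosen):
    """Evaluate one answer of a matching game defined purely by config."""
    ok = chosen == config["prompt_asset"]
    return config["feedback"]["correct" if ok else "wrong"]

print(play(game_config, "faces/happy.png"))  # Great job!
print(play(game_config, "faces/sad.png"))    # Try again.
```

A generic runner interpreting such configurations is what lets the framework escape the single-scenario problem: new scenarios are new data files, not new builds.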
We are in the process of conducting a series of experiments to test the effectiveness of the ASGF and several games developed with it. However, preliminary testing with childhood autism experts is promising and indicates that the ASGF will allow for the simple development of autism-based serious games and help children with autism develop skills and obtain functional gains, such as recognizing faces.

8.2.3.3 COVID-19 Serious Game

Coronavirus disease 2019, commonly known as COVID-19, is an affliction that was first reported in early January 2020 and, as of March 2020, has been classified as a pandemic [92]. According to the World Health Organization [92], COVID-19 affects different people in different ways, and most infected people will develop mild to moderate illness and recover without hospitalization. The most common symptoms include fever, dry cough, and tiredness, while less common symptoms include aches and pains, sore throat, diarrhea, conjunctivitis, headache, loss of taste or smell, a rash on the skin, or discoloration of fingers or toes [92]. It is commonly spread by close contact between humans, and as such health professionals around the world have

recommended maintaining at least two meters of social distancing, isolation, and the use of face masks by those experiencing flu-like symptoms. With many workplaces and businesses having to shut down in order to limit the spread of COVID-19, schools and workplaces alike have become more reliant on technology to ensure work and learning can proceed. Those who work on a computer have shifted their work and meeting environment from the office to their home, and educators have had to work diligently to adapt their lessons for online delivery (see Grant [39] for a discussion on the use of virtual reality to host meetings). While most post-secondary programs have been able to adapt to electronic content delivery, students training to be medical practitioners, and those in other professions that require hands-on experience, have suffered from a lack of high-fidelity simulations in lieu of being in the operating room or other learning environments. Virtual reality in the form of virtual learning environments, including virtual simulations and serious games, can help fill some of this gap.

The use of such tools is not limited to professional and academic fields: when dealing with a pandemic, the general population requires access to reliable information in an engaging and easy-to-digest format. In response to this demand, we have begun developing a COVID-19 serious game, to be deployed to mobile platforms, in which the player steps into the shoes of an essential worker and must go about their day making the right choices to keep not only themselves and their family safe, but also everyone around them, in order to minimize the spread of the disease.
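One simple way such daily choices could be turned into game feedback is an exposure-risk score that each decision raises or lowers. The choices, risk deltas, and threshold below are entirely hypothetical; the game's actual scoring rules are not described in this section:

```python
# Hypothetical risk deltas for the essential-worker player's daily choices.
CHOICES = {
    "wear_mask": -2,
    "keep_two_meters": -2,
    "skip_hand_washing": +3,
    "crowded_bus": +4,
}

def day_risk(actions, base=5):
    """Exposure-risk score for one in-game day; lower is safer."""
    return max(0, base + sum(CHOICES[a] for a in actions))

safe_day = day_risk(["wear_mask", "keep_two_meters"])
risky_day = day_risk(["skip_hand_washing", "crowded_bus"])
print(safe_day, risky_day)  # 1 12
```

Tying the end-of-day score to on-screen consequences (for the player's family and for bystanders) is one way a mechanic like this could make public-health guidance tangible.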

8.2.4 3D Printing-Based Solutions

In this subsection, we provide a description of projects whose solutions focus on 3D printing.

8.2.4.1 Low-Cost 3D Printed Craniotomy Simulator

A craniotomy for traumatic intracranial hemorrhage is a common procedure in neurosurgery residency training programs [56]. When a patient suffers a traumatic head injury (THI), such as an expanding epidural hematoma (EDH) or subdural hematoma (SDH), a neurosurgeon will usually perform an urgent operative intervention to relieve pressure on the brain and control hemorrhaging [56, 74]. When this happens in rural and remote areas where neurosurgeons may not be readily available, surgical intervention by community general surgeons (CGS) may be required to prevent progressive neurological impairment or possible death of the patient [34, 78, 87]. Even with remote assistance from a skilled neurosurgeon via video call, the stress of an emergency and a CGS's limited hands-on experience may increase the risk of surgical complications [56]. In this case, a CGS is confronted with a difficult decision: operate in undesirable circumstances with remote assistance from a neurosurgeon, or transfer the patient to a tertiary care center with the potential for adverse consequences due to delay of care [11]. To date, medical simulation has become an excellent addition to healthcare education, as it promotes skill acquisition and maintenance through hands-on experience [74]. Simulation-based training may provide a good platform for CGSs to "master the critical skills before performing their first craniotomy on a patient" [56]. However, the cost of comparable commercially available high-fidelity simulators is orders of magnitude higher, making them potentially prohibitive outside large, well-funded neurosurgical training programs [11]. We have proposed a solution: 3D printing affordable simulators for rural and remote healthcare centers [11]. A three-dimensional (3D)-printed emergent burr hole and craniotomy (EBHC) simulator (see Fig. 8.10) was designed and printed with the purpose of being incorporated into a simulation-based medical education (SBME) curriculum developed collaboratively by neurosurgeons and CGSs, specifically for delivery in rural and remote areas. The direct cost of each EBHC simulator is approximately $12, and this cost can be further reduced by recycling the 3D-printed material. We tested the EBHC simulators at a hands-on workshop at the 26th Annual Rural and Remote Medicine Conference in St. John's, NL, Canada. This conference,

Fig. 8.10 a The first stage of the simulator construction (base, brain, skull, and skin). b The final stage of the simulator construction (the skin was draped over the skull and secured using the clamps on the base and additional hardware). c The emergent burr hole/craniotomy simulator after a 15-min demonstration by an educator


Fig. 8.11 The improved EBHC simulator, based on the feedback collected at the workshop of the 26th Annual Rural and Remote Medicine Conference in St. John's, Newfoundland, Canada

hosted by the Society of Rural Physicians of Canada, targeted healthcare professionals who are currently practicing, or who are looking to practice, in rural and remote areas of Canada. Sixteen individuals attended the workshop, all of whom indicated that they were rural general practitioners (GPs), with two indicating that they had additionally completed enhanced surgical skills training. Future work will examine the integration of the low-cost EBHC simulator (see Fig. 8.11) into a neurosurgical training program and will involve further improvements to the simulator's design.

8.2.4.2 Developing an Instrument Examining Contextual Factors that Matter in the Implementation of Three-Dimensional Printing and Virtual Reality Simulation in Nursing and Medical Laboratory Sciences Education

There is a rapidly growing body of literature examining how simulation can best be used in healthcare education [63]. However, a gap has been identified in simulation-based medical education: program directors are struggling with how to successfully implement simulation programs, given a lack of clear guidelines on the matter [53]. Implementation science is the rigorous study of methods that allow for the systematic uptake of research findings and other evidence-based practices [31]. It is intended to guide the implementation of evidence-based programs in various contexts; however, it has not yet been integrated into simulation-based education [31]. This research study aims to use implementation science to develop an instrument to assess the effectiveness of disruptive technologies in nursing and medical laboratory sciences (MLS) education.

Commercial simulators are expensive and have limited customizability, restricting educational opportunities in fields such as nursing and MLS. In contrast, innovative 3D printing and VR simulators are cost-effective and customizable. Our goal is to implement 3D printing and VR as adjunct options for academic institutions to develop simulators that are low-cost, good quality, and customizable. However, implementation science shows that few innovations are successfully adopted without proper implementation planning and consideration of the context. This project uses an implementation framework to assess the feasibility of, and need for, developing an instrument examining contextual factors that matter in the implementation of 3D printing and VR simulation in nursing and MLS education. The guiding research question is: what are the constructs that make up an effective instrument to evaluate readiness and fit for the implementation of disruptive technologies to enhance the use of simulation in nursing and MLS education?

The Consolidated Framework for Implementation Research (CFIR) was used to inform the development of the instrument. The CFIR includes 37 constructs that influence implementation, grouped into five major domains: (i) inner setting, (ii) outer setting, (iii) intervention characteristics, (iv) characteristics of individuals, and (v) the process of implementation [35]. An online questionnaire will be administered to participants and completed anonymously, and the data collected will be used to reduce the number of constructs to those most applicable to simulation in nursing and MLS education. Participants will include thirty experts who are teaching faculty from the Nursing and MLS programs within the Faculty of Health Sciences at Ontario Tech University in Oshawa, Canada. Each participant will rate the importance of each construct with regard to their specific educational program on a scale of 1–10, where 1 signifies that the construct is 'not important' and 10 signifies that it is 'very important'. The data will be analyzed and filtered based on the expert ratings. Using the Delphi methodology, the constructs will be narrowed down based on these ratings, with further refinement taking place until a feasible implementation instrument to evaluate the effectiveness of disruptive technologies in nursing and MLS education is formed.

Using 3D printing and VR to fulfill simulation requirements demands careful implementation. Implementation frameworks inform this process, but they require adaptation to fit the context. To optimize the adoption of 3D printing and VR simulation, the implementation process should focus on the constructs that the experts deem important. With the pool of faculty members from both the Nursing and MLS programs, the Delphi methodology will be used to build consensus among these experts. The process of narrowing the constructs to create a feasible implementation instrument will help evaluate the effectiveness of disruptive technologies in nursing and MLS education, and the instrument may be further adapted to other educational contexts.
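As an illustrative sketch (not the study's actual analysis code), one Delphi filtering round over such ratings could look as follows in Python; the construct names, the six-expert rating values, and the retention thresholds (median ≥ 8, interquartile range ≤ 2, common Delphi consensus criteria) are all hypothetical assumptions, not values prescribed by this project:

```python
from statistics import quantiles

def narrow_constructs(ratings, min_median=8, max_iqr=2):
    """Keep constructs whose 1-10 expert ratings show both high importance
    (median >= min_median) and consensus (interquartile range <= max_iqr)."""
    retained = {}
    for construct, scores in ratings.items():
        q1, q2, q3 = quantiles(scores, n=4)  # quartiles of the ratings
        if q2 >= min_median and (q3 - q1) <= max_iqr:
            retained[construct] = q2  # record the median rating
    return retained

# Hypothetical ratings from six experts for three CFIR constructs
ratings = {
    "available resources":   [9, 8, 9, 10, 8, 9],
    "cosmopolitanism":       [3, 9, 2, 8, 5, 10],  # high spread: no consensus
    "leadership engagement": [8, 8, 9, 8, 7, 9],
}
print(narrow_constructs(ratings))  # "cosmopolitanism" is filtered out
```

In a full Delphi process, the surviving constructs would be fed back to the expert panel for another rating round, repeating until the set stabilizes.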

8.2.4.3 Is the "Floss Dance" Really Enough to Make You Floss?

Approximately 2.2 million Canadians aged 20–64 have lost all of their natural teeth, while 96% of Canadians have had dental cavities at some point in their lifetime [18]. A simple oral hygiene routine could have largely prevented this. Educating the public on proper oral hygiene practices is one of the preventative measures against tooth decay, gum disease, and other common oral health problems. A program dedicated to improving brushing techniques in children resulted in significantly better brushing skills and more frequent brushing [55]. The Oral Health Division of the Durham Region Health Department (DRHD) is interested in acquiring a physical model of teeth and adjacent structures to demonstrate proper dental hygiene techniques to their patients. The model can also be used to explain the symptoms a patient experiences and the underlying oral health problems, as well as to educate children and youth during school screenings. Low-cost 3D printed dental models were previously evaluated for face (realism) and content (usefulness) validity by dental students and maxillofacial surgeons and were rated as good or excellent [48].

Within the scope of this project, we will create an electronic 3D model from 3D scans of volunteer subjects. These will include patients with decaying teeth, deteriorating gum lines, abfraction, and brushing abrasion, as a demonstration of these common problems resulting from poor oral hygiene. To further the realism of the model, CT scans of the upper and lower jaw can be used to accurately show the bone structure. We will use 3D printing and a silicone coating to construct a realistic model of the upper and lower jaw, including teeth and gums. The teeth, maxilla, and mandible will be 3D-printed in various bone-like plastic materials. We will model the gums and oral mucosa by direct application of dyed semi-liquid silicone onto the 3D-printed "bone". Alternatively, both the bony and soft-tissue parts can be 3D printed simultaneously using a dual-filament 3D printer: the jaws and teeth printed with PLA (for example) and the gums with TPU (Ninjaflex or other). The advantage of the second method is the absence of any post-printing modifications and the possibility of adding the periodontal ligament and the innervation commonly used in dental anesthesia. Working with the Oral Health Division of the DRHD, we will collect feedback from practicing dentists and oral health experts. We also plan to test the resulting product with the Ontario Tech University dental club during their educational visit to a local elementary school. The created 3D files will be a useful future asset for serious gaming simulations in maxSIMhealth labs, as they can be easily incorporated into VR simulations.


8.3 Discussion

Ideate. Create. Disseminate. This sums up how maxSIMhealth operates. Through this collaborative, we take ideas, transform them into existence, and disseminate the final product via partnerships, resulting in solutions that matter. maxSIMhealth's work spans a broad spectrum of scholarship, from mapping existing gaps, to changing education systems, to improving learning and performance outcomes. To achieve this, the maxSIMhealth map is followed using five simple steps when starting new projects (see Fig. 8.12).

Step 1—Gap Analysis/SKA Selection: This step involves selecting the skills, knowledge, and/or attitudes (SKA) to focus on and improve through collaboration with content and medical experts at our partner organizations. A key component of this first step is the utilization of maxSIMhealth's unique range of established research partnerships with hospitals, professional societies, governing bodies, and the simulation industry. This step follows a methodology adopted from implementation science and described in our earlier report [31]. In brief, it constitutes the formation of core ideas through collaboration with the diverse groups of stakeholders outlined above. A needs assessment is a fundamental stage in the educational process, which will lead to

Fig. 8.12 Five steps taken when starting a project within maxSIMhealth


changes in practice and is therefore the starting point for designing a formalized educational program. The stakeholders' contextually appropriate ideas are assessed for their 'fit' in the program. The needs assessment identifies gaps that must be addressed by examining the stakeholders' current position and the current curriculum and comparing them to the desired level of simulation learning. We employ diverse methods for conducting a gap analysis: individual interviews, focus groups, surveys, questionnaires, self-assessments, and observations. Next, this information is translated into a detailed implementation plan. The implementation plan addresses the 'what, who, and when' of the implementation, identifying the activities to be performed, the schedules, and the people involved. Resources are gathered from internal and external talent to build a functional team, and risks and potential roadblocks are identified.

Step 2—Technology Matching: This step is designed to determine the best possible and most feasible disruptive technological solution for the selected SKA. Similar to Step 1, it focuses on involving stakeholders and employs diverse methods: individual interviews, focus groups, surveys, questionnaires, self-assessments, and observations. The step culminates in the formation of development teams comprising several students, faculty members, and partners. A typical team has two students (one technology-oriented and one with expertise in health sciences), two faculty advisors, and at least one partner lead.

Step 3—Piloting and Validity: This step involves piloting the concept, developing prototypes (where applicable), and conducting preliminary studies to determine face and content validity.

Step 4—Efficacy and Effectiveness Testing: In this step, we aim to determine whether the concept works and, if so, how well. In Steps 3 and 4 we follow an adapted Medical Research Council framework [43]. This work typically constitutes graduate-level scholarship and therefore adheres to our funding model, which emphasizes highly qualified personnel (HQP) training embedded in all activities.

Step 5—Knowledge Dissemination and Implementation: This step involves disseminating the products once they are shown to be effective, as well as implementing them into health and/or education systems. To ensure timely and meaningful knowledge translation, maxSIMhealth has established an institutional channel, the Archives of Scholarship in Simulation and Educational Techniques (ASSETS), with the open-access Cureus Journal of Medical Science, through which our work is freely disseminated as peer-reviewed, PubMed-indexed publications. maxSIMhealth works with our research partners to distribute the solutions for free, at cost, or at an affordable price point. To accomplish this, we encourage institutions to become members of maxSIMhealth and participate in further implementation and iterative improvement research.

With this five-step map (see Fig. 8.12), students/trainees and experts within the collaborative work together and can easily follow a set of guidelines to facilitate the


development and implementation of meaningful and economical simulation solutions that improve healthcare outcomes.
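For illustration only, the five-step map can be mirrored as a minimal project tracker. The step names below follow Fig. 8.12, while the Project class and its fields are hypothetical and not part of any actual maxSIMhealth software:

```python
from dataclasses import dataclass, field

# Step names as given in the maxSIMhealth map (Fig. 8.12)
STEPS = (
    "Gap Analysis/SKA Selection",
    "Technology Matching",
    "Piloting and Validity",
    "Efficacy and Effectiveness Testing",
    "Knowledge Dissemination and Implementation",
)

@dataclass
class Project:
    """Hypothetical tracker recording which map steps a project has completed."""
    name: str
    completed: list = field(default_factory=list)

    def advance(self) -> str:
        """Mark the next step in the map as complete and return its name."""
        step = STEPS[len(self.completed)]
        self.completed.append(step)
        return step

p = Project("EBHC simulator")
p.advance()  # completes "Gap Analysis/SKA Selection"
p.advance()  # completes "Technology Matching"
```

Because the steps are ordered, a tracker like this makes explicit that a project cannot, for example, reach dissemination (Step 5) before validity and effectiveness evidence (Steps 3 and 4) has been gathered.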

8.4 Conclusions

maxSIMhealth is a novel collaborative innovation at Ontario Tech University in Oshawa, Canada, straddling many professions and settings. Keeping the goals of public health in mind, we collectively aim to develop future cohorts of scholars with strong competencies ranging from technology application, to working with others in new interdisciplinary environments, to communicating professionally and problem-solving. Here, we have provided an overview of several current and planned (future work) maxSIMhealth projects, all of which are interdisciplinary, bringing together experts, trainees, and various stakeholders from a wide range of disciplines to solve pressing problems. It is anticipated that our work will successfully transform the current health professional education landscape by providing novel, flexible, and inexpensive simulation experiences.

Authors' Contributions At the time of writing this chapter, the maxSIMhealth (www.maxSIMhealth.com) group consisted of (in alphabetical order): Artur Arutiunian, Krystina M. Clarke, Quinn Daggett, Adam Dubrowski, Thomas (Tom) Gaudi, Brianna L. Grant, Bill Kapralos, Priya Kartick, Shawn Mathews, Pamela T. Mutombo, Guoxuan (Kurtis) Ning, Argyrios Perivolaris, Jackson Rushing, Robert Savaglio, Mohtasim Siddiqui, Andrei B. B. Torres, Samira Wahab, Zhujiang Wang, and Timothy Weber.

Acknowledgements The financial support of the Canada Research Chairs Program (in Health Care Simulation, to A. Dubrowski), the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Social Sciences and Humanities Research Council of Canada (SSHRC) is gratefully acknowledged. The support of the Brilliant Catalyst at Ontario Tech University is also acknowledged. We also appreciate the help and support of Dr. Norman Jaimes (School of Medicine, Mil. Nueva Granada University, Bogota, Colombia) with the VR-based cardiac auscultation project, and of Dr. Fahad Alam from Sunnybrook Health Sciences Centre in Toronto, Canada, with the ACSB project.

References

1. Agha, S.: Effect of simulation based education for learning in medical students: a mixed study method. J. Pak. Med. Assoc. 69(4), 545–554 (2019)
2. Al-Rammah, T.Y., Aloufi, A.S., Algaeed, S.K., Alogail, N.S.: The prevalence of work-related musculoskeletal disorders among sonographers. Work 57(2), 211–219 (2017)
3. Albert, R., Patney, A., Luebke, D., Kim, J.: Latency requirements for foveated rendering in virtual reality. ACM Trans. Appl. Percept. 14(4), 25:1–25:13 (2017)


4. Ameerbakhsh, O., Maharaj, S., Hussain, A., McAdam, B.: A comparison of two methods of using a serious game for teaching marine ecology in a university setting. Int. J. Hum.-Comput. Stud. 127, 181–189 (2019)
5. Anderson, D., Long, S., Thomas, G., Putnam, M., Bechtold, J., Karam, M.: Objective structured assessments of technical skills (OSATS). Clin. Orthop. Relat. Res. 474(4), 874–881 (2016)
6. Antoniou, P.E., Dafli, E., Arfaras, G., Bamidis, P.D.: Versatile mixed reality medical educational spaces; requirement analysis from expert users. Pers. Ubiquit. Comput. 21, 1015–1024 (2017)
7. Atalah, H.: Synopsis of surgical training and simulation. MOJ Surgery (2016)
8. Barrett, M.J., Mackie, A.S., Finley, J.P.: Cardiac auscultation in the modern era. Cardiol. Rev. 25, 205–210 (2017)
9. Barsom, E., Graafland, M., Schijven, M.: Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 30(10), 4174–4183 (2016)
10. Bayram, S.B., Caliskan, N.: Effect of a game-based virtual reality phone application on tracheostomy care education for nursing students: a randomized controlled trial. Nurse Educ. Today 79, 25–31 (2019)
11. Bishop, N., Boone, D., Williams, K.L., Avery, R., Dubrowski, A.: Development of a three-dimensional printed emergent burr hole and craniotomy simulator. Cureus 11(4) (2019)
12. Borshoff, D.: The Anesthetic Crisis Manual. Cambridge University Press (2011)
13. Brazil, V., Purdy, E.I., Bajaj, K.: Connecting simulation and quality improvement: how can healthcare simulation really improve patient care? BMJ Qual. Saf. 28, 862–865 (2019)
14. Brydges, R., Carnahan, H., Backstein, D., Dubrowski, A.: Application of motor learning principles to complex surgical tasks: searching for the optimal practice schedule. J. Mot. Behav. 39(1), 40–48 (2007)
15. Brydges, R., Carnahan, H., Rose, D., Rose, L., Dubrowski, A.: Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad. Med. 85(5), 806–812 (2010)
16. CSMLS: The Canadian Medical Laboratory Profession's Call to Action. Retrieved from https://www.csmls.org/About-CSMLS/Recent-Updates/Media-Releases/Media-releases/TheCanadian-Medical-Laboratory-Profession-s-Call.aspx?lang=en-CA (2018)
17. CSMLS: Simulation and Clinical Placements: Current State of Medical Laboratory Science Programs. Retrieved from http://csmls.org/csmls/media/documents/resources/CurrentStateofMedicalLaboratorySciencePrograms(August2016).pdf (2016)
18. Canadian Health Measures Survey 2007–2009. Retrieved from https://www.canada.ca/en/health-canada/services/healthy-living/reports-publications/oral-health/canadian-health-measures-survey.html (2010)
19. Canadian Society for Medical Laboratory Science (CSMLS): CSMLS Competency Profile: General Medical Laboratory Technologist (2016)
20. Central East LHIN: More Long-Term Care Beds for Seniors Coming to the Central East LHIN: New Licenses to Support Increased Access to Care and Reduce Wait Times in the Health System. Retrieved from file:///C:/Users/Pam/Downloads/LTC%20Bed%20Allocations_May%202018_FINAL%20(5).pdf (2018)
21. Cheung, J.J.H., Rojas, D., Weber, B., Kapralos, B., Carnahan, H., Dubrowski, A.: Evaluation of tensiometric assessment as a measure of skill degradation. Stud. Health Technol. Inf. 173, 97–101 (2012)
22. Chiu, H.-Y., Kang, Y.-N., Wang, W.-L., Huang, H.-C., Wu, C.-C., Hsu, W., Tong, Y.-S., Wei, P.-L.: The effectiveness of a simulation-based flipped classroom in the acquisition of laparoscopic suturing skills in medical students—a pilot study. J. Surg. Educ. 75, 326–332 (2018)
23. Christinaki, E., Vidakis, N., Triantafyllidis, G.: A novel educational game for teaching emotion identification skills to preschoolers with autism diagnosis. Comput. Sci. Inf. Syst. 11(2), 723–743 (2014)
24. Chuah, S.H.W.: Why and who will adopt extended reality technology? Literature review, synthesis, and future research agenda (2018)


25. Cindryani, M., Widnyana, I., Aribawa, I., Senapathi, T.G.: Analysis of anesthesia chief resident competencies in anesthesia crisis management simulation. Adv. Med. Educ. Pract. 9, 847–853 (2018)
26. Ciullo, A., Yee, J., Frey, J.A., Gothard, M.D., Benner, A., Hammond, J., Ballas, D., Ahmed, R.A.: Telepresent mechanical ventilation training versus traditional instruction: a simulation-based pilot study. BMJ Simul. Technol. Enhanced Learn. 5, 8–14 (2018)
27. Conference Board of Canada: Sizing Up the Challenge: Meeting the Demand for Long-Term Care in Canada. Retrieved from https://www.cma.ca/sites/default/files/2018-11/9228_Meeting%20the%20Demand%20for%20Long-Term%20Care%20Beds_RPT.pdf (2017)
28. Cook, D.A., Hamstra, S.J., Brydges, R., Zendejas, B., Szostek, J.H., Wang, A.T., Erwin, P.J., Hatala, R.: Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med. Teach. 35, e867–e898 (2013)
29. Coulter, R., Saland, L., Caudell, T., Goldsmith, T.E., Alverson, D.: The effect of degree of immersion upon learning performance in virtual reality simulations for medical education. Stud. Health Technol. Inf. 125, 155–160 (2007)
30. Debattista, K., Chalmers, A.: A GPU based saliency map for high-fidelity selective rendering. In: Proceedings of the 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, pp. 21–29. ACM, New York, NY, USA (2006)
31. Dubrowski, R., Dubrowski, A.: Why should implementation science matter in simulation-based health professions education? Cureus 10(12), e3754 (2018)
32. Durham Region: Population at a Glance. Retrieved from file:///C:/Users/Pam/Desktop/Population-at-a-Glance.pdf (2016)
33. Durham Region: Planning for Growth. Retrieved from https://www.durham.ca/en/living-here/planning-for-growth.aspx#Statistics (2017)
34. Eaton, J., Hanif, A.B., Mulima, G., Kajombo, C., Charles, A.: Outcomes following exploratory burr holes for traumatic brain injury in a resource poor setting. World Neurosurg. 105, 257–264 (2017)
35. Fernandez, M.E., Walker, T.J., Weiner, B.J., Calo, W.A., Liang, S., Risendal, B., Friedman, D.B., Tu, S.P., Williams, R.S., Jacobs, S., Herrmann, A.K., Kegler, M.: Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement. Sci. 13(1), 52 (2018)
36. Garbens, A., Armstrong, B.A., Louridas, M., Tam, F., Detsky, A.S., Schweizer, T.A., Graham, S.J., Grantcharov, T.: Brain activation during laparoscopic tasks in high- and low-performing medical students: a pilot fMRI study. Surg. Endosc. https://doi.org/10.1007/s00464-019-07260-5. [Epub ahead of print] (2019)
37. Gossweiler, R.: (Unpublished doctoral dissertation). University of Virginia (1996)
38. GrabCAD Community Members | Engineers and Designers [Online]. Available: https://grabcad.com/dashboard. [Accessed: 06-Mar-2020]
39. Grant, B.L., Yielder, P.C., Patrick, T.A., Kapralos, B., Williams-Bell, M., Murphy, B.A.: Audiohaptic feedback enhances motor performance in a low-fidelity simulated drilling task. Brain Sci. 10(1) (2020)
40. Green, S.: The cost of poor blood specimen quality and errors in preanalytical processes. Clin. Biochem. 46(13–14), 1175–1179 (2013)
41. Guadagnoli, M., Morin, M., Dubrowski, A.: The application of the challenge point framework in medical education. Med. Educ. 46(5), 447–453 (2012)
42. Haji, A.F., Khan, R., Regehr, G., Ng, G., Ribaupierre, S., Dubrowski, A.: Operationalising elaboration theory for simulation instruction design: a Delphi study. Med. Educ. 49(6), 576–588 (2015)
43. Haji, F.A., Da Silva, C., Daigle, D.T., Dubrowski, A.: From bricks to buildings: adapting the Medical Research Council framework to develop programs of research in simulation education and training for the health professions. Simul. Healthc. 9(4), 249–259 (2014)
44. Hammerling, J.A.: A review of medical errors in laboratory diagnostics and where we are today. Labmedicine 43(2), 41–44 (2012)


45. Harrison, G., Harris, A.: Work-related musculoskeletal disorders in ultrasound: can you reduce risk? Ultrasound (Leeds, England) 23(4), 224–230 (2015)
46. Holcomb, L.B., Brady, K.P., Smith, B.V.: The emergence of 'educational networking': can non-commercial, education-based social networking sites really address the privacy and safety concerns of educators? MERLOT J. Online Learn. Teach. 6(2), 475–481 (2010)
47. Hospital for Special Surgery: What is an Anesthesiologist? Retrieved from https://www.hss.edu/what-is-an-anesthesiologist.asp (2019)
48. Höhne, C., Schmitter, M.: 3D printed teeth for the preclinical education of dental students. J. Dent. Educ. 83(9), 1100–1106 (2019)
49. Institute of Musculoskeletal Health and Arthritis, Canadian Institutes of Health Research: IMHA Strategic Plan 2014–2018: Enhancing Musculoskeletal, Skin and Oral Health (Cat. No.: MR4-35/2014E-PDF). Ottawa. Retrieved from https://cihr-irsc.gc.ca/e/48830.html#a4 (2019)
50. Kapp, K.M.: The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. Wiley (2012)
51. Khan, Z., Kapralos, B.: Fydlyty: a low-fidelity serious game authoring tool to facilitate medical-based cultural competence education. Health Inf. J. 25(3), 632–648 (2019)
52. Khan, Z., Rojas, D., Kapralos, B., Grierson, L., Dubrowski, A.: Using a social educational network to facilitate peer-feedback for a virtual simulation. Comput. Entertainment 16(2), Article 5 (2018)
53. Kurashima, Y., Hirano, S.: Systematic review of the implementation of simulation training in surgical residency curriculum. Surg. Today 47, 777 (2016)
54. Li, K., Nataraj, R., Marquardt, T.L., Li, Z.-M.: Directional coordination of thumb and finger forces during precision pinch. PLoS ONE 8(11), e79400 (2013)
55. Livny, A., Vered, Y., Slouk, L., Sgan-Cohen, H.D.: Oral health promotion for schoolchildren—evaluation of a pragmatic approach with emphasis on improving brushing skills. BMC Oral Health 8, 4 (2008). https://doi.org/10.1186/1472-6831-8-4
56. Lobel, D.A., Elder, J.B., Schirmer, C.M., Bowyer, M.W., Rezai, A.R.: A novel craniotomy simulator provides a validated method to enhance education in the management of traumatic brain injury. Neurosurgery 73, S57–S65 (2013)
57. Lopreiato, J.O. (ed.), Downing, D., Gammon, W., Lioce, L., Sittner, B., Slot, V., Spain, A.E. (associate eds.), and the Terminology and Concepts Working Group: Healthcare Simulation Dictionary. Retrieved from http://www.ssih.org/dictionary (2016)
58. Makary, M.A., Daniel, M.: Medical error—the third leading cause of death in the US. Br. Med. J. 353, i2139 (2016)
59. Mayo Foundation for Medical Education and Research (MFMER): Ultrasound. Retrieved 11 Nov 2019, from https://www.mayoclinic.org/tests-procedures/ultrasound/about/pac-20395177 (2018)
60. Menke, K., Beckmann, J., Weber, P.: Universal design for learning in augmented and virtual reality trainings. In: Universal Access Through Inclusive Instructional Design, pp. 294–304 (2019)
61. Morales, M., Amado-Salvatierra, H.R., Hernández, R., Pirker, J., Gütl, C.: A practical experience on the use of gamification in MOOC courses as a strategy to increase motivation. In: Learning Technology for Education in Cloud—The Changing Face of Education, Cham, pp. 139–149 (2016)
62. Morriss, W., Ottaway, A., Milenovic, M., Gore-Booth, J., Haylock-Loor, C., Onajin-Obembe, B., Mellin-Olsen, J.: A global anesthesia training framework. Anesth. Analg. 128(2), 383–387 (2019)
63. Motola, I., Devine, L.A., Chung, H.S., Sullivan, J.E., Issenberg, S.B.: Simulation in healthcare education: a best evidence practical guide. AMEE Guide No. 82. Med. Teach. 35(10), e1511–e1530 (2013)
64. Munshi, F., Lababidi, H., Alyousef, S.: Low- versus high-fidelity simulations in teaching and assessing clinical skills. J. Taibah Univ. Med. Sci. 10(1), 12–15 (2015)
65. Murphey, S.: Work related musculoskeletal disorders in sonography. J. Diagn. Med. Sonogr. 33(5), 354–369 (2017)


66. Ontario Ministry of Health and Long-Term Care (OMHLTC): Ontario One Step Closer to Creating 15,000 New Long-Term Care Beds: Province Opens Call for Applications to Build and Redevelop Long-Term Care Beds. Retrieved from https://news.ontario.ca/mltc/en/2019/10/ontario-one-step-closer-to-creating-15000-new-long-term-care-beds.html (2019)
67. Ontario Ministry of Health and Long-Term Care (OMHLTC): Long-Term Care Homes in Ontario: Sector Overview. Retrieved from file:///C:/Users/Pam/Documents/Exhibit-169-Long-Term-Care-in-Ontario-Sector-overview.pdf (2015)
68. Ontario Ministry of Labour, Training and Skills Development: Ergonomics in the Workplace: Learn About Poor Ergonomics Leading to Musculoskeletal Disorder, Visibility and Fall Hazards. Government of Ontario. Retrieved from https://www.ontario.ca/page/ergonomicsworkplace (2019)
69. Ortegon, T., Vargas, M., Uribe-Quevedo, A., Perez-Gutierrez, B., Rojas, D., Kapralos, B.: Development of a 3D printed stethoscope for virtual cardiac auscultation examination training. In: 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT) (2017)
70. Parke, F.I.: Perception-based animation rendering. J. Vis. Comput. Anim. 2(2), 44–51 (1991)
71. Peña-Ayala, A.: Educational networking: a glimpse at emergent field. In: Peña-Ayala, A. (ed.) Educational Networking: A Novel Discipline for Improved Learning Based on Social Networks, Lecture Notes in Social Networks, pp. 77–129. Springer International Publishing, Cham (2020)
72. Renner, P., Pfeiffer, T.: Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. In: 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 186–194. IEEE (2017)
73. Rice, L.M., Wall, C.A., Fogel, A., Shic, F.: Computer-assisted face processing instruction improves emotion recognition, mentalizing, and social skills in students with ASD. J. Autism Dev. Disord. 45(7), 2176–2186 (2015)
74. Rinker, C.F., McMurry, F.G., Groeneweg, V.R., Bahnson, F.F., Banks, K.L., Gannon, D.M.: Emergency craniotomy in a rural Level III trauma center. J. Trauma Acute Care Surg. 44(6), 984–990 (1998)
75. Rojas, D., Kapralos, B., Dubrowski, A.: The role of game elements in online learning within health professions education. Stud. Health Technol. Inf. 220, 329–334 (2016)
76. Rojas, D., Cheung, J.J.H., Cowan, B., Kapralos, B., Dubrowski, A.: Serious games and virtual simulations debriefing using a social networking tool. In: 2nd Annual International Conference on Computer Games, Multimedia and Allied Technology (CGAT 2012), pp. 69–73 (2012)
77. Rolls, G.: An introduction to specimen processing. Leica Biosystems Training Resources. Retrieved from https://www.leicabiosystems.com/knowledge-pathway/an-introduction-to-specimen-processing (2019)
78. Rosenfeld, J.V.: Who will perform emergency neurosurgery in remote locations? ANZ J. Surg. 85(9), 600 (2015)
79. Shewaga, R., Uribe-Quevedo, A., Kapralos, B., Alam, F.: A comparison of seated and room-scale virtual reality in a serious game for epidural preparation. IEEE Trans. Emerg. Top. Comput. 8, 218–232 (2020)
80. Statistics Canada: Canada Demographics at a Glance. Retrieved from file:///C:/Users/Pam/Desktop/Population-at-a-Glance.pdf (2016)
81. Statistics Canada: Canada's Population, July 1, 2019. Retrieved from https://www150.statcan.gc.ca/n1/pub/11-627-m/11-627-m2019061-eng.htm (2019)
82. Swerdlow, D.R., Cleary, K., Wilson, E., Azizi-Koutenaei, B., Monfaredi, R.: Robotic arm-assisted sonography: review of technical developments and potential clinical applications. Am. J. Roentgenol. 208(4), 733–738 (2017)
83. Sy, J., Ang, L.C.: Microtomy: cutting formalin-fixed, paraffin-embedded sections. In: Yong, W. (ed.) Biobanking. Methods in Molecular Biology, vol. 1897. Humana Press, New York, NY (2019)
84. Torres, A.B.B., Kapralos, B., Uribe-Quevedo, A., Zea, E., Dubrowski, A.: A gamified educational network for collaborative learning. In: Proceedings of the 2019 International Conference on Interactive Mobile Communication, Technologies and Learning, Oct. 31–Nov. 1, 2019, Thessaloniki, Greece, pp. 1–10 (2019)

176

maxSIMhealth Group

85. Takazawa, T., Mitsuhata, H., Mertes, P.M.: Sugammadex and rocuronium-induced anaphylaxis. J. Anesth. 30(2), 290–297 (2016) 86. The American Board of Physician Specialties (ABPS): Board of certification in anesthesiology. https://www.abpsus.org/anesthesiology-board-certification-exams,urldate={201907-17} (2019) 87. Treacy, P.J., Reilly, P., Brophy, B.: Emergency neurosurgery by general surgeons at a remote major hospital. ANZ J. Surg. 75(10), 852–857 (2005) 88. United States Department of Labor: Occupational Outlook Handbook: Diagnostic Medical Sonographers and Cardiovascular Technologists and Technicians, including Vascular Technologists. U.S. Bureau of Labor Statistics. Retrieved on 7 Nov 2019. https://www.bls.gov/ooh/hea lthcare/diagnostic-medical-sonographers.htm#tab-6 (2019) 89. Weier, M., Stengel, M., Roth, T., Didyk, P., Eisemann, E., Eisemann, M., Slusallek, P.: Perception-driven accelerated rendering. Comput. Graph. Forum 36(2), 611–643 (2017) 90. Weser, V.U., Hesch, J., Lee, J., Proffitt, D.R.: User sensitivity to speed- and height-mismatch in VR. In: Proceedings of the ACM Symposium on Applied Perception—SAP 16 (2016) 91. Wong, D., Unger, B., Kraut, J., Pisa, J., Rhodes, C., Hochman, J.B.: Comparison of cadaveric and isomorphic virtual haptic simulation in temporal bone training. J. Otolaryngol. Head Neck Surg. 43(1), 31 (2014) 92. World Health Organization (WHO): Coronavirus Disease (COVID-19) Pandemic. https://www. who.int/emergencies/diseases/novel-coronavirus-2019 (2020) 93. World Health Organization (WHO): Guidelines on Drawing Blood: Best Practices in Phlebotomy. World Health Organization, Geneva; 2012. 1, Introduction. Available from: http:// www.ncbi.nlm.nih.gov/books/NBK138675 (2010) 94. Wortly, D.: The future of serious games and immersive technologies and their impact on society. In: Baek, Y., Ko, R., Marsh, T. (eds.) Trends and Applications of Serious Gaming and Social Media. Springer Science + Business Media Singapore (2014) 95. 
Zainuddina, Z., Chua, S.K.W., Shujahata, M., Perera, C.J.: The impact of gamification on learning and instruction: a systematic review of empirical evidence. Educ. Res. Rev. 30 (2020). https://doi.org/10.1016/j.edurev.2020.100326 96. Zhang, D., Huang, H.: Prevalence of work-related musculoskeletal disorders among sonographers in China: results from a national web-based survey. J. Occup. Health 59(6), 529–541 (2017) 97. Zwaigenbaum, L., Penner, M.: Autism spectrum disorder: advances in diagnosis and evaluation. BMJ 361, k1674 (2018)

Chapter 9

Serious Games and Multiple Intelligences for Customized Learning: A Discussion Enilda Zea, Marco Valez-Balderas, and Alvaro Uribe-Quevedo

Abstract Teaching strategies need to respond swiftly to abrupt changes in delivery modes while providing engaging and effective experiences for learners. The current pandemic has made evident the lack of readiness of several academic sectors when moving from face-to-face to online learning. While research into understanding the use of technologies has been gaining momentum as innovative tools are introduced, it is important to devise strategies that lead to effective teaching tools. Recently, user experience has been influencing content development, as it takes into account the uniqueness of users to avoid enforcing one-size-fits-all solutions. In this chapter, we discuss multiple intelligences in conjunction with serious games and technology to explore how a synergy between them can provide a solution capable of capturing qualitative and quantitative data to design engaging and effective experiences.

E. Zea, Universidad de Carabobo, Naguanagua, Carabobo, Venezuela, e-mail: [email protected]
M. Valez-Balderas, Laurier University, Waterloo, ON, Canada, e-mail: [email protected]
A. Uribe-Quevedo (B), Ontario Tech University, Oshawa, ON, Canada, e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_9

9.1 Introduction

Life in the 21st century requires radical changes in teaching models so that they correspond to current learners' behaviors, given the ubiquitous nature of digital media [39]. The rapid adoption and evolution of digital media has led to an increased use of neuroscience and cognitive science principles to examine the neural underpinnings of learning [35]. Scientists have begun to focus their attention on these areas to determine how they can transform learning and instruction. Furthermore, the exploration

of intrinsic motivation, coupled with digital games, has been gaining momentum as a way to empower learners to engage further with content, enhancing engagement and leading to improved conceptual understanding [30]. Currently, the fourth industrial revolution is changing how people live and work, ultimately affecting the development of competencies and transferable skills from education to labour environments [51]. For example, the use of automation, artificial intelligence, the Internet of Things, makerspaces, and virtual/augmented reality is requiring industries to respond adequately to these rapid changes [5]. As a result, education and business leaders must be aware of the nature, functioning, potentials and limitations of the human mind to face the disruption to jobs and skills. Moreover, such plans need to account for innovation and productivity, lower inequality, improve agile governance, and fuse technologies ethically [36]. These contributions must be placed at the service of the educational system and, most importantly, of the student, to achieve their full psychological, social, moral and ethical development [16]. Technological advances have prompted research on neuroscience concepts to enhance educational and teaching capabilities.
Such an approach aims to improve instructor-student interactions by analyzing concepts including: (i) general insights regarding memory and ways to enhance learning, (ii) working memory and its role in education, (iii) memory storage and retrieval of stored information, (iv) attention and memory, (v) prior knowledge as a basis for acquiring new information and learning, (vi) repetition of newly introduced information as a method for internalizing, and priming, (vii) increasing the efficacy of teaching by extending class teaching over a wider time span, (viii) emotions and their relation to learning, memory and social behavior, (ix) motivation and learning, and finally, (x) the daily cycle of wakefulness and sleep (the circadian cycle) [51]. Moreover, as the COVID-19 physical distancing and isolation measures taking place during 2020 have pushed the implementation of e-learning tools, the cognitive neuroscience of learning and memory plays an important role in this scenario, as it has established how to optimize learning in traditional settings [40]. While technology developments continue to introduce systems and tools that are changing learning, training, and work-related skills [51], engagement and motivation remain a challenge as learners struggle to balance the overwhelming attention demanded by digital connectedness [14]. One solution that attempts to make the most of digital technologies while boosting knowledge uptake and retention is serious games [49]. Serious games are games whose primary focus is not entertainment but the development of skills, whether cognitive or psychomotor. Serious games have been studied to better understand their effects on learning.
For example, a review conducted in [25] analyzed 1390 research articles published between 1970 and 2015, leading to the identification of diverse assessments focusing on behaviour, affect and cognition, obtained from interviews, questionnaires, physiological approaches, in-game metrics, and time and performance on a task. However, while serious games present opportunities for engagement and learning, the context, age, and technologies may play a significant role in learning success [26].
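As a minimal illustration of the in-game measures mentioned above (time and performance on a task), the following Python sketch records attempts and summarizes them per task. The class, task name, and metric choices are hypothetical and are not drawn from any of the reviewed systems.

```python
# Minimal in-game metric capture: each attempt records task, duration, success.
class TaskMetrics:
    def __init__(self):
        self.attempts = []  # list of (task, seconds, success) tuples

    def record(self, task, seconds, success):
        self.attempts.append((task, seconds, success))

    def summary(self, task):
        """Return (attempt count, mean time on task, success rate) for one task."""
        rows = [a for a in self.attempts if a[0] == task]
        n = len(rows)
        mean_t = sum(r[1] for r in rows) / n
        rate = sum(1 for r in rows if r[2]) / n
        return n, mean_t, rate

m = TaskMetrics()
m.record("suturing", 42.0, False)
m.record("suturing", 31.5, True)
print(m.summary("suturing"))  # (2, 36.75, 0.5)
```

Even such coarse aggregates distinguish a learner who succeeds quickly from one who succeeds only after many slow attempts, which is the kind of distinction assessment research on behaviour, affect and cognition builds upon.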


In this chapter, we discuss the potential application of serious games and immersive technologies to personalize learning through metrics that could enhance teaching. One of the significant challenges in education is one-size-fits-all approaches that fail to account for learners' variability and uniqueness. Currently, digital learning has taken a more prominent role as COVID-19 measures required students across all education levels to stay at home and take online classes. Our discussion leverages qualitative and quantitative approaches to balance the role of technology and learning. The chapter is organized as follows. Section 9.2 reviews the theory of multiple intelligences and related work in neuroscience and cognitive science. Section 9.3 introduces the challenges faced by educators. Section 9.4 discusses some technology opportunities. Section 9.5 describes how serious games can help to address educational challenges. Finally, Sect. 9.6 presents the concluding remarks and takeaways.

9.2 Multiple Intelligences

Howard Gardner, a psychologist, researcher and professor at Harvard University specialized in the analysis of cognitive abilities, contributed to the field his theory of Multiple Intelligences (MIs) [20]. This theory has led to the development of a more productive and more complete educational curriculum through the proposed intelligences presented in Table 9.1. The theory of MIs focuses on the human being and its diversity, aiming at developing all the potentialities of body and mind. The theory also states that there is no single and uniform way to learn: we all have MIs (with some standing out more than others), which are combined and used in different ways. Regarding e-learning, the benefits of acknowledging MIs have sparked discussion on its adoption in higher education, as students' prior experiences and current perceptions can lead to the best use of digital technology. For example, a mixed-methods study employing video-assisted environments for e-learning found that students were higher on Intrapersonal intelligence and lower

Table 9.1 Multiple intelligences [20]

Intelligence           Example
Linguistic             Words
Logical-mathematical   Numbers or logic
Spatial                Pictures
Bodily-kinesthetic     A physical experience
Musical                Music
Interpersonal          A social experience
Intrapersonal          Self-reflection
Naturalist             An experience in the natural world


in Existential intelligence, in addition to Bodily-kinesthetic and Verbal-linguistic intelligences [22]. In this case, it was concluded that the videos could address students' various intelligence types and abilities. Changes to teaching strategies should be carefully crafted to account for the variability amongst students in order to maximize their intellectual potential [31]. From this point of view, and considering the focus of MIs, learning environments that value diversity become possible [32]. Since there are diverse and varied forms of knowledge intake, it is important to account for strategies that lead to successful learning outcomes, rather than continue propagating learning styles as directives for teaching, which have lacked objective support [29]. MIs build on the various types of intelligence needed to conduct daily activities: sets of specific capacities that work together as a network of autonomous yet interrelated abilities. Thus, multiple intelligences refer to a thought process that suggests the existence of a set of capacities and abilities which can be developed by people based on biological, personal, and social factors. The theory of Multiple Intelligences encapsulates multiple different learning methods. These methods of learning are developed in layers and strategies so that individuals can identify problems and provide solutions [13]. The solutions seek to empower individuals and enable them to increase their understanding of other individuals and themselves, as well as connect with the environment that surrounds them through language, logical-mathematical analysis, spatial representation, musical thought and the use of their bodies [47]. Consequently, teaching and learning must become a continuous process articulated with learning styles [43]. Although the MIs theory has been broadly adopted as a foundation for customizing education, it has lacked experimental validity, remaining largely theoretical.
Shearer [44] conducted a review of neuroscientific works examining neural correlates for skill units within seven intelligences. The review provided evidence of a relationship between neural activation patterns and the skill units within their designated intelligences.

9.3 Challenges to Educators

The challenge to educators is to produce changes and thus create an educational system that responds to the ultimate meaning of education, which is to help students formulate and carry out their life project in a sustainable manner [34]. The greatest challenge in current education is re-purposing personalized learning through lessons learned from the creation and use of modules, mastery-based grading, immediate feedback and consistent data use, and the re-imagining of the school schedule [10]. Part of the challenge then becomes how to incorporate the re-purposed strategies into essential elements of the teaching and learning process. Gardner presented two fundamental proposals that support this approach [20]:

1. The minds of unique individuals present notable differences. It is never true that all students who start a school year in a given grade are placed in the grade


they are placed in because of the specific learning outcomes of that grade. Schools must fulfil their educational function: they must guarantee that each person maximizes their intellectual potential.

2. The claim that each individual is capable of mastering all the knowledge produced historically in the world, or at least a significant part of it, is not realistic. This is not possible even in a specialized area of knowledge, even less so across the entire range of disciplines and competencies.

Based on these proposals, Gardner defines intelligence as the set of capabilities that allows us to solve problems or develop products valuable in our culture. Gardner defines eight great types of capacities or intelligences, according to the production context (see Table 9.1) [20]. Failure to respond adequately to the current understanding of learning and technological changes can perpetuate a system that does not provide the most effective learning tools to help skills development towards successful career paths. If instead these differences are accounted for, each learner will be able to develop their intellectual and social potential more fully. To this end, technology can provide quantitative means to gather learners' performance data that can be used to customize learning outcomes [21].
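To make the idea of quantitative customization concrete, a learner's performance data could be aggregated into a rough MI profile. The sketch below is hypothetical: the activity names and the activity-to-intelligence tags are invented for illustration, merely echoing the examples in Table 9.1.

```python
from collections import defaultdict

# Hypothetical mapping from learning activities to the intelligences they
# primarily exercise (tags invented for illustration, echoing Table 9.1).
ACTIVITY_TAGS = {
    "word_puzzle": ["linguistic"],
    "geometry_quiz": ["logical-mathematical", "spatial"],
    "rhythm_game": ["musical", "bodily-kinesthetic"],
    "team_challenge": ["interpersonal"],
}

def mi_profile(completed):
    """Average normalized scores (0..1) per intelligence from (activity, score) pairs."""
    totals, counts = defaultdict(float), defaultdict(int)
    for activity, score in completed:
        for tag in ACTIVITY_TAGS.get(activity, []):
            totals[tag] += score
            counts[tag] += 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

profile = mi_profile([("word_puzzle", 0.9), ("geometry_quiz", 0.6),
                      ("geometry_quiz", 0.8), ("rhythm_game", 0.7)])
print(round(profile["logical-mathematical"], 2))  # 0.7 (mean of 0.6 and 0.8)
```

Such a profile is only a proxy, but it shows how routine performance data could feed the per-learner differences Gardner's proposals emphasize.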

9.4 Technology Opportunities

The role of technology in education has become more relevant than ever as many educational institutions work to devise the best education delivery strategies during the COVID-19 pandemic. The spike in remote technology use has grown into a digital revolution led by companies, developers, enthusiasts, and researchers to provide adequate learning tools. Technology-driven efforts have resulted in the exploration of telemedicine [24] as a solution that prioritizes convenient and inexpensive care. Moreover, re-thinking education requires acknowledging the current constraints and finding solutions to successfully deliver content and engage students in achieving their learning outcomes despite the practical and logistical difficulties associated with physical distancing [41]. In particular, academic programs relying on hands-on skills development face logistical hardships, as these may require the operation of specialized equipment, under specific conditions, within proximity to others [1], while immediate solutions have seen the adoption of online replacements that may only account for cognitive development [41]. Such a shift towards online learning has sparked research towards defining the best strategies and tools. For example, Bao [7] proposed focusing on the following online learning best practices, based on a study conducted during the pandemic outbreak in China: (i) high relevance between online instructional design and student learning, (ii) effective delivery of online instructional information, (iii) adequate support provided by faculty and teaching assistants to students, (iv) high-


quality participation to improve the breadth and depth of students' learning, and (v) a contingency plan to deal with unexpected incidents on online education platforms. Technology can help gather metrics that provide insights into the relationships between MIs, teaching strategies, and learning outcomes [38]. Analytics focused on student performance prediction and intervention have been used by educational institutions to tailor educational strategies [53]. Moreover, academic analytics can help predict and improve students' achievement through proactive, intelligent interventions [9]. By identifying the learning metrics, relationships can be established towards achieving a personalized learning environment [33]. The following subsections present technologies that can assist in gathering learners' metrics towards customizable learning experiences.
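A trivial sketch of the prediction-and-intervention idea discussed above: aggregate engagement and performance indicators per student and flag candidates for intervention. The record fields and cutoffs are invented for illustration; real academic analytics systems, such as those surveyed in the cited work, use far richer predictive models than fixed thresholds.

```python
# Hypothetical per-student indicators: (student_id, logins, mean_quiz_score, forum_posts)
records = [
    ("s1", 42, 0.81, 13),
    ("s2", 5, 0.44, 0),
    ("s3", 30, 0.67, 4),
]

def flag_at_risk(records, score_cut=0.5, login_cut=10):
    """Flag students whose performance or engagement falls below simple cutoffs.
    A stand-in for the predictive models used by real analytics platforms."""
    return [sid for sid, logins, score, _posts in records
            if score < score_cut or logins < login_cut]

print(flag_at_risk(records))  # ['s2']
```

The point is not the thresholding itself but the pipeline shape: once interaction data is captured in a structured form, flagging rules can be swapped for trained models without changing the surrounding system.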

9.5 Serious Games

Video games have become the largest entertainment industry, spawning interest across multiple areas where engagement boosts skills development and analytics drive content and user experience improvements [18]. Adding game elements to a learning scenario can result in the development of different complementary learning solutions. Discussion and research about the use and application of game components in learning scenarios have resulted in different forms of games, such as serious games (i.e., games designed for skills development purposes) and gamification (i.e., adding game elements to routine activities to engage users and break monotony). These forms of games have led to the development of pervasive games (e.g., games for advertising, referred to as advergames, or games for physical activity, known as exergames), alternate reality games, and playful design [50]. Serious games present an interesting field of application for MIs, as games are designed to engage players through customizable experiences employing rewards, adaptive difficulty, and pathways depending on the learner's performance [42]. Furthermore, the employment of games in learning environments presents inclusive-design opportunities for helping those with learning disabilities excel at the acquisition of reading, writing, vocabulary and mathematics, as well as the improvement of executive functioning and behavioural control skills [19]. The design process must account for students who lack motivation, so that the game experience provides an engaging environment that boosts intrinsic motivation, resulting in higher learning autonomy [48]. Extrinsic motivation, the most common form, relies on rewards provided to the learner and on external outcomes [8], such as grades or awards.
On the other hand, intrinsic motivation's reward is the satisfaction of performing the activity itself [8]; for example, highly autonomous students engage with activities not because of a grade but because of their growth. Since motivation plays an important role in learning, serious game mechanics can be mapped to learning mechanics that can help boost MIs across the board [3]. Table 9.2 summarizes the relationships between motivation, serious game and learn-


Table 9.2 Motivation, serious game mechanics, learning mechanics, and Bloom's taxonomy relations based on [3]

Bloom's taxonomy: Create, Evaluate, Analyze
  External/introduced motivation. Game mechanics: Consequences, Rewards/penalties. Learning mechanics: Assessment, Feedback.
  Integrated/intrinsic motivation. Game mechanics: Strategy/planning, Discovery, Competition, Cooperation, Deliberate practice. Learning mechanics: Collaboration, Experiment, Progression, Role-play.

Bloom's taxonomy: Apply
  External/introduced motivation. Game mechanics: Accountability, Scores. Learning mechanics: Assessment, Shadowing.
  Integrated/intrinsic motivation. Game mechanics: Progression, Cascading information, Story, Interaction, Ownership. Learning mechanics: Imitation, Tutorial, Instruction, Collaboration, Reflect/discuss.

Bloom's taxonomy: Understand, Remember (Do and repetition)
  External/introduced motivation. Game mechanics: Action, Discover. Learning mechanics: Participation.

ing mechanics with Bloom's taxonomy, based on [3]. Table 9.2 highlights the proximity of serious gaming elements to learning strategies and how they can be mapped to create engaging experiences that can lead to high motivation and autonomy. To properly design serious game mechanics, [32] proposed defining the following six facets: (i) the pedagogical objectives; (ii) the simulation domain, i.e., how to respond consistently and coherently to the correct or erroneous actions of the players within a specific, unambiguous context; (iii) the interactions with the simulation, i.e., how to engage the players with the simulator; (iv) the problems and progression, to determine which problems to give the players and in which order; (v) the decorum, to choose the engaging elements that will foster motivation; and (vi) the conditions of use, which set the rules for how, where, when, and with whom the game is played. Simulation has proven to be a useful tool for developing and maintaining cognitive and psychomotor skills in numerous areas where exposure to realistic controlled scenarios is critical for effective responses and adequate decision making in professional life [46]. Simulation guarantees that the phenomenon presented through digital media adheres as closely as possible to real-life representations, as the cognitive and psychomotor skills should transfer to the professional practice domain [15]. Simulation has traditionally been associated with high-end equipment employed in specialized scenarios such as medical training [27], where costs can become an entry barrier and limit the availability of simulators for training [52]. Given these high costs, educators, researchers, enthusiasts, and industry have started looking at developing cost-effective solutions to


increase access for a larger number of students while guaranteeing the achievement of the learning and skills outcomes [23]. Among the technology trends helping with the development of consumer-level solutions is the recent availability of VR and AR (immersive) technologies, which are blurring the line between the physical world and the simulated or digital world [11]. Additionally, 3D printing and open electronics are empowering content developers and learners to further customize learning tools previously exclusive to research and industry [54]. A current problem associated with immersive technologies is the assessment of user interfaces and usability for developing effective experiences that correlate with learning outcomes [12]. The development of engaging experiences focuses on reproducing real-life scenarios in an interactive, immersive and engaging manner, often presenting different levels of fidelity depending on the available hardware or learning goals [17]. Examples include virtual prototyping for product development, where tool operation was reported to be affected by the user interface [4], and a room-scale VR advanced cardiovascular life support procedure for training [45]. As a result of current technological advances, inclusive design with the user has become more relevant, and through makerspaces and open electronics, developers are continuously exploring solutions to create consumer-level add-ons and affordable technology that enable capturing user metrics in virtual environments and serious games, which can lead to effective learning outcomes. Figure 9.1 presents a system architecture in which a user provides inputs through different human interface devices, allowing physiological, physical, and cognitive metrics to be captured through the user's interactions. The system's reactions are recorded and processed to customize the experience back to the user depending on the virtual activity.
For example, [28] developed a system that factors in the user's ergonomics to facilitate reaching virtual objects, [2] employed a smartwatch to capture leg movement for virtual walking, and others have built custom user interfaces employing 3D printing to capture foot movement [6] or to facilitate practice with medical equipment at home [37].
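The capture-process-customize loop described above can be sketched as a simple adaptive-difficulty rule: recent performance metrics drive the parameters of the next virtual activity. The thresholds and the discrete 0-4 difficulty scale are assumptions for illustration, not part of any of the cited systems.

```python
# Sketch of an adaptive loop: metrics in, customized experience out.
# Thresholds (low/high) and the 0-4 difficulty scale are invented.
def next_difficulty(current, recent_scores, low=0.4, high=0.8):
    """Raise or lower the difficulty level (0-4) based on mean recent score."""
    if not recent_scores:
        return current  # no data yet: keep the experience unchanged
    mean = sum(recent_scores) / len(recent_scores)
    if mean > high:
        return min(current + 1, 4)  # learner is coasting: increase challenge
    if mean < low:
        return max(current - 1, 0)  # learner is struggling: ease off
    return current

print(next_difficulty(2, [0.9, 0.85, 0.95]))  # 3
print(next_difficulty(2, [0.2, 0.3]))         # 1
```

In a full system of the kind Fig. 9.1 depicts, the score list would be replaced by the physiological, physical, and cognitive metrics captured from the human interface devices, but the loop structure remains the same.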

9.6 Conclusion

In this book chapter, we have presented a discussion on how serious games and immersive technologies can be used to personalize learning by gathering metrics that could enhance teaching by factoring in the learner's performance. We started our discussion with the theory of multiple intelligences to highlight research acknowledging the differences amongst learners. However, it is worth noting that this is an area of active research, with researchers arguing for and against MIs. Regardless of the different research streams, the goal centers on the need for effective learning materials, which ultimately relies on how we can measure the achievement of the learn-


Fig. 9.1 System architecture for gathering user data and providing custom interactions based on user metrics

ing outcomes. A major challenge in education, associated with the different materials and techniques, is motivation and how learners respond. By adding serious games to the learning ecosystem, a set of opportunities becomes available to facilitate engagement with the educational contents. By leveraging game elements, active learning can be boosted by having learners become more participative in their education. This is highlighted by the articulation of gaming and learning mechanics and the various levels of motivation and autonomy that different educational strategies provide. These can be articulated with Bloom's taxonomy to facilitate the development of skills for life and not just for the moment or a grade. Ideally, the student should be intrinsically motivated and highly autonomous; to achieve this goal, analytics have started playing a critical role in gathering and processing data from user interactions to help educators and policymakers design better strategies to enhance learning. In the current connected world, analytics has become a powerful tool to help design user experiences. Education is no different, and the employment of digital tools including the web, virtual/augmented reality, makerspaces, and even social media provides valuable data to understand the relationship between the learner and the content. As a result of current technological advances, digital tools have become increasingly intrusive, collecting metrics ranging from keystrokes to the unique body, eye, and brain signals that define our engagement with content. This has become ever


more important given the current COVID-19 pandemic, which has seen a shift from a face-to-face education model to a remote/online education model. It is our belief that further articulating education and technology research can help better understand and design inclusive solutions that bring learners together regardless of their location, hardware, language, or disability. This is of particular importance for swift and adequate responses to changing delivery methods in cases where face-to-face teaching is not possible and online teaching can affect the quality of education by having students disengage with low motivation. Future work will focus on conducting a set of studies to assess how the capture of user metrics is perceived by students when presenting content tailored to different MIs, and to elucidate any significant differences that could lead to innovative approaches for creating customizable learning experiences through serious games.

Acknowledgements The authors acknowledge the support of Universidad de Carabobo, Venezuela, and the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) in the form of a Discovery grant RGPIN-2018-05917.

References

1. Adams, J.G., Walls, R.M.: Supporting the health care workforce during the COVID-19 global epidemic. JAMA 323(15), 1439–1440 (2020)
2. Angarita, A., Hernandez, A., Carmichael, C., Uribe-Quevedo, A., Rueda, C., Salinas, S.A.: Increasing virtual reality immersion through smartwatch lower limb motion tracking. In: Stephanidis, C. (ed.) HCI International 2019—Posters, pp. 345–352. Springer International Publishing, Cham (2019)
3. Arnab, S., Lim, T., Carvalho, M.B., Bellotti, F., De Freitas, S., Louchart, S., Suttie, N., Berta, R., De Gloria, A.: Mapping learning and game mechanics for serious games analysis. Br. J. Educ. Technol. 46(2), 391–411 (2015)
4. Aromaa, S., Väänänen, K.: Suitability of virtual prototypes to support human factors/ergonomics evaluation during the design. Appl. Ergon. 56, 11–18 (2016)
5. Atiku, S.O., Boateng, F.: Rethinking education system for the fourth industrial revolution. In: Human Capital Formation for the Fourth Industrial Revolution, pp. 1–17. IGI Global (2020)
6. Balderas, M.V., Carmichael, C., Ko, B., Nova, A., Tabafunda, A., Uribe-Quevedo, A.: A makerspace foot pedal and shoe add-on for seated virtual reality locomotion. In: 2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), pp. 275–280. IEEE (2019)
7. Bao, W.: COVID-19 and online teaching in higher education: a case study of Peking University. Hum. Behav. Emerg. Technol. 2(2), 113–115 (2020)
8. Bear, G.G., Slaughter, J.C., Mantz, L.S., Farley-Ripple, E.: Rewards, praise, and punitive consequences: relations with intrinsic and extrinsic motivation. Teach. Teach. Educ. (2017)
9. Bin Mat, U., Buniyamin, N., Arsad, P.M., Kassim, R.: An overview of using academic analytics to predict and improve students' achievement: a proposed proactive intelligent intervention. In: 2013 IEEE 5th Conference on Engineering Education (ICEED), pp. 126–130. IEEE (2013)
10. Bingham, A.J.: A look at personalized learning: lessons learned. Kappa Delta Pi Record 55(3), 124–129 (2019)
11. Bonetti, F., Warnaby, G., Quinn, L.: Augmented reality and virtual reality in physical and online retailing: a review, synthesis and research agenda. In: Augmented Reality and Virtual Reality, pp. 119–132. Springer (2018)

9 Serious Games and Multiple Intelligences …

187

12. Bowman, D.A., Gabbard, J.L., Hix, D.: A survey of usability evaluation in virtual environments: classification and comparison of methods. Presence: Teleoperators Virt. Envir. 11(4), 404–424 (2002). https://doi.org/10.1162/105474602760204309 13. Campbell, L., Campbell, B., Dickinson, D.: Teaching & Learning Through Multiple Intelligences. ERIC (1996) 14. Cheong, P.H., Shuter, R., Suwinyattichaiporn, T.: Managing student digital distractions and hyperconnectivity: communication strategies and challenges for professorial authority. Commun. Educ. 65(3), 272–289 (2016) 15. Cheung, J.J., Kulasegaram, K.M., Woods, N.N., Moulton, C.a., Ringsted, C.V., Brydges, R.: Knowing how and knowing why: testing the effect of instruction designed for cognitive integration on procedural skills transfer. Adv. Health Sci. Educ. 23(1), 61–74 (2018) 16. Eberhard, B., Podio, M., Alonso, A.P., Radovica, E., Avotina, L., Peiseniece, L., Caamaño Sendon, M., Gonzales Lozano, A., Solé-Pla, J.: Smart work: the transformation of the labour market due to the fourth industrial revolution (i4. 0). Int. J. Bus. Econ. Sci. Appl. Res. 10(3) (2017) 17. Erlinger, L.R.: High-fidelity mannequin simulation versus virtual simulation for recognition of critical events by student registered nurse anesthetists. AANA J. 87(2) (2019) 18. Freire, M., Serrano-Laguna, Á., Manero, B., Martínez-Ortiz, I., Moreno-Ger, P., FernándezManjón, B.: Game learning analytics: learning analytics for serious games. In: Learning, Design, and Technology, pp. 1–29. Springer Nature Switzerland AG (2016) 19. García-Redondo, P., García, T., Areces, D., Garmen, P., Rodríguez, C.: Multiple intelligences and videogames: Intervention proposal for learning disabilities. IntechOpen: London, UK pp. 83–97 (2017) 20. Gardner, H.: Multiple approaches to understanding. In: Contemporary Theories of Learning, pp. 129–138. Routledge (2018) 21. 
Greenberg, K., Zheng, R.Z., Maloy, I.: Understanding the role of digital technology in multiple intelligence education: A meta-analysis. In: Examining Multiple Intelligences and Digital Technologies for Enhanced Learning Opportunities, pp. 65–92. IGI Global (2020) 22. Hajhashemi, K., Caltabiano, N.J., Anderson, N., Tabibzadeh, S.A.: Students’ multiple intelligences in video-assisted learning environments. J. Comput. Educ. 5(3), 329–348 (2018) 23. Harbison, R.A., Dunlap, J., Humphreys, I.M., Davis, G.E.: Skills transfer to sinus surgery via a low-cost simulation-based curriculum. In: International Forum of Allergy & Rhinology, 4, pp. 537–546. Wiley Online Library (2018) 24. Hollander, J.E., Carr, B.G.: Virtually perfect? telemedicine for Covid-19. N. Engl. J. Med. 382(18), 1679–1681 (2020) 25. Hookham, G., Nesbitt, K.: A systematic review of the definition and measurement of engagement in serious games. In: Proceedings of the Australasian Computer Science Week Multiconference, pp. 1–10 (2019) 26. Iten, N., Petko, D.: Learning with serious games: is fun playing the game a predictor of learning success? Br. J. Educ. Technol. 47(1), 151–163 (2016) 27. Jaffer, U., Normahani, P., Matyushev, N., Aslam, M., Standfield, N.J.: Intensive simulation training in lower limb arterial duplex scanning leads to skills transfer in real-world scenario. J. Surgical Educ. 73(3), 453–460 (2016) 28. Kartick, P., Quevedo, A.J.U., Gualdron, D.R.: Design of virtual reality reach and grasp modes factoring upper limb ergonomics. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 799–800. IEEE (2020) 29. Kirschner, P.A.: Stop propagating the learning styles myth. Comput. Educ. 106, 166–171 (2017) 30. Laurillard, D.: Learning number sense through digital games with intrinsic feedback. Aust. J. Educ. Technol. 32(6) (2016) 31. 
Mainhard, T., Oudman, S., Hornstra, L., Bosker, R.J., Goetz, T.: Student emotions in class: the relative importance of teachers and their interpersonal relations with students. Learn. Instruct. 53, 109–119 (2018) 32. Mariyana, R., Zaman, B.: Design of multiple intelligences based learning environment in early childhood as a learning model of the millennium century. In: 8th UPI-UPSI International Conference 2018 (UPI-UPSI 2018). Atlantis Press (2019)

188

E. Zea et al.

33. Maseleno, A., Sabani, N., Huda, M., Ahmad, R., Jasmi, K.A., Basiron, B.: Demystifying learning analytics in personalised learning. Int. J. Eng. Technol. 7(3), 1124–1129 (2018) 34. Mula, I., Tilbury, D., Ryan, A., Mader, M., Dlouha, J., Mader, C., Benayas, J., Dlouh`y, J., Alba, D.: Catalysing change in higher education for sustainable development. Int. J. Sustain. High. Educ. (2017) 35. Ng, B., Ong, A.K.: Neuroscience and digital learning environment in universities: What do current research tell us? J. Scholarship Teach. Learn. 18(3), (2018) 36. Nordin, N., Norman, H.: Mapping the fourth industrial revolution global transformations on 21st century education on the context of sustainable development. J. Sustain. Dev. Educ. Res. 2(1), 1–7 (2018) 37. Ortegon, T., Vargas, M., Uribe-Quevedo, A., Perez-Gutierrez, B., Rojas, D., Kapralos, B.: Development of a 3d printed stethoscope for virtual cardiac auscultation examination training. In: 2017 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), pp. 125– 128. IEEE (2017) 38. Perveen, A.: Facilitating multiple intelligences through multimodal learning analytics. Turk. Online J. Distance Educ. 19(1), 18–30 (2018) 39. Prior, D.D., Mazanov, J., Meacheam, D., Heaslip, G., Hanson, J.: Attitude, digital literacy and self efficacy: flow-on effects for online learning behavior. Internet High. Educ. 29, 91–97 (2016) 40. Reber, T., Rothen, N.: Educational app-development needs to be informed by the cognitive neurosciences of learning & memory. NPJ Sci. Learn. 3(1), 1–2 (2018) 41. Rose, S.: Medical Student Education in the Time of Covid-19. Jama (2020) 42. Sajjadi, P., Vlieghe, J., De Troyer, O.: Evidence-based mapping between the theory of multiple intelligences and game mechanics for the purpose of player-centered serious game design. In: 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), pp. 1–8. IEEE (2016) 43. 
Sener, ¸ S., Çokçalı¸skan, A.: An investigation between multiple intelligences and learning styles. J. Educ. Train. Stud. 6(2), 125–132 (2018) 44. Shearer, C.B.: A detailed neuroscientific framework for the multiple intelligences: describing the neural components for specific skill units within each intelligence. Int. J. Psychol. Stud. 11(3) (2019) 45. Shewaga, R., Uribe-Quevedo, A., Kapralos, B., Alam, F.: A comparison of seated and roomscale virtual reality in a serious game for epidural preparation. IEEE Trans. Emerg. Top. Comput. (2017) 46. Shewaga, R., Uribe-Quevedo, A., Kapralos, B., Lee, K., Alam, F.: A serious game for anesthesia-based crisis resource management training. Comput. Entertain. (CIE) 16(2), 1–16 (2018) 47. Waree, C.: A multiple intelligences development of primary students by supporting teachers and students’ ability. Adv. Sci. Lett. 24(7), 5346–5350 (2018) 48. Wentzel, K.R., Miele, D.B.: Promoting self-determined school engagement: Motivation, learning, and well-being. In: Handbook of Motivation at School, pp. 185–210. Routledge (2009) 49. Westera, W.: Why and how serious games can become far more effective: accommodating productive learning experiences, learner motivation and the monitoring of learning gains. J. Educ. Technol. Soc. 22(1), 59–69 (2019) 50. Wouters, P., Van Oostendorp, H.: Overview of instructional techniques to facilitate learning and motivation of serious games. In: Instructional techniques to facilitate learning and motivation of serious games, pp. 1–16. Springer (2017) 51. Yusuf, B., Nur, A.H.B.: Pedagogical orientation in the fourth industrial revolution: flipped classroom model. In: Redesigning Higher Education Initiatives for Industry 4.0, pp. 85–104. IGI Global (2019) 52. Zendejas, B., Wang, A.T., Brydges, R., Hamstra, S.J., Cook, D.A.: Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 153(2), 160–176 (2013)

9 Serious Games and Multiple Intelligences …

189

53. Zhang, L., Li, K.F.: Education analytics: Challenges and approaches. In: 2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA), pp. 193–198. IEEE (2018) 54. Zhang, L., Zhou, J., Pan, Q., Li, L., Liu, H.: Makerspace-based innovation and entrepreneurship education system in higher education. In: 2018 4th International Conference on Social Science and Higher Education (ICSSHE 2018). Atlantis Press (2018)

Chapter 10

A Virtual Patient Mobile Application for Convulsive and Automated External Defibrillator Practices

Engie Ruge Vera, Mario Vargas Orjuela, Alvaro Uribe-Quevedo, Byron Perez-Gutierrez, and Norman Jaimes

Abstract Simulation has proven to be a useful tool for developing and maintaining cognitive and psychomotor skills in numerous areas, as exposure to realistic, controlled scenarios is critical for effective responses and adequate decision making in real life. In the medical field, the implementation of simulation technologies has brought together interdisciplinary teams focused on developing training platforms that allow practicing different procedures to improve health interventions and care. Recent advances in electronics, computing, and hardware have provided tools to develop simulators with various degrees of fidelity, including manikins that mimic several patient conditions, computer graphics simulation of inner organs and body parts, and virtual interactions with haptic devices and computer imagery. However, high-end simulators are only available in laboratories given their requirements for specialized infrastructure, which can limit access and availability to trainees due to the simulation costs associated with their acquisition, maintenance, and training. Meanwhile, the recent spike in commodity virtual reality technology is providing simulation developers with an additional layer for building immersive and interactive environments through computer-generated content that can be used on mobile devices, in conjunction with or independently of traditional simulation tools. The adoption of virtual reality and the availability of consumer-level systems present opportunities for delivering educational and experiential content realistically regardless of location and available facilities. In this chapter, we present the development of two virtual manikin mobile applications, one for resuscitation employing a virtual automated external defibrillator and another for convulsive treatment training. Our goal is to provide a mobile virtual approach that facilitates complementary practice via handheld devices by reproducing the tasks involved in each situation through touch-screen and motion-based interactions. To increase user engagement, we added game elements that bring realism to the simulation training by incorporating goals and metrics used to assess performance and decision making. To evaluate engagement and usability, we employed the System Usability Scale and the Game Engagement Questionnaire. A preliminary study shows that both apps are usable and engaging, and may help refresh knowledge of the procedures.

E. Ruge Vera · M. Vargas Orjuela · B. Perez-Gutierrez
Universidad Militar Nueva Granada, Cra11N101-80, Bogota, Colombia
e-mail: [email protected]
M. Vargas Orjuela e-mail: [email protected]
B. Perez-Gutierrez e-mail: [email protected]
A. Uribe-Quevedo (B) University of Ontario Institute of Technology, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada
e-mail: [email protected]; [email protected]
N. Jaimes Universidad Militar Nueva Granada, Cra11N101-80, Bogota, Colombia
e-mail: [email protected]
© Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_10

10.1 Introduction

In the context of medical education, simulation can be defined as a technique that allows interactive and immersive activities by recreating all or part of a clinical experience without exposing users to any real-life associated risks, including hazardous equipment manipulation, patient interactions, and life-threatening situations, amongst others [1]. Medical simulation has become a standard in training, as it allows developing, fine-tuning, and maintaining cognitive and psychomotor skills for various procedures [2], while replicating real conditions otherwise impossible to reproduce in learning environments [3]. In contrast to traditional techniques for developing medical skills (e.g., the apprenticeship model), simulation allows trainees to engage in deliberate practice, where repetition and failure contribute to skills improvement [4]. Moreover, current mobile trends present opportunities for developing content that extends practice outside the classroom [5]. One of the main reasons behind the adoption of simulation is the goal of improving health care delivery: in the US alone, more than 400,000 deaths are caused each year by medical errors, making them the third leading cause of death after cardiovascular diseases and cancer [6]. Simulation has proven effective in reducing deaths caused by medical errors associated with overdoses, decision making, procedure execution, and equipment-failure handling, amongst others [7]. Most importantly, simulation helps trainees better react to unexpected scenarios and improve through debriefing that identifies errors and best practices [8, 9]. The level of realism, or fidelity, varies significantly across simulation tools and plays an essential role, in conjunction with audio, visual, and haptic feedback, in developing effective simulated experiences [10]. The study of fidelity has provided insights into its role during medical training. The level of fidelity can be tailored to a specific set of skills, as pointed out by Chen, Grierson, and Norman, who conducted a study examining whether first-year medical students would benefit from lower-fidelity simulations, and last-year students from higher-fidelity ones, given the differences in the complexity of the procedures practiced [11]. The study concluded that highly contextualized learning environments may not necessarily lead to more effective learning, as low-fidelity participants achieved comparable or superior performance to those practicing with the high-fidelity scenario. Multimodality on its own can provide relevant cues that help trigger responses during training similar to those in real-life procedures, where sight, hearing, and touch drive several medical procedures [10]. However, given current hardware limitations, haptic feedback, for example, remains a relatively young field, still producing new approaches and equipment, including work that compensates for the lack of realism in one modality with enhancements in another. For example, adding audio cues to a scene may increase the perceived fidelity of a haptic task, leading to more immersive and effective training experiences [12]. Fidelity and multimodality also affect the costs of the simulation system, as high-fidelity sight, hearing, and haptic feedback require high-definition screens, headphones, and tactile interfaces that can reproduce the real world as accurately as possible, demanding complex and expensive hardware and software [13]. Although many educational institutions have adopted such high-end systems, costs also increase because of the need for specialized facilities, training, curriculum integration, and maintenance [14]. Due to the current prices of medical simulation, there is increasing interest in using consumer-level solutions to complement both traditional and simulated practices [15]. In this chapter, we present two mobile applications featuring virtual patients as complementary tools to expand the training opportunities offered by a medical simulation laboratory. One application focuses on treating a convulsive patient, and the other on the procedure of cardiopulmonary resuscitation (CPR) while employing an automated external defibrillator.
To better understand the potential of our solutions, we conducted a preliminary usability study employing the System Usability Scale questionnaire [16] and an engagement analysis with the Game Engagement Questionnaire [17], while monitoring participants' performance before and after using the mobile applications through pre- and post-tests.
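The System Usability Scale mentioned above follows a standard scoring rule: each of its ten Likert items (scored 1–5) is normalized — odd-numbered items contribute the response minus one, even-numbered items contribute five minus the response — and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch of that calculation (not code from the study itself):

```python
def sus_score(responses):
    """Compute the 0-100 System Usability Scale score from ten 1-5
    Likert responses (responses[0] is item 1, and so on)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        # Odd-numbered items are positively worded, even ones negatively.
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([3] * 10))  # all-neutral answers give the midpoint: 50.0
```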

10.2 Background Review

Traditional medical training tools have relied on cadavers, patients, practice amongst students, printed materials, and most recently multimedia content. Although cadaver-based training used to be preferred, it required acquiring the deceased body and complying with safety policies to guarantee proper systems for preserving and disposing of the waste produced by decomposing materials and the chemicals used for conservation [18]. Although the cadaver provided hands-on experiences, its use presented challenges associated with handling, as trainees could irreparably damage it or suffer allergies to the preserving chemicals employed [19]. Furthermore, cadaver training added another layer of difficulty, as the lack of blood circulation changes the coloration of organs, altering their appearance and mechanical properties [20]. Such limitations of cadavers were compensated for by approaches including practice on patients or between students to add realism. However, relying on cadavers and live subjects presented yet another challenge, associated with the variability of the health conditions provided by those serving as patients or the conditions existing in the cadaver. The content limitation was overcome by employing printed media in the form of illustrations and pictures, which allowed depicting a number of cases otherwise impossible to visualize with the previous approaches [21]. Most recently, multimedia has led to the development of interactive content, mostly re-adaptations of traditional materials enhanced with hyperlinks, videos, animations, and interactive readings, which are currently being re-tailored into virtual and augmented reality experiences that can provide more immersion and interactions meaningful to medical training [22]. As the medical field grew, traditional learning tools became insufficient, as inanimate objects and animals presented limitations of their own regarding the procedures that could be practiced with them. The need to train numerous students with safe patient replicas gave birth to simulation as an alternative that presents a representation of a real-life scenario, which can be augmented using technology to provide a meaningful experience that can be assessed quantitatively [6]. Early approaches to simulation included the fabrication of clay and stone models representing certain medical conditions; with the evolution of medicine and technology, in the 18th century the Phantom obstetrical manikin, comprising a human pelvis and a newborn cadaver, was developed, allowing the teaching of delivery techniques that helped reduce maternal and infant mortality rates at the time of its creation.

10.2.1 Early Simulation

Since its birth, simulation has focused on providing safe environments that expose trainees to conditions designed to test their skills while observing and gathering performance metrics. Over the years, advances in technology provided the necessary tools to develop more effective scenarios with numerous forms of feedback and realism. Resusci Anne was the first manikin for cardiopulmonary resuscitation (CPR) training, allowing trainees to practice hyperextension of the neck and the chin-lift technique to treat airway obstruction. Later versions of Resusci Anne included a spring-based mechanism for performing chest compressions during CPR. Resusci Anne was the cornerstone for further development in simulation, and modern versions are still being used and studied to determine their reliability and their improvements to medical training [23, 24]. Following the success of Resusci Anne, the Harvey manikin was introduced in 1968, allowing trainees to practice cardiac examination. Harvey reproduced various cardiac diseases through variations in blood pressure, heart sounds and murmurs, pulses, and breathing [25]. Since its creation, there have been numerous iterations aimed at producing more realistic versions of Harvey. It is important to note that even though manikin simulators provide suitable tools for training multiple medical scenarios, they still lack realistic human behavioral characteristics, which can affect training outcomes [26]. A common alternative that adds realism is the standardized patient, represented by an actor; first explored in 1964, standardized patients proved useful for evaluating the interactions and performance of trainees when facing a real person [6].

10.2.2 Modern Simulation

Currently, simulation manikins are employed in many areas of medical education. For example, with respect to cardiac auscultation, simulation manikins (e.g., Laerdal's SimMan 3G, 3B Scientific SAM 3G) allow examining numerous heart conditions to help trainees develop, fine-tune, and maintain the auscultation skills essential in bedside care and routine examinations [27]. With respect to eye-examination training, simulation efforts have resulted in the development of multimedia applications, manikin simulators [28], eye models [29], virtual reality (VR) [30], augmented reality [31], and mobile applications [32], whose purpose is to assist trainees in better diagnosing abnormalities in the eye. Furthermore, modern approaches to medical simulation integrate multiple technologies arising from consumer-driven industries in the fields of mobile, gaming, and VR, leading to the development of simulation tools with various levels of realism [33]. The development of interactive content in the form of multimedia or computer-generated worlds has led to the study of its learning effects, with a consensus supporting the use of enriched material, videos, 3D interactive models, animation, and general multimedia [34]. Recently, VR and games have become favorite tools amongst educators given their potential to engage learners in deliberate and meaningful practice [35]. Additionally, the popularity of games and their engaging effects have prompted further examination of the benefits of intertwining learning and game mechanics. For example, an overview in [36] highlights the growing use of games developed to assist in medical education, aimed at delivering affordable, accessible, and usable interactive virtual worlds supporting applications in training and education [37]. In addition to recreating a medical procedure, virtual simulation allows creating virtual patients that provide realistic medical interactions for the purpose of educating, training, and assessing trainees' performance, cultural competence, and decision making [38]. Currently, there are several simulators in which virtual patients are used, such as Vpsim [39], Decisionsimulation [40], and MedSims [41]. Although these systems present advanced solutions, mobile efforts are focusing on applications for emergency information in first-responder procedures [42], specialized clinical content, and team response during medical training [43].


10.3 Mobile Application Development

From the background review, we observed a clear trend of employing games and simulation as complements to traditional and manikin-based simulation training scenarios. Although studying effectiveness and retention in virtual and physical simulators is essential to testing the efficacy of such tools, there are other relevant aspects to study, related to usability and engagement, that can affect usage outcomes. The development of the AED and convulsive-treatment mobile applications comprised an analysis and characterization of both procedures, followed by system design, development, and implementation.

10.3.1 Automatic External Defibrillation

Ischemic heart diseases are among the leading global diseases [44], accounting for approximately 31% of worldwide deaths, including cardiac arrest, arrhythmia, and coronary failure, among many others [45]. Proper and prompt care, including CPR, can make the difference between life and death. However, CPR requires the application of continuous positive airway pressure (CPAP) and passive ventilation of the lungs synchronously with chest compression and decompression [46]. CPR is mastered through extensive practice conducted with manikins offering various levels of fidelity, which can measure and provide data to determine how well the technique is being performed [47]. Complementary to manikin-based training, a number of approaches have employed mobile devices, videos, and foam dummies to help teach the basics of chest compression and decompression, with promising results that show the importance of consumer-level training tools enabling deliberate practice, given that CPR is a skill that can be used by anyone [48]. Defibrillators have evolved from the complex machines employed in health care centers to portable, friendlier devices that can be used by anyone in an emergency. Automated external defibrillators (AEDs) are now widespread in locations where large crowds gather, to address heart emergencies until paramedics arrive [49]. Employing an AED does not require extensive comprehension of the device, its techniques, or medical knowledge, since the device walks the user through all steps. AEDs are taught as part of various curricula in medical school, in first-responder training, in open courses offered by the Red Cross [50], or by occupational health offices, with the goal of increasing their use. However, there are still barriers among the general public associated with the use of AEDs, as misconceptions about their use lead to concerns about hurting the person in distress or facing legal consequences, since many are unaware of the implications of applying an electrical shock to the human body to save a patient's life [51]. Hands-on practice of CPR can involve the use of high- and low-fidelity simulators, along with dummy AEDs that do not deliver any electric shock. However, their cost still creates an entry barrier for mass adoption and impact, leaving the general public with multimedia, printed guides, and most recently smartphone applications where basic information is provided. In this field, VR and games are providing interactive and engaging scenarios to help trainees and the general public obtain hands-on experience in the use of AEDs. Boada et al. developed a serious game to complement and refresh CPR skills and tested it by comparing the performance of 109 undergraduate nursing students with a traditional method employing a simulation manikin, finding that those who engaged with the game performed better [52]. To identify the interactions needed for our mobile application, we first determined the following steps for a proper response: (i) verify whether the person is conscious, (ii) dial the emergency number for your area, and (iii) apply CPR. However, if the person is unresponsive, an AED can be used, which requires the user to: (i) verify whether the person is conscious, (ii) dial the emergency number for your area if this has not been done before, (iii) turn on the AED, (iv) remove the protective cover from the electrodes, (v) place the electrodes in the correct places, and finally, (vi) follow the AED instructions (cardiac rhythm analysis, charging, do not touch the patient and stand clear, deliver shock, initiate CPR). More specifically, for this mobile application we chose ventricular fibrillation and pulseless ventricular tachycardia as the conditions present in the virtual patient; they were selected by a medical simulation expert based on the number of yearly reported cases of these two conditions, both of which all AEDs are able to respond to [53].
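The ordered AED steps above lend themselves to a simple step-sequencing check inside the app's evaluation logic. The sketch below is illustrative only — the step identifiers and error counting are assumptions, not the chapter's actual implementation:

```python
# Hypothetical sketch of sequencing the AED procedure's ordered steps;
# out-of-order actions are counted as errors for performance metrics.
AED_STEPS = [
    "verify_consciousness",
    "dial_emergency_number",
    "turn_on_aed",
    "remove_electrode_covers",
    "place_electrodes",
    "follow_aed_instructions",
]

class AedScenario:
    def __init__(self):
        self.next_index = 0  # index of the step expected next
        self.errors = 0      # out-of-order attempts

    def perform(self, step):
        """Accept a step only if it is the next one in the procedure."""
        if step == AED_STEPS[self.next_index]:
            self.next_index += 1
            return True
        self.errors += 1
        return False

    @property
    def completed(self):
        return self.next_index == len(AED_STEPS)

scenario = AedScenario()
for step in AED_STEPS:
    scenario.perform(step)
print(scenario.completed, scenario.errors)  # True 0
```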

10.3.2 Convulsive Treatment

For the convulsive scenario, we chose status epilepticus (SE), a neurological emergency that occurs in 12–30% of patients with epilepsy, with a mortality rate of 8% in children and 30% in adults [54]. SE requires immediate attention and, if left unattended, can pose a risk to life. The intrinsic mortality of SE has been between 1 and 7%, although overall mortality can reach 20%, and in cases of refractory SE up to 50%. Factors associated with the prognosis of SE include age, episode duration, aetiology, and response to treatment. As a result, treatments focus on prompt medication administration, which can help reduce the occurrence of the convulsion and, when it occurs, shorten the duration of the episodes before they become life-threatening. A patient with an episode of SE may present irregular breathing, sweating, pupil dilation, various vocal noises, and tremors, amongst many other symptoms [55]. Such conditions can be simulated to an extent with current manikin simulators through electromechanical components. However, some symptoms, such as body tremors, are challenging to reproduce throughout the manikin body, as adding actuators to match all the body muscles would increase the manikin's complexity and complicate its operation. Current manikins provide limited face movements, with voice interactions or vocal sounds reproduced from audio recordings or by performers via a microphone [56]. Drug administration is also possible in some manikins, allowing the system to monitor procedure application, drug selection, and dosage, and to alter the manikin's behavior accordingly [41]. The SE standard procedure involves reading the patient's clinical chart, checking the airways, monitoring the vital signs, performing venous access if required, and administering the appropriate medication [54]. For our SE scenario, there are six different medications, including Lorazepam, Diazepam, Midazolam, Carbamazepine, Benzodiazepine, and Vigabatrin, that can be administered to the patient via a syringe.
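A small sketch of how the medication-selection step could be validated in the app; the drug names follow the medications listed for SE (standard spellings), while the prescribed drug would come from the scenario definition. The function and variable names are assumptions, not the authors' code:

```python
# Illustrative medication check for the SE scenario: the trainee's
# selection is validated against the known SE drug list and against the
# drug prescribed by the (randomized) scenario definition.
SE_MEDICATIONS = {
    "Lorazepam", "Diazepam", "Midazolam",
    "Carbamazepine", "Benzodiazepine", "Vigabatrin",
}

def check_medication(selected, prescribed):
    """Return (is_known_se_drug, matches_prescription) for a choice."""
    known = selected in SE_MEDICATIONS
    return known, known and selected == prescribed

print(check_medication("Diazepam", "Diazepam"))  # (True, True)
print(check_medication("Aspirin", "Diazepam"))   # (False, False)
```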

10.3.3 Design and Development

Based on the AED and SE procedures, we designed our system architecture in a flexible manner that allows expanding its functionality if needed. The first design parameter was the deployment platform: we chose mobile devices running the Android operating system, given its large installed user base.1 Having defined the target platform, we next chose the development tool. Given the requirements for the mobile applications, we decided to employ the Unity game engine, as it provides cross-platform and rapid-prototyping tools that suit the scope of the project, supporting: (i) touch and motion-based user interactions to conduct both procedures, and (ii) visual and audio feedback obtained from the app and driven by the game/learning mechanics.

10.3.3.1 AED

The AED mobile application is designed to respond to touch inputs; a swiping gesture is required to remove the electrode protectors, remove the virtual patient's shirt, and place the electrodes over the chest. Similarly, when the patient requires chest compression and decompression, the user can proceed in two distinct ways: first, by tapping the patient's chest at the appropriate rhythm; second, by holding the mobile device with intertwined fingers and performing the compression and decompression motion in the air, toward a pillow, or against a soft object to mimic CPR. In the latter case, the movement is tracked using data from the mobile device's accelerometer. Moreover, the CPR motion parameters were adjusted by conducting chest compressions on a simulation manikin; motion tracking allowed determining the appropriate movement and the force thresholds required to treat the virtual patient. Figure 10.1 presents the gestures performed while using a tablet.
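The accelerometer-based CPR mode described above amounts to detecting compression peaks in the acceleration signal and checking their rhythm. A minimal sketch of such detection, with an illustrative threshold value rather than the one the authors tuned against their manikin:

```python
# Sketch of compression detection from timestamped accelerometer
# samples: count upward threshold crossings as compressions and report
# the rate in compressions per minute. The threshold is a placeholder.
def compression_rate(samples, threshold=1.5):
    """samples: list of (time_seconds, accel_magnitude) tuples,
    in chronological order; returns compressions per minute."""
    crossings = 0
    prev = samples[0][1]
    for _, mag in samples[1:]:
        if prev < threshold <= mag:  # rising edge through the threshold
            crossings += 1
        prev = mag
    duration = samples[-1][0] - samples[0][0]
    return 60.0 * crossings / duration if duration > 0 else 0.0

# Two synthetic compression peaks within one second -> 120 per minute.
samples = [(0.0, 1.0), (0.25, 2.0), (0.5, 1.0), (0.75, 2.0), (1.0, 1.0)]
print(compression_rate(samples))  # 120.0
```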

1 Global mobile OS market share in sales to end users from 1st quarter 2009 to 2nd quarter 2017, Statista, https://www.statista.com/statistics/266136/global-market-share-held-bysmartphone-operating-systems/. Accessed January 27, 2018.

10 A Virtual Patient Mobile Application for Convulsive …


Fig. 10.1 AED touch and accelerometer-based interactive gestures for CPR performed on a tablet

10.3.3.2

SE

The SE condition requires the user to perform the treatment through touch-based interactions by reviewing the patient chart, choosing and applying the medication with the proper dosage, and examining the pupils. Swiping and pinching gestures were implemented to provide natural interactions for dose selection and pupil examination, respectively, mimicking life-like interactions. SE interactions for vital signs, dosage, patient chart, and pupil examination are presented in Fig. 10.2.


E. Ruge Vera et al.

Fig. 10.2 SE touch-based gestures for vital signs, dosage, patient chart, and pupil examination

10.3.4 Game/Learning Mechanics

The development of both mobile applications followed the Analysis, Design, Development, Implementation, Evaluation (ADDIE) methodology, since it focuses on the development of applications for teaching and learning through phases of study based on the instructional objectives and the final product [57]. The design process followed all the procedures involved in both scenarios, with the addition of decision-making and meaningful choices to enhance the interactions with the virtual patients based on their conditions and random behaviours, prompting proper and rapid responses. To encourage trainees to explore the mobile applications, we included learning, practice, and evaluation modules. In the learning module, trainees can read about the concepts, procedures, and best practices in both scenarios. The information is presented in a multimedia format with images, videos, and interactions to ensure the content is compelling to the user. The practice module presents randomized scenarios for the trainee to practice without worrying about the outcomes; its primary goal is to give the user an opportunity to apply the information from the learning module. Finally, the evaluation module lets trainees test their skills in each procedure by introducing them to a life-like scenario. Here, the decisions affect the outcome, as all the conditions driving the situations are random


Fig. 10.3 Learning, practice, and evaluation modules in both mobile applications

requiring different approaches from the users. In the evaluation module, both mobile applications record metrics associated with the user interactions, with the objective of providing feedback that can help the user improve. To increase realism, the virtual patient can be saved or, if treated improperly, can reach a life-threatening state. Figure 10.3 presents a screen capture of all the modules in both mobile applications. While performing both procedures, the trainees receive feedback based on the following game elements: (i) a score associated with each procedure, where points are awarded or deducted based on the decisions made during each required interaction, (ii) completion time, monitored to inform decision-making times when responding to the simulated life-threatening scenarios, (iii) deteriorating patient conditions over time, and (iv) different audio-visual cues providing hints associated with the performed actions. For example, the heart rate beeping will change depending on the


Fig. 10.4 Scenarios and summary of outcomes for both Apps

treatment, or the stuttering of the patient during convulsion will worsen if incorrectly treated. At the end of the evaluation, a summary of the metrics is shown to users to let them know how they performed, as presented in Fig. 10.4.
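The four feedback elements above can be sketched as a small session tracker. All class and method names here are hypothetical, for illustration only; they are not taken from the published Apps:

```python
# Minimal sketch (assumed names) of the evaluation-module feedback elements:
# score, completion time, and a deteriorating patient condition over time.
import time

class EvaluationSession:
    def __init__(self, initial_health=100.0, decay_per_s=0.5):
        self.score = 0
        self.health = initial_health
        self.decay_per_s = decay_per_s
        self.start = time.monotonic()

    def record_decision(self, correct, points=10):
        # (i) points are awarded or deducted for each decision
        self.score += points if correct else -points

    def tick(self, elapsed_s):
        # (iii) the patient's condition deteriorates as time passes
        self.health = max(0.0, self.health - self.decay_per_s * elapsed_s)

    def summary(self):
        # (ii) completion time is part of the end-of-evaluation summary
        return {"score": self.score,
                "time_s": round(time.monotonic() - self.start, 1),
                "patient_health": self.health,
                "outcome": "saved" if self.health > 0 else "life-threatening"}

session = EvaluationSession()
session.record_decision(correct=True)        # +10
session.record_decision(correct=False, points=5)  # -5
session.tick(10)                             # 10 s elapse: health 100 -> 95
```

The audio-visual cues (element iv) would hang off the same state, e.g. the heart-rate beep frequency derived from `health`.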

10.4 Preliminary Study

We designed a preliminary study to gauge usability and engagement perceptions, along with a basic understanding of the procedures, after employing both mobile applications. For the study, participants joined a session where a facilitator introduced both mobile applications. Then, participants answered a pre-test quiz and engaged with all modules for 10 min. Afterwards, participants were asked to complete the System Usability Scale questionnaire (SUS) [16], the Game Engagement Questionnaire (GEQ) [17], and a post-test quiz.

Table 10.1 AED pre- and post-test questions

Question                                                                                         Answer 1   Answer 2
Do you have to remove upper body clothing when using an AED?                                     True       False
Must chest compression be gently applied to avoid harming a person in possible cardiac arrest?   True       False
Should you apply CPR if an AED is available?                                                     True       False
Only highly qualified personnel can use an AED                                                   True       False

Table 10.2 SE pre- and post-test questions

What steps should be performed before administering medication to a convulsive patient?
  Option 1: Check airways and vital signs
  Option 2: Check the clinical chart, airways, and vital signs
  Option 3: Check the clinical chart and secure the patient
  Option 4: Check vital signs and secure the patient
Should you introduce a foreign object into the mouth of the patient during convulsion to prevent choking?   True / False
Should you hold and secure a patient while convulsing?                                                      True / False
Should you avoid controlling the airways of a patient during convulsion?                                    True / False

The pre-test and post-test questions for the AED and SE mobile applications are presented in Tables 10.1 and 10.2, respectively.

10.4.1 Participants

As the skills associated with both scenarios are acquired within the first year of medical school, we invited first-year medical students from the Universidad Militar Nueva Granada in Bogota, Colombia to use both mobile applications. Five participants were recruited in total, three female and two male, aged between 18 and 20 years. All participants were familiar with both procedures, and only the male participants indicated that they were gamers. Participants used the mobile applications by holding a tablet with one hand and performing the interactions with the other, as presented in Fig. 10.5.

Fig. 10.5 Mobile application use

10.4.2 Pre- and Post-test

AED and SE mobile application results from the pre- and post-tests are presented in Table 10.3. Comparing the results, it can be observed that the session helped participants revisit the procedures, with an overall improvement of 30% for the AED scenario and 25% for the SE scenario.

Table 10.3 Pre- and post-tests (P denotes a participant)

Test            P1 (%)   P2 (%)   P3 (%)   P4 (%)   P5 (%)   Mean (%)
Pre-test AED    50       50       75       75       75       65
Post-test AED   100      75       100      100      100      95
Pre-test SE     75       50       100      25       100      70
Post-test SE    100      75       100      100      100      95
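As a check on the reported figures, the means and improvements in Table 10.3 can be recomputed directly from the per-participant scores:

```python
# Recomputing the Table 10.3 means and the reported improvements
# (30 percentage points for AED, 25 for SE) from the participant scores.
scores = {
    "pre_aed":  [50, 50, 75, 75, 75],
    "post_aed": [100, 75, 100, 100, 100],
    "pre_se":   [75, 50, 100, 25, 100],
    "post_se":  [100, 75, 100, 100, 100],
}

mean = {k: sum(v) / len(v) for k, v in scores.items()}
print(mean)
# {'pre_aed': 65.0, 'post_aed': 95.0, 'pre_se': 70.0, 'post_se': 95.0}
print(mean["post_aed"] - mean["pre_aed"])  # 30.0
print(mean["post_se"] - mean["pre_se"])    # 25.0
```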

10.4.3 System Usability Scale

The System Usability Scale (SUS) is a tool employed to study a variety of aspects related to system usability, including the need for support, training, and complexity. The SUS comprises ten questions; the responses, given on a 5-point Likert scale, are combined into a single score ranging from 0 to 100, representing a composite measure of the overall usability of the system being studied [16]. Based on prior research, a SUS score above 68 is considered usable, and anything below it is not. Although a higher score indicates better usability, the individual item values indicate which aspects require improvement. The obtained SUS results indicate that participants found both mobile applications usable: the AED App obtained a SUS score of 90/100, while the SE App obtained 91.5/100. The participants' responses allowed us to identify usability improvements associated with complexity, assistance, and difficulty of use.
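For reference, the standard SUS scoring procedure [16] can be expressed in a few lines: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to yield a 0–100 score.

```python
# Standard SUS scoring (Brooke, 1996): ten 1-5 Likert responses -> 0-100.
def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # odd items at even indices
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent who fully agrees with positive items and fully
# disagrees with negative ones scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```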

10.4.4 Game Engagement Questionnaire

The Game Engagement Questionnaire (GEQ) summarizes user perception regarding immersion (the experience of becoming engaged in the game-playing experience while retaining some awareness of one's surroundings), presence (being in a normal state of consciousness while immersed in a VR or game environment), flow (the feelings of enjoyment that occur when a balance between skill and challenge is achieved), and absorption (total engagement in the present experience) [17]. The GEQ comprises 19 questions that measure the game-playing experience at different levels of engagement. The GEQ score is calculated by assigning each possible response a numerical value: (i) No = −1, (ii) Maybe = 0, and (iii) Yes = 1. Higher GEQ scores indicate higher engagement, and lower scores indicate lower engagement. The GEQ participant responses for each of the 19 questions are presented in Fig. 10.6. Additionally, the GEQ engagement results for presence, absorption, flow, and immersion are presented in Table 10.4. The results show that participants were overall engaged concerning presence, flow, and immersion. However, their perception of absorption received the lowest rating, meaning that the participants did not achieve full engagement in the presented scenarios.

Fig. 10.6 GEQ participant responses

Table 10.4 GEQ results

GEQ parameter   Mean AED   Mean SE
Presence        0.25       0.5
Absorption      −0.04      0
Flow            0.06       0.17
Immersion       0.8        0.8
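The GEQ scoring rule described above (No = −1, Maybe = 0, Yes = 1, averaged per subscale) can be sketched as follows. The item-to-subscale grouping shown is illustrative only; the actual 19-item mapping is defined by the instrument in [17]:

```python
# Sketch of GEQ subscale scoring: map categorical answers to -1/0/+1
# and average them per engagement subscale. Item grouping is illustrative.
VALUES = {"No": -1, "Maybe": 0, "Yes": 1}

def subscale_means(responses, subscales):
    """responses: one answer per item index; subscales: name -> item indices."""
    scores = [VALUES[r] for r in responses]
    return {name: sum(scores[i] for i in items) / len(items)
            for name, items in subscales.items()}

subscales = {"presence": [0, 1], "flow": [2, 3]}   # hypothetical grouping
print(subscale_means(["Yes", "Maybe", "No", "Yes"], subscales))
# {'presence': 0.5, 'flow': 0.0}
```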

10.5 Conclusion

Here we have presented two mobile applications for practicing the treatment of a patient in need of an automatic external defibrillator and of a convulsive patient. The choice of Unity as the game engine for development provided a rapid prototyping environment that can be ported to other operating systems, including immersive virtual reality. Although we did not present the mobile applications running on iOS devices, we did compile a version on this system to verify that the core functionality was maintained when ported. Portability is very important, as it provides us with further testing scenarios that can help us enlarge the number of participants in future

tests, along with opportunities to study how each targeted system may affect the user experience and outcomes.

A preliminary pre- and post-test study was conducted, allowing us to highlight that participants fine-tuned their knowledge of the AED and SE procedures, as their post-test results improved over the pre-test. From a usability point of view, both mobile applications were found usable, with average SUS scores above 90/100. Moreover, although usable, after analyzing the individual SUS results for both mobile applications, the following improvements were identified: (i) ease of use, as current interactions required familiarization with the touch gestures, (ii) navigation through the graphical user interfaces, and (iii) mapping of real-life interactions to touch-based interactions.

From an engagement point of view, the GEQ results show that absorption received the lowest perception. We believe this is associated with the disconnection between the virtual and real tasks caused by treating patients through the screen of a handheld device, with interactions taking place within the screen size. An interesting behavior to note is how actively the participants explored the scenario and expressed interest in having more time to use the mobile applications. Additionally, after finishing the session, the participants expressed gratitude and praised both simulations, finding them friendly and easy to use. Participants suggested that we review and redesign the accelerometer interactions for the chest compressions, as those were difficult to reproduce during the sessions, leading to mismatched interactions and improper system responses; the resulting frustration led them to employ the touch interface instead. We believe the ineffectiveness of the accelerometer interactions was the result of the lack of haptic feedback when holding the mobile device in the air.
We can conclude from this preliminary study that both mobile applications hold the potential to positively impact practices within the AED and SE scenarios. We can also add that there is a gap to be bridged between high-end medical simulation and complementary tools that allow access to simulated practices on commodity devices, which can enlarge the amount of deliberate practice. Future work will focus on addressing usability and engagement by designing interaction mechanics that better mimic the real procedures, adding more examination scenarios, and conducting more robust statistical analyses by increasing the number of participants and also accounting for retention of the procedures.

Acknowledgements The authors would like to thank the support of Universidad Militar Nueva Granada under grants INO1640 and INO1641, its Virtual Reality Center, and its Medical Simulation Laboratory.

References

1. Winkler, B.E., Muellenbach, R.M., Wurmb, T., Struck, M.F., Roewer, N., Kranke, P.: Passive continuous positive airway pressure ventilation during cardiopulmonary resuscitation: a randomized cross-over manikin simulation study. J. Clin. Monitor. Comput. 31(1), 93–101 (2017)


2. HumanSim: 3DiTeams. URL http://www.humansim.com/projects/3diteams
3. Mann, K.V.: Theoretical perspectives in medical education: past experience and future possibilities. Med. Educ. 45(1), 60–68 (2010). https://doi.org/10.1111/j.1365-2923.2010.03757.x
4. Satava, R.: Keynote speaker: virtual reality: current uses in medical simulation and future opportunities; medical technologies that VR can exploit in education and training. In: 2013 IEEE Virtual Reality (VR), pp. xviii–xviii (2013). https://doi.org/10.1109/VR.2013.6549339
5. Institute for Health Metrics and Evaluation: Ischemic Heart Disease Worldwide, 1990 to 2013. URL http://www.healthdata.org/research-article/ischemic-heart-disease-worldwide-1990-2013
6. Omata, S., Someya, Y., Adachi, S., Masuda, T., Arai, F., Harada, K., Mitsuishi, M., Totsuka, K., Araki, F., Takao, M., Aihara, M.: Eye surgery simulator for training intraocular operation of inner limiting membrane. In: 2017 IEEE International Conference on Cyborg and Bionic Systems (CBS), pp. 41–44 (2017). https://doi.org/10.1109/CBS.2017.8266126
7. Tang, S., Hanneghan, M., El Rhalibi, A.: Introduction to games-based learning. In: Games-Based Learning Advancements for Multi-sensory Human Computer Interfaces: Techniques and Effective Practices, pp. 1–17. IGI Global (2009)
8. Woolliscroft, J.O., Calhoun, J.G., Tenhaken, J.D., Judge, R.D.: Harvey: the impact of a cardiovascular teaching simulator on student skill acquisition. Med. Teach. 9(1), 53–57 (1987)
9. Brooke, J.: SUS: a 'quick and dirty' usability scale. In: Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, A.L. (eds.) Usability Evaluation in Industry, chap. 21, pp. 189–194. Taylor and Francis, London (1996)
10. Vargas-Orjuela, M., Uribe-Quevedo, A., Rojas, D., Kapralos, B., Perez-Gutierrez, B.: A mobile immersive virtual reality cardiac auscultation app. In: 2017 IEEE 6th Global Conference on Consumer Electronics (GCCE), pp. 1–2 (2017). https://doi.org/10.1109/GCCE.2017.8229276
11. Allen, M.W., Merrill, M.D.: SAM and Pebble-in-the-Pond: two alternatives to the ADDIE model. Trends and Issues in Instructional Design and Technology, p. 31 (2017)
12. Huang, G., Reynolds, R., Candler, C.: Virtual patient simulation at U.S. and Canadian medical schools. Acad. Med. (2007)
13. Perlini, S., Salinaro, F., Santalucia, P., Musca, F.: Simulation-guided cardiac auscultation improves medical students' clinical skills: the Pavia pilot experience. Intern. Emerg. Med. 9(2), 165–172 (2012)
14. Sümer, N., Ayvasik, H.B., Er, N.: Cognitive and psychomotor correlates of self-reported driving skills and behavior. In: Driving Assessment: Proceedings of the 3rd International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design (2005). https://doi.org/10.17077/drivingassessment.1148
15. Torres, R., Nunes, F.: Applying entertaining aspects of serious game in medical training: systematic review and implementation. In: 2011 XIII Symposium on Virtual Reality (SVR), pp. 18–27 (2011). https://doi.org/10.1109/SVR.2011.33
16. Zendejas, B., Wang, A.T., Brydges, R., Hamstra, S.J., Cook, D.A.: Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 153(2), 160–176 (2013). https://doi.org/10.1016/j.surg.2012.06.025
17. Oh, J.H.: Are chest compression depths measured by the Resusci Anne SkillReporter and CPRmeter the same? Signa Vitae: J. Intens. Care Emerg. Med. 13(1), 24–27 (2017)
18. Gunderman, R.B., Wilson, P.K.: Exploring the human interior: the roles of cadaver dissection and radiologic imaging in teaching anatomy. Acad. Med. 80(8), 745–749 (2005). https://doi.org/10.1097/00001888-200508000-00008
19. Sharma, M., Horgan, A.: Comparison of fresh-frozen cadaver and high-fidelity virtual reality simulator as methods of laparoscopic training. World J. Surg. 36(8), 1732–1737 (2012). https://doi.org/10.1007/s00268-012-1564-6
20. Field, V.K., Gale, T., Kalkman, C., Kato, P., Ward, C.T.: A serious game to train patient safety outside the classroom: a pilot study of acceptability. BMJ Simul. Technol. Enhanced Learn. (2018)
21. Bork, F., Barmaki, R., Eck, U., Yu, K., Sandor, C., Navab, N.: Empirical study of non-reversing magic mirrors for augmented reality anatomy learning. In: 2017 IEEE International Symposium

on Mixed and Augmented Reality (ISMAR), pp. 169–176 (2017). https://doi.org/10.1109/ISMAR.2017.33
22. Riva, J.: Virtual reality in health care: an introduction. CyberTher. Rehabil. 1, 6–9 (2008)
23. Mathur, A.S.: Low cost virtual reality for medical training. In: 2015 IEEE Virtual Reality (VR), pp. 345–346. IEEE (2015)
24. Boada, I., Rodriguez-Benitez, A., Garcia-Gonzalez, J.M., Olivet, J., Carreras, V., Sbert, M.: Using a serious game to complement CPR instruction in a nurse faculty. Comput. Methods Prog. Biomed. 122(2), 282–291 (2015)
25. Anderson, K.R., Woodbury, M.L., Phillips, K., Gauthier, L.V.: Virtual reality video games to promote movement recovery in stroke rehabilitation: a guide for clinicians. Arch. Phys. Med. Rehab. 96(5), 973–976 (2015). https://doi.org/10.1016/j.apmr.2014.09.008
26. Panayiotopoulos, C.P.: Status epilepticus. In: A Clinical Guide to Epileptic Syndromes and Their Treatment, pp. 65–95 (2010)
27. Eaton, G., Renshaw, J., Gregory, P., Kilner, T.: Can the British Heart Foundation PocketCPR application improve the performance of chest compressions during bystander resuscitation: a randomised crossover manikin study. Health Inf. J., 1460458216652645 (2016)
28. Vankipuram, A., Khanal, P., Ashby, A., Vankipuram, M., Gupta, A., DrummGurnee, D., Josey, K., Smith, M.: Design and development of a virtual reality simulator for advanced cardiac life support training. IEEE J. Biomed. Health Inform. 18(4), 1478–1484 (2014). https://doi.org/10.1109/JBHI.2013.2285102
29. University of Pittsburgh School of Medicine: vpSim. URL http://vpsim.pitt.edu/shell/CaseList_Assignments.aspx
30. Valbuena, Y., Quevedo, A.J.U., Vivas, A.V.: Audio effects on haptics perception during drilling simulation. Ingeniería Investigación y Desarrollo: I2+D 17(2), 6–15 (2017)
31. Lee, J., Oh, P.J.: Effects of the use of high-fidelity human simulation in nursing education: a meta-analysis. J. Nurs. Educ. 54(9), 501–507 (2015)
32. Aghazadeh, S., Aliyev, A.Q., Ebrahimnejad, M.: The role of computerizing physician orders entry (CPOE) and implementing decision support system (CDSS) for decreasing medical errors. In: 2011 5th International Conference on Application of Information and Communication Technologies (AICT), pp. 1–3 (2011). https://doi.org/10.1109/ICAICT.2011.6110916
33. Yoshida, E.A., Castro, M.L.A., Martins, V.F.: Virtual reality and fetal medicine: a systematic review. In: 2017 XLIII Latin American Computer Conference (CLEI), pp. 1–10 (2017). https://doi.org/10.1109/CLEI.2017.8226468
34. Orjuela, M.A.V., Uribe-Quevedo, A., Jaimes, N., Perez-Gutierrez, B.: External automatic defibrillator game-based learning app. In: 2015 IEEE Games Entertainment Media Conference (GEM), pp. 1–4 (2015). https://doi.org/10.1109/GEM.2015.7377206
35. Cooper, J.B.: A brief history of the development of mannequin simulators for clinical education and training. Qual. Saf. Health Care 13, i11–i18 (2004)
36. Trinka, E., Cock, H., Hesdorffer, D., Rossetti, A.O., Scheffer, I.E., Shinnar, S., Shorvon, S., Lowenstein, D.H.: A definition and classification of status epilepticus: report of the ILAE Task Force on Classification of Status Epilepticus. Epilepsia 56(10), 1515–1523 (2015)
37. Brockmyer, J.H., Fox, C.M., Curtiss, K.A., McBroom, E., Burkhart, K.M., Pidruzny, J.N.: The development of the game engagement questionnaire: a measure of engagement in video game-playing. J. Exp. Soc. Psychol. 45(4), 624–634 (2009). https://doi.org/10.1016/j.jesp.2009.02.016
38. MedSims: Virtual patient simulations. URL http://www.medsims.com/our-solutions/
39. Carlson, J.N., Das, S., Spring, S., Frisch, A., Torre, F.D.L., Hodgins, J.: Assessment of movement patterns during intubation between novice and experienced providers using mobile sensors: a preliminary, proof of concept study. BioMed Res. Int. 2015, 1–8 (2015). https://doi.org/10.1155/2015/843078
40. Chen, R., Grierson, L.E., Norman, G.R.: Evaluating the impact of high- and low-fidelity instruction in the development of auscultation skills. Med. Educ. 49(3), 276–285 (2015)
41. López, V., Eisman, E.M., Castro, J.L.: A tool for training primary health care medical students: the virtual simulated patient. In: 2008 20th IEEE International Conference on Tools with Artificial Intelligence, vol. 2, pp. 194–201 (2008). https://doi.org/10.1109/ICTAI.2008.50


42. Appventive: ICE: En caso de emergencia. URL https://play.google.com/store/apps/details?id=com.appventive.ice&hl=es_419
43. Brian, A., Sabna, N., Paulson, G.G.: ECG based algorithm for detecting ventricular arrhythmia and atrial fibrillation. In: 2017 International Conference on Intelligent Computing and Control Systems (ICICCS), pp. 506–511 (2017). https://doi.org/10.1109/ICCONS.2017.8250773
44. Belzer, F.O., Schweizer, R.T., Hoffman, R., Kountz, S.L.: Preservation and transplantation of human cadaver kidneys. Transplantation 14(3), 363–367 (1972). https://doi.org/10.1097/00007890-197209000-00013
45. Ruge-Vera, E., Uribe-Quevedo, A., Jaimes, N., Perez-Gutierrez, B.: Convulsive treatment game-based training app. In: 2015 IEEE Games Entertainment Media Conference (GEM), pp. 1–4 (2015). https://doi.org/10.1109/GEM.2015.7377200
46. WHO: Cardiovascular diseases (CVDs). URL http://www.who.int/mediacentre/factsheets/fs317/en/
47. Jorgenson, D.B., Skarr, T., Russell, J.K., Snyder, D.E., Uhrbrock, K.: AED use in businesses, public facilities and homes by minimally trained first responders. Resuscitation 59(2), 225–233 (2003). https://doi.org/10.1016/S0300-9572(03)00214-4
48. Kapralos, B., Moussa, F., Collins, K., Dubrowski, A.: Fidelity and multimodal interactions. In: Instructional Techniques to Facilitate Learning and Motivation of Serious Games, pp. 79–101. Springer (2017)
49. Jones, F., Passos-Neto, C.E., Braghiroli, O.F.M.: Simulation in medical education: brief history and methodology. Principles Pract. Clin. Res. 1(2) (2015)
50. Samra, H.E., Soh, B., Alzain, M.A.: A conceptual model for an intelligent simulation-based learning management system using a data mining agent in clinical skills education. In: 2016 4th International Conference on Enterprise Systems (ES), pp. 81–88 (2016). https://doi.org/10.1109/ES.2016.17
51. Mann, K.V.: Theoretical perspectives in medical education: past experience and future possibilities. Med. Educ. 45(1), 60–68 (2011). https://doi.org/10.1111/j.1365-2923.2010.03757.x
52. Perkins, G.D.: Simulation in resuscitation training. Resuscitation 73(2), 202–211 (2007). https://doi.org/10.1016/j.resuscitation.2007.01.005
53. DecisionSim: Decision Simulation. URL http://decisionsimulation.com/what-is-decisionsim/
54. van Berkom, P., Cloin, J., van Beijsterveldt, T., Venama, A.: The automated Laerdal Resusci Anne scoring system: scoring on slippery curves. Resuscitation 118, e96 (2017)
55. Chang, T.P., Weiner, D.: Screen-based simulation and virtual reality for pediatric emergency medicine. Clin. Pediat. Emerg. Med. 17(3), 224–230 (2016). https://doi.org/10.1016/j.cpem.2016.05.002
56. Red Cross: AED | Learn to Use an AED Defibrillator. URL https://www.redcross.org/take-a-class/aed
57. Schexnayder, S.M.: CPR education. Curr. Pediatr. Rev. 9(2), 179–183 (2013). https://doi.org/10.2174/1573396311309020011

Chapter 11

Lessons Learned from Building a Virtual Patient Platform

Olivia Monton, Allister Smith, and Amy Nakajima

Abstract Virtual Patients (VPs) were a mandatory component of the surgical rotation at McGill University for medical students and focused specifically on the teaching of trauma. These cases, written by clinicians and clinical researchers with research experience in VPs, enabled students not only to acquire core knowledge in the identification and management of trauma, but also provided an opportunity to practise skills, such as clinical decision-making and communicating in emotionally challenging situations (e.g., approaching a family member to discuss organ donation). Both faculty and learners appreciate the significant advantage of using VPs: it is a teaching modality which provides meaningful educational opportunities to learners, without risk of harm to patients (Cook et al. in Acad Med 85(10):1589–1602, [7]; Voelker in J Am Med Assoc 290(13):1700–1701, [26]). The authors were inspired by their experiences with these VP cases and began to consider how they might be able to contribute to this field. They became convinced that they could expand on the service offerings that were available at that time, by developing a platform for medical learners. This chapter focuses on this journey. The authors (AS, OM), with another colleague, a software developer, approached faculty at McGill whose expertise was in developing and researching VP cases. This collaboration has led to the creation of the VP software platform, Affinity Learning, and a content-based VP company, VPConnect. In this chapter, we will discuss our experience partnering, as medical students, with members of academia, research, clinicians and industry to create a VP platform, specifically highlighting:

O. Monton (B) · A. Smith
McGill University, Montréal, QC, Canada
e-mail: [email protected]
A. Smith
e-mail: [email protected]
A. Nakajima
Simulation Canada, Toronto, ON, Canada
e-mail: [email protected]
Bruyère Continuing Care/The Ottawa Hospital/Wabano Centre for Aboriginal Health and University of Ottawa, Ottawa, ON, Canada
© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_11


O. Monton et al.

1. The virtual environment as an effective, safe, and cost-efficient way to educate medical trainees;
2. The requirements behind a successful VP platform;
3. The obstacles and challenges we faced in developing a medical education innovation; and
4. Our thoughts on charting a way forward.

Keywords Virtual patients · Virtual reality · Medical education

11.1 Introduction: Simulation and Virtual Patients

Simulation-based medical education (SBME) has long been recognized as an effective way to educate trainees at the provider-, team-, and systems-level, addressing different learning needs and fulfilling a variety of functions [1, 19]. Simulation at the individual level promotes knowledge acquisition and skill development of a healthcare provider, whereas systems-level simulation takes a broader view, exploring issues related to the components of healthcare, a complex, socio-technical system consisting of multiple and multiply interacting components, including the environment, the organization, the work itself, and persons, which include providers, patients and families [11, 16, 24]. One form of simulation that has traditionally addressed individual learning gaps is screen-based simulation (SBS). SBS is a form of simulation in which the clinical scenario is displayed on a digital screen and functions as an alternative to in-person simulation [9]. SBS facilitates experiential learning through the use of digital scenarios, emphasizing the key role that experience has in the learning process [9, 17]. SBS includes the following simulation modalities: VPs; virtual worlds; screen-based haptic trainers; and resource management simulators [9]. Interest in digital simulation, including SBS, has been growing, reflecting the shift towards online learning [22]. VPs are a form of SBS that has been associated with positive learning outcomes. They are interactive computer-based clinical scenarios that replicate the physician–patient encounter [7–9]. VPs provide opportunities for medical trainees to engage with a seemingly real patient, in order to practise data gathering, diagnostic, clinical reasoning, and clinical decision-making skills. They have proven to be an effective tool in healthcare professions education [8].
There are several advantages to VPs as a simulation modality, including their accessibility, portability, customizability, scalability, and sustainability [9, 10]. Furthermore, VPs allow for standardization and robust data collection [9]. One significant advantage of using VPs, as with other forms of simulation, is the preclusion of potential harm to patients, which is an inherent risk associated with providers acquiring competencies in the clinical environment [28]. This is especially true for surgical specialties, in which learning curves representing the acquisition of surgical and procedural skills over time, can have tremendous implications for


patients [15]. Additionally, utilizing VPs also eliminates potential harm to simulated patients, such as anxiety and fatigue; for example, in scenarios involving emotionally challenging themes, such as sexual abuse or domestic violence [20]. VPs provide an opportunity for trainees to gain knowledge in areas with limited clinical exposure and address common knowledge gaps in a patient-safe environment [12, 25]. While scheduling of rotations and electives is decided upon at the levels of the various departments and programs, the learners' actual exposure to patients and clinical conditions is opportunistic and cannot be perfectly anticipated nor ensured. Macro-level societal, economic, ecological and policy factors also influence clinical exposure [16]. For instance, falling birth rates in Canada may challenge Obstetrics and Gynecology residency programs to provide sufficient clinical exposure to their trainees, and also challenge pediatric surgery programs to provide adequate surgical volumes to trainees and staff [2, 23]. VPs may help address learning gaps due to a lack of clinical exposure. As a form of online virtual simulation, VPs are a cost-effective simulation modality [14]. Though the development of a VP platform is associated with up-front financial and labour costs, these costs are relatively low when compared to the costs associated with the building, maintenance, and human resources required to operate a simulation center [18]. A recent study with nursing students compared learning outcomes and cost-utility analyses for mannequin-based simulation and virtual simulation, and found no difference between the two simulation modalities in terms of learning or performance [14]. Furthermore, this 2018 study found virtual simulation to be more favourable in terms of cost, with cost-utility ratios of $1.08 USD for virtual simulation, compared to $3.62 USD for mannequin-based simulation [14].
Leaders in medical education have advocated for an increase in the allocation of resources to support the use of instructional technologies, in addition to enhancing electronic infrastructure, through funding and leadership [22]. VPs can be used at all levels of medical education, including undergraduate and postgraduate medical education as well as continuing professional development (CPD). As a form of online education, VPs have the potential to fulfill multiple information needs, such as learning something new, expanding on existing knowledge, applying knowledge, and problem-solving [10, 13]. Additionally, by complementing in-person experiences, they present an opportunity for self-directed and blended learning [10]. At the undergraduate level, VPs may facilitate the development of clinical reasoning skills and can be used to prepare trainees for simulation-based activities or clinical rotations [8]. Residents are motivated to use online resources and identify the delivery of safe and effective patient care as their motivation for engaging with online content [10]. With the current transition to competency-based medical education (CBME), VPs have been used in the development of entrustable professional activities (EPAs) [21]. Physicians are receptive to using educational technologies to keep abreast of the latest advances and developments in their respective fields, thereby also fulfilling requirements for continuing medical education and professional development [6, 10]. In fact, one study found that physicians desire more online learning materials and simulation-based activities for continuing medical education credits [6]. As such, VPs appear to be an effective learning modality for CPD.

In summary, VPs are a subset of SBS that have been used in medical education at all levels. Advantages of using them include patient safety, customizability, and cost-effectiveness.

11.2 Virtual Patient Platform Requirements

Recognizing the advantages of VPs to learners and educators, it is important that VPs be part of an environment that supports content generation and delivery. VPs cannot exist in a vacuum: they require an ecosystem for learner engagement, case authoring and development, and assessment of learner performance. To this end, the authors sought to create a VP platform that could achieve widespread distribution, opting to present the platform as a commercial product to best achieve this goal. The platform seeks to create an incentive for clinicians to publish VPs relevant to their disciplines, and in response to changing guidelines or observed clinical knowledge gaps, while providing learners an avenue to build on their clinical knowledge and reasoning. Learners range from healthcare students to seasoned clinicians looking to maintain their competency in specific domains. Both educators and learners benefit from using a medium with the potential to provide content that is current and can be easily updated to reflect new information, as research expands on what is known and understood about a given topic [10]. With these goals in mind, we found, through trial and error, that for a VP platform to be successful, it must be able to achieve the following:

• Reduce learner barriers to access,
• Provide clinician authors with the ability to story tell,
• Increase learner immersion and connection to the patient,
• Include author responsibility for updating content, and
• Integrate and scale new technologies.

a. Reduce learner barriers to access

To reduce friction between the learner and the application, the goal was to ensure that the VP platform is usable on any modern device and does not require the installation of proprietary or ancillary software. Designing the application as a 'mobile-first' platform, wherein learners could use whatever device they chose (phone, tablet, desktop), ensured a uniform experience on all devices and enabled flexibility, as learners could use the platform where and when they wanted. We discovered that learners tended to start and resume VP cases not just in scheduled blocks, but while on the go, such as while taking public transit. As part of standard software design, we prioritized the incorporation of intuitive interfaces to drive an innate understanding of the application. This minimized the need for training materials or a learning period to understand how to navigate the
software. The intuitive interfaces also enable learners to focus on the case material at hand, with familiarity with common applications serving as the only prerequisite. To complement existing learning approaches, VPs allow seamless embedding of didactic information, such as online lectures or screen recordings, into a clinical case. Incorporating different learning modalities, such as multiple-choice questions, text response entries, and case branching that allows exploration of different topics, encourages active learning and a personalized learning strategy.

b. Provide clinician authors with the ability to story tell

The use of narrative, whether incorporated into a lecture, a podcast, or an in-person simulation, is a powerful technique for immersing learners in a clinical scenario. The use of stories presents opportunities for learning, both about clinical conditions and about patient-centered care [5], and offers a medium for clinician authors to draw upon personal experiences. VPs facilitate learner engagement by transforming clinical narratives into cases and allowing learners to chart their own course through different learning modalities and elements. Our experience has shown that the more familiar clinician authors became with the variety of possible adjuncts that could be added to the clinical narrative, the more authentic the cases became.

c. Increase learner immersion and connection to the patient

In addition to narrative, other immersive features can help place the learner in the clinical context presented in the case. For example, in the setting of managing a patient in a motor vehicle collision, the inclusion of a dynamic Vital Signs panel, which responds to user actions in real time, elevates the learners' awareness that the actions and choices they make within the scenario have physiologic consequences for the simulated patient.
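One way to realize such a responsive panel is a small state model that applies each learner action's physiologic effect to the simulated patient's vitals. The sketch below is a minimal illustration under invented assumptions: the action names, baseline vitals, and effect sizes are hypothetical and are neither the platform's actual rules nor clinical guidance.

```python
# Minimal sketch of a vitals model that reacts to learner actions in a
# trauma scenario. Actions and effect sizes are simplified illustrations,
# not the platform's real implementation or clinical guidance.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int = 125      # tachycardic trauma patient (hypothetical)
    systolic_bp: int = 88
    spo2: int = 84             # hypoxic, e.g. from a pneumothorax

# Each action maps to deltas applied to the simulated patient's vitals.
ACTION_EFFECTS = {
    "insert_chest_tube": {"heart_rate": -20, "systolic_bp": 14, "spo2": 10},
    "give_iv_fluids":    {"heart_rate": -5,  "systolic_bp": 8,  "spo2": 0},
    "delay":             {"heart_rate": 8,   "systolic_bp": -6, "spo2": -4},
}

def apply_action(vitals: Vitals, action: str) -> Vitals:
    """Mutate the vitals in place according to the chosen action."""
    for field, delta in ACTION_EFFECTS[action].items():
        setattr(vitals, field, getattr(vitals, field) + delta)
    return vitals

v = Vitals()
apply_action(v, "insert_chest_tube")   # decisive action improves physiology
print(v)
```

Rendering this state after every choice is what makes the consequence of each decision, such as whether to insert a chest tube, immediately visible to the learner.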
Clinical decision-making, such as whether to insert a chest tube, becomes less theoretical and more realistic, and entails responsibility for the VP's well-being. The very real challenges of medicine become explicit and visible for learners using this medium. While VP cases are generally useful for addressing individuals' learning gaps, checkpoints can be designed and inserted by authors to deliberately prompt learner reflection and to provide a more faithful representation of how clinical problems unfold in the real world. The authors' experience with cases written by the McGill Faculty members included difficult discussions with family members regarding organ donation. Even in the virtual environment, participating in such an emotionally fraught discussion, which students had not yet experienced in a real clinical setting, generated a very real emotional connection between the students and the simulated patient and family.

d. Include author responsibility for updating content

The ability to update online content also entails a responsibility to maintain that content, that is, to ensure that it is kept current. This was most pertinent for the authors in developing COVID-19 VPs for different specialties. The ever-changing landscape of a pandemic, in real time, necessitated continual revisions
of published VPs to best represent practices and guidelines, which themselves were being continually revised. VP cases highlighting topics that are refreshed at regular intervals, such as changes to specialty guidelines or updates in best practices, also require accountability and agreement on the content and the learning principles presented. When VPs are integrated as part of an institution's curriculum, a review process must be developed and adopted to confirm that the content is reviewed and accurate, to ensure that the stated learning objectives are appropriate, and to verify that not only the material presented but also the assessment metrics correspond to those objectives.

e. Integrate and scale new technologies

New technologies, whether augmented reality (AR), virtual reality (VR), or natural language processing (NLP), can provide new avenues to drive educational uptake. Having built out each aspect of the VP platform infrastructure, the authors have the flexibility to integrate new features into the platform as they become more widely adopted and feasible for widespread use. NLP is already showing promise in other studies focused on medical education [3] and is an active area of investigation for the authors.
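As one illustration of where NLP could plug in, the sketch below scores a learner's free-text response against a set of expected key concepts using simple token overlap. This is a toy stand-in, not the authors' NLP work; a production pipeline would need to handle synonyms, negation, and paraphrase far more robustly.

```python
# Toy sketch of automated free-text assessment: score a learner's answer by
# overlap with expected key concepts. Illustrative only; real NLP would use
# synonym handling or language models rather than exact token matching.
import re

def score_free_text(answer, expected_concepts):
    """Return the fraction of expected concepts mentioned in the answer."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    hits = {concept for concept in expected_concepts if concept in tokens}
    return len(hits) / len(expected_concepts)

expected = {"pneumothorax", "decompression", "oxygen"}
answer = "I would give oxygen and perform needle decompression for the pneumothorax."
print(score_free_text(answer, expected))  # 1.0: all three concepts mentioned
```

Even a crude scorer like this shows the integration point: the platform can grade a text response entry automatically and route the learner to feedback, with more sophisticated NLP swapped in behind the same interface.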

11.3 Obstacles and Challenges

Innovation in medical education faces a variety of obstacles, including the need to draw together expertise from a variety of disciplines, such as education, engineering, human factors, and behavioural sciences [9]. When designing an educational tool for commercial use, an additional challenge is making the platform attractive for commercial sale. In this section, we outline some of the challenges we encountered in partnering industry with academia to develop a commoditized product for medical education. The team, composed of members from academia and industry, as well as clinicians, was unified in the aim of improving existing VP cases by adopting an increasingly immersive and engaging environment. However, transforming an ideal vision into a sustainable financial reality posed multiple challenges. The team identified several differences in perspective that required resolution in order to achieve our goals:

a. Defining product specifications

The industry approach of engineers and entrepreneurs is user-centric, focusing on the needs of users and catering to their requirements. Academia looks inward, toward proven, evidence-based publications to serve as a roadmap. These two approaches may not always be in alignment; when our team was starting out, we discovered that developing a unified roadmap was a challenge.
b. Speed of release

Academic pursuits necessitate careful configuration of control and intervention groups, and this mindset can bleed into the approach to innovation. Academia and entrepreneurs can ultimately have different ideas about risk.

c. Exploration of new technology

As with the speed of releases, experimentation with new, unfamiliar technologies can lead to differences of opinion between academic and industry innovators.

d. Financial independence

Perhaps the most crucial disconnect encountered by our team was divergent prioritization among members regarding the need to generate cash flow to sustain design and development. Because the venture was self-funded and development was bootstrapped, all members of the team understood that costs would be carried internally prior to landing funding or sales. For the industry members, this created a pressing need to secure grant funding or to release a product for external sale, to provide a return on the investment of time and computing costs. The clinicians and researchers, on the other hand, have primary income from their research and clinical pursuits, and have research projects that reside in longer funding cycles.

e. Transitioning to sales

The divergent priorities regarding financial independence led directly to a difference of perspective on marketing a sellable product. Academia and industry may agree on the need for partner organizations to beta test products; however, market analysis, competitor research, and ensuring a robust business model are not native to an academic perspective. It became increasingly clear that resolving these competing objectives would become more challenging. Academia is incentivized to make a product in accordance with its perception of the requirements, as viewed through a research lens.
Software developers and entrepreneurs value commoditization to make the endeavour financially successful, focusing on both market demand and user input to identify what is important to the innovation. To reconcile these differences in perspective, it was ultimately decided to restructure the original team into two separate entities. The McGill Faculty members will continue to engage with clinician stakeholders, using their evidence-based approach to VP cases and years of research experience in the domain. The authors (OM, AS), contributing to the product Affinity Learning, will continue to innovate and integrate new technologies into the VP platform and allow organizations from different spheres, within and beyond medicine, to access the platform and create their own use cases. We believe this allows parallel development of VP innovation and will ultimately be more successful than proceeding with the single entity originally proposed.

11.4 Lessons Learned

We offer the following suggestions, drawn from our lived experience, to those planning to collaborate with a multidisciplinary team to develop a medical innovation:

a. Establish clear roles

As for any team-based endeavour, clear roles must be established, and domains of expertise need to be respected and acknowledged. Perspectives from academia and industry are both valuable and need to be shared, but agreement on the principal decisions regarding the direction of the project and the product itself must be reached in order to successfully scale up for widespread dissemination.

b. Start with seed money

Screen-based simulation is known to have "substantial up-front financial and labor costs" associated with its development [9]. While our venture succeeded in being self-funded, which allowed us to control all aspects of ownership, this situation concurrently created pressure to move from design and development to sales generation.

c. Front load intellectual property agreements

Akin to setting up authorship roles for a research publication, clearly identifying who will contribute to an innovation, and defining how, should be established early in the development process.

d. Start with a partner organization

Our experience with screen-based simulation highlighted the importance of working with a partner organization. Besides being fertile ground for understanding the needs and actions of users, partner organizations can help validate the business approach.

e. Include financial expertise

While it is well known that a team composed of "programmers, designers, clinical subject matter experts, and experts in education" [9] is imperative for developing screen-based learning, we discovered that financial and marketing expertise are also necessary components of the skill set for launching an innovative product.

11.5 A Way Forward

In 2011, Robin et al. advocated for a shift toward the uptake and use of instructional technologies in medical education. They encouraged the adoption and use of technology as a means to enhance learner experiences, and called for funding
and leadership support for establishing the necessary infrastructure and the appropriate use of such technologies [22]. Changes in the technology landscape have now provided us the tools necessary to meet these objectives. Our vision for the future extends beyond VPs. We envision a broad-based electronic learning platform that could be applied to a variety of medical specialties and subdomains and used at all levels of training. This type of platform would be especially appropriate for facilitating the teaching and training of interprofessional competencies, such as collaborative teamwork [4, 27]. Our goal is to integrate new technology as it becomes available and accessible. We endeavour to continue expanding our natural language processing features, to incorporate virtual reality, and to add other real-time features, such as heart rate monitors. To facilitate collaboration and connection around online learning, we hope to integrate chat rooms associated with particular cases.

Acknowledgements The authors would like to thank Sean Doyle for co-founding and developing the Affinity Learning platform, as well as to acknowledge Drs. David Fleiszer and Nancy Posel for their contributions as case authors and owners of VPConnect.

References

1. Auerbach, M., Stone, K.P., Patterson, M.D.: The role of simulation in improving patient safety. In: Grant, V.J., Cheng, A. (eds.) Comprehensive Healthcare Simulation: Pediatrics, pp. 55–65. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-24187-6_5
2. Barsness, K.A.: Trends in technical and team simulations: challenging the status quo of surgical training. Semin. Pediatr. Surg. 24(3), 130–133 (2015). https://doi.org/10.1053/j.sempedsurg.2015.02.011
3. Bond, W.F., Lynch, T.J., Mischler, M.J., Fish, J.L., McGarvey, J.S., Taylor, J.T., Kumar, D.M., Mou, K.M., Ebert-Allen, R.A., Mahale, D.N., Talbot, T.B., Aiyer, M.: Virtual standardized patient simulation: case development and pilot application to high-value care. Simul. Healthc. 14(4), 241–250 (2019). https://doi.org/10.1097/SIH.0000000000000373
4. Canadian Interprofessional Health Collaborative: A National Interprofessional Competency Framework. https://ipcontherun.ca/wp-content/uploads/2014/06/National-Framework.pdf (2010)
5. Charon, R.: Narrative medicine: a model for empathy, reflection, profession, and trust. J. Am. Med. Assoc. 286(15), 1897–1902 (2001). https://doi.org/10.1001/jama.286.15.1897
6. Cook, D.A., Blachman, M.J., Price, D.W., West, C.P., Baasch Thomas, B.L., Berger, R.A., Wittich, C.M.: Educational technologies for physician continuous professional development: a national survey. Acad. Med. J. Assoc. Am. Med. Coll. 93(1), 104–112 (2018). https://doi.org/10.1097/ACM.0000000000001817
7. Cook, D.A., Erwin, P.J., Triola, M.M.: Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad. Med. 85(10), 1589–1602 (2010). https://doi.org/10.1097/ACM.0b013e3181edfe13
8. Cook, D.A., Triola, M.M.: Virtual patients: a critical literature review and proposed next steps. Med. Educ. 43(4), 303–311 (2009). https://doi.org/10.1111/j.1365-2923.2008.03286.x
9. Chang, T.P., Gerard, J., Pusic, M.V.: Screen-based simulation, virtual reality, and haptic simulators. In: Grant, V.J., Cheng, A. (eds.) Comprehensive Healthcare Simulation: Pediatrics, pp. 105–114. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-24187-6_9

10. Daniel, D., Wolbrink, T.: Comparison of healthcare professionals' motivations for using different online learning materials. Ped. Invest. 3(2), 96–101 (2019). https://doi.org/10.1002/ped4.12131
11. Dubé, M.M., Reid, J., Kaba, A., Cheng, A., Eppich, W., Grant, V., Stone, K.: PEARLS for systems integration: a modified PEARLS framework for debriefing systems-focused simulations. Simul. Healthc. 14(5), 333–342 (2019). https://doi.org/10.1097/SIH.0000000000000381
12. Duque, G., Fung, S., Mallet, L., Posel, N., Fleiszer, D.: Learning while having fun: the use of video gaming to teach geriatric house calls to medical students. J. Am. Geriatr. Soc. 56(7), 1328–1332 (2008). https://doi.org/10.1111/j.1532-5415.2008.01759.x
13. Gottfredson, C., Mosher, B.: Are You Meeting All Five Moments of Learning Need? Learning Solutions. https://learningsolutionsmag.com/articles/949/are-you-meeting-all-five-moments-of-learning-need (2012)
14. Haerling, K.A.: Cost-utility analysis of virtual and mannequin-based simulation. Simul. Healthc. 13(1), 33–40 (2018). https://doi.org/10.1097/sih.0000000000000280
15. Harrysson, I.J., Cook, J., Sirimanna, P., Feldman, L.S., Darzi, A., Aggarwal, R.: Systematic review of learning curves for minimally invasive abdominal surgery: a review of the methodology of data collection, depiction of outcomes, and statistical analysis. Ann. Surg. 260(1), 37–45 (2014). https://doi.org/10.1097/SLA.0000000000000596
16. Holden, R.J., Carayon, P., Gurses, A.P., Hoonakker, P., Hundt, A.S., Ozok, A.A., Rivera-Rodriguez, A.J.: SEIPS 2.0: a human factors framework for studying and improving the work of healthcare professionals and patients. Ergonomics 56(11), 1669–1686 (2013). https://doi.org/10.1080/00140139.2013.838643
17. Kolb, D.A.: Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs, NJ (1984)
18. Lin, Y., Cheng, A., Hecker, K., Grant, V., Currie, G.R.: Implementing economic evaluation in simulation-based medical education: challenges and opportunities. Med. Educ. 52(2), 150–160 (2017). https://doi.org/10.1111/medu.13411
19. Petrosoniak, A., Brydges, R., Nemoy, L., Campbell, D.M.: Adapting form to function: can simulation serve our healthcare system and educational needs? Adv. Simul. 3(8) (2018). https://doi.org/10.1186/s41077-018-0067-4
20. Plaksin, J., Nicholson, J., Kundrod, S., Zabar, S., Kalet, A., Altshuler, L.: The benefits and risks of being a standardized patient: a narrative review of the literature. Patient 9(1), 15–25 (2016). https://doi.org/10.1007/s40271-015-0127-y
21. Posel, N., Hoover, M.L., Bergman, S., Grushka, J., Rosenzveig, A., Fleiszer, D.: Objective assessment of the entrustable professional activity handover in undergraduate and postgraduate surgical learners. J. Surg. Educ. 76(5), 1258–1266 (2019). https://doi.org/10.1016/j.jsurg.2019.03.008
22. Robin, B.R., McNeil, S.G., Cook, D.A., Agarwal, K.L., Singhal, G.R.: Preparing for the changing role of instructional technologies in medical education. Acad. Med. J. Assoc. Am. Med. Coll. 86(4), 435–439 (2011). https://doi.org/10.1097/ACM.0b013e31820dbee4
23. Statistics Canada: Crude birth rate, age-specific fertility rates and total fertility rate (live births). https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=1310041801 (2020)
24. Stone, K.P., Huang, L., Reid, J.R., Deutsch, E.S.: Systems integration, human factors, and simulation. In: Grant, V.J., Cheng, A. (eds.) Comprehensive Healthcare Simulation: Pediatrics, pp. 67–75. Springer, Berlin (2016). https://doi.org/10.1007/978-3-319-24187-6_6
25. Tellier, P.P., Bélanger, E., Rodríguez, C., Ware, M.A., Posel, N.: Improving undergraduate medical education about pain assessment and management: a qualitative descriptive study of stakeholders' perceptions. Pain Res. Manage. 18(5), 259–265 (2013). https://doi.org/10.1155/2013/920961
26. Voelker, R.: Virtual patients help medical students link basic science with clinical care. J. Am. Med. Assoc. 290(13), 1700–1701 (2003). https://doi.org/10.1001/jama.290.13.1700

27. World Health Organization: Framework for Action on Interprofessional Education and Collaborative Practice. https://www.who.int/hrh/resources/framework_action/en/ (2010)
28. Ziv, A., Wolpe, P.R., Small, S.D., Glick, S.: Simulation-based medical education: an ethical imperative. Acad. Med. J. Assoc. Am. Med. Coll. 78(8), 783–788 (2003). https://doi.org/10.1097/00001888-200308000-00006

Chapter 12

Engaging Learners in Presimulation Preparation Through Virtual Simulation Games

Marian Luctkar-Flude, Jane Tyerman, Lily Chumbley, Laurie Peachey, Michelle Lalonde, and Deborah Tregunno

M. Luctkar-Flude (B) · D. Tregunno
School of Nursing, Queen's University, 92 Barrie St., Kingston, ON K7L 3N6, Canada
e-mail: [email protected]

J. Tyerman · M. Lalonde
School of Nursing, University of Ottawa, 451 Smyth Rd., Ottawa, ON K1H 8M5, Canada
e-mail: [email protected]

L. Chumbley
Trent University, 1600 West Bank Dr., Peterborough, ON K9L 0G2, Canada
e-mail: [email protected]

L. Peachey
School of Nursing, Nipissing University, 100 College Dr., North Bay, ON P1B 8L7, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_12

Abstract Background: With increased emphasis on technology, nurse educators must carefully assess learning outcomes associated with various components of clinical simulation. Presimulation preparation is a critical aspect of simulation education that has not been well studied. Traditional presimulation activities include readings, lectures, and quizzes. Non-traditional activities include video lectures, online modules, and self-assessments. However, in our experience, learners may fail to adequately prepare for simulation when given traditional presimulation preparation activities. Research suggests that alternate presimulation activities improve learning outcomes. Thus, there is a need for innovative approaches to optimize learning during the simulation. Recently, there has been an increase in the use of virtual simulation and gaming in medical and nursing education. To date, we have not encountered the use of virtual simulation games (VSGs) as a presimulation preparation activity for clinical simulation. Over 30 validated clinical simulation scenarios have previously been developed by nurse educators from across Ontario for senior nursing students to enhance their transition to clinical practice. Each scenario is implemented with self-regulated presimulation preparation guided by a scenario-specific learning outcomes assessment rubric. The development of a series of VSGs aims to further enhance presimulation preparation for undergraduate nursing students participating in these scenarios. We propose that VSGs used for presimulation preparation will prove more engaging to learners, resulting in better preparation and improved performance during live simulations. Ultimately, the use of virtual simulation for presimulation preparation may translate to improved performance in real clinical settings, with a positive impact on patient safety and well-being. Objective: To describe the development and implementation of VSGs used to prepare nursing students prior to their experience in live simulation scenarios. Description of the Innovation: To further enhance learner preparation to participate effectively in live simulations, a virtual simulation game was developed for each of four live scenarios. Each game incorporated five decision points designed to promote critical thinking. The learning outcomes assessment rubric was also incorporated into the game as a pre/post assessment, allowing learners to gauge their own readiness to participate in the live simulation scenario. The virtual games consisted of video clips filmed from the perspective of the nurse interacting with the patient. At regular intervals, learners were required to select the best of three nursing actions in response to the situation. Learners were provided with immediate feedback following selection of an incorrect response and were directed back to select another response. Following selection of the correct response, learners were also provided with the rationale before the game proceeded to the next decision point. Feasibility Testing: Following usability testing, each virtual simulation game was implemented in two of four participating Ontario schools of nursing. Successful implementation of the games demonstrated the feasibility of using VSGs for presimulation preparation. Preliminary learner and faculty perceptions of the usefulness of the game to prepare learners to participate in live simulation have been very positive. Learners reported that the game was easy to use, interactive, relevant, and engaging, and preferable to completing assigned readings prior to simulation.
Learners appreciated that the rationale for each decision was provided as immediate feedback, which supported their learning, and reported that playing the game helped them feel more prepared and less anxious about participating in the simulation lab. Faculty anticipated that learners would be more likely to complete presimulation preparation when it was presented in this interactive format. Conclusion: We anticipate the virtual simulation game will be an engaging presimulation preparation activity. The advantages of using virtual games for presimulation preparation could include the promotion of self-regulated learning, enhanced knowledge, decreased anxiety, and enhanced preparation and performance during a live simulation scenario. Additionally, we anticipate that standardized presimulation preparation will reduce faculty preparation time and student assessment time and may decrease instructional time in the simulation laboratory. Collaboration and sharing of VSGs across nursing schools will mitigate development costs and result in cost savings in the long term. Funding: eCampusOntario Research and Innovation Grant. Keywords Remote virtual simulation · Virtual simulation games · Presimulation preparation · Clinical simulation · Nursing education
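The decision-point mechanic described in the abstract (the learner chooses the best of three nursing actions; incorrect choices return feedback and a re-prompt, while the correct choice reveals a rationale before the game advances) can be sketched as a short loop. The case content, option wording, and function names below are invented for illustration and are not taken from the actual games.

```python
# Sketch of the described decision-point loop: the learner keeps choosing
# until the correct action is selected; wrong picks get corrective feedback,
# the right pick gets a rationale. Content is invented for illustration.

DECISION_POINT = {
    "prompt": "The patient's SpO2 drops to 84%. What is the best nursing action?",
    "options": [
        {"text": "Document and reassess in 1 hour", "correct": False,
         "feedback": "Desaturation needs immediate action. Try again."},
        {"text": "Raise the head of the bed and apply oxygen", "correct": True,
         "feedback": "Correct: positioning and supplemental oxygen address hypoxia first."},
        {"text": "Call the family", "correct": False,
         "feedback": "This does not address the airway problem. Try again."},
    ],
}

def run_decision_point(point, choices):
    """Replay a learner's sequence of choices; return the feedback shown."""
    shown = []
    for idx in choices:
        option = point["options"][idx]
        shown.append(option["feedback"])   # immediate feedback either way
        if option["correct"]:              # rationale shown, game proceeds
            break                          # otherwise the learner is re-prompted
    return shown

# A learner who first picks option 0, then the correct option 1:
for line in run_decision_point(DECISION_POINT, [0, 1]):
    print(line)
```

Chaining five such decision points, with video clips between them and the pre/post rubric wrapped around the whole sequence, reproduces the game structure the chapter describes.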

12.1 Background

Clinical simulation has been widely incorporated into health professional education as an adjunct to, or replacement for, traditional learning experiences in clinical settings [1, 2]. Clinical simulations provide learners with realistic clinical learning environments in which to increase their knowledge and self-confidence and decrease their anxiety prior to entering clinical practice settings [3]. Simulation-based experiences give learners the opportunity to apply knowledge and skills and to practice clinical decision-making in an environment where they can learn from their mistakes without fear of harming a real patient. However, when learners are ill-prepared to participate in live simulations, their learning may be hampered and their anxiety increased. Additionally, the current generation of millennial learners (born from 1981 to 2004) prefer a variety of creative and experiential learning opportunities [4], including self-directed learning using digital resources such as blogs, videos, podcasts, social media, and other online activities, over traditional didactic educational practices [5]. Virtual simulation games (VSGs) are proposed as a presimulation preparation activity that may be more engaging to millennial learners, leading to better preparation and subsequently better performance during clinical simulations.

12.1.1 Presimulation Preparation

A simulation-based experience consists of three distinct phases: (1) the presimulation phase, (2) the participation phase, and (3) the debriefing phase [6]. The presimulation phase can be further partitioned into preparation and briefing, or prebriefing [7, 8]. During presimulation preparation, learners are provided with specific educational materials and activities to be completed in advance of the briefing and participation in the clinical simulation [7, 9]. The purpose of these materials and activities is to provide learners with the clinical content (knowledge and skills) required to participate successfully in the simulated clinical experience [6, 10]. Faculty help learners prepare for simulation through activities such as assigned readings, worksheets, and video lectures, or by providing learners with a brief history of the patient to be encountered during the session [7, 11]. Providing adequate preparation may help to decrease learner anxiety and enhance performance during the simulation [12, 13].

In 2010, an international and multidisciplinary group of simulation experts gathered in Copenhagen, Denmark, for a Utstein-style meeting to identify the state of the art of simulation-based education research as well as future research directions and methods [14]. A key research question that emerged from the dialogue was: "What kind of learner preparation is necessary or required to optimize simulation-based learning?" [14].

Traditional presimulation activities include readings, lectures, nursing care plans, and quizzes. Non-traditional activities include video lectures, online modules, and self-assessments. However, in our experience, learners may fail to adequately prepare
for simulation when given traditional presimulation preparation activities. A review of studies evaluating presimulation preparation activities suggests that alternate presimulation activities improve learning outcomes [8]. Thus, there is a need for innovative presimulation preparation approaches to optimize learning during the simulation. One novel approach to preparation is online video-based observational learning to prepare learners for simulation-based procedural skills. During observational learning, the desired skill performance is modelled for the learner, who then uses this information to modify their own performance during live simulated practice [15]. Additionally, observational learning that includes observation of errors during a video-based demonstration can improve learners' skill performance as long as they are aware of when they are viewing a flawed performance [16]. Recently, there has been an increase in the use of virtual simulation and gaming in the education of nurses and other healthcare professionals [17–20]. To date, we have not encountered the use of virtual simulations or games as a presimulation preparation activity for clinical simulation.

12.1.2 Virtual Simulations

Virtual simulations are “clinical simulations offered on a computer, the internet, or in a digital learning environment including single or multiuser platforms” [21]. Generally, these involve animated or video-based portrayals of clinical situations that can be accessed on a computer. Other terms that have been used interchangeably with virtual simulation include online simulation, web-based simulation, e-simulation, virtual clinical simulation, virtual patient simulation, computerized simulation, computerized virtual patients, simulation-based e-learning, and cybergogy [17, 22]. Educational benefits of virtual simulation are achieved through integration of virtual and general simulation strategies to promote learner engagement: general simulation characteristics include a theoretical framework, teaching method, feedback, debriefing, purpose of simulation, scenario, and outcome, whereas virtual-specific characteristics include instructor competency, mode of representation, participant role, interaction, type of platform, virtual framework, and virtual ethics [23]. Virtual simulation delivery platforms include avatar-based simulations in 3D environments, virtual and augmented reality, complex digital environments, virtual worlds, game-based platforms, and mobile apps [18, 24]. Specific virtual platforms or communities include The Neighborhood, Second Life, Web-SP, and Blue Mars [25, 26]. Virtual simulation incorporates aspects of experiential learning, problem-based learning, self-directed learning, and adult learning theory [22]. Research suggests that learning with well-designed computer-based simulations results in higher achievement in students than occurs with traditional lecture-based learning [27–29]. Additionally, several studies have demonstrated that online simulations are at least as good as mannequin-based simulations in teaching clinical knowledge and reasoning

12 Engaging Learners in Presimulation Preparation …

227

[30–32]. Healthcare educators may use virtual simulations to enhance their lecture or web-based courses, to replace a portion of traditional simulations or clinical hours, to replicate high-risk or low-frequency clinical experiences, or for formative or summative assessments [18]. Results of an integrative review showed that web-based simulation is highly acceptable to nursing students, augments face-to-face teaching, provides increased accessibility and repeatability, allows learners to learn at their own pace, and promotes learning gains that align with other simulation approaches [17]. A meta-analysis of quantitative studies of web-based learning across the health professions concluded that interactivity, practice exercises, repetition, and feedback were associated with improved learning outcomes in comparison to non-computer-based instruction [33]. Results of a systematic review and meta-analysis suggested that computer-based simulation was the most effective strategy for improving nurses' knowledge when compared to other simulation strategies [34]. One randomized controlled trial demonstrated no statistically significant differences in nursing student knowledge and self-confidence between face-to-face and virtual clinical simulations; however, anxiety scores were higher for students in the virtual clinical simulation, and learners' self-reported preference was for face-to-face simulation, citing its similarity to practicing in a ‘real’ situation and the immediate debrief [3]. Virtual simulations also provide an opportunity for deliberate practice and self-reflection prior to clinical practice [27]. Well-designed online simulations can also improve nursing students' clinical reasoning.
In one study, clinical reasoning gains were significantly higher for online simulations designed with a Simple-to-Complex (S2C) approach than for simulations designed with a Productive Failure (PF) approach; the S2C approach involved detailed step-by-step instruction that increases in complexity as learners' skills and abilities grow. By contrast, learners in the PF approach explored complex challenges beyond their current skills and abilities and were allowed to struggle and even fail before guidance was provided [35]. As use of web-based simulation in nursing curricula grows, further research is necessary to objectively evaluate learner outcomes and to justify its use [17]. Additionally, economic analyses need to be conducted to support the use of virtual simulation as an adjunct to, or replacement for, traditional clinical simulations [36, 37].

12.1.3 Virtual Simulation Games

Virtual simulation may be enhanced by use of various gaming design principles or facets. Educational gaming strategies include delivering small chunks of information with interactive trial-and-error activities that allow risk-taking in a safe environment [38]. New technologies allow games to be played on various devices including computers, tablets, smartphones, and virtual reality headsets. One engaging facet of gaming is “make-believe,” which allows participants to explore roles and situations; in healthcare education, credible VSGs allow learners to develop empathy and professionalism by “walking in someone else's shoes” [39]. Simulation games have been reported to decrease learner fear of the unexpected and increase readiness for clinical practice [40]. Virtual simulation games are considered “serious games” that offer learning tasks in a realistic and engaging online environment where learners experience the consequences of their decision-making [41]. Few high-quality open access VSGs are available for nursing education. Two sites that host VSGs created in Canada are the Canadian Alliance of Nurse Educators using Simulation (CAN-Sim) at www.can-sim.ca and the Virtual Simulation Community of Learning (VSCOL) at www.sim-one.ca/community/vscol. Well-designed VSGs have the potential to enhance learning to the same degree as live clinical simulations, but at a fraction of the cost when delivered to large groups of learners [42]. Pedagogical elements of VSGs may include clinical cases, multiple modalities, and scaffolding of learning; gaming elements may include competition and multiple levels; and simulation elements may include representations of situations or objects, user interaction, and immediate feedback [41]. Additional advantages of VSGs over face-to-face simulations are that they are accessible at any time or place with internet access and that they are repeatable. VSGs also promote psychological safety as learners can play the games independently without risk of judgment by their peers or their instructors [43]. Deliberate practice in simulation is known to improve performance and contribute to mastery learning of defined objectives [44]. Evidence suggests that mastery learning using technology-enhanced simulation is superior to non-mastery learning [45], and translates to improved patient care practices and outcomes [46].
Although both entertainment games and educational games must engage their players, the goal of an entertainment game is to keep the player engaged in the game environment for as long as possible, whereas the goal of an educational game is to prepare players for the real world [24]. In health professional education, a good educational game reinforces previously learned didactic content and prepares learners for additional experiential learning during traditional clinical simulations and clinical practice rotations [24]. The use of VSGs may also entice the learner to explore the consequences of poor coordination of care by going back into the game and selecting the incorrect answers to learn what would occur in patient care. Hence, the learner explores concepts of patient safety in a near-miss context, since the incident reviewed in the game does not reach the patient [47]. Computer-based simulations are designed to provide feedback without requiring the educator to invest additional time and resources to create the feedback [48]. However, debriefing is considered to be a critical component of the experience that contributes to learning, and thus educators may need to provide additional self-reflection activities or facilitate a synchronous online or in-person debrief with learners who have completed a VSG [49]. VSGs may also provide opportunities for both formative and summative assessments, as scoring systems can be embedded within the games; however, as with traditional simulations, for high-stakes testing the educator must ensure the participant has had multiple previous exposures to the simulation-based experience and evaluations [9].


12.1.4 Presimulation Preparation Using Virtual Simulation Games

One of the challenges in preparing learners to participate in live simulations is the lack of motivation and commitment of students who fail to complete assigned presimulation preparation activities such as readings [8]. When learners are not adequately prepared to participate in a simulation, learning may be hindered. As there are considerable costs associated with live simulations, it is important that learners come fully prepared to demonstrate competency during the scenarios. Virtual simulation games combine the features of online observational learning, deliberate practice, and gamification to provide an accessible, repeatable, immersive, and engaging learning activity. Thus, we propose their use as presimulation preparation prior to participation in a live clinical scenario. The application of gamification has the potential to add an element of fun that improves engagement in training activities and subsequently improves learner motivation and performance [50]. Online games align well with adult learning theory, which assumes adult learners are intrinsically motivated to learn when educational activities are flexible and convenient to complete at their own pace [51]. Additionally, gamification through the use of points and badges has been found to enhance engagement with online educational programs [52]. Gamification is recognized as a strategy to increase knowledge retention and provides the opportunity for experimentation by the learner in a nonthreatening environment [51]. Thus, the application of gamification to virtual simulation could enhance the learner experience, improving learner adherence to and completion of a learning activity. Emotions and feelings generated through immersion in an educational activity also impact learner engagement, motivation, and learning [53–55].
Both positive and negative emotional arousal levels have been found to be higher in learners in active participant roles compared with observer roles in live clinical simulations [56]. Active participants have also been found to learn significantly better than observers [57, 58]. During VSGs, all learners take on the participant role and are required to make decisions throughout the game. Through immediate feedback, learners' understanding of the rationale and consequences of their clinical decision-making is enhanced. Given that VSGs combine online observational learning, deliberate practice, and gamification, we proposed that shorter VSGs could be used to enhance and standardize presimulation preparation and promote self-regulated learning. Ultimately this may translate to improved performance in real clinical settings with a positive impact on patient safety and well-being.


12.2 Virtual Simulation Game Project

12.2.1 Rationale

We proposed that VSGs used for presimulation preparation would be more engaging to learners, resulting in better preparation and improved performance during live simulations. Ultimately this may translate to improved performance in real clinical settings with a positive impact on patient safety and well-being.

12.2.2 Objective

To describe development and implementation of VSGs to prepare nursing students to participate in live simulation scenarios.

12.2.3 Methods

This project was funded through an eCampus Ontario Research and Innovation Grant to develop, implement, and evaluate four VSGs to prepare undergraduate nursing students to participate in live simulation scenarios. The funding allowed us to contract an eLearning specialist to guide us through the development process. Together we reviewed eLearning platforms that could support sharing of educational resources across the province. Options included free platforms such as YouTube and H5P, or licensed software such as Articulate Storyline. We also reviewed existing VSGs that could be found freely on the internet. Additional commercial virtual simulation products were not reviewed due to the associated costs. Several of the VSGs featured in the Virtual Simulation Community of Learning were created in Ontario using institutional and grant funding and a collaboration between faculty, instructional designers, web developers, and media specialists, at an estimated cost of $35,000 CAD per game over a period of six to nine months each [59]. Other virtual simulation design and production costs have been estimated to range between $10,000 and $50,000 USD [60]. However, we wanted to create a VSG design process that would be sustainable beyond the funding period of our grant such that we could continue to create more VSGs on our own. Thus, we opted to film our VSGs using a GoPro camera and selected the Articulate Storyline 2 software to create our VSG template [8, 61].


12.2.4 Scenario Selection

With previous funding from the Ontario Ministry of Health and Long-Term Care (MOHLTC), over 30 validated clinical simulation scenarios were developed by nurse educators from across Ontario for senior nursing students to enhance their transition to clinical practice. Each scenario is implemented with self-regulated presimulation preparation guided by a scenario-specific learning outcomes assessment rubric. Senior nursing students (n = 83) at one site preferred this method of presimulation preparation to traditional preparation with a lecture and assigned readings, reported high satisfaction with the assessment rubric and increased competence related to the learning outcomes, and valued the opportunity to identify their own learning needs related to the required competencies for the scenario [62].

12.2.5 Description of the Innovation

To further enhance learner preparation to participate effectively in live simulations, a virtual simulation game was developed for each of four live scenarios. Each game incorporated five decision points aligned with the learning outcomes for the clinical simulation and designed to promote critical thinking. A learning outcomes assessment rubric was also incorporated into each game as a pre/post assessment, allowing learners to gauge their own readiness to participate in the live simulation scenario. The virtual games consist of video clips filmed from the perspective of the nurse interacting with the patient. At regular intervals, learners must select the best of three nursing actions in response to the situation. Learners are provided with immediate feedback following selection of an incorrect response and are directed back to select another response. Following selection of the correct response, learners are also provided with the rationale prior to advancing to the next decision point, as it is possible for learners to select the correct response for the wrong reason; providing the rationale clarifies their understanding of the appropriateness of various actions in a given clinical situation.
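The decision-point logic described above can be thought of as a simple branching state machine: each decision point offers three candidate actions, an incorrect choice triggers immediate feedback and re-prompts the same point, and a correct choice reveals the rationale before the game advances. A minimal sketch of that flow follows; the scenario text, option wording, and class names are invented for illustration and are not taken from the actual games or the Articulate Storyline implementation.

```python
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    """One decision point: a prompt, three candidate actions, and feedback text."""
    prompt: str
    options: list    # three candidate nursing actions
    correct: int     # index of the best action
    rationale: str   # shown after the correct choice, before advancing
    feedback: list   # shown after an incorrect choice (indexed per option)

def play(points, choices):
    """Step through decision points with a scripted sequence of choices.

    An incorrect choice appends feedback and re-prompts the same point;
    only a correct choice advances. Returns the transcript of messages.
    """
    transcript = []
    i = 0  # index of the current decision point
    for choice in choices:
        point = points[i]
        if choice == point.correct:
            transcript.append(("rationale", point.rationale))
            i += 1  # advance to the next decision point
            if i == len(points):
                break
        else:
            transcript.append(("feedback", point.feedback[choice]))
    return transcript

# Hypothetical single decision point for illustration only
dp = DecisionPoint(
    prompt="The patient reports sudden shortness of breath. Best first action?",
    options=["Raise the head of the bed", "Leave to find help", "Document only"],
    correct=0,
    rationale="Positioning eases work of breathing while you assess further.",
    feedback=["", "Stay with the patient and call for help.", "Assess before documenting."],
)
log = play([dp], choices=[2, 0])  # one incorrect attempt, then the correct action
```

Played this way, the transcript records one feedback message followed by the rationale, mirroring the "directed back to select another response" behaviour described above.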

12.2.6 Usability Testing

Usability testing of each game was conducted by the game developers at each site. For example, the respiratory distress VSG was developed at Queen's University and thus the usability testing was conducted there. Following initial review by the research team, the respiratory distress VSG was reviewed by a sample of 3 nursing students and 3 instructors [63]. Participants completed a usability survey based on the Technology Acceptance Model [64], which was revised to evaluate VSGs [65], and the ClassRoom Instructional Support Perception (CRISP-VSG) scale, which is a revision of the Classroom Response System Perceptions (CRiSP) Questionnaire, a quantitative measure of learner perceptions of technology in the classroom, in this case the VSG [66, 67].
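Instruments of this kind are typically scored by averaging Likert-scale responses per item and per participant. A minimal sketch of that aggregation is shown below; the item counts and response values are invented for illustration and are not data from our study.

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree),
# keyed by participant; each list holds answers to the same ordered items.
responses = {
    "student_1": [5, 4, 4],
    "student_2": [4, 4, 5],
    "instructor_1": [3, 4, 4],
}

def item_means(data):
    """Mean score per item, averaged across participants."""
    cols = zip(*data.values())  # transpose: one tuple per item
    return [round(mean(col), 2) for col in cols]

def participant_means(data):
    """Mean score per participant, averaged across items."""
    return {p: round(mean(scores), 2) for p, scores in data.items()}

print(item_means(responses))
print(participant_means(responses))
```

Item means flag which aspects of the game scored lowest, while participant means show how each reviewer rated the game overall.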

12.2.7 Cost Utility and Learning Outcomes

The primary purpose of the research component of our project was to determine the cost utility of using VSGs for presimulation preparation in comparison to a traditional preparation activity, in our case a case study with decision points identical to those of the VSGs. The focus was on the costs of creating, delivering, and maintaining these educational tools. We propose that the cost of the VSGs will remain the same regardless of the number of students using them; thus the cost per student will decrease as the number of students increases. Our research project was also designed to compare the impact of the two types of presimulation preparation on nursing students' ability to achieve the intended learning outcomes, as measured by a clinical knowledge test and the learning outcomes assessment rubric for each VSG and clinical simulation. Additionally, we measured the impact of the VSG on learner anxiety and self-confidence prior to participating in the live simulation.
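The cost argument above is simple amortization: development cost is fixed, so the cost per student falls as enrolment grows. A toy calculation makes this concrete; the $35,000 figure echoes the per-game estimate cited earlier in the chapter, but the function and its inputs are illustrative, not our actual project costing.

```python
def cost_per_student(fixed_dev_cost, per_use_cost, n_students):
    """Amortize a fixed development cost over n students.

    fixed_dev_cost: one-time cost to create the VSG
    per_use_cost: marginal cost per learner (e.g., hosting), often near zero
    """
    return fixed_dev_cost / n_students + per_use_cost

# Illustrative figures only: a $35,000 game delivered to growing cohorts
for n in (100, 500, 1000):
    print(n, round(cost_per_student(35_000, 0, n), 2))
```

With zero marginal cost, doubling the cohort halves the per-student cost, which is the economic case for sharing VSGs across schools.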

12.3 Results

Over a one-year period we scripted, filmed, assembled, implemented, and evaluated four new VSGs used as presimulation preparation. Two VSGs were implemented at each of the four university sites, and data were collected on the cost utility and learning impact of using VSGs for presimulation preparation in comparison to traditional preparation using a paper-based case study. We also measured learner knowledge and performance, anxiety, and self-confidence. Some preliminary results using single-site data have been presented at simulation education conferences; these suggested that the VSGs resulted in similar outcomes to presimulation preparation with a case study, but that learners rated the VSGs higher [63]. Learners reported that the game was easy to use, interactive, relevant, and engaging, and preferable to completing assigned readings prior to simulation. Learners appreciated that the rationale for each decision was provided as immediate feedback, which supported their learning, and reported that playing the game helped them to feel more prepared and less anxious about participating in the simulation lab. Faculty anticipated that learners would be more likely to complete presimulation preparation when it was presented in this interactive format. Results of the final analysis of the multi-site data are pending publication. Since completing the original eCampus-funded project we have refined the scripting and filming process to create a two-day workshop to teach other educators how to create their own VSGs. The product of each workshop is one or two VSGs that can be shared with other educators across Canada through the Canadian Alliance of Nurse Educators using Simulation (CAN-Sim). We also have graduate students who are developing and evaluating VSGs for their own research studies [61].

12.4 Discussion

Our project demonstrated the feasibility of nurse educators developing their own VSGs at relatively low cost compared to other virtual simulation modalities such as virtual reality. Three presimulation preparation VSGs and their associated clinical simulation scenarios are now freely available on the CAN-Sim website at https://can-sim.ca/virtual-game-preview/. Additional resources such as learner preparation packages and learning outcomes assessment rubrics are included with each VSG. Preliminary results of our VSG implementation and evaluation studies are promising in suggesting that VSGs will be a cost-effective method to prepare learners to participate in clinical simulations [63]. Further research is needed to confirm these results and explore the potential for VSGs to supplement and/or replace a proportion of clinical simulations to offset their higher delivery costs without sacrificing learning. As a result of our project, we have developed the 2-day CAN-Sim Virtual Simulation Game Design Workshop. This fully interactive workshop walks all participants through the steps of simulation game design, filming, and assembly, and has been delivered to faculty from across Canada and one site in the U.S. New games continue to be created and added to the repository. We encourage all users to implement the games within a research study to assess different aspects of learning using VSGs. Subsequent to creating our VSGs for presimulation preparation, a graduate student created a new VSG depicting optimal care for patients in cardiac arrest secondary to ventricular fibrillation, employing our user-friendly VSG design process [61]. The student has conducted a randomized controlled trial to empirically evaluate the impact of the game on senior-level nursing students' cardiac resuscitation skills, results of which are pending publication.
Rigorous studies that focus specifically on presimulation activities are needed to inform best practice recommendations for different groups and levels of learners [8].

12.4.1 Strengths and Limitations

This work benefited from a collaboration of nurse educators from across the province of Ontario to develop a user-friendly approach to VSG development that can be replicated easily by other educators interested in creating their own VSGs for presimulation preparation, or potentially as a replacement for some traditional clinical simulations, which are resource intensive to deliver to large groups of learners. The lessons learned through the creation of each game informed the design and filming of the next. One limitation of the project was the tight timeline to complete all components of the development, implementation, and evaluation phases; however, we were able to demonstrate the feasibility of a quick turnover from designing to filming and assembly of VSGs using our process, a GoPro camera, and the Articulate Storyline 2 software. Lessons learned have proven invaluable as we move forward with this area of innovation and research. Another limitation is that we did not explore different methods of debriefing the VSG to determine the best approach. Recent research has begun to evaluate debriefing methods for virtual simulation, including self-debriefing, virtual debriefing, and in-person debriefing [49]. However, research is needed to determine how best to debrief VSGs when they are used as presimulation preparation activities. We chose to use the learning outcomes assessment rubrics, which were created to accompany each VSG/clinical simulation pair, to guide the debriefing prior to and following participation in clinical simulations; i.e., the VSGs were actually debriefed during the prebriefing for the live simulations. Further in-depth debriefing occurred following the live simulation. This is an area that we would like to explore further in the future. Overall, the work we have done to date highlights the value of collaboration in the development, implementation, and evaluation of innovative uses of technology in nursing education.

12.5 Conclusions

Virtual simulation games are an innovative presimulation preparation strategy that engages learners and provides them with immediate feedback on their clinical decision-making. By creating our own VSGs we were able to provide games that aligned directly with our intended learning outcomes and were levelled to learners' experience, to better prepare them to participate in a live simulation where they could demonstrate their competence within a given clinical scenario. We anticipate the advantages of using virtual games for presimulation preparation could include the promotion of self-regulated learning, enhanced knowledge, decreased anxiety, and enhanced preparation and performance during a live simulation scenario. Additionally, we anticipate that standardized presimulation preparation will reduce faculty preparation time and student assessment time and may decrease instructional time in the simulation laboratory. Collaboration and sharing of VSGs across nursing schools will mediate the development costs and result in cost savings in the long term. Further research is needed to demonstrate the impact of VSGs on learning outcomes and transfer to practice.


References

1. Aebersold, M.: Simulation-based learning: no longer a novelty in undergraduate education. Online J. Issues Nurs. 23 (2018)
2. Persico, L., Lalor, J.D.: A review: using simulation-based education to substitute traditional clinical rotations. Teach. Learn. Nurs. 14, 274–278 (2019)
3. Cobbett, S., Snelgrove-Clarke, E.: Virtual versus face-to-face clinical simulation in relation to student knowledge, anxiety, and self-confidence in maternal-newborn nursing: a randomized controlled trial. Nurs. Educ. Today 45, 179–184 (2016)
4. Novotney, A.: Engaging the millennial learner. Am. Psychol. Assoc. 41(3), 60 (2010)
5. Toohey, S., Wray, A., Wiechmann, W., Lin, M., Boysen-Osborn, M.: Ten tips for engaging the millennial learner and moving an emergency medicine residency curriculum into the 21st century. West J. Emerg. Med. 17, 337–343 (2016)
6. Luctkar-Flude, M.: Simulation approaches. In: Page-Cutrara, K. (ed.) Becoming a Nurse Educator in Canada. Canadian Association of Schools of Nursing, Ottawa (2020, in press)
7. Oermann, M.H.S., Gaberson, K.: Clinical Teaching Strategies in Nursing, 5th edn. Springer Publishing Company, New York (2018)
8. Tyerman, J., Luctkar-Flude, M., Graham, L., Coffee, S., Olsen-Lynch, E.: A systematic review of health care presimulation preparation and briefing effectiveness. Clin. Simul. Nurs. 27, 12–25 (2019)
9. INACSL Standards Committee: INACSL standards of best practice: SimulationSM simulation design. Clin. Simul. Nurs. 12, S5–S12 (2016)
10. Leigh, G., Steuben, F.: Setting learners up for success: presimulation and prebriefing strategies. Teach. Learn. Nurs. 13, 185–189 (2018)
11. Smith, S.B.: Integrating simulation in a BSN leadership and management course. J. Nurs. Educ. Pract. 3(11), 121–132 (2013)
12. Gantt, L.T.: The effect of preparation on anxiety and performance in summative simulations. Clin. Simul. Nurs. 9, e25–e33 (2013)
13. Shearer, J.N.: Anxiety, nursing students, and simulation: state of the science. J. Nurs. Educ. 55, 551–554 (2016)
14. Issenberg, S.B., Ringsted, C., Ostergaard, D., Dieckmann, P.: Setting a research agenda for simulation-based healthcare education: a synthesis of the outcome from an Utstein style meeting. Simul. Healthc. 6, 155–167 (2011)
15. Cheung, J.J.H., Koh, J., Brett, C., Bägli, D.J., Kapralos, B., Dubrowski, A.: Preparation with web-based observational practice improves efficiency of simulation-based mastery learning 11, 316–322 (2016)
16. Domuracki, K., Wong, A., Olivieri, L., Grierson, L.E.M.: The impacts of observing flawed and flawless demonstrations on clinical skill learning. Med. Educ. 49, 186–192 (2015)
17. Cant, R.P., Cooper, S.J.: Simulation in the internet age: the place of web-based simulation in nursing education. An integrative review. Nurs. Educ. Today 34, 1435–1442 (2014)
18. Foronda, C., Bauman, E.B.: Strategies to incorporate virtual simulation in nurse education. Clin. Simul. Nurs. 10, 412–418 (2014)
19. Howe, J., Puthumana, J., Hoffman, D., Kowalski, R., Weldon, D., Miller, K., Weyhrauch, P., Niehaus, J., Bauchwitz, B., McDermott, A., Ratwani, R.: Development of virtual simulations for medical team training: an evaluation of key features. Proc. Int. Symp. Hum. Fact. Ergon. Healthc. 7, 261–266 (2018)
20. Huttar, C.M., BrintzenhofeSzoc, K.: Virtual reality and computer simulation in social work education: a systematic review. J. Soc. Work Educ. 56, 131–141 (2020)
21. Foronda, C.L., Swoboda, S.M., Henry, M.N., Kamau, E., Sullivan, N., Hudson, K.W.: Student preferences and perceptions of learning from vSIM for Nursing™. Nurs. Educ. Pract. 33, 27–32 (2018)
22. Foronda, C., Godsall, L., Trybulski, J.: Virtual clinical simulation: the state of the science. Clin. Simul. Nurs. 9, e279–e286 (2013)


23. Shin, H., Rim, D., Kim, H., Park, S., Shon, S.: Educational characteristics of virtual simulation in nursing: an integrative review. Clin. Simul. Nurs. 37, 18–28 (2019)
24. Bauman, E.B.: Games, virtual environments, mobile applications and a futurist's crystal ball. Clin. Simul. Nurs. 12, 109–114 (2016)
25. Sweigart, L., Burden, M., Carlton, K.H., Fillwalk, J.: Virtual simulations across curriculum prepare nursing students for patient interviews. Clin. Simul. Nurs. 10, e139–e145 (2014)
26. Verkuyl, M., Mastrilli, P.: Virtual simulations in nursing education: a scoping review. J. Nurs. Heal. Sci. 3, 39–47 (2017)
27. Dubovi, I., Levy, S.T., Dagan, E.: Now I know how! The learning process of medication administration among nursing students with non-immersive desktop virtual reality simulation. Comput. Educ. 113, 16–27 (2017)
28. McMullan, M., Jones, R., Lea, S.: The effect of an interactive e-drug calculations package on nursing students' drug calculation ability and self-efficacy. Int. J. Med. Inform. 80, 421–430 (2011)
29. Öztürk, D., Dinç, L.: Effect of web-based education on nursing students' urinary catheterization knowledge and skills. Nurs. Educ. Today 34, 802–808 (2014)
30. Johnson, D., Corrigan, T., Gulickson, G., Holshouser, E., Johnson, S.: The effects of a human patient simulator vs. a CD-ROM on performance. Mil. Med. 177, 1131–1135 (2012)
31. Liaw, S.Y., Wong, L.F., Chan, S.W.C., Ho, J.T.Y., Mordiffi, S.Z., Ang, S.B.L., Goh, P.S., Ang, E.N.K.: Designing and evaluating an interactive multimedia web-based simulation for developing nurses' competencies in acute nursing care: randomized controlled trial. J. Med. Internet Res. 17, e5 (2015)
32. Schneider, P.J., Pedersen, C.A., Montanya, K.R., Curran, C.R., Harpe, S.E., Bohenek, W., Perratto, B., Swaim, T.J., Wellman, K.E.: Improving the safety of medication administration using an interactive CD-ROM program. Am. J. Health Syst. Pharm. 63, 59–64 (2006)
33. Cook, D.A., Garside, S., Levinson, A.J., Dupras, D.M., Montori, V.M.: What do we mean by web-based learning? A systematic review of the variability of interventions. Med. Educ. 44, 765–774 (2010)
34. Hegland, P.A., Aarlie, H., Strømme, H., Jamtvedt, G.: Simulation-based training for nurses: systematic review and meta-analysis. Nurs. Educ. Today 54, 6–20 (2017)
35. Dubovi, I.: Designing for online computer-based clinical simulations: evaluation of instructional approaches. Nurs. Educ. Today 69, 67–73 (2018)
36. Maloney, S., Haines, T.: Issues of cost-benefit and cost-effectiveness for simulation in health professions education. Adv. Simul. 1, 13 (2016)
37. Zendejas, B., Wang, A.T., Brydges, R., Hamstra, S.J., Cook, D.A.: Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 153, 160–176 (2013)
38. Werth, E.P., Werth, L.: Effective training for millennial students. Adult Learn. 22(3), 12–19 (2011)
39. Ellaway, R.H.: A conceptual framework of game-informed principles for health professions education. Adv. Simul. 1, 28 (2016)
40. Ambrosio Mawhirter, D., Ford, G.P.: Expect the unexpected: simulation games as a teaching strategy. Clin. Simul. Nurs. 12, 132–136 (2016)
41. Dankbaar, M.E.W., Roozeboom, M.B., Oprins, E.A.P.B., Rutten, F., van Merrienboer, J.J.G., van Saase, J.L.C.M., Schuit, S.C.E.: Preparing residents effectively in emergency skills training with a serious game. Simul. Healthc. 12, 9–16 (2017)
42. Kalkman, C.J.: Serious play in the virtual world: can we use games to train young doctors? J. Grad. Med. Educ. 4, 11–13 (2012)
43. Turner, S., Harder, N.: Psychological safe environment: a concept analysis. Clin. Simul. Nurs. 18, 47–55 (2018)
44. Siddaiah-Subramanya, M., Smith, S., Lonie, J.: Mastery learning: how is it helpful? An analytical review. Adv. Med. Educ. Pract. 8, 269–275 (2017)
45. Cook, D.A., Brydges, R., Zendejas, B., Hamstra, S.J., Hatala, R.: Mastery learning for health professionals using technology-enhanced simulation. Acad. Med. 88, 1178–1186 (2013)

12 Engaging Learners in Presimulation Preparation …

237

46. McGaghie, W.C., Issenberg, S.B., Barsuk, J.H., Wayne, D.B.: A critical review of simulationbased mastery learning with translational outcomes. Med. Educ. 48, 375–835 (2014) 47. Canadian Patient Safety Institute: Canadian Incident Analysis Framework. https://www.patien tsafetyinstitute.ca/en/education/PatientSafetyEducationProgram/PatientSafetyEducationCurri culum/Pages/Canadian-Incident-Analysis-Framework.aspx (2017) 48. Weatherspoon, D.L., Wyatt, T.H.: Testing computer-based simulation to enhance clinical judgment skills in senior nursing students. Nurs. Clin. North. Am. 47(4), 481–491 (2012) 49. Verkuyl, M., Lapum, J.L., Hughes, M., McCulloch, T., Liu, L., Mastrilli, P., Romaniuk, D., Betts, L.: Virtual gaming simulation: exploring self-debriefing, virtual debriefing, and in-person debriefing. Clin. Simul. Nurs. 20, 7–14 (2018) 50. Borrás, G., Martínez, N., Martín, F.: Enhancing fun through gamification to improve engagement in MOOC. Information 6, 28 (2019) 51. Brull, S., Finlayson, S.: Importance of gamification in increasing learning. J. Contin. Educ. Nurs. 47, 372–375 (2016) 52. Looyestyn, J., Kernot, J., Boshoff, K., Ryan, J., Edney, S., Maher, C.: Does gamification increase engagement with online programs? A systematic review. PLOS ONE 12, e0173403 (2017) 53. Behrens, C.C., Dolmans, D.H., Gormley, G.J., Driessen, E.W.: Exploring undergraduate students achievement emotions during ward round simulation: a mixed-method study. BMC Med. Educ. 9, 316 (2019) 54. Leony, D., Munoz Merino, P.J., Ruiperez-Valiente, J.A., Pardo, A., Kloos, C.D.: Detection and evaluation of emotions in massive open online courses. J. Univers. Comput. Sci. 21, 638–655 (2015) 55. Wang, L., Hu, G., Zhou, T.: Semantic analysis of learners’ emotional tendencies on online MOOC education. Sustainability 10, 1921 (2018) 56. Rogers, T., Andler, C., O’Brien, B., van Schaik, S.: Self-reported emotions in simulation-based learning: active participants vs. observers. Simul. Healthc. 
14, 140–145 (2019) 57. Delisle, M., Ward, M.A.R., Pradarelli, J.C., Panda, N., Howard, J.D., Hannenberg, A.A.: Comparing the learning effectiveness of healthcare simulation in the observer versus active role: systematic review and meta-analysis. Simul. Healthc. 14, 318–332 (2019) 58. Harder, N., Ross, C.J.M., Paul, P.: Student perspective of roles assignment in high-fidelity simulation: an ethnographic study. Clin. Simul. Nurs. 9, e329–e334 (2013) 59. Verkuyl, M., Lapum, J.L., St-Amant, O., Hughes, M., Romaniuk, D., Mastrilli, P.: Designing virtual gaming simulations. Clin. Simul. Nurs. 32, 8–12 (2019) 60. Botezatu, M., Hult, H., Tessma, M.K., Fors, U.: Virtual patient simulation: knowledge gain or knowledge loss? Med. Teach. 32, 562–568 (2010) 61. Keys, E., Luctkar-Flude, M., Tyerman, J., Sears, K., Woo, K.: Developing a virtual simulation game for nursing resuscitation education. Clin. Simul. Nurs. 39, 51–54 (2020) 62. Luctkar-Flude, M., Tregunno, D., Egan, R., Sears, K., Tyerman, J.: Integrating a learning outcomes assessment rubric into a deteriorating patient simulation for undergraduate nursing students. J. Nurs. Educ. Pract. 9, 65 (2019) 63. Luctkar-Flude, M., Tyerman, J., Tregunno, D., McParland, T., Peachey, L., Lalonde, M., Chumbley, L.: Feasibility, usability and learning outcomes of a virtual simulation game as presimulation preparation for a respiratory distress simulation for senior nursing students. In: International Nursing Association for Clinical Simulation and Learning (INACSL) Conference 14 June 2018: INACSL, Toronto ON Canada (2018) 64. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319 (1989) 65. Verkuyl, M., Atack, L., Mastrilli, P., Romaniuk, D.: Virtual gaming to develop students’ pediatric nursing skills: a usability test. Nurs. Educ. Today 46, 81–85 (2016) 66. 
Richardson, A.M., Dunn, P.K., McDonald, C., Oprescu, F.: CRiSP: an instrument for assessing student perceptions of classroom response systems. J. Sci. Educ. Technol. 24, 432–447 (2015) 67. Sheng, R., Goldie, C.L., Pulling, C., Luctkar-Flude, M.: Evaluating student perceptions of a multi-platform classroom response system in undergraduate nursing. Nurs. Educ. Today 78, 25–31 (2019)

Part II

VR/Technologies for Rehabilitation

Chapter 13

VR/Technologies for Rehabilitation

Anthony Lewis Brooks

Abstract Early on, Virtual Reality (VR) was linked to communication—even as ‘a combination of the television and telephone wrapped delicately around the senses’ [1]. VR technologies are increasingly being adopted in (re)habilitation and therapeutic intervention healthcare treatments. VR applications that supplement traditional intervention appear in medical treatment programs addressing human physical, cognitive, and psychological functioning. Extended Reality (XR), incorporating Augmented Reality (AR) and Mixed Reality (MR) applications beyond solely VR, is also being introduced. This chapter introduces, via micro-review, four chapters under the theme VR/technologies for rehabilitation. It follows on from the previous and opening part of this book, where ten chapters were introduced and micro-reviewed under the theme Gaming, VR, and immersive technologies for education/training. Specifically, the chapters are titled: ‘Game-based (re)habilitation via movement tracking’ [2]; ‘Case studies of users with neurodevelopmental disabilities: Showcasing their roles in early stages of VR training development’ [3]; ‘AquAbilitation: ‘Virtual interactive space’ (VIS) with buoyancy therapeutic movement training’ [4]; and finally, ‘Interactive Multisensory VibroAcoustic therapeutic intervention (iMVATi)’ [5]. Keywords Virtual reality · Rehabilitation · Technologies · Autism · Participatory design · Games · Health

13.1 Introduction

We see things not as they are, but as we are—that is, we see the world not as it is, but as molded by the individual peculiarities of our minds. —G. T. W. Patrick (1890)

A. L. Brooks (B) Aalborg University, Aalborg, Denmark e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_13

Virtual Reality (VR) technologies are increasingly being adopted in (re)habilitation and therapeutic intervention healthcare treatments. VR applications that supplement traditional intervention appear in medical treatment programs addressing human physical, cognitive, and psychological functioning. Extended Reality (XR), incorporating Augmented Reality (AR) and Mixed Reality (MR) applications beyond solely VR, is also being introduced. This chapter introduces four chapters under the theme ‘VR/technologies for rehabilitation’. It opens with a cross-focused, and sometimes extended, minuscule review of the field, introducing chapters contributed by an international array of authors concerned with sharing their research with a wider audience across disciplines. Each chapter’s author(s) are acknowledged via citation and cross-reference herein, their source text having been used to create these snippets that overview and introduce the work to the readership. Specifically, the chapters are titled: ‘Game-based (re)habilitation via movement tracking’ [2]; ‘Case studies of users with neurodevelopmental disabilities: Showcasing their roles in early stages of VR training development’ [3]; ‘AquAbilitation: ‘Virtual interactive space’ (VIS) with buoyancy therapeutic movement training’ [4]; and finally, ‘Interactive Multisensory VibroAcoustic therapeutic intervention (iMVATi)’ [5]. Virtual Reality as a theme is interpreted to span from head-mounted display (HMD) systems to non-HMD systems, where computer-generated projections onto screens create the ‘virtual’ environment that participants interact with and experience. This second part, themed ‘VR/technologies for rehabilitation’, aligns with how the book’s contents are segmented overall into four parts, with chapters selected for each.
Specifically, Part 1: Gaming, VR, and Immersive Technologies for Education/Training; Part 2: VR/Technologies for Rehabilitation; Part 3: Health and Well-Being; and Part 4: Design and Development.

What’s reality anyway? Just a collective hunch! —Jane Wagner/Lily Tomlin (Hamit 1993)

13.1.1 Game-Based (Re)habilitation via Movement Tracking [2]

The co-authors of this chapter are from Aalborg University in Denmark. The subjects in the studies of the chapter titled ‘Game-based (re)habilitation via movement tracking’ were 18 children (10 females and 8 males) between the ages of 5 and 12 years, mean age 7.66 years, in 20 game-playing sessions conducted at two large Scandinavian hospitals: one being the regional hospital for south Denmark located in the city of Esbjerg (Denmark’s 5th largest city), and the other being the Swedish regional Halland hospital located in Halmstad (Sweden’s 19th largest municipality). The hospital staff, who were co-researchers in the project and led the actual sessions, selected both subjects and control group participants. The facilitators involved at the hospitals were two play therapists and three doctors. The equipment set-up consisted of a motion-detecting camera interfaced to a popular game-playing entertainment platform with a selected game. The goal was to study
the potential of the set-up to motivate/promote the children’s movements and to distract them from any medical procedures they were subject to. Results highlighted how the gameplay offered potential in a healthcare setting aligned to the larger body of research the study was conducted within [6, 7]. The authors state their hypothesis that game playing using embodied user interaction has potential in therapy and thus significance in quality-of-life research for the special needs community. Aligned to the cited work above, the setup can be studied as a Virtual Interactive Space (VIS) where actions (free-gesture game-playing) are analysed and evaluated in line with investigation goals. The state of Presence, commonly associated with the virtual reality immersive experience of ‘being there’, was questioned as a ‘sense state’ continuum aligned to the concept of ‘Aesthetic Resonance’. It can be argued that head-mounted displays (HMDs) used with virtual reality optimise the experience of ‘presence’. However, in this research (see also [8–11]) no HMDs were used, yet the participants could not have been more engaged with the stimulus and interactions presented by the games and interactive environments. Thus, this research targeted a participant experience that differed from what is commonly referred to as ‘presence’, instead targeting (and achieving) an experience more aligned with Loomis [12], who referred to “distal attribution” as an experience in which the user experiences “being in touch with” the simulated or remote environment while fully cognizant of being in the real environment in which the display is situated. Loomis [12] further states that “True presence would occur when the observer has neither prior knowledge nor sensory information signifying that he/she is using a virtual or teleoperator display; when these conditions of cognitive and sensory equivalence fail to be met, distal attribution is the more likely result” (p. 593).
The authors herein state that meeting such conditions aligned to Loomis [12] is likely never or rarely possible, and also that being ‘fully cognizant’ is questionably achievable, whereby cognizant linkages tend to be fleeting temporal variants; thus, Aesthetic Resonance is attributed and argued [6–8]. Resulting from this work is a hybrid emergent model titled ‘Zone of Optimized Motivation’ (ZOOM—see [6, 7, 13]). The authors present how subjective presence has predominantly been investigated in respect of optimal user state in environments and has been suggested to increase when interaction techniques are employed that permit the user to engage in whole-body movement. Situated presence is also presented in this chapter as real users in a real place versus a controlled laboratory. The goal, being exploratory, was thus implemented in a pilot study so as to define problem areas and achieve preliminary data on the potential of video games in therapy. This chapter illustrates how tools such as the motion camera used herein have potential to decrease the physical and cognitive load in a daily physical training regime where interactive games are adopted to supplement traditional therapeutic interventions. Not all people can use traditional controllers, especially those with certain dysfunctions. However, it can be reflected that only recently have major game platform corporates come forward to impact this field. This is exemplified by Microsoft and their Adaptive Controller, and Logitech with their switch kit to accompany the controller—both companies apparently having departments with specific
focus on peripherals and games that can be accessible. Additionally, these companies are active in working closely with end-users and partners who have a mission statement to make games accessible, and they understand the potentials in the resultant healthcare and quality-of-life aspects that sit alongside the feel-good factor of achievements and successes for a game player to be considered included. For example, the video about the adaptive controller and kit illustrates this (see https://www.logitechg.com/en-us/products/gamepads/adaptive-gaming-kit-accessories.943-000318.html), and as the closing statement by ‘Logitech G’ (https://www.logitechg.com) vice president for gaming Ujesh Desai makes clear, “we should have been doing this already”. This aligns with the author’s proposal for ‘inclusive gaming’, declined by major gaming platform contacts under guidance of the then secretary-general of The Interactive Software Federation of Europe (ISFE) around the turn of the millennium.
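To make the camera-based set-up concrete, the motion detection described above can be reduced to a simple frame-differencing sketch: movement in front of the camera is estimated from per-pixel change between consecutive frames and, past a threshold, is mapped to a game event. This is an illustrative reconstruction only, not the chapter authors' actual implementation; the function names and the threshold value are hypothetical.

```python
import numpy as np

def motion_energy(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two grayscale frames."""
    return float(np.mean(np.abs(frame.astype(int) - prev_frame.astype(int))))

def gesture_trigger(prev_frame: np.ndarray, frame: np.ndarray,
                    threshold: float = 10.0) -> bool:
    """Fire a game event when movement in the camera view exceeds a threshold."""
    return motion_energy(prev_frame, frame) > threshold

# Synthetic 'frames': a still scene versus one with a moving region.
still = np.zeros((120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 200  # a limb moves through part of the sensing view

print(gesture_trigger(still, still))  # -> False (no movement, no game input)
print(gesture_trigger(still, moved))  # -> True (movement mapped to game input)
```

In a real session the frames would come from the camera at video rate, and the threshold would be tuned per participant so that even small voluntary movements can drive the game, which is the accessibility point the chapter makes.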

13.1.2 Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development [3]

The authors of this next chapter are truly an international team: Yurgos Politis, who is affiliated with Michigan State University and University College Dublin, in Dublin, Ireland; Nigel Newbutt, who is affiliated with The University of the West of England, located in Bristol, United Kingdom; Nigel Robb, who is affiliated with The University of Tokyo, Japan; Bryan Boyle, who is affiliated with The University College of Cork, in Ireland; Hug-Jen Kuo, who is affiliated with The California State University, in Los Angeles, USA; and Connie Sung, who is affiliated with Michigan State University, in Michigan, USA. In this chapter the multi-national group of co-authors reflected on two projects (as case studies) that enabled disabled groups to be involved in designing and influencing technology used by them. Reflections are also on the process, limitations, barriers and actual involvement of the user groups. Keywords of these case studies include: Virtual Reality (VR), Augmented Reality (AR)/Mixed Reality; User Experience Design; Participatory Design; Case studies; Accessibility; Game design and development. The chapter opens with a clear statement of intent aligned with how user involvement in the design/creation of products and services has become more participatory in nature and the role of the users has evolved from being influencers of just the final outcome (testing a prototype) to influencers of the development and design process; from being tasked with the standardization of products/outcomes to being actively involved in their customization to meet individuals’ needs and preferences; and from being mere participants to having a relationship with the designers/developers.
There has therefore been a shift to products being “designed by” the customers, where they are actively involved in all phases of the development process of their product (cf.). This aligns with this group’s other chapter in this volume, where individuals with autism are involved in PD design
for a wall-mounted museum interface: this is a recommended read alongside this chapter to get an idea of the group’s positioning in the field. The authors state that the development and design process of products or services should enable specific user groups to achieve certain goals with effectiveness and productivity (ISO/IEC 25010:2011; standardization of software products and software-intensive computer systems). They add that the process can benefit from user involvement because users can offer a different point of view: in other words, they can do some ‘out of the box’ thinking of their own that may provide inspiration for the future of development/design. However, as they then state and exemplify with related literature, there are inherent challenges in such participation, which needs to be meaningful. The authors challenge the typical top-down design approach through their position on participants contributing to the design, and they introduce a recent initiative to impact Participatory Design (PD) with neurodiverse populations (mainly ASD and dyslexia) that has emerged based on the Diversity for Design (D4D) Framework [14]. This framework, they note, advocates design approaches that focus on the strengths of the participants rather than on overcoming weaknesses, emphasising that the D4D framework is a blueprint for technology designers on how to engage with neurodiverse populations through a PD approach. The authors summarise and present the two main headings of the framework. They state that, as far as they are aware, the D4D Framework has been applied to the development of PD features for ASD and dyslexic populations; however, there is not one developed for people with intellectual disabilities. Thus, the chapter presents two case studies addressing the practicalities of working with people with autism/ID (intellectual disability) in the early stages of Virtual Reality (VR) development.
The first case study looks at users’ preferences regarding VR hardware, while the second considers user involvement in designing the training content for carrying out a task of their choosing. In combination, the two case studies examine how users with neurodevelopmental disabilities can influence decisions in the early stages of VR training development. The first reported case study focused on engaging users in the potential of virtual reality opportunities for learning in schools, with the goals to (a) learn about device preference (VR HMD) for young autistic people; (b) understand issues related to sensory and physical reactions, as well as levels of comfort and enjoyment of using VR HMDs; and (c) learn from the co-researchers ways that VR could be used in schools. Forty-three participants took part, between six and sixteen years of age with a mean age of twelve, and a male-to-female ratio of twenty-eight to fifteen. Full details of the first case study are presented in the chapter by the co-authors. The second case study investigated a participatory design approach to co-create training materials on a daily living task for young adults with Intellectual Disabilities. In this study, a group of young adults with Intellectual Disabilities (IDs) led the creation of training guidelines on how to carry out a daily living task. The study examined (a) obstacles faced during the Participatory Design process; (b) barriers that the participants and researcher had to overcome in order to create effective guidelines; and (c) a reflective account by the researcher on the whole process, developing a list of recommendations for best practice. The authors
state that the long-term objective is to enable the participants to enhance their lives by being able to live independently and to secure lasting and meaningful employment. The participants in the second case study were attending a programme at an Irish Higher Education Institute (HEI) in Dublin, Ireland, organised in partnership with a service provider for people with IDs. The authors decided that a focus group methodology was most appropriate. Six participants were involved, four male and two female, ranging in age from early twenties to early forties, with diagnoses of Kabuki syndrome, Williams syndrome, Down syndrome and general intellectual disability. Full details of the second case study are also presented in the chapter by the co-authors. The authors [3] conclude that the two case studies considered two neurodiverse populations (autistic people and people with ID), and they summarise by identifying the Guidelines for Best PD Practice with these populations that emerged from their research, which they posit should be tested further to inform inclusive design that can impact inclusive well-being and advance the field, aligned to the title of this volume.

13.1.3 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training [4]

The title of the next chapter is ‘AquAbilitation: ‘Virtual interactive space’ (VIS) with buoyancy therapeutic movement training’. The author is Anthony Lewis Brooks, who is affiliated with Aalborg University, Denmark. The author makes clear that buoyancy-aided movement training is not new; however, he believes that buoyancy-aided movement training with direct audiovisual virtual reality feedback is novel. The background to the concept is that it was inspired by the author’s studies during the early 1990s that investigated his concept of ‘Antigrav’ (extended as ‘Anti Gravity’), aligned to concepts associated with weight and weightlessness, under his larger body of research titled SoundScapes [7]. An inspiration for this work came from a meeting at Conférences CYPRES, Mouvement et Comportement, Ecole d’Art d’Aix en Provence (trans: ‘Movement and Behavior, Aix en Provence Art School, France’) in March 1995, with French dance choreographer and producer Kitsou Dubois. The author was presenting on the hypothetical potentials of weight and weightless art for individuals and groups considered handicapped, across all ages, including balance training and gesture control for the aged and disabled. The concept was to inquire how physically handicapped people may use forces and counter-forces to express themselves creatively whilst exploring their own bodies, thus to have a sense of their bodily ownership toward a freedom of movement to the best of a person’s ability. Dubois presented examples of her dance and movement experiments based upon her concept ‘microgravity’ (https://vimeo.com/132641050). The following is from Kitsou Dubois, Gravité Zéro, 1994, from the ‘Compagnie Ki’ Productions site:

Gravité Zéro: Artistic Description (cf. https://www.kitsoudubois.com/wordpress/?page_id=211#anglais). The unique experience of dancing in weightlessness resulted in a first show: “Gravity Zero” visualises how it feels to fly. The fear of falling ingrained in the memory of the body is overcome by the freedom apparent in the floating movement and by the release of a body no longer weighed down. The dream of flight has come true! A poetic space has come into being in “Gravity Zero”, and it conveys to the spectator the way it feels to fly. The video is unobtrusively present; it is both the trace and the memory of our dream come true. The first part of the performance is based on phenomena of unsteadiness which make us realise how fragile our earthly verticality is. Our references to up and down are muddled by the use of the inclined plane where some of the dancers appear upside-down. In the second part, we have the mirror effects combined with the body movements resulting from zero gravity: fluidity which makes it hard to notice the supports, continuous motion and no feeling of falling or sudden stops. The dancers are gliding and dancing on the floor and the audience sees them flying through space!

This led to the author’s explorations in Contact Improvisation Dance, a form of improvised dancing that has been developing internationally since 1972. It involves the exploration of one’s body in relationship to others by using the fundamentals of sharing weight, touch, and movement awareness. In 1996, ten art performances with international contact improvisation mixed-ability dance companies at Arken Museum for Modern Art (MoMA) under The European Culture Capital Copenhagen Kulturby, and performances at the Cultural Paralympics in the Rialto Theatre, Atlanta, exposed the author to top movement performers, including bungee dancers who challenged gravity. Such exposures left a lasting impression on the author and his research. Within facilities available to disabled communities in Scandinavia are special pools. It was in one of these pools, at the Lund University Hospital in Lund, Sweden, that the proof-of-concept reported in the chapter was initiated, when the author was based there under a European project based upon his research. The chapter overviews the AquAbilitation research concept and how it was associated with the creation of a Virtual Reality, games, and human behaviour interaction complex titled SensoramaLab. Sociocultural perspectives of the research are shared alongside the fieldwork that informed the practical aspects of the research. Testing was done with the goal of learning from users and therapists what could be an optimal solution. The aquatic Virtual Interactive Space built upon the author’s earlier work presented at the World Congress for Physical Therapy (WCPT) in Yokohama [8]. However, the research targets a participant experience that differs from ‘Presence’ or ‘Telepresence’—a state that many researchers of Virtual Reality target, i.e. experiencing “being in” the computer-generated virtual environment (i.e. the qualia of having a sensation of being in a real place [15]).
Loomis [12] proposed, instead of Presence, that this engagement be termed “distal attribution”, in which the user experiences “being in touch with” the simulated or remote environment while fully cognizant of being in the real environment in which the display is situated. Loomis, from a psychological perspective, states that “True presence would occur when the observer has neither prior knowledge nor sensory information signifying that he/she is using a virtual or teleoperator display; when these conditions of cognitive and sensory equivalence fail
to be met, distal attribution is the more likely result” (p. 593). This chapter’s author suggests that such a situation, where the observer has neither prior knowledge nor sensory information signifying that he/she is using a virtual or teleoperator display, is arguably not achievable. Accordingly, it is reflected that user engagement and the experience of being in touch (via interacting) with elements of the non-HMD simulated environment, while partially cognizant of being in the real environment, led to targeted outcome behaviour in the immersive environment; this in a way aligns with Slater’s definition of presence as ‘virtual place illusion and plausibility that lead to realistic behaviour in the immersive environment’. However, in an aquatic environment, safety considerations are prevalent for future instances of the research where HMDs are planned to be introduced. A hypothesis for such an environment is that an engaged state would be achieved sufficient to realise the targeted outcomes, but with participants still partially cognizant of being in the real environment, as necessitated by the safety requirement to prevent drowning or related accidents. It is anticipated that such engaging experiences may be more intense than non-HMD ones. Additionally considered are the safety and well-being of participants, which are improved through non-HMD use (avoiding issues that include eye damage, nausea and dizziness through HMD use, as well as other blue-light screen use2). Biocca and Levy [1], p. 130, inform how HMDs can potentially spread bacterial infections or head lice among users and take some time to put on and adjust to the individual. Covid-19 pandemic disinfecting of an HMD between users is similarly relevant! To conclude, as stated, water-based treatment for wellness is not new, and buoyancy-aided movement training is introduced in this chapter in the form of the history of hydrotherapy to lay the foundation for the titled concept ‘AquAbilitation’.
The author informs on the use of technology to create the setup and to analyse participant interactions. The concept emerged from reflecting on the author’s fieldwork, where it was clear that, following immersion in water, individuals with profound and severe dysfunctions were different in sessions where gesture-based motion sensors were mapped to auditory feedback (later including visuals and robotic devices) that stimulated their movements. Thus, the author’s non-aquatic environments positively supplemented traditional therapeutic intervention in sessions within a treatment program. To combine water immersion with multimedia feedback (including Virtual Reality) that gives direct feedback to a participant was an obvious next step. The design of the ideal setup is shared, with different technologies proposed to advance the field; this is targeted as future research if a sponsor were to be realised. Images of known pools with projection facilities are also shared to exemplify what could be an ideal setting to further the research towards significant societal impact.

2 https://laserfitlens.com/vision-risk-from-long-term-exposure-to-screens-of-electronic-devices/.

13.1.4 Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi) [5]

The final chapter in this part is titled ‘Interactive Multisensory VibroAcoustic Therapeutic Intervention’, by the same author as the previous chapter, namely Anthony Lewis Brooks from Aalborg University, Denmark. In this case, the stimulus is in the form of haptic/tactile vibrations (alongside auditory and visuals, thus multimedia) as direct feedback, instead of water against the participant’s body with direct auditory and visual feedback. In this way the holistic body of research SoundScapes includes auditory, visuals, robotics, water-against-body stimulus, buoyancy stimulus, as well as haptic/tactile feedback. SoundScapes thus advances towards an array of selectable interfaces as well as an array of selectable content stimuli, such that flexible, modular, tailorable and adaptable interactive environments can be personalised to a participant profile and the targeted outcome of the participant experience. The participant experience was targeted to be entertaining, fun, playful and optimally beneficial, such that targeted therapeutic goals were approached under a strategy to supplement traditional intervention. This chapter discusses and describes the setup, including the use of a vibroacoustic chamber containing low-frequency speakers that received filtered auditory signals from a sonic amplifier. The amplified sounds originated from a sound synthesizer, with each tone triggered via human movement within a sensing zone. The participant’s sensed movement directly manipulates sonic feedback that communicates where the movement was made—useful in the case of handicapped users, as the sense of proprioception can be dysfunctional. Thus, a form of communication with the self is established: a causal feedback loop of cause and effect.
Further, this is referred to by the author as ‘closing the afferent efferent neural feedback loop’: what is subsequently sensed as sonic feedback by the brain (afferently), resulting from an initial motion, leads to a firing of synapses from the brain along efferent motoric pathways to initiate further motion within the sensing space. Within the SoundScapes body of research that this chapter falls under, communication is thus between a participant and a represented projected self, not necessarily a replicant humanoid but rather an interacting ‘form’ of data assemblage that can be referred to as multimedia—stimulating in a multi-sensory fashion. This is a given due to the selectable options available to manipulate the sensed human data fed forward into the system (feed-forward) and routed into contemporary computer-based mapping and scaling software. Biocca’s Cyborg’s Dilemma text (1997), which followed his keynote of the same title at the International Cognitive Technology Conference in August 1997 in Aizu, Japan, is considered to align with this representation of the self as a communication form. The participant lies upon the vibroacoustic chamber so as to feel the vibrations, and differences in frequency, that they themselves trigger via movements within three 3-dimensional volumetric spaces sensed by infrared sensors, whose data configuration resembles the rings of an onion in cross-section. The research also used linear ultrasound and planar CCD and CMOS sensors, so that the pros and cons of each sensor profile can be fitted to the
therapist’s desired movement for the patient, targeting optimal benefit and progression in training and an optimal experience for the patient as they creatively express themselves by triggering the multimedia through motion. Studies were conducted at an institute for children and adolescents with special needs who were diagnosed with profound dysfunction. Biofeedback relative to the research is introduced in the chapter, as is the methodology—typically case studies aligned to a mixed-methods approach using both qualitative and quantitative analysis (e.g. [16]). The chapter further informs on the author’s emergent model for intervention titled Zone of Optimised Motivation (ZOOM [6, 7, 13])—a methodology to optimise the patient experience and a strategy for improved facilitator intervention using technology in a session, including post-session assessment and evaluation towards next-session redesign under an iterative, progressive treatment programme. The chapter informs how the design of the Virtual Interactive Space (VIS), i.e. the environment where the VibroAcoustic Therapeutic (VAT) setup was established at the Swedish school (Emaljskolan in Landskrona), built upon related work designing sensor-based multimedia interactive spaces [17, 18]. A close working and social relationship with the teachers led to an optimal situation for the research to be conducted in situ—that is, where the children and adolescents were comfortable and attended daily, with staff they knew who introduced the researcher and the tasks. Content was selected to stimulate each individual, knowing the various interfaces and the preferences of the children and adolescents as informed by staff: for example, low-frequency oscillator synthesiser sounds from a Moog Taurus 3 foot pedalboard matched the vibroacoustic chamber.
Such a pedalboard could also be physically pressed by the participants, and this is presented as an aspect for future inquiry in the field, whereby foot pedals (used as the Moog’s designers intended) to change the multimedia (primarily the sounds and images) could expand the research in this chapter. Further, the chapter explains an intended direction in which larger VAT platforms are envisaged to intensify vibrations, alongside testing differences between valve amplifiers and solid-state amplifiers in the processing chain (testing envisaged with deaf and blind participants, who may be more sensitive to haptic feedback stimulus). The work in progress is, however, limited by the space requirement of a designated room for the research to flourish and advance the associated fields, including questioning the potential of (alternative) communication via creative expression as a channel that may stimulate other pathways that have become dysfunctional following damage, e.g. brain injury such as stroke. In this way a stroke patient could hear, see and/or feel the movements of a limb in space so as to train that limb despite a diminished sense of that limb. Similarly, in such training, a stroke patient can hear, see and/or feel where their torso is in relation to their dysfunctional sense of balance, via feedback that can be selected to match the patient, with the sensing of data fine-tuned to the individual’s needs and the therapist’s goal for the intervention. The chapter informs how feed-forward and feedback data can be manipulated to realise an optimised patient experience of (re)habilitation.


13.2 Conclusions In concluding this second part of the book, which has presented a brief introductory review of each chapter and its author(s)’ positioning, chapters on VR technologies for rehabilitation have been presented. It is anticipated that scholars and students will be inspired and motivated by these contributions to the field of Technologies of Inclusive Well-Being to inquire further into the topics. The third part of this book follows the four chapters herein—it is themed ‘Health and well-being’—enjoy. Acknowledgements Acknowledgements are due to the authors of the chapters in this part of the book. Their contributions are cited in each review snippet and also in the reference list to support reader cross-reference. However, the references are without page numbers, as these are not known at the time of writing. Further information will be available at the Springer site for the book/chapter.

References

1. Biocca, F., Levy, M.R.: Communication Applications of Virtual Reality. Erlbaum (1995)
2. Brooks, A.L., Brooks, E.I.: Game-based (re)habilitation via movement tracking. In: Brooks et al. (eds.) Recent Advances in Technologies for Inclusive Well-Being. Intelligent Systems Reference Library 196 (2021). https://doi.org/10.1007/978-3-030-59608-83
3. Politis, Y., Newbutt, N., Robb, N., Boyle, B., Kuo, H.J., Sung, C.: Case studies of users with neurodevelopmental disabilities: showcasing their roles in early stages of VR training development. In: Brooks et al. (eds.) Recent Advances in Technologies for Inclusive Well-Being. Intelligent Systems Reference Library 196 (2021). https://doi.org/10.1007/978-3-030-59608-83
4. Brooks, A.L.: AquAbilitation: ‘Virtual interactive space’ (VIS) with buoyancy therapeutic movement training. In: Brooks et al. (eds.) Recent Advances in Technologies for Inclusive Well-Being. Intelligent Systems Reference Library 196 (2021). https://doi.org/10.1007/978-3-030-59608-83
5. Brooks, A.L.: Interactive Multisensory VibroAcoustic therapeutic intervention (iMVATi). In: Brooks et al. (eds.) Recent Advances in Technologies for Inclusive Well-Being. Intelligent Systems Reference Library 196 (2021). https://doi.org/10.1007/978-3-030-59608-83
6. Brooks, A.L.: Intelligent decision-support in virtual reality healthcare & rehabilitation. Studies in Comput. Intell. 326, 143–169 (2011). https://doi.org/10.1007/978-3-642-16095-0
7. Brooks, A.L.: SoundScapes: the evolution of a concept, apparatus and method where ludic engagement in virtual interactive space is a supplemental tool for therapeutic motivation (2011). https://vbn.aau.dk/files/55871718
8. Brooks, A.L.: Virtual interactive space (V.I.S.) as a movement capture interface tool giving multimedia feedback for treatment and analysis. In: Proceedings of the World Confederation for Physical Therapy (1999)
9. Brooks, A.L.: Ao Alcance de Todos Música: Tecnologia e Necessidades Especiais, Casa da Musica. In: Proc. 7th ICDVRAT with ArtAbilitation, Maia, Portugal, 2008. Reading University, UK (2008). https://vbn.aau.dk/ws/portalfiles/portal/41580679/Porto_Workshop_2008_paper.pdf
10. Brooks, A.L., Hasselblad, S.: Creating aesthetically resonant environments for the handicapped, elderly and rehabilitation: Sweden. In: Proceedings of 6th International Conference on Disability, Virtual Reality and Associated Technologies, pp. 191–198 (2004)


11. Brooks, A.L., Hasselblad, S., Camurri, A., Canagarajah, N.: Interaction with shapes and sounds as a therapy for special needs and rehabilitation. In: Proceedings of 4th International Conference on Disability, Virtual Reality, and Associated Technologies, pp. 205–212. Veszprém, Hungary (2002)
12. Loomis, J.M.: Presence and distal attribution: phenomenology, determinants, and assessment. In: Proc. SPIE 1666, Human Vision, Visual Processing and Digital Display III, pp. 590–594 (1992)
13. Brooks, A., Petersson, E.: Recursive reflection and learning in raw data video analysis of interactive ‘play’ environments for special needs health care. In: Healthcom 2005. IEEE (2005). https://ieeexplore.ieee.org/document/1500399
14. Benton, L., Vasalou, A., Khaled, R., Johnson, H., Gooch, D.: Diversity for design: a framework for involving neurodiverse children in the technology design process. In: Proceedings of CHI (2014)
15. Slater, M.: Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. Royal Soc. 364, 3549–3557 (2009). https://doi.org/10.1098/rstb.2009.0138
16. Yin, R.K.: Case Study Research and Applications: Design and Methods, 6th edn. Sage (2017)
17. Brooks, A.L., Petersson, E.: Play therapy utilizing the Sony EyeToy®. In: Slater, M. (ed.) 8th Annual International Workshop on Presence, pp. 303–314 (2005)
18. Brooks, A.L., Sorensen, C.: Communication method and apparatus. US Patent 6,893,407 (2005)

Chapter 14

Game-Based (Re)Habilitation via Movement Tracking

Anthony Lewis Brooks and Eva Brooks

Abstract An international collaborative explorative pilot study is detailed between hospitals in Denmark and Sweden, involving rehabilitation medical staff and children, in which the affordable, popular and commercially available Sony PlayStation 2 EyeToy® is used to investigate our goal of enquiring into the potential of games utilizing mirrored user embodiment in therapy. Results highlight the positive aspects of gameplay and the potential in the field. Conclusions suggest a continuum in which a state of presence is a significant interim mode toward a higher-order state of aesthetic resonance that we claim is inherent to our interpretation of play therapy. Whilst this research was conducted some years ago, the findings remain relevant, align with contemporary studies, and are cross-referenced from another publication in the context of this book.

Keywords Flow · Therapy · Training · Play

14.1 Introduction Our hypothesis is that game playing using embodied user interaction has potential in therapy and thus significance for quality-of-life research for the special needs community. A state of presence is inherent where stimulation of fantasy and imagination involves engagement and subsequent interaction with a virtual environment (VE). Once this engagement is achieved and sustained, we propose that a higher-order state is achievable through empowered activity toward a zone of optimized motivation (ZOOM) [1]. This is made possible by using an interface to the VE that is empowering without any wearable technology that might be deemed encumbering or limiting for the participant. The interface data—participant motion—is mapped to control immediate feedback that has real-world physical traits of response; as this is interesting, enjoyable and fun for the participant, experience and engagement are further enhanced.

A. L. Brooks (B) · E. Brooks
Aalborg University, Aalborg, Denmark
e-mail: [email protected]

© Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_14



Subjective presence has predominantly been investigated with respect to the optimal user state in environments, and has been suggested to increase when interaction techniques are employed that permit the user to engage in whole-body movement [2]. Our findings to date point to the motivational potential of an enhanced state of presence achieved in game environments where the body is used as the unencumbered interactive interface [3–7].

14.1.1 Presence and Aesthetic Resonance: As a ‘Sense State’ Continuum We are interested in the observed behavioural aspects of presence, for which there is evidence of only a limited body of research. Accordingly, the case is made for a continuum beyond presence that satisfies the requirements of a play therapy scenario where, from within what is termed a state of aesthetic resonance, we enquire into the potential of game systems with mirrored user embodiment, using the EyeToy®. As a result of this initial pilot enquiry we intend to reach a point from which to launch a fuller investigation with a more optimized environment, method and analysis design. Aesthetic Resonance (AR) occurs when the response to intent is so immediate and aesthetically pleasing as to make one forget the physical movement (and often effort) involved in conveying the intention, in line with [4, 8]. Within targeted aesthetic resonance, our strategy is to approach the same immersive engagement that occurs between a child and a computer video game—often the subject of negativity—and reverse the polarity of attitude so that it is positively used to empower activities beyond the usual limits of the special needs participant, by encouraging an immersed ‘play’ mindset rather than a ‘therapy’ mindset, which our prior research has shown to be optimal [9]. Within this setup, the same information that is used as control data for the interactive feedback content is simultaneously available for performance progress monitoring. System tailoring as a result of observations of user performance—both physiological and psychological—is opportune, with related testing that supplements traditional forms of performance measurement. This is in line with our earlier approach to interaction in virtual environments with acquired brain damage patients [4, 5, 9, 10], and is related to a study concerning brain neuroplasticity and associated locomotor recovery of stroke patients, which reports users interacting with games and perceiving the activity not as exercise or therapy, but as play [11].


14.1.2 Play Most play research informs about its relationship to children’s cognitive development and focuses on solitary play [12]. However, this research does not account for the totality of what goes on between children in situations of interactive play therapy. Our play therapy approach is activity driven, and the targeted aesthetic resonant state of the user is, we suggest, beyond the often-used all-encompassing term of presence. Significantly, others have approached presence as an activity, including video games [13]—but conducted in a laboratory, which we question due to the situated effect of the environment on the participants. In previous studies [1] we state that activities are always situated, which underlines a complex relationship between the individual, the activity and the environment as mutually constitutive [14]. Thus, a relationship to situated presence is implied, as we base our enquiry at locales of predicted use with real users. The exploratory goal is thus implemented as a pilot study, so as to define problem areas and to obtain preliminary data on the potential of video games in therapy.

14.1.3 Under-Used Resource for Therapy With advancements in computer vision techniques and cameras, we claim that systems such as the EyeToy®, which focus on the body as the interface, are an under-used opportunity for therapists to include in training since, unlike traditional biofeedback systems, no specific licensing is required, as there are no attachments to the patient. The system also achieves an essential aspect of children’s engagement in virtual or real worlds, as within our situated interactive therapy space they are ‘placed’ in the midst of the experience, as in a flow state [15]. We hypothesize that tools such as the EyeToy® have the potential to decrease the physical and cognitive load of a daily physical training regime; this is central to our concept, as the child experiences a proactive multimodal state of presence that encourages an unconscious ‘pushing of their limits’ that they otherwise would not approach outside of the interactive framework. This supports the statement of iterative human afferent efferent neural loop closure as a result of the motivational feedback and feed-forward interaction. This process is valuable for the child’s physical demands in everyday life, as the pushing intensifies the child’s experience of movements in practice [16].


14.2 Gameplaying and Mastery The investigation presented in this chapter addresses the promotion of motivational feedback within empowered gameplaying activities, whilst attempting to understand motivational mechanisms. This is done by analysing gameplaying as an action in which the child’s increased skill in using the video game is viewed as a process of emerged mastery [17] of their ‘doings’, in a form relating to cycles of action-reaction-interaction. The material of the child’s action within this study is movement, as the child masters the computer game by moving the body. In Laban’s [16] terminology this is described as an ‘effort’, and he furthermore underlines the importance of offering the child opportunities to express him- or herself through non-human-directed efforts in order to maintain and increase the child’s immediate spontaneity in the situation (e.g. reactive content that promotes subsequent interaction from the child). For environments to be supportive in this sense, they must engage the child in challenging ways. Even though environments provide children with a sense of challenge, the children have to feel that their skills meet the challenges. If there is an imbalance between the challenges and the child’s skills, the child will become stressed or bored. Play and exploration encourage a sense of flow (immersion in enjoyable activities) that “provides a sense of discovery, a creative feeling of transporting the person into a new reality. It pushed the person to higher levels of performance, and led to previously undreamed-of states of consciousness” [15, p. 74]. Optimal experience is also described as “a sense that one’s skills are adequate to cope with the challenges at hand, in a goal-directed, rule-bound action system that provides clear clues as to how well one is performing” [15, p. 71]. These activities are intrinsically rewarding, and the enjoyment derives from the gameplaying activity in itself, which is related to the notion of the Zone of Proximal Development in learning situations [18]. In an explorative manner, the child’s cycle of movements can be shown to be fluent and intense, or segmented without connection. Laban [16] defines such changes in movement as important, as they indicate the presence or absence of flow from one action and state of mind to another. As such, the ZOOM [1] is important in its encouragement of the child’s unintentional and/or intentional explorations, without immediate goals as in play or curious discovery, and as a foundation of evoked interest [19]. This kind of interest indicates that the state of aesthetic resonance facilitates a foundation for creative achievement. The motivational feedback loop described in this chapter is also influenced by Leont’ev’s [20] description of the formation of an internal plane. We have chosen the term mastery to describe such processes, where the emphasis is on how the child’s use of the game features leads to the development of certain skills, rather than on internalization [18] or more generalized abilities. Thus, gameplaying actions do not need to be conscious; at a certain level they can be unconscious skills which, supported by playful aspects of the game, proactively push the child’s limits towards new levels of movement.


As a preliminary investigation, we attempt to understand movements according to a semiotic interplay between the child’s inner and outer worlds [21], and relate this understanding to presence, through which spontaneous movement engagement and intensity are assigned [16]. We compare this to Wenger’s [22] and Vygotsky’s [18] descriptions of emergent development processes. Bigün et al. [23] characterize such processes as non-formal, where exploration and curiosity are central conditions, rather than traditional formal training conditions. The movement cycle of the gameplaying child includes a construal of rhythm. The movement cycle is concentrated on the game’s external achievement, and by moving the body to achieve the external goal the child relates the inner world to the outer. However, not every movement unifies the inner and outer worlds; there has to be a “reciprocal stimulation of the inward and outward flow of movement, pervading and animating the whole of the body” [16, p. 110] in order to enhance a sense of aesthetic resonance. In this way there is a range of flow through presence, from excitement to stillness, which increases and decreases the child’s participation in the gameplaying activity. This range embraces an orchestration of expanding bodily action in space or, in Laban’s terms [16], includes different trace forms of movement that demand continuity of gestures—and it is these gestures that we analyse.

14.3 Method In consequence of our interpretation of the referenced theories, and to fulfil the goals of the investigation, we used a triangulation of qualitative methodologies to qualitatively analyse the combined materials from the two hospitals:

– Video observations of children playing with the Keep Up EyeToy® game;
– Interviews with children and facilitators;
– Questionnaires to the facilitators involved;
– Diaries/field notes from the facilitators involved.

The subjects in the studies were 18 children (10 females and 8 males) between the ages of 5 and 12 years (mean age 7.66 years), across 20 gameplaying sessions. The children were selected by the hospitals and were well functioning. The control group comprised similar children from the hospitals who were not in sessions [5, 9, 10]. The facilitators involved at the hospitals were two play therapists and three doctors.


14.3.1 Description of Material In 2003 Sony Computer Entertainment Inc. released the EyeToy® as a new video game series for its market-leading PlayStation® 2 (PS2) platform, based upon using the player’s body movements as the interface to the game. This controller is unique in concept, as all interactions with the game are through the video window rather than through the more common handheld gamepad or joystick device. The system is thus ideal for our enquiry. The EyeToy® game chosen for this study was ‘Keep Up’, due to its immediate action content, built-in scoring, and cross-gender qualities. A monitoring system based on multiple cameras supplemented the setup so that post-session analysis was available.

14.3.2 Description of Procedure EyeToy® games have ‘tasks’ for the participants to accomplish. The task within this game is to keep a virtual football—with animated real-world physical properties—‘up’ within a virtual environment. One game sequence is limited to three balls and three minutes. After three balls, or alternatively three minutes, the game agent appears and gives the player negative or positive feedback related to the game score. The player can increase or decrease the score by hitting monkeys and other animated characters with the ball as the game progresses. At both hospitals the studied activities took place in rooms that were also used for other purposes, such as staff meetings and parent information. The children did not normally play in these rooms, and the system had to be set up around positional markers on the floor and tables. Parents were approached about the project, informed of the goals, and asked to give their permission on behalf of their children beforehand. Following the parents’ signed permission, the children were also asked to sign their own permission to participate. The process started with positioning the child within the calibration upper-torso outline on the screen, and after an introduction the game was started. The gameplaying activity was observed and video recorded by the play therapists and doctors. Immediately after each session ended, the children were asked follow-up questions concerning their experiences of the gameplaying activity. After all sessions had ended, the play therapists and doctors were asked to fill in a questionnaire concerning their own experiences. A final interview with the play therapists and doctors was also carried out to conclude the field materials.


Fig. 14.1 Set-up of equipment: (a) EyeToy® camera plus front monitoring camera to capture face and body expression; (b) VHS tape recorder; (c) screen; (d) PS2; (e) projector; (f) the user space; (g) rear camera to capture scene and screen; (h) tape recorder #2

14.3.3 Description of the Set-up In previous research on camera capture as a game interface [6, 11], standard TV monitors were apparently used. Our approach uses an LCD projector for large image projections approaching a 1:1 size ratio with the child (mirroring). This strategy builds upon our prior research investigations [1, 3–5, 8–10, 24] to optimize the experience. A related study reports on presence and screen size [25]. Traditional mirroring is used in therapy training at institutes for people with disabilities, and thus our design is ‘fit appropriate’ to this context. Figure 14.1 shows the set-up of the gameplay. The components included in the set-up were: (a) EyeToy® camera plus front monitoring camera to capture face and body expression; (b) VHS tape recorder; (c) screen; (d) PS2; (e) projector; (f) the user space; (g) rear camera to capture scene and screen; (h) tape recorder #2.

14.3.4 Description of Analysis The video recordings underwent numerous tempo-spatial analyses [26], where the units of analysis were the qualitatively different expressions of movement. The material obtained from the sessions consisted of 36 one-hour mini digital videos (rear and front views)—with corresponding additional backup video recordings—of the 240 video games played by the children (n = 18) in 20 sessions at the
two hospitals. Each video was digitized for the subsequent analysis; similarly, all video interviews, written notes, memos and written interviews were transcribed and transferred onto a computer workstation.

14.3.4.1 Manual Analysis

Annotation was conducted by two coders. An initial series of four manual annotation passes over the video materials was conducted. These accounted for the observed expressive gestures of the children (facial and body) (see Fig. 14.2 and Appendix 4: Table 14.3). In addition, each video-archived game and pause duration was time-logged, and the first, last and best performances were extracted for later analysis (example charts of three children in Appendix 1: Fig. 14.3). The parameters of the games and of the pauses before/after the best and worst performances were also subjected to closer analysis. An extra annotation was carried out on the multiple sessions of the same child (n = 2), including the element task scores (ball 1, 2, 3). The temporal specifics concern rhythm as a periodic repetition and include dynamic kinetic change as well as structural patterns. Examples of temporal events are the qualities in play when the child moves the ball from one spot to another by swinging the body/hands or arms to and fro, which is often a challenge for those with functionality problems. The repetition of a movement develops a sense of enjoyment in, and engagement with, the activity, which in turn motivates the child to continue to experience the movement. Laban [16] states that repetition creates a memory of the experience, which is needed for new inspiration and insight to develop. More specifically, the temporal data was classified into discrete units for

Fig. 14.2 Fully engrossed in the interaction—left images at the Danish hospital, right images at the Swedish hospital


analysis by applying the specifics of speed, intensity and fluency of movements [16; and Efron in 26]. The spatial specifics concern where the body moves, through extended movements, towards another situation in the spatial environment. Examples of spatial events are the qualities in play when the child seeks another situation in the spatial environment, e.g. jumping or leaning the body from one side of the screen to the other, whereby the central area of the child’s body is transported to a new position while keeping the virtual game ball up in the air. The spatial data was classified into discrete units for analysis by applying the specifics of range and intentionality of movements [16; and Efron in 26]. Alongside these tempo-spatial qualities, children’s facial expressions and utterances were analysed. Thus, a detailed manual multimodal analysis of the videos was realized such that: (a) each video was watched twice before the detailed analysis began; (b) the analysis of the first eight videos was realised twice each, and of the following eight videos once each; (c) each minute of video was systematically analysed and transcribed into an Excel flowchart in relation to the categories described above—the categories analysed represented high or low degrees of the specific movement trait, and the flowchart also included analysis of facial expression and a description of what happened on the screen (Appendix 4: Table 14.3); (d) every category (n = 8) was analysed separately, which means that the first eight videos were watched 18 times each in total, and the remaining ones 10 times each. Additionally, the multi-sessions were annotated a further four times.
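The per-minute coding described above can be sketched as a simple data structure. The category names below are placeholders inferred from the specifics mentioned (speed, intensity, fluency, range, intentionality, plus facial expression, utterances and screen events); the actual column layout of the study’s Excel flowchart is not reproduced in the chapter.

```python
from dataclasses import dataclass, field

# Illustrative category names only, not the study's exact coding scheme.
CATEGORIES = ["speed", "intensity", "fluency", "range",
              "intentionality", "facial", "utterance", "screen_event"]


@dataclass
class MinuteAnnotation:
    """One row of the per-minute coding flowchart: each category is
    coded 'high' or 'low', plus free-text observation notes."""
    minute: int
    codes: dict = field(default_factory=dict)   # category -> "high"/"low"
    facial_note: str = ""
    screen_note: str = ""


def code_minute(minute: int, observations: dict) -> MinuteAnnotation:
    """observations: category -> bool (True = high degree of the trait).
    Unobserved categories default to 'low'."""
    ann = MinuteAnnotation(minute=minute)
    for cat in CATEGORIES:
        ann.codes[cat] = "high" if observations.get(cat, False) else "low"
    return ann
```

Coding every category for every minute, as here, is what makes separate per-category passes over the videos comparable afterwards.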

14.3.4.2 Computer Analysis

To amass indicators of the overall motion attributes of each child, an automated low-level movement analysis was computed on the videos utilising software modules from the 'EyesWeb Gesture Processing Library' specific to the quantity and contraction aspects of movement.1 The data was then exported to a spreadsheet for further analysis. Our strategy for the automated computer video analysis was to supplement the manual annotations toward our overall goal of developing the methodology: (a) following a background subtraction on the source video to segment the body silhouette, a Silhouette Motion Image (SMI) algorithm capable of detecting the overall quantity, velocity and force of movement is used. Measures related to the 'temporal dynamics of movement' are extracted, and a threshold value slider can be adjusted according to each child's functional ability, so that he or she is considered to be moving if the area of the motion image is greater than the threshold percentage of the total area [27]. The adjustment of the threshold value is achieved in real-time annotation of the videos (Appendix 2: Fig. 14.4); (b) a contraction index (CI, with range 0–1) algorithm is used with a bounding rectangle that surrounds the 2D silhouette representation of the child within the minimal possible rectangle. The CI is lower if the child has outstretched limbs, compared to an image showing the limbs held close to the body, where the CI approaches 1 (Appendix 2: Fig. 14.5). Problems were apparent with the child encroaching towards the camera and with background noise. A correcting normalisation algorithm was unsuccessful in correcting the problem and thus refinement is needed [27].

1 www.bris.ac.uk/carehere and www.eyesweb.org

A. L. Brooks and E. Brooks
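The two EyesWeb-derived measures can be illustrated with a minimal sketch. This is our own simplified approximation, not the EyesWeb implementation: frames are assumed to be binary silhouette masks (2D lists of 0/1), the Silhouette Motion Image is approximated as pixels active in a recent frame but not the current one, and the function names are ours.

```python
# Simplified sketch of the two low-level measures described above; an
# assumption-laden approximation, not the EyesWeb code. Frames are binary
# silhouette masks given as 2D lists of 0/1.

def quantity_of_motion(frames, threshold=0.02):
    """Fraction of the image area covered by the Silhouette Motion Image
    (pixels active in an earlier frame but not the current one). The child
    counts as 'moving' when that fraction exceeds the adjustable threshold."""
    current, *earlier = frames[::-1]  # newest frame first
    h, w = len(current), len(current[0])
    smi_area = sum(
        1
        for y in range(h)
        for x in range(w)
        if any(f[y][x] for f in earlier) and not current[y][x]
    )
    qom = smi_area / (h * w)
    return qom, qom > threshold


def contraction_index(silhouette):
    """Silhouette area divided by the area of its minimal bounding rectangle:
    near 1 with limbs held close to the body, lower when outstretched."""
    pts = [(y, x) for y, row in enumerate(silhouette)
           for x, v in enumerate(row) if v]
    ys = [y for y, _ in pts]
    xs = [x for _, x in pts]
    bbox = (max(ys) - min(ys) + 1) * (max(xs) - min(xs) + 1)
    return len(pts) / bbox
```

Raising the threshold makes the motion/pause segmentation stricter, which is how the per-child adjustment described above could be realised.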

14.4 Results

Our explorative question concerned the potential of video games in therapy and the requirements for a meaningful and optimised full investigation. Our findings show that: (1) more care in the set-up of the room background is needed, as some videos had curtains blowing in the wind and people walking behind the child; (2) the attire of the children should contrast with the background, since with a light background and a light shirt the camera software has problems differentiating between child and background; (3) the lighting of the child and room should be optimised; (4) the system is developed for upper-torso single-person play, but many of the children used all of their bodies, especially kicking when the ball was lower in the screen; (5) facilitators should not talk or be in the line of sight. Our instructions were also interpreted differently by each hospital, in that: (1) in Sweden a time limit of 10 min was established for each session; (2) a long practice period was included within the Swedish ten-minute period; (3) in Denmark one of the doctors also included practice periods for his children; (4) in Sweden multiple sessions were held in the same day, whilst in Denmark a single session was held per day.

14.4.1 Tempo Spatial Movements

In annotating the games, Start—Middle—End segmented zones were interpreted in respect of game and pause data. As expected, the best performance was achieved in the end segments, on an 8:15:17 ratio (even accounting for boredom through extended play with no level change). The shortest game ratio was 18:13:9; the longest pause ratio 16:12:12; and the shortest pause ratio 8:14:18. These figures indicate that the virtual environment interaction with the EyeToy® met the predicted balance of performance and learning curve. Of interest within the figures was the fact that in most cases the best performance was preceded by the child's shortest pause, and that following the best game the next two games often declined drastically in performance. This matches the manual annotation, where the activity (play) peaks and, in most cases, the emotional expression from face and body gesture before and after relates. A general result was that the faces of the children gave a defined statement of their presence (and aesthetic resonance) in the interaction with the content of the game, which was mostly pleasing and a challenge for their skills.

14 Game-Based (Re)Habilitation via Movement Tracking


The detailed analysis showed a connection between tempo spatial movements and aesthetic resonance through a correlation between the categories of intensity and intentionality. When there was a high, medium, or low degree of movement intensity, the same degree always appeared in the category of intentionality of movements. Furthermore, there was a higher degree of aesthetic resonance related to spatial movements than to temporal, as the categories of range, intentionality, and shifts had high or medium degrees of movement. The categories of speed and fluency, on the other hand, had low or medium degrees of movement, while the degree of intensity in temporal movements was high (Appendix 3: Table 14.2). The computed data analysis supported the manual analysis in indicating higher or lower degrees of quantity of movement (QOM) and through the threshold of motion and non-motion segmentation (Appendix 2: Fig. 14.4). Our findings in the multi-sessions were limited to two children. The standard deviation in scores between the sessions was significantly reduced for the girl (duration 46%; between 30%; 1st ball duration 79%; 2nd ball duration 1%; 3rd ball duration 49%), while the boy, who notably had an intravenous attachment in the first session, showed insignificant change in total. Overall, consistent with our single sessions, 'between' times were reduced for both the girl (12%) and the boy (9%), which we claim as a possible indicator of motivation, related to the enjoyment and fun in playing the game. This involves emergent learning of navigation modes and contributes to aesthetic resonance through its inherent presence factor. In the multi-sessions we conducted a preliminary computer analysis of the duration of the last pause and motion phases (Appendix 2: Fig. 14.4).
Our findings were that both the girl and the boy had an increased standard deviation and average duration of the last pause phase, combined with a reduced duration of the motion phase, from the first to the second session. This may indicate that over a number of sessions less motion is required to achieve similar tasks; thus more effective movement is learnt as the child gets acquainted with the game. Further investigation relating such findings to presence would seem in order. To sum up, aesthetic resonance was indicated partly through the high degree of intensity and intentionality in movements. Intensity and intentionality were shown through the children's concentration and also through their force and passion when playing the game. Aesthetic resonance was also indicated by the range of, and shifts in, the children's movements. The categories of speed and fluency did not have any influence on aesthetic resonance, as they did not influence the intensity, intentionality, range, or shifts in movements.
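The session-to-session comparison above rests on a simple percentage reduction in the spread of scores. A hedged sketch follows: we do not know whether the original analysis used population or sample standard deviation, so population SD is assumed, and the function name is ours.

```python
from statistics import pstdev  # population standard deviation (assumed)

def sd_reduction(session1, session2):
    """Percentage reduction in the standard deviation of a score
    (e.g. game durations) from one session to the next; positive
    values mean the child performed more consistently."""
    s1, s2 = pstdev(session1), pstdev(session2)
    return 100 * (s1 - s2) / s1
```

For example, hypothetical game durations of [1, 3, 5, 7] in a first session and [2, 3, 4, 5] in a second give a 50% reduction, i.e. the spread halved between sessions.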

14.4.2 Interface and Activities

In interviews with the children concerning their positive and negative experiences of the EyeToy® game, most of the children expressed positive experiences. 61.1% (n = 11) of the children thought the EyeToy® game was fun, while 22.2% (n = 4) said that they liked it. One (1) child said that the EyeToy® game was difficult, but he also said that the gameplaying was fun. Concerning positive and negative specifics of the gameplay, 38.8% (n = 7) of the children commented on the interface attributes and 61.1% (n = 11) on the activity attributes of the game (Table 14.1). The children's negative experiences of the game only concerned activity attributes regarding the content of the game. Two children answered that they enjoyed the whole EyeToy® game. Six children referred to movements (using the body and to move) when they were asked about the positive attributes of the game. Four children said that the ball-play attribute was the best, while seven children stated that the ball-play attribute was the most difficult. This indicates that the ball-play attribute was in itself a challenging activity, as three of the children also confirmed. The game agents were the main attributes when the children referred to negative aspects of their EyeToy® game experiences, as the agents repeatedly gave negative feedback to the children. The monkeys were stated as difficult by one child, but were also considered fun by three of the children. In summary, the children's experiences of the EyeToy® game indicated that the interface supported the gameplaying activity in a challenging way, and aesthetic resonance was achieved through this challenge.

Table 14.1 Attributes

Positive?
  Interface    Children      Activity    Children
  Body used    22.2% (4)     Ball-play   22.2% (4)
  To move      11.1% (2)     Monkeys     16.6% (3)
  Mirroring    5.5% (1)      Challenge   16.6% (3)
                             Scoring     5.5% (1)
  Sum          38.8% (7)     Sum         61.1% (11)

Negative? / Difficult?
  Activity     Children      Activity    Children
  Monkeys      5.5% (1)      Ball-play   38.3% (7)
  Repetition   5.5% (1)
  Pauses       5.5% (1)
  Sum          16.6% (3)     Sum         38.3% (7)

14.4.3 Resource for Therapy

In interviews and field notes from the play therapists and the doctors, positive, negative, and practical aspects of the children's gameplay with the EyeToy® game were stated. They also gave indications of the potential of the EyeToy® game in therapy.


Positive aspects: The EyeToy® game was great fun for the children, who concentrated on the tasks in the game.

Negative aspects: The children quickly became bored as it was either too hard or too easy to play; three balls were too few; the game ended quickly, limiting the challenge; the game agent mostly gave negative feedback, which many of the children commented upon.

Practical aspects: A room allocated for the test is necessary for future research; the camera set-up was too complicated to handle; the camera set-up limited some of the children's movements; both hospitals wish to continue with future EyeToy® research.

Potential of the EyeToy® in therapy: The game activity is fun, and the training aspect simultaneously involved becomes fun as well; the game activity brings movements into the therapy, which makes sense and benefits the children's rehabilitation; playing the EyeToy® game becomes physiotherapy; if there were more challenge and action in the games, the potential for therapy would increase, as the fun and motivation for moving would probably also increase.

To sum up, the results from field notes and interviews with the play therapists and doctors underlined the potential of the EyeToy® system in therapy, emphasising the flow and fun aspects of the gameplaying as beneficial for the therapy training.

14.5 Discussion

The purpose of the study was to qualify the initial use of the system for children in rehabilitation in a hospital scenario, with consideration of the inherent logistics and practicalities. We restricted our unit of analysis to different expressions of tempo spatial movements in process as indicators of a possible presence state related to behaviour and situation within play therapy. Through our exploratory investigation, our findings indicate that aesthetic resonance through intensity and intentionality is related to flow and conscious reactions when a child interacts with the EyeToy® game. Furthermore, presence enhanced aesthetic resonance through range and shift related to movement increments. As far as we can ascertain, the limited computed data supports the manual annotations and our claim that the observed activity was mediated within a human afferent-efferent neural loop closed as a result of interaction with the content of a virtual environment. We consider the field experiments a start toward understanding the mechanisms of motivation promoted by multimodal immersion, and the triangulations of actions becoming reactions resulting in interaction in play activities.


14.6 Conclusions

Our approach relates to the heuristic evaluation strategy of Nielsen [28], whereby natural engagement and interaction with a virtual environment that has 'real-world' physical traits and is compatible with the user's task and domain, such that natural action and representation can be expressed to effect responsive artefacts of interesting content feedback, encourages a sense of presence. Beyond presence we seek a sense-state continuum that stimulates intrinsically motivated activity, which from prior research we have termed aesthetic resonance. To engage an actor in aesthetic resonance we implement a strategy of creating enjoyment and fun as the user-perceived level of interaction, where the emotional expression of the body is the control data of the feedback. In this way an afferent-efferent neural feedback loop is established. The data controlling the feedback content is available for therapeutic analysis, where progression can be monitored and the system design adapted to the specifics of the task-centred training. The user experience, however, is targeted at being solely play-based. In this chapter we report on our pilot study, which is the first phase of an extended full-scale research investigation based on our hypothesis that the positive attributes of utilising digital interactive games that embody the actor in VE therapy will relegate the negativity tagged to video games and offer new opportunities to supplement traditional therapy training and testing. Our prior research indicates that intrinsic motivation is a potential strength of game interaction, where the user becomes aware only of the task and in an autotelic manner extends otherwise limiting physical attributes beyond what may otherwise be possible to achieve; this supports our hypothesis. This study identified problems to overcome: the video recording system, the interpretation of instructions, and room availability.
A new single-button system for optimising the video recording has been designed and budgeted to improve the next phase of the project. Similarly, the hospitals have promised a designated space in future. The children's quantity, dynamics, and range of movements when immersed in the gameplaying activity were over and above their usual range of movements. Their facial expressions and emotional outbursts further substantiated our claim that an initial state of presence was achieved.

Acknowledgements Hospitals, staff, and children: Länssjukhuset in Halmstad, Sweden; Sydvestjysk Sygehus in Esbjerg, Denmark. SCEE, Egmont/Nordisk Film, Scandinavia. Sony Denmark. This study was part-financed by cooperation between Sony Computer Entertainment Europe; Egmont Interactive, Scandinavia; Sony Denmark; SoundScapes ApS, Denmark; and the authors. "PlayStation" is a registered trademark of Sony Computer Entertainment Inc. "EyeToy" is a registered trademark of Sony Computer Entertainment Europe. Algorithms adapted from those created with partial support from IST Project CARE HERE, where the first author was a researcher [27]. A considerable part of this paper is based on a paper presented at the Presence 2005 event in London, without DOI (http://matthewlombard.com/welcome.html or https://astro.temple.edu/~lombard/ISPR/Proceedings/2005/Brooks%20and%20Petersson.pdf).


Fig. 14.3 Three examples showing gameplay results: (top graph) Esbjerg 9 (male, 7 years of age), where successes are inconsistent, possibly due to unstable presence; game 13 is where a higher level was attempted, shown by his 'between time' high. Esbjerg 13 (girl, 8 years of age; middle graph) achieved completion of the full game (8th game), resulting in an affirmative comment from the game agent. Esbjerg 14 (female, 10 years of age; bottom graph) had the most problems (game duration average 24/56.6), reflecting her functional condition (brain tumour); however, she achieved the most games (32) whilst continuously pushing her limitations, and in the concluding interview described the "great fun" despite her difficulties

Appendix 1 See Fig. 14.3.

Appendix 2 See Figs. 14.4 and 14.5.


Fig. 14.4 Quantity and segmentation of movement. Threshold/buffer/motion phase indicators (upper right). Buffer image, SMI and source windows (upper left), Halmstad hospital, Sweden. Algorithm for QOM, pause and motion phase duration available from authors

Appendix 3 See Table 14.2.


Fig. 14.5 Contraction Index (CI) analysis. Upper right shows the silhouette bounding rectangle initially set on the buffer image, Esbjerg hospital, Denmark. Algorithm available from the authors

Appendix 4 See Table 14.3.

Table 14.2 Session overview: upper = sessions/games (g)/pauses (p), including total games and longest/shortest game numbers; lower = movement analysis per category of movement trait (temporal: speed, intensity, fluency; spatial: range, intentionality, shifts), with percentages of high, medium, and low degree of movement

Table 14.3 Tempo spatial analysis: an example of one annotated session video file, with per-minute Hi/Lo annotations for the temporal categories (speed, fluency, intensity) and the spatial categories (range, intentionality, shift), alongside the on-screen content (e.g. start screen/character/wave/ball/monkeys/game over)

References

1. Brooks, A., Petersson, E.: Recursive reflection and learning in raw data video analysis of interactive 'play' environments for special needs health care. In: Healthcom 2005, 7th International Workshop on Enterprise Networking and Computing in Healthcare Industry, pp. 83–87 (2005)
2. Slater, M., Steed, A., McCarthy, J., Maringelli, F.: The influence of body movement on subjective presence in virtual environments. Hum. Factors 40(3), 469–477 (1998)
3. Brooks, A.: Enhanced gesture capture in virtual interactive space. Dig. Creat. 16(1), 43–53 (2005)
4. Brooks, A., Hasselblad, S.: CAREHERE—creating aesthetically resonant environments for the handicapped, elderly and rehabilitation: Sweden. In: Sharkey, P., McCrindle, R., Brown, D. (eds.) 5th International Conference on Disability, Virtual Reality, and Associated Technologies, pp. 191–198 (2004)
5. Brooks, A., Petersson, E.: Humanics 2: human computer interaction in acquired brain injury rehabilitation. In: Proceedings of HCI International 2005, Las Vegas, USA. CD-ROM (2005)
6. Rand, D., Kizony, R., Weiss, P.L.: Virtual reality rehabilitation for all: Vivid GX versus Sony PlayStation II EyeToy. In: Sharkey, P., McCrindle, R., Brown, D. (eds.) 5th International Conference on Disability, Virtual Environments and Associated Technologies, pp. 87–94 (2004)
7. Kizony, R., Katz, N., Weingarden, H., Weiss, P.L.: Immersion without encumbrance: adapting a virtual reality system for the rehabilitation of individuals with stroke and spinal cord injury. In: Sharkey, P., Sik Lányi, C., Standen, P. (eds.) 4th International Conference on Disability, Virtual Reality and Associated Technologies, pp. 55–62 (2002)
8. Brooks, A., Hasselblad, S., Camurri, A., Canagarajah, N.: Interaction with shapes and sounds as a therapy for special needs and rehabilitation. In: Sharkey, P., Sik Lányi, C., Standen, P. (eds.) 4th International Conference on Disability, Virtual Reality, and Associated Technologies, pp. 205–212 (2002)
9. Brooks, A.: Virtual interactive space (V.I.S.) as a movement capture interface tool giving multimedia feedback for treatment and analysis. In: 13th International Congress of the World Confederation for Physical Therapy, p. 66 (1999)
10. Brooks, A.: Humanics 1: a study to create a home based telehealth product to supplement acquired brain injury therapy. In: Sharkey, P., McCrindle, R., Brown, D. (eds.) 5th International Conference on Disability, Virtual Environments and Associated Technologies, pp. 43–50 (2004)
11. You, S.H., Jang, S.H., Kim, Y.H., Hallett, M., Ahn, S.H., Kwon, Y.H., Kim, J.H., Lee, M.Y.: Virtual reality-induced cortical reorganization and associated locomotor recovery in chronic stroke. Retrieved 28/7/2005 www.hamptonu.edu/News_Publications/
12. Rogoff, B.: Apprenticeship in Thinking: Cognitive Development in Social Context. Oxford University Press, NY (1990)
13. Retaux, X.: Presence in the environment: theories, methodologies and applications to video games. Psychol. J. 1(3), 283–309 (2003)
14. Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, NY (1991)
15. Csikszentmihalyi, M.: Flow: The Psychology of Optimal Experience. Natur och Kultur, Stockholm (1992)
16. Laban, R.: Modern Educational Dance, 3rd edn. MacDonald and Evans, Bungay, Suffolk (1963)
17. Wertsch, J.V.: Mind as Action. Oxford University Press, New York (1998)
18. Vygotsky, L.S.: Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge, MA (1978)
19. Berg, L.-E.: Den lekande människan. Lund (1992)
20. Leont'ev, A.N.: The problem of activity in psychology. In: Wertsch, J.V. (ed.) The Concept of Activity in Soviet Psychology, pp. 37–71. Armonk, NY (1981)
21. Leont'ev, A.N.: Activity, Consciousness and Personality. Prentice-Hall, Englewood Cliffs, NJ (1981)
22. Wenger, E.: Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press, NY (1998)
23. Bigün, J., Petersson, E., Dahiya, R.: Multimodal interfaces: designing digital artifacts together with children. In: 7th International Conference on Learning and Educational Media, Slovakia (2003)
24. Brooks, A.: SoundScapes—a concept and methodology of "being there". In: Proceedings of the 6th Annual International Workshop on Presence [Abstract], p. 67 (2003)
25. Lee, K.M., Peng, W.: Effects of screen size on physical presence, self presence, mood, and attitude toward virtual characters in computer/video game playing. In: Proceedings of the 6th Annual International Workshop on Presence [Abstract], p. 23 (2003)
26. Ruesch, J., Kees, W.: Nonverbal Communication: Notes on the Visual Perception of Human Relations, p. 76. UCLA (1970)
27. Camurri, A., Mazzarino, B., Volpe, G.: A tool for analysis of expressive gestures: the EyesWeb Expressive Gesture Processing Library. Retrieved 18/2/2005 www.megaproject.org
28. Nielsen, J.: Usability Engineering. Academic Press, NY (1993)

Chapter 15
Case Studies of Users with Neurodevelopmental Disabilities: Showcasing Their Roles in Early Stages of VR Training Development

Yurgos Politis, Nigel Newbutt, Nigel Robb, Bryan Boyle, Hung Jen Kuo, and Connie Sung

Abstract In this chapter we will reflect on two projects that have enabled disabled groups to be involved in designing and influencing technology that will be used by them. In addition, we will reflect on the process, limitations, barriers and actual involvement of the user groups. The case studies will be illustrative and provide a rich and meaningful account of what we did. Lessons and implications will be discussed as will future ways to engage a range of users in technology-based outcomes.

Keywords Virtual reality (VR) · Augmented reality (AR)/Mixed reality · User experience design · Participatory design · Case studies · Accessibility · Game design and development

Y. Politis, Technological University Dublin, Dublin, Ireland, e-mail: [email protected]
N. Newbutt (B), University of Florida, Gainesville, USA, e-mail: [email protected]
N. Robb, University of Tokyo, Tokyo, Japan, e-mail: [email protected]
B. Boyle, University College Cork, Cork, Ireland, e-mail: [email protected]
H. J. Kuo and C. Sung, Michigan State University, East Lansing, MI, USA, e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2021. A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_15


15.1 Introduction

Steve Jobs once famously said that people do not know what they want until innovators and pioneers show it to them (Business Week, 25th May 1998), implying that user involvement can stifle creativity and limit the potential for 'out of the box' thinking. However, 10 years later, in an interview with CBS (60 Minutes, 2008), he suggested that great things in business are never done by one person; they are done by a team of people. Societies have embraced a shift to a 'participatory culture' where consumers create, share and respond to media [1]. A diverse group of people with varied roles in the process can be coordinated towards a shared goal and achieve a successful outcome [2]. User involvement in the design/creation of products and services has become more participatory in nature, and the role of the users has evolved: from influencing just the final outcome (testing a prototype) to influencing the development and design process; from being tasked with the standardization of products/outcomes to being actively involved in their customization to meet individuals' needs and preferences; and from being mere participants to having a relationship with the designers/developers. There has therefore been a shift to products being "designed by" the customers, who are actively involved in all phases of the development process of their product. The development and design process of products or services should enable specific user groups to achieve certain goals with effectiveness and productivity (ISO/IEC 25010:2011; standardization of software products and software-intensive computer systems). Moreover, the process can benefit from user involvement [3], because users can offer a different point of view and do some 'out of the box' thinking of their own that may provide inspiration for future development/design.
However, the more complex a product is (requiring specialist skills such as coding or programming, subject knowledge, and significant resources of time and finance), the more challenging user involvement becomes. Thus, even though participatory design methods (with varied techniques) have been successfully applied to the development of new technologies [4], users should be involved only when it is deemed that they can contribute meaningfully [5]. Adopting user involvement as a methodological approach is not straightforward and seems to require a new mindset; there is a need to use varied methods to obtain the users' input and to show a degree of flexibility to ensure that they find the approach enjoyable and useful [6]. This would partly explain the resistance to employing it more frequently. Another explanation could be that user involvement has not always been more effective than an approach producing the same output without user involvement. In some instances where users were co-designers, the outcome was less effective in changing behaviour than when the users were only testers, while in other instances user involvement showed less effectiveness in increasing the participants' self-efficacy [7]. When users engaged in the participatory design of educational games, they were more interested in design aspects than in the educational content [8].


Should everyone be involved in design? Technology giants claim that "their greatest products came from needs people hadn't yet articulated" [9; April 8th 2018]. On the other hand, there is an emerging belief that in a user involvement approach not every member must contribute, but all must "believe their contribution matter[s]" and that what they contribute will be appropriately valued [1, p. 7]. The implication is that the approach is effective under specific conditions. Its effectiveness would be enhanced if the users were sufficiently acquainted with the subject matter or had a certain level of design knowledge, enabling them to identify the information relevant to their task and address the educational goals [8, 10, 11]; another option is to offer them training to gain the necessary knowledge [10, 12]. A further aspect is the knowledge deficit of developers/designers in relation to user-involvement design approaches; addressing it would allow them to provide users with clear instructions and positive reinforcement [13, 14], to clearly describe the process and techniques used [14], and to ask good, probing questions [9]. When it comes to the field of disability and technology, designing with and for user groups has historically often been neglected [3, 16]. Designing technology for special needs groups tends to happen in a top-down framework, which means that decisions related to the design and development are generally influenced by designers rather than end-users. User involvement has been implemented since the 1970s, and its benefits have since been proven in several studies and settings (e.g., special educational needs [3]), yet there are those who believe it has been underutilized as an approach [17].

15.2 Neurodiversity and Participatory Design

There have been few studies using user involvement in the special education needs field. Early attempts were not that inclusive, because the users were limited to being either testers or informants, according to Druin's [10] levels of child involvement in the design process (users, testers, informants, or design partners). Users play with technology that is fully designed and developed, with limited scope for revisions; testers play with an earlier version of the technology, and the designers gather information from them by observing them and can make some changes; informants have a more active role and are brought in earlier in the process to help solve a problem; design partners are actively involved in all phases of the process, attending design sessions, brainstorming ideas, and also testing the product that has been co-designed. For instance, children with ASD (Autism Spectrum Disorder) were involved in participatory design as testers [18, 19], or as testers and informants in designing, for example, facial expression recognition software [20]. More recently, however, that has changed, and projects have involved children with disabilities as design partners. For instance, Frauenberger et al. [21] co-designed a learning environment for social skills development, while Millen et al. [5] developed a participatory design method for the design of collaborative virtual environments. Even though offering children with special needs significant control over the design process can be liberating for them [21], user involvement has not been adopted broadly [17], which raises the question: Why not? This approach does not reserve the title of "expert" for just a handful of people, but rather professes that everyone can bring their own expertise to the table, be creative, and be a good listener [17]. Others, though, would argue that it wastes valuable resources (money and time) and is seen as a luxury [22]. However, the biggest drawback may be that it is difficult to assess its effectiveness, since identifying the controlling factors of an experimental design with and without user involvement may only be possible during the co-creation process [15]. When the users are people with disabilities, a host of new challenges and obstacles arise. There are challenges with regard to ethics, where the involvement of clinicians and/or families may also be necessary [23], and with regard to informed consent (difficulty in obtaining it). There are obstacles when it comes to recruiting users who are representative of the population, due to the large variety of conditions and levels of severity [24]. Obstacles related to users with intellectual disabilities include difficulty in expressing themselves and articulating their views due to a lack of communication skills, lack of imagination, and difficulty in understanding the concept of perspective (someone else's point of view) [5, 25]. In recent years, a new approach to Participatory Design with neurodiverse populations (mainly ASD and dyslexia) has emerged that is based on the Diversity for Design (D4D) Framework [26]. This Framework advocates design approaches that focus on the strengths of the participants, rather than on overcoming weaknesses. The D4D Framework is a blueprint for technology designers on how to engage with neurodiverse populations through a PD approach.
It can be summarized under the following two main headings (with associated subheadings):

1. Structuring environment
   – Understanding culture
   – Tailor to individual

2. Additional supports
   – Understanding culture
   – Tailor to individual

While Benton et al. [26] have applied the D4D Framework to the development of PD features for ASD and dyslexic populations, to the best of our knowledge no equivalent has been developed for people with Intellectual Disabilities. The authors present here two case studies that address the practicalities of working with people with autism/ID (Intellectual Disability) in the early stages of Virtual Reality (VR) development. The first looks at users' preferences regarding VR hardware, while the second considers user involvement in designing the training content for carrying out a task of their choosing. In combination, the two case studies examine how users with neurodevelopmental disabilities can influence decisions in the early stages of VR training development.

15 Case Studies of Users with Neurodevelopmental Disabilities …


15.3 Ethical Considerations

Both studies applied for, and received, approval from their respective Ethical Standards Committees before engaging with their respective cohorts. As part of this approval, and following guidelines set out by both institutions, the data reported here are anonymized, and participants/co-researchers are referred to by pseudonyms or not named. Every participant/co-researcher provided their full consent (and the consent of their parent/guardian where necessary). In addition, the schools, centres and HEI (Higher Education Institution) we worked in are not named or identifiable. The authors report no conflict of interest in the work undertaken, and neither do the participants or the schools/centre/HEI. The researchers reiterated that the participants' involvement was voluntary and that they could cease their involvement at any point without any consequences.

15.4 Case Study Presentations

In this chapter we reflect on two projects that have enabled disabled groups to influence early-stage VR technology development. In addition, we reflect on the process, limitations, barriers and actual involvement of these user groups. The case studies are illustrative and provide a rich and meaningful account of what we did [27]. Lessons and implications will be discussed, as will future ways to engage a range of users in technology-based outcomes. The first case study involved autistic children in the UK (and non-autistic groups), while the second involved young adults with Intellectual Disabilities in Ireland. In the following sections, we explain the purpose of the studies, describe the populations and the processes adopted to carry out the studies, list the barriers and obstacles faced along the way, and then put it all into context and deliver a set of recommendations that could serve as best practice for conducting this type of study with a neurodiverse population. The first case study describes the process of engaging an autistic group in testing, and developing future directions for, the use and role of virtual reality in educational settings (i.e. school). The second describes how a researcher engaged with a group of young adults with Intellectual Disabilities to co-create with them a set of guidelines to help them carry out a daily living task of their choice. The following sections provide an overview of the case studies, their aims and objectives, context, characteristics and findings. While we have not followed the complete set of steps recommended by Thomas [27], we have sought to provide a rich, descriptive and illustrative account, having accepted the stance that case study research involves gaining a rich picture of, and analytical insights into, a subject. Following the steps suggested by Thomas [27], we describe the four frames that define our studies.


15.5 Case Study 1: Engaging Users in the Potential of Virtual Reality Opportunities for Learning in Schools

15.5.1 Brief Overview/Introduction

This first case study sought feedback from end users of virtual reality (VR) head-mounted displays (HMDs) in a classroom, working with participants as co-researchers to inform considerations for practice as this field moves forward. Here we use the term 'co-researchers' to describe the active and meaningful engagement of participants in research: researching with, rather than for, autistic groups. We agree with the perspective of Heron and Reason [28, p. 145], who suggest that "co-researchers come together to explore an agreed area of human activity", and as such describe our participants as co-researchers in this case study, as we seek to explore their views and their agendas. The feedback we sought was in the form of device preference (in this case a HMD) and insights into the possible uses of VR applied to their learning [29]. While this case study has not (yet) led to technology development, we reflect on the potential the case study data provide for future technology development in this arena. The case study groups were drawn from a range of schools, with a specific emphasis on autistic children. While this area of research (i.e. VR and HMDs) has recently tended to focus on adult populations (18 years+) (see Newbutt et al. [30] for an overview), the current case study sought the views and perspectives of co-researchers (otherwise known as participants) on the preferences for, and potential of, VR used in classrooms. As part of working in participatory ways, the teachers were also considered co-researchers in this context; therefore, their views and data are also reported. Figures 15.1 and 15.2 provide examples of the VR HMDs being used in situ by the co-researchers.

15.5.2 Aims and Objectives

The aims of this case study were to:

1. Learn about device preference (VR HMD) for young autistic people;
2. Understand issues related to sensory and physical reactions, as well as levels of comfort and enjoyment when using VR HMDs;
3. Learn from the co-researchers ways that VR could be used in schools.

In doing so, and only by asking such questions in a participatory manner, can we start to develop suitable, usable and appropriate technology that could be taken up and used in schools (i.e. working from the ground up, not top down). This is the starting point of technology design and development for autistic groups, as we see


Fig. 15.1 An example of children using a VR HMD in their classroom (school A)

Fig. 15.2 An example of children using a VR HMD in their classroom with their teacher; in this example collaborating with another student (school A)


Table 15.1 School demographics for Case Study 1

|                                  | School A                     | School B                  | School C                     | School D               |
|----------------------------------|------------------------------|---------------------------|------------------------------|------------------------|
| School status                    | Special educational needs    | Special educational needs | Mainstream                   | Mainstream             |
| School type                      | Free school—Special          | Independent school        | Academy—Converter mainstream | Voluntary aided school |
| Education phase                  | Primary, secondary and 16–18 | Secondary and 16–18       | Secondary                    | Primary                |
| Age range                        | 4–19                         | 9–18                      | 11–16                        | 5–11                   |
| Number of pupils in whole school | 85                           | 54                        | 550                          | 89                     |
| Ofsted rating                    | Outstanding                  | Good                      | Good                         | Good                   |
| Setting                          | Urban                        | Rural                     | Urban                        | Urban                  |
| Free school meal eligibility^a   | n = 28 (33%)                 | Not known                 | n = 13 (2.4%)                | Not known              |

^a In the UK, free school meal eligibility is one way to understand the socio-economic area in which the school is based. In primary schools, 14.1% of pupils are known to be eligible for free school meals, whereas in secondary schools the figure is 12.9% [31]

it, and we hope the findings from this work will inform further design considerations for VR and autistic groups.

15.5.3 Context/Setting

This case study was situated in the United Kingdom (UK), in both special needs schools and mainstream schools. We worked with four schools in the South East and South West of the UK. An overview of the school characteristics is given in Table 15.1. The researcher worked with key stakeholders and children in the schools in collecting data. These data, in part, are reported here and inform the findings through a co-design model, involving users/participants as co-researchers.

15.5.4 Case Study Group/Characteristics

We worked with 43 pupils in total, ranging from 6 to 16 years old. The mean age was 12 and the male-to-female ratio was 28:15. Seventy-three percent of the co-researchers we worked with were autistic, with the remaining 27% described as typically developing (non-autistic). The schools we worked with included both mainstream and specialist


provision (special needs schools); we worked with two of each. In the mainstream settings we worked with close to 50% autistic children, alongside children without autism. Figures 15.3 and 15.4 provide a sense of the classrooms and settings in which we worked. We worked with all the groups in a very similar manner, and ensured that the technology was appropriately introduced to the individuals and cohorts. This was done in a staged way, working with teachers to offer the co-researchers the opportunity to become familiar with the technology they were going to be using. We wanted to be especially careful to address any potential sensory concerns faced by autistic individuals when wearing a HMD [29]. With that in mind, we worked carefully and consulted with teachers and, in some cases, with parents. Once we established who wanted to be involved in the study, consent was gained, and we ran

Fig. 15.3 Example of the classroom layout and space provided in one of the autistic specialist schools (school A)


Fig. 15.4 Example of the classroom space in a mainstream setting (school C)

a range of virtual environments on three different HMDs. Table 15.2 provides an overview of the technology used, while Fig. 15.5 illustrates what these looked like. Each session involved the co-researcher using and testing each of the devices. They all started with the Google Cardboard, followed by the Class VR headset, and concluded their experience using the HTC Vive. They were then asked a series of questions about their experiences and preferences from the VR HMD demonstrations. Each user spent about 5–10 min immersed in each HMD. Post-experience, the co-researchers were asked to complete a short questionnaire. Figure 15.6 shows the flow of the sessions, highlighting the ethical approaches taken to ensure the safety of the participants. Questionnaires concluded the process, with follow-up checks on the children.

15.5.5 Findings

This case study placed the users (co-researchers) at its core. By doing so we have been able to gain some interesting insights. Firstly, we learned from the co-researchers that the most preferred HMD, and the experiences therein, was the HTC Vive. There was an overwhelming preference for this device, with ninety-seven percent reporting it as their preferred device. The


Table 15.2 Properties of the VR HMDs used in Case Study 1

HTC Vive HMD and gaming computer (HP Omen)
  Input/control: Head can be turned 360° and tilted in all directions; hand controllers are held by the user.
  Additional information: The HTC Vive is considered a 'high-end' HMD and uses graphics and images of a high quality. Fully immersive; users can walk, bend down and jump to modify their environment. Users can also control elements and interact by moving their hands (when holding a controller). Extensive use of cables and need for power outlets.

Class VR (stand-alone device)
  Input/control: Head can be turned 360° to view material. This HMD also features augmented reality (AR), but this was not used for this project. Limited input or control over the VR environments.
  Additional information: Class VR provides a mainly 360° video experience, one that enables users to view a 'scene' captured using 360° cameras. Images are photorealistic, and there are also 360° video scenes. Considered a mid-range device and experience. Wireless; no need for cables when in use.

Google Cardboard (stand-alone device, used with a smartphone)
  Input/control: Similar to the above (Class VR) experiences. Cardboard relies on a smartphone (in this case study an iPhone 7) to deploy content: 360° video or still scenes. Input is available via a small button on the top right of the headset.
  Additional information: Similar to the above description. Content can be loaded onto a smartphone and viewed via the cardboard HMD. Content tends to be 360° video or still images; some 360° games are also available. Wireless; no need for cables when in use.

remaining 3% reported the cardboard as preferable. The least preferred device was reported as the Class VR, followed by the cardboard (53% and 48% respectively). These data were taken from across the entire case study group (i.e. autistic and non-autistic). The views of the autistic children were very similar across the various groups and schools: all of the autistic children selected the HTC Vive as their preferred VR device. Data related to the least preferred device were similar: 52% for the Class VR device and 48% for the cardboard. Teachers felt a little differently. They reported a similar preference for the HTC Vive, but by only 60% of those surveyed (n = 5); the other 40% (n = 2) reported the Class VR and Cardboard as preferred options. From the qualitative views of the teachers, the difference in preference might be due to the cabling of the HMDs; Table 15.2 highlights the wired and wireless options. One teacher suggested that: "If it needs to be plugged in it might become more difficult and kids


Fig. 15.5 Equipment used in the case study, from top: HTC Vive, bottom left: Class VR, and bottom right: Google Cardboard

in the classroom might become distracted by it [the HMD]" (Teacher 1). In addition, other teachers suggested that "appropriate space in classrooms" (Teacher 2) and "class sizes and previous experience" (Teacher 3) would need careful consideration. This is perhaps in relation to the space required for the HTC Vive (which needs a 5 m square area) to be appropriately used and maximised. This tends to indicate that more investigation is required into the types of devices, in addition to looking in more detail at the various experiences within VR and how these can be linked to the curriculum; this might help inform the selection and deployment of VR in schools. In listening to the user groups (in this case the co-researchers), we need to be sure that decisions are not taken without their voice or input, as doing so might limit the uptake of the technology within school-based settings.

With regard to sensory concerns and potential physical discomfort, this study found overall, by working closely with the co-researchers and valuing their views, that no negative effects were reported and levels of enjoyment were high. Questionnaires were administered after all three experiences, as Fig. 15.6 highlights. These were designed to capture three factors (or themes) and comprised 8 questions in total. Each question asked the co-researcher to rate on a scale of one to four


Fig. 15.6 Process of involving co-researchers in the development of VR HMDs in schools (and the ethical/safety implications therein)

(1 = low/not very much to 4 = high/very much). Factor 2 brought together questions related to the physical experience (wearing the device, side-effects, comfort, etc.). The whole co-researcher population reported a mean score of 3.69 (standard deviation = 0.11) for factor 2, with the autistic co-researchers reporting a mean score of 3.71 (the non-autistic co-researchers reported a mean of 3.67). Moreover, all co-researchers, and especially the autistic users, reported liking the HMD VR experience as it relaxed them and made them feel calm. These data start to shed some light on how end users (co-researchers in this case) felt, in a physical sense, wearing and using (interacting within) a VR HMD. The self-reported mean scores are very close to the high/agreement end of the scale and suggest that further engagement with stakeholders is required, and necessary, in the co-design of teaching materials for autistic groups in schools. They are also positive insofar as users in this case study did not feel negatively towards the use of a HMD in their school. Finally, in relation to ascertaining how VR and HMDs could be used in schools (indeed, whether VR should be used in schools), data (on a similar one-to-four scale as above) show that a mean score of 3.86 (SD = 0.10) was recorded for all the co-researchers


across all schools when asked about using VR HMDs again. This was the final factor (factor 3), drawing together questions related to using the equipment again and seeing potential for using it in their school. For the autistic groups a mean score of 3.86 was reported, with 3.85 for the non-autistic co-researchers. Teachers also reported some potential for using VR in schools. One teacher suggested that the VR HMD "encourages good communication with playing games with another player" (Teacher 2), with another suggesting the use of VR could "engage students in their work and make lessons more interesting" (Teacher 4). There were comments about the potential fit with the curriculum, with a teacher proposing that VR could be used "in subjects like history you would be able to experience the places that might be talked about in order for children to relate more" (Teacher 5). Both suggestions (i.e. curriculum fit and engaging students through VR) were borne out in another teacher's comment: "Interactive experiences, tours of visits before the real visit happens, immersive and interactive social stories" (Teacher 1). Taken together, involving co-researchers and teachers in the study of VR HMDs used in schools revealed that: (1) the HTC Vive was the most preferred device; (2) physical experiences and sensory concerns did not adversely affect the groups we worked with (the co-researchers felt comfortable and reported as much); and (3) there was an appetite for using VR again, and developing further content was seen as a positive next step. We suggest this is an urgent next step and should be done in a collaborative way. There are good reasons to be positive about the role of VR in schools; as one teacher put it: "this is the technology pupils have a grasp of and get excited about. We have a duty to include this into their learning experiences" (Teacher 6).
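The factor scores reported above are straightforward aggregations of the four-point ratings. As an illustration only (the response data below are invented, not drawn from the study), such an aggregation might be computed as follows:

```python
from statistics import mean, pstdev

# Hypothetical 1-4 ratings from three co-researchers on the questions
# grouped under one factor (the real study grouped 8 questions into
# three factors; these numbers are invented for illustration).
ratings_per_person = [
    [4, 4, 3],
    [4, 3, 4],
    [4, 4, 4],
]

def factor_score(ratings):
    """Average each person's ratings for the factor, then report the
    group mean and the population standard deviation of those
    per-person scores."""
    per_person = [mean(r) for r in ratings]
    return round(mean(per_person), 2), round(pstdev(per_person), 2)

group_mean, group_sd = factor_score(ratings_per_person)
print(group_mean, group_sd)  # prints 3.78 0.16 for the invented data
```

For the real data, each co-researcher's ratings would be grouped by factor before averaging, yielding the per-factor group means and standard deviations reported above.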
In relation to the process and participatory nature of this case study, we offer the following insights. The case study sought the views, perspectives and insights of users to inform the future direction of virtual reality used in their classrooms. We have so far reported data addressing preferences for VR HMDs and their possible application in educational contexts; however, we also engaged our co-researchers in the process throughout. Here we refer to engaging autistic opinions and voices in the design of the project from the outset, helping to inform the research questions and the manner in which the project was carried out (especially the safety of the co-researchers). The second author of this chapter engaged with school teachers and parents of potential co-researchers, in addition to working with a research mentor. The mentorship was kindly provided by an autistic adult, who was able to inform the research design and the manner in which the project was planned, providing clear and insightful information from an autistic point of view. Once the project was formed, the set-up, questionnaires, ethical processes and research questions were all co-designed with stakeholders. In the data collection phase, the project also took on the views of the co-researchers to inform the work; they were, of course, at the centre of the data collection. By involving co-researchers in a participatory and exploratory manner, the project developed into one in which the users were leading the choice of software to be used (based on a selection they could choose from, of a total of 6 interfaces). The co-researchers also informed the process as it developed. This was achieved through meetings with co-researchers


(at various points) to best gauge what they wanted to explore incrementally in terms of VR content and hardware. These meetings also enabled teachers to provide input and fruitful links with content and the curriculum (work they were doing in class). By collaborating with the co-researchers and placing their views at the centre of the case study, we helped to shape best practice in the use of VR in their schools. While not reported here, the best practice guidelines (framework) and ways to approach VR in classrooms for autistic children will be reported separately. However, practice emerged related to: (1) what software is best suited for autistic groups using VR; (2) the manner in which to introduce VR to autistic children; and (3) how to set classrooms up to enable safe use of VR HMDs. These are some of the practice guidelines that came about as a result of this case study.

15.6 Case Study 2: Participatory Design Approach to Co-Create Training Materials on a Daily Living Task for Young Adults with Intellectual Disabilities

15.6.1 Brief Overview/Introduction

The second case study sought to understand the dynamics involved in the co-creation of training materials with individuals with Intellectual Disabilities (IDs), with regard to the barriers faced and lessons learnt during the process. A significant body of research exists for people with autism; however, that is not the case for the ID population. The researcher wanted to address that gap with this case study, which was carried out with six young adults with ID attending a 2-year HEI (Higher Education Institution) programme in Ireland. The participants were put in the driving seat of the Participatory Design process, with respect to choosing the daily living task they wanted to learn how to do, creating the training material (based on their understanding and using their preferred vocabulary) and offering feedback after testing the training. The participants were thus empowered to make decisions that could affect their lives and provide them with the opportunity of greater independence in carrying out certain daily living activities.

15.6.2 Aims and Objectives

The aim of this case study was to have a group of young adults with Intellectual Disabilities (IDs) lead the creation of training guidelines on how to carry out a daily living task. This case study sought to report:

1. Obstacles faced during the Participatory Design process;


2. Barriers that the participants and researcher had to overcome in order to create effective guidelines;
3. A reflective account by the researcher of the whole process, developing a list of recommendations for best practice.

The long-term objective is to enable the participants to enhance their lives, by being able to live independently and to secure lasting and meaningful employment. To this end, there will be a follow-up study with a similar population, in which an assistive technology such as Virtual Reality (VR) will be used as the platform for delivering the training, i.e. the co-created guidelines. Participatory Design will be adopted once more, and a host of new challenges is envisaged with regard to the participants' lack of experience and knowledge of such technologies.

15.6.3 Context/Setting

The participants of the case study attend a programme at an Irish HEI in the Dublin area, run in partnership with a Service Provider for people with IDs. The learners attend college full time (5 days a week, 10 am–3 pm) for two years and have tutorials with the main body of students. The tutorials cover areas such as Creative Studies, Drama, Art, Music, Sports Coaching and Personal Training. The sessions that required a group discussion took place in the break room, where the participants and researcher formed a circle. The other sessions took place in the computer room, where each participant could log in and try the tasks individually.

15.6.4 Case Study Group/Characteristics

The methodology deemed most appropriate for this project was a focus group, so that the participants felt more comfortable among their peers while, at the same time, everyone had a chance to contribute in a meaningful way. The case study consisted of four 90-min sessions, each with a 15-min break for a drink and snack. The group consisted of 6 participants:

• 4 male/2 female
• Age range early 20s to early 40s
• Conditions included Kabuki syndrome, Williams syndrome, Down syndrome and general intellectual disability.

The researcher met with the group prior to the commencement of the PD sessions to discuss possible daily living activities that they would be interested in learning more about, bearing in mind that the training would eventually be delivered through a Virtual World (someone mentioned ironing, which would be more difficult to present in Virtual Reality than, for instance, the steps necessary to get on a bus or use


an ATM card). One of the participants mentioned online shopping, which seemed to resonate with almost all the participants and would also involve being able to use a credit card. Thus, these were the daily living tasks they chose.

Session 1 involved showing on a projector two examples of websites chosen by the participants (a supermarket chain and a department store). The researcher showed them in detail the steps needed to place an order (except for entering the credit card details). In session 2 the researcher explained to the participants that they would go through the steps of buying something online (one by one) and that he wanted them to write down, in their own words, how they would describe those steps. When that part of the study was completed, the researcher's role became more active. He gathered the participants in a circle, asked each in turn to describe each step, then gauged which words and expressions were commonly used and chose them for the guidelines. When the group was split between two words, he tried to reach a consensus choice (Fig. 15.7).

During session 3 the participants had the opportunity to test the guidelines; they were asked to follow them and add an item from the website they had practiced on in the previous session to their basket. The researcher applied the same methodology in session 4 as in session 2, the objective this time being to create guidelines on how to proceed to the checkout and input credit card details (Fig. 15.8). For this purpose, they were all given a fake credit card (Fig. 15.9) with all the necessary information (name, credit card type, number, expiry date and CVV number on the back), along with a fake email address, fake mobile number and address.

15.6.5 Findings

With regard to overall impressions of the whole process, conversations with the participants revealed that sharing ownership of the process was important to them in order to have a vested interest in the outcome. Moreover, learning something new that could enhance their daily living was an attractive proposition. The group had no issues comprehending the researcher's accent, which could have proved an obstacle. The group said during the de-briefing that the language used by the researcher was understandable most of the time, but could not offer any further explanation. Depending on the severity of the participants' conditions, spending time with them and ascertaining their language competency would be a critical first step before repeating a project like this, because limited language competency would be a serious barrier to full participation in the process. The participants did ask for help from the researcher and the programme co-ordinator, who was present in all sessions, but that seemed to be more about finding a button on the screen or a key on the keyboard, rather than not understanding the instructions.

Reflecting on the process, it would have been helpful to have spent a few days with the participants prior to starting the sessions, in order to allow them to create a rapport with


Fig. 15.7 The guidelines created in order to browse an online store and select items

the researcher, so that they immediately felt comfortable carrying out the research work. Having said that, the fact that the researcher spent coffee and lunch breaks with them had the desired effect by session 2. Using a focus group was an appropriate methodology, firstly because they collectively attended the same programme (they seemed to have a bond with each other) and secondly because all participants were given a platform to contribute to the discussions. Some participants took more of a leadership role, but that can be attributed to a slight lack of confidence in other members of the group.

Spelling was a significant barrier in the process. In most cases, it was possible to navigate the websites by clicking on categories, sub-categories, themes, drop-down menus, etc., which would not have required any spelling. However, the group chose to navigate the websites using the search boxes. Was that due to a lack of accessibility of the websites, or because the participants' reading skills were


Fig. 15.8 The guidelines created for paying during online shopping with a credit card

Fig. 15.9 A sample of a “fake” credit card used in Session 4


not strong enough for the task? A new iteration of the PD process should try to use accessible websites that offer online shopping, to mitigate any potential unease or confusion. Another, rather minor, obstacle was typing, but that would improve over time.

The researcher used two methods to gather from the participants the steps necessary to do the task. The first method, used in session 2, involved the researcher taking them through the task step by step and getting them to write on a notepad how they would describe each step; in the end, the researcher and group had a discussion and came to a consensus on the wording of each step. In session 4, the researcher took the group through each step and immediately discussed it with them, and they all agreed on the most appropriate wording. This seemed a more efficient way of completing the process, mainly due to the difficulties some participants had with spelling and their limited vocabulary. The participants had diagnoses of varied conditions, and their computer literacy skills varied as well; the fact that some participants described the process more exhaustively than others was therefore to be expected. Personalizing the training material will probably be the greatest obstacle to the success of the PD process. With that in mind, the first method mentioned above may not have been the most efficient; however, it offered participants the opportunity to personalize the steps. In session 4, the use of props (fake credit cards) made the task slightly more realistic and gave the participants a sense of what they would physically need to do in the payment phase of online shopping. Simulating a real-life situation would make the participants appreciate the usefulness of, and necessity for, learning the task.

15.7 Overall Discussion and Conclusions

The two case studies presented above considered two neurodiverse populations (autistic people and people with ID). We will now compare, contrast and synthesize the experiences and lessons through a Participatory Design lens. We hope this will demonstrate the ways in which we sought to focus on the strengths of participants, while identifying commonalities in our work that proved successful with the two groups with whom we worked. When examining the two case studies, we found a number of overlapping themes and approaches with regard to the sub-headings of the D4D Framework, which are highlighted in bold throughout the following text. With regard to "Structuring Environment", both case studies were carried out in spaces familiar to the participants (a school classroom and a Higher Education Institution [HEI] breakroom/computer room), helping them to feel comfortable and safe. In order to provide routine and structure, the sessions took place on the same days and at the same times each day (case study 1) or week (case study 2). The theme/topic of both case studies met the interests of the participants, either because they were asked to pick the task for the PD from a list of topics (ID population) or because, after being

15 Case Studies of Users with Neurodevelopmental Disabilities …


introduced to the VR HMDs, they could decide whether to take further part in the study (autistic population). In both studies we took steps to ensure that participants found the content appropriate to their ability, by seeking input from the facilitator or programme co-ordinator on a suitable curriculum (for the autistic group) and on tasks they were capable of performing on their own (ID group). The authors used multiple modes to gain the participants’ views, including written feedback, oral feedback and choosing emojis to express their satisfaction levels. Lastly, we were conscious and careful to deliver the sessions in a similar manner, keeping a consistent session structure (explaining at the beginning of each session what was going to happen and keeping the process as uniform as possible). The “Additional support” aspect of the D4D Framework initially addresses the PD process. The first interaction with a group—autistic or ID—started with a team-building exercise, where the researcher introduced himself and the project, explained the participants’ role in the process, clarified that their facilitator/teacher/co-ordinator would be present in every session and made sure they understood their rights (voluntary participation, and the ability to cease participation at any point without repercussions). The interactions happened in a staged manner, whether that involved the introduction to the VR headsets for the autistic group or breaking down the training material co-creation into steps. There was ample adult support for the neurodiverse participants of the studies in terms of engagement by the facilitator/teacher/co-ordinator, and additionally for the researchers from either the autistic mentor or the co-ordinator and other supporting staff (within the school or service provider).
The participants were involved with visual activities: navigating a Virtual World in the case of the ASD population and navigating online shopping websites in the case of the ID population. Lastly, the interactions built on existing knowledge/skills, whether that involved computer skills that the ID population had developed in their HEI programme or linking to school content/curriculum with the autistic children. There were some other observations from the sessions that applied to just one of the case studies. In relation to the ID population, there was adult support (collaboration/sensitivities) by the co-ordinator in advising the researcher on the group dynamics (who were seen as leaders), on certain behaviours exhibited (boredom or lack of confidence) when a certain participant lost interest, and on how to keep that participant engaged (having her favourite drink at the break). Also, these participants felt that the Role-playing aspect of the PD sessions (i.e., props such as fake credit cards and a fake persona including a fake email and physical address as well as a phone number) made the task appear real. On the other hand, the autistic population was heavily involved in shaping the research itself by informing the research questions and having input on the delivery of the project. We also had to contend with health and safety issues when it came to wearing the VR HMDs, especially in the early stages of the study. Lastly, the study design and planning were informed by the life experiences of the mentor who was consulted.


Y. Politis et al.

15.8 Implications for Practice and Further Work

Reflecting on the points made in the above general discussion, we have developed a list of guidelines for how others might design and deliver a PD approach for a neurodiverse population in a way that will maximize its effectiveness and potentially produce better outcomes and deliverables. The two case studies demonstrated that a key to success for a PD approach with neurodiverse populations is that participants are keen to acquire new knowledge, skills or competences, especially if these help them in their pursuit of independent living, education or other skills development. Taking some control over learning means that autistic and ID groups can have a vested interest in the success of the process and will be excited to be involved. With regards to the process, meeting the potential participants in advance (even more than once) proved to be very beneficial. It helped the researcher to get to know them personally, made them feel safer and enabled them to have greater confidence in taking part. It also allowed us to liaise with teachers, practitioners, other professionals and self-advocates, so that we were better prepared and informed. Expanding on that point, a PD approach ensures that neurodiverse voices will be heard, that participants will be treated as valued partners and that they will be able to affect the product or service they co-create. Focus group work will generally be the most appropriate methodology, because it allows for collaborative learning. However, in order to meet individual needs and preferences, there is a need to combine focus groups with direct interactions with each participant to personalize the training material. Consequently, the interactions need to be pitched at an appropriate level for the individuals, which will be assisted by earlier groundwork at the relationship-building phase of the process.
It would also be of great benefit to the participants if the sessions were more practical in nature; including simulations of real-life situations (e.g., use of props, role-playing) would help the participants understand the need for their participation and value the learning experience more. Lastly, it would be remiss of us not to highlight a caveat from the introductory section: the neurodiverse population is a key piece of the PD approach; however, they are partners in the process. This means that the researchers’ and other facilitators’ combined knowledge, expertise and training should also be welcomed and valued. A relationship of mutual respect among equal partners will yield the best results. In conclusion, the Guidelines for Best PD Practice with neurodiverse populations that we have identified through these two case studies consist of:

1. Ownership of Learning
2. Building Relationships
3. Influencing Design
4. Meeting Needs and Expectations
5. Understanding Boundaries.

There is an urgent and timely need for research with different neurodiverse populations (especially the diverse Intellectual Disabilities cohort) in order to test these


guidelines, and strengthen the case that designers, developers, researchers, technologists, academics, practitioners and other relevant professionals should adopt a PD approach in creating products and services. We hope these case studies and synthesis of findings/lessons in this chapter will help to inform future discussions, working practices and ideas in relation to PD approaches when working with neurodiverse populations.

References

1. Jenkins, H.: Fans, Bloggers and Gamers: Exploring Participatory Culture. New York University Press, New York (2006)
2. Lankshear, C., Knobel, M.: DIY Media: Creating, Sharing and Learning with New Technologies. Peter Lang, New York, NY (2010)
3. Benton, L., Johnson, H.: Widening participation in technology design: a review of the involvement of children with special educational needs and disabilities. Int. J. Child-Comput. Interact. 3(4), 23–40 (2015)
4. Wilson, J.R., Patel, H., Pettitt, M.: Human factors and development of next generation collaborative engineering. In: Proceedings of the Ergonomics Society Annual Conference. Taylor and Francis, London, UK (2009)
5. Millen, L., Gray-Cobb, S.V., Patel, H.: Participatory design approach with children with autism. Int. J. Disabil. Hum. Dev. 10(4), 289–294 (2011)
6. Neale, H., Gray-Cobb, S.V., Wilson, J.R.: Involving users with learning disabilities in virtual environment design. In: Proceedings of the 9th International Conference on Human-Computer Interaction (HCI 2001). ACM, New York, NY (2001)
7. DeSmet, A., Thompson, D., Baranowski, T., Palmeira, A., Verloigne, M., De Bourdeaudhuij, I.: Is participatory design associated with the effectiveness of serious digital games for healthy lifestyle promotion? A meta-analysis. J. Med. Internet Res. 18(4), e94 (2016)
8. Ke, F.: An implementation of design-based learning through creating educational computer games: a case study on mathematics learning during design and computing. Comput. Educ. 73, 26–39 (2014)
9. Skimen: Why Participatory Design? Retrieved 25 Mar 2017 from https://detroitcollisionworks.com/value-and-participatory-design/ (2012)
10. Druin, A.: The role of children in the design of new technology. Behav. Inf. Technol. 21(1), 1–25 (2002)
11. Yip, J., Clegg, T., Bonsignore, E., Gelderblom, H., Rhodes, E., Druin, A.: Brownies or bags-of-stuff? Domain expertise in cooperative inquiry with children. In: Proceedings of the 12th International Conference on Interaction Design and Children. ACM, New York, NY (2013)
12. Walsh, G., Foss, E., Yip, J., Druin, A.: FACIT PD: a framework for analysis and creation of intergenerational techniques for participatory design. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2013. ACM, New York, NY (2013). https://doi.org/10.1145/2470654.2481400
13. Denner, J., Werner, L., Ortiz, E.: Computer games created by middle school girls: can they be used to measure understanding of computer science concepts? Comput. Educ. 58(1), 240–249 (2012). https://doi.org/10.1016/j.compedu.2011.08.006
14. Joubert, M., Wishart, J.: Participatory practices: lessons learnt from two initiatives using online digital technologies to build knowledge. Comput. Educ. 59(1), 110–119 (2012)
15. Frauenberger, C., Good, J., Fitzpatrick, G., Iversen, O.S.: In pursuit of rigour and accountability in participatory design. Int. J. Hum. Comput. Stud. 74, 93–106 (2015)
16. Frauenberger, C.: Disability and technology—a critical realist perspective. In: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS’15. ACM Press, Lisbon, Portugal (2015)


17. Plumley, E., Bertini, P.: Co-Creation: Designing with the User, For the User. Retrieved 20 Mar 2017 from https://www.uxbooth.com/articles/co-creation-designing-with-the-user-for-the-user/ (2014)
18. Keay-Bright, W.: The reactive colours project: demonstrating participatory and collaborative design methods for the creation of software for autistic children. Des. Principles Pract. 1(2), 7–15 (2007)
19. Van Rijn, H., Stappers, P.J.: Expressions of ownership: motivating users in a co-design process. In: Proceedings of PDC’08. ACM, New York, NY (2008)
20. Madsen, M., el Kaliouby, R., Eckhardt, M., Hoque, M.E., Goodwin, M.S., Picard, R.: Lessons from participatory design with adolescents on the autism spectrum. In: Proceedings of CHI EA’09. ACM Press, New York, NY (2009)
21. Frauenberger, C., Good, J., Keay-Bright, W.: Designing technology for children with special needs: bridging perspectives through participatory design. CoDesign 7(1), 1–28 (2011)
22. Grudin, J.: Obstacles to user involvement in software development, with implications for CSCW. Int. J. Man Mach. Stud. 34, 435–452 (1991)
23. Alm, N.: Ethical issues in AAC research. In: Proceedings of the 3rd ISAAC Research Symposium. ISAAC, Toronto, Canada (1994)
24. Newell, A.F., Gregor, P., Morgan, M., Pullin, G., Macaulay, C.: User-sensitive inclusive design. Univ. Access Inf. Soc. 10, 235 (2011)
25. Lewis, A., Porter, J.: Research and pupil voice. In: Florian, L. (ed.) The Sage Handbook of Special Education, pp. 222–232. Sage Publications, London, UK (2007)
26. Benton, L., Vasalou, A., Khaled, R., Johnson, H., Gooch, D.: Diversity for design: a framework for involving neurodiverse children in the technology design process. In: Proceedings of CHI 2014. ACM, New York, NY (2014)
27. Thomas, G.: How to Do Your Case Study: A Guide for Students and Researchers. Sage, London, UK (2011)
28. Heron, J., Reason, P.: The practice of co-operative inquiry: research ‘with’ rather than ‘on’ people. In: Handbook of Action Research, vol. 2, pp. 144–154 (2006)
29. Bradley, R., Newbutt, N.: Autism and virtual reality head-mounted displays: a state of the art systematic review. J. Enabling Technol. (in press)
30. Newbutt, N., Sung, C., Kuo, H.J., Leahy, M.J., Lin, C.C., Tong, B.: Brief report: a pilot study of the use of a virtual reality headset in autism populations. J. Autism Dev. Disord. 46(9), 3166–3176 (2016)
31. Department of Education, UK: Schools, pupils and their characteristics. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/650547/SFR28_2017_Main_Text.pdf. Accessed 10 May 2010 (2017)

Chapter 16

AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy Therapeutic Movement Training

Anthony Lewis Brooks

Abstract Therapeutic (re)habilitation intervention within aquatic environments, taking advantage of buoyancy-aided movement training using ‘Virtual Reality’ technologies, indicated potentials and next-level challenges through Proof-of-Concept (PoC) ‘beyond-simulation model’ testing. Beyond the buoyancy aspect, the concept builds on evidence from fieldwork where profoundly disabled participants exhibited greater awakeness and engagement within intervention after a pool visit. The study builds upon the author’s prior research to empower, motivate and engage interactions to supplement traditional (re)habilitation approaches. Testing took place in a context-specific location with a human subject. Therapists and special pool staff attended and evaluated the potentials positively. This chapter shares the concept, set-up and work-in-progress toward realising next-level research-funded consortia to question and address the challenges found to date, with the goal of advancing the field of (re)habilitation through AquAbilitation.

Keywords Hydrotherapy · Virtual Reality · Healthcare · Aquatic Rehabilitation · Disabled · Movement-training · Virtual Interactive Space (VIS)

16.1 Preamble/Introduction

In the context of the call for chapter contributions for this volume in the series on technologies for inclusive well-being, this text shares testing of a work-in-progress considered a “simulation beyond conceptual model”—herein introduced as a realized Proof-of-Concept (PoC). The explorative study relates to the patented ‘Communication method and apparatus’ (see e.g. United States patent US6893407 [38]) aligned to realization of

A. L. Brooks (B) Aalborg University, Aalborg, Denmark e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_16



A. L. Brooks

a process developed within the author’s larger body of research targeting apparatus and methods to supplement traditional (re)habilitation training and therapeutic motivation—publications are available at the author’s university profile.1 The text is shared herein to address a gap in the literature/field: alternative (re)habilitation training proposed within aquatic environments to take advantage of buoyancy-aided movement training using technologies associated with the field of Virtual Reality (VR) and Virtual Interactive Space (VIS—see [4]). An explorative aim was to test the initial design while motivating concept adoption and further exploration by peer researchers in the field, alongside gaining interest from project funders toward realizing next-level research. As of writing, funding applications to realise the next levels beyond PoC are ongoing. Thus, this text represents an ongoing work-in-progress toward next-level research.

16.1.1 Simulation and Targeted End-Users/Participants

In line with the call for chapters for this volume (i.e. simulations) and the claim that there is a strong culture of networking, building relationships, and communities around the topic of technologies for inclusive well-being, this text reports toward realising interest in this alternative approach of an aquatic environment with multimedia feedback to supplement traditional approaches. Simulation is defined in line with the literature, i.e. as:

• “imitation of a situation or process”—“the production of a computer model of something, especially for the purpose of study”2
• “A simulation is an approximate imitation of the operation of a process or system; that represents its operation over time. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimizing, safety engineering, testing, training, education, and video games.”3
• “Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist” [32, p. 6].

This contribution reports on design and implementation realised as PoC-beyond-simulation, and toward next-phase implementation for research iterations towards realising a new tool for therapeutic (re)habilitation to supplement traditional approaches.

1 https://vbn.aau.dk/en/persons/103302.
2 https://www.lexico.com/en/definition/simulation.
3 https://en.wikipedia.org/wiki/Simulation.

16 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy …


16.1.2 PoC—Design Justification

This PoC, presented herein, is thus beyond simulation: prior research provided the foundational model upon which to build a design that takes advantage of buoyancy within an environment where the use-hypothesis relates to motivated interactions engaging participants,4 such that, as a supplement to therapeutic intervention, increased range and dynamics of movement may result. Additionally, the concept targets use of the aquatic environment to stimulate post-pool-session therapeutic activities, based upon prior research (fieldwork) considering awakeness and awareness resulting in increased engagement and compliance. The design is considered applicable across a range of movement-dysfunction diagnoses—Profound and Multiple Learning Disabled (PMLD), Cerebral Palsy (CP), Acquired Brain Injured (ABI) and others. This aligns with the author’s overarching body of research, which, over decades, has investigated these and other conditions associated with movement dysfunction, where empowerment through Virtual Interactive Space (VIS) has been found evidently impactful according to professional expert evaluations. The focus of the research over the years, in respect of diagnosis/condition, has been where funding has provided opportunities for investigating specific dysfunctional conditions in research with expert healthcare professionals.

16.1.3 Technology and End-Users

The call-for-chapters text of this volume further stated that technology itself has limitations, and that it is of vital importance that both the technical team and, more importantly, those who will employ the technology know these limitations beforehand. This is considered an important statement, as over the years of research it has been found (respectfully) that the “bottleneck” has been the facilitator or therapist, who typically has limited time to learn the opportunities presented by being able to adapt and tailor the system to the participants’ profiles, i.e. preferences, needs, desires etc. Thus, this work aims to explore selected technologies to determine constraints and limitations from both the technical team’s perspective and from the position of those who are proposed to apply the technologies in their (re)habilitation practices, i.e. therapists, in order to support optimal usage. Future phases of the research aim to secure funding so that healthcare professionals have substitute workers in place, enabling their focused time in learning the systems. The next section introduces the technologies involved in this explorative work-in-progress, first with an attempt at defining terminology and subsequently presenting the specific set-up and research design.

4 Literature in this field uses various terms; thus ‘participant’ can relate to patients, end-users, subjects etc.



16.2 Technologies and Terminology: From Virtual Reality (VR) to Virtual Interactive Space (VIS)

Confusion has existed over the definition of the oxymoronic term ‘Virtual Reality’ due to the apparent contradiction between the adjective Virtual and the noun Reality [33]. Technologies associated with the term Virtual Reality have witnessed radical transformation over the last three decades. The term itself has likewise undergone various definitions and interpretations, and much literature online and in academic libraries informs as such. Much has been written about Virtual Reality, the early days and the peaks and troughs—thus, in this contribution, a brief and selective focus is on use in healthcare as related to this body of work. An early keynote address titled “Virtual Reality and Persons with Disabilities” was given by Jaron Lanier5 at California State University Northridge for the Center On Disabilities Virtual Reality Conference 1992. The proceedings are still online6 as of writing this chapter, and the resource (even beyond Lanier’s keynote) is considered insightful on the state of the art in respect of Virtual Reality and healthcare and related thinking at the time. Notable from attendance was that Lanier in his keynote posited what Virtual Reality is and is not; in doing so he linked multimodality of sensing to motoric abilities and interactions and introduced head-mounted displays (HMDs). He emphasised that the community of people developing for persons with disability was the same as that developing Virtual Reality—a point rarely discussed elsewhere. HMDs have been around for many years, and in work associated with Virtual Reality, general acknowledgement is of Ivan Sutherland [36] and his student Bob Sproull as originators, from around the mid-to-late 1960s, of the first working virtual reality HMD.
Lanier’s keynote presentation in California, a quarter-of-a-century after Sutherland, came a few years after GestureTek© of Canada claimed ‘industry leader’ status for their multi-patented video gesture control (VGC) technology, which users—academics and non-academics—have referred to over a number of years as Virtual Reality. The company’s website states that their products allow users to engage with multimedia content, access information, manipulate special effects, and even immerse themselves in interactive 3D virtual worlds—simply by moving their hands or body. The company’s product line includes GestureXtreme©—marketed as a virtual world gaming experience that transports a participant’s image into a computer-generated landscape so one can view oneself onscreen, where body gestures manoeuvre game simulations or interactions with onscreen characters and objects in real time, resulting in an unencumbered full-body immersive experience. Another product is GestureTek Health©, marketed as touch-free, gesture-controlled solutions for virtual reality therapy, multisensory stimulation and immersive play offering engaging experiences resulting in marked improvements of physical and cognitive conditions—regardless of age, ability, or stage of recovery—as state-of-the-art systems having unique patient

5 Jaron Zepel Lanier is an American computer scientist and a pioneer in the field of VR. He was the co-founder of VPL Research, Inc., which was one of the first companies selling VR technology.
6 https://www.csun.edu/~hfdss006/conf/1992/proceedings/vr92.htm.


and clinical benefits that have been proven in numerous studies over the last 20 years.7 A search at the GestureTek© website for the term Virtual Reality returns fifty-four matched items, including academic publications.8 Healthcare professional publications reviewing the field, such as those titled “The GestureTek virtual reality system in rehabilitation: a scoping review” [18] and “Does intervention using virtual reality improve upper limb function in children with neurological impairment: A systematic review of the evidence” [17], use the term Virtual Reality (VR) in respect of projected/monitored visualisations such as GestureTek©, with the second text additionally referring to the commercial Sony EyeToy© as Virtual Reality. However, on examination the systems seemingly use no HMD, no stereo glasses (active shutter, or passive polarised for stereo), nor any head tracking. Thus, use of the term is confounding. The reason for such terminology use could be that healthcare authors, such as those cited and at the GestureTek© site, are not aware of the contemporary redefinition aligned with Brooks as stated in the following. Notably, this confusion of terminologies was evident even in establishing the International Society for Virtual Rehabilitation (ISVR), where much debate among the founders over the title of the society concerned the term Virtual Rehabilitation—see https://isvr.org. For the sake of brevity, and so as not to draw out the terminology debate, this section closes by citing how Frederick Phillips Brooks Jr. defined VR (a definition typically accepted in contemporary research)—“Virtual Reality (VR) requires three real features: (i) real-time rendering with viewpoint changes as head moves, (ii) real space, i.e., either concrete or abstract 3D virtual environments, and (iii) real interaction, i.e., possible direct manipulation of virtual objects” (in [33], viii).
In closing this section, it is pertinent to share that, in the history of the larger body of work associated with this explorative study, lengthy research was conducted at the end of the 1990s at the Center for Advanced Visualisation and Interactivity (CAVI9) at Aarhus University, Denmark, when the author was its first artist-in-residence. The research was with Barco VR projector systems (a 3D panoramic ‘cinema’) and a Holobench,10 both of which required active stereo shutter glasses for 3D/stereo. Subsequently, an ‘underground’ multi-room Virtual Reality human behaviour complex titled “SensoramaLab” was appropriated, designed, and funded through the author’s efforts, built around 2004 at Aalborg University’s Esbjerg campus in Denmark. It focused upon a 5-m by 2-m special stereo screen by Stewart (United States of America) and a rear four-projector Cyviz (Norway) wall—see Fig. 16.1—to present content to participants wearing passive stereo glasses (135/45-degree polarisation). This decision, as opposed to active glasses and HMDs, was made because of the earlier research with active stereo shutter glasses (hardware with a typically ‘one-size-fits-all’ concept)

7 Abridged from text at https://gesturetek.com/index.php.
8 https://www.gesturetekhealth.com/search/site/Virtual%20Reality.
9 https://cavi.au.dk.
10 See Virtual Reality in Denmark https://apps.dtic.mil/dtic/tr/fulltext/u2/a454907.pdf Sect. 2.1.2.


Fig. 16.1 SensoramaLab Cyviz rear-projection wall system for passive stereo 3D VR with Stewart screen. ©Cyviz, with permission

which caused discomfort for users; thus the plastic passive stereo glasses were experienced by users (many being profoundly disabled) as ‘normal’ sunglasses that a participant would typically wear in the summer sun. This was especially important for those participants who were profoundly disabled or otherwise weak in strength. Head tracking (for image change in reaction to head position) and camera tracking of the body for interactions were incorporated via separate bespoke apparatus and software. Notably, the head-position tracking technique when using a VR projection screen differs from that experienced in a contemporary 360-degree-visualisation HMD. In this AquAbilitation design a VR projection screen and associated head tracking is impractical. Thus, the two conditions targeted herein for explorations of AquAbilitation are justified. For clarity, this PoC used a projected/monitored image—as illustrated in Fig. 16.5—without claiming it to be Virtual Reality: rather, it is referred to as Virtual Interactive Space (VIS), in line with [4]. However, as stated herein, the future plan for this AquAbilitation design is to test use of a waterproof HMD with a smartphone, alongside addressing the three items in Brooks’ list (in [33], viii), so as to question AquAbilitation VR possibilities beyond AquAbilitation VIS.


16.3 Background and Concept—Fieldwork and Theoretical Framework

This PoC was considered constructivist in nature, placed within the author’s larger body of work relating to contemporary conceptual art, established and realised in early forms near the end of the twentieth century and explored and studied through the early decades of the twenty-first century up to the current day, across and within both the fields of contemporary interactive multimedia art forms (e.g. targeting empowered creative expression) and health/rehabilitation (within which creative expression was found empowering and motivational for end-users). The avant-garde research was found informative in both directions across the fields. It is also considered ‘experiential’, with individuals and audiences experiencing the concept as a ‘happening’ event, thus temporal (live/real-time) and non-repeatable. Some viewers may contend that it is more experiential/conceptual art, the aesthetic concern being the human body and how it gives rise to living experience; however, the author’s position on aesthetics aligns with the practising artist Ben Shahn, who stated “Aesthetics is to the artist as ornithology is to the bird”—explained as how the practising artist may regard the creative process as inexplicable and may dismiss all attempts by psychologists and philosophers to explain it as a waste of time (“for the birds”) [20, p. 53]. The body in art is well documented and, in this work, learning about bodily interactions with technology from the arts transcends into (re)habilitation to design systems to supplement and advance the field. The larger body of work referred to (see footnote 1) has for a number of decades targeted compendia of adaptive and tailorable technology-enhanced interactive environments that offer opportunities for participants to communicate through self-expression and to approach a state of flow [14, 15].
Target for such environments is in and across the disciplines of (re)habilitation, performance art, and entertainment such that similar environments are used. In the former case, such features of the created environments provide appropriate opportunities for the participant to achieve mastery and to increase inter-subjective interactions—for example with a therapist or session facilitator as “more knowledgeable other” (MKO)—a social factor contributing to development [39, 40]. Participant conscious awareness level of the activity within the created environment is considered as play(full) rather than being considered as formal therapy—the facilitator’s role as MKO includes to promote this play aspect of the interactions whilst also being aware that it is therapeutic outcomes that are targeted as goal from the activity. The facilitator is of decisive importance for the development of the play [1, 28, 39]. This aligns with activity theory (e.g. [24, 39]) where communication is an important aspect of the play. This approach contrasts communication theorists (e.g. [1, 28]), who primarily look at such play as communication. The position taken herein aligns with Lisina [25] in considering this view as limited as such play also, beside the communication, contains a content, human and social functions, which are communicated through the play and, accordingly, have to be considered as innate to targeted empowerment [5, 19].


A. L. Brooks

Petersson and Bengtsson [29] regard empowerment as a dynamic concept that concerns individuals' possibilities and resources associated with growth and development in everyday interactions. Aligned with this, the work presented herein takes a holistic and process-directed view of empowerment rather than treating it as a mental state, where the play in the interactive environment serves as a means to enhance a participant's communication through the causal feedback loop. Philosophically, this view enables experiences supporting outcomes of a more positive self-perception and belief in one's own ability and capacity [29]—aligned with efficacy attained from self-expression. Other associated attributes of this research, such as 'agency' and 'autotelic', are discussed elsewhere in the author's publications (see previous footnote). Research studies of empowerment include movement data sourced within invisible sensing spaces and mapped to multimedia feedback that acts as stimulus to motivate subsequent movement. In this way afferent-efferent neural feedback loop closure is targeted and achieved by optimally matching a user to the tailored system attributes and feedback content. An emergent model for intervention in-action and on-action has resulted from the research (for more see [6, 7]). Within created environments, multisensory feedback can be selected—typically auditory, visual, or haptic. A user's movement within the sensing environment can thus create music, digital painting, robotic control and more—basically, control of multimedia content is based upon routing, scaling and otherwise manipulating sourced data into responsive data via algorithmic patch programming. The bespoke system has been used within the author's interactive exhibitions at leading museums of modern art; in stage performances; and more. The same system has been used within healthcare—mostly (re)habilitation training.
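The routing-and-scaling of sourced movement data into multimedia parameters described above can be sketched in code. The following Python fragment is a hypothetical illustration only (the channel names, value ranges, and routing table are invented for the example and are not the author's actual patch programming): sensed motion values are clamped and linearly rescaled into feedback parameter ranges, in the spirit of a Max-style patch.

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly rescale a sensed value into a feedback parameter range."""
    value = max(in_min, min(in_max, value))  # clamp to the sensing range
    return out_min + (value - in_min) / (in_max - in_min) * (out_max - out_min)

# Hypothetical routing table: sensed motion channels -> feedback parameters.
routing = {
    "hand_height":  {"in": (0.0, 2.0), "out": (48, 84)},    # e.g. a MIDI pitch range
    "motion_speed": {"in": (0.0, 1.5), "out": (0.0, 1.0)},  # e.g. brush opacity
}

def map_motion(sample):
    """Route a dict of sensed channel values to feedback parameter values,
    ignoring channels with no routing entry."""
    return {ch: scale(v, *routing[ch]["in"], *routing[ch]["out"])
            for ch, v in sample.items() if ch in routing}
```

Real systems would feed the resulting values to a sound engine or graphics renderer each frame; the sketch shows only the data-routing step.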
In such settings, with participants/patients with motor dysfunction, numerous positive evaluations have been given by healthcare professionals. The research has developed an emergent model for intervention and evaluation under an iterative session strategy in treatment programmes. Outcomes also point to how an alternative feedback stimulus channel can affect a different sensory channel to positive effect—an example in use with acquired-brain-injured patients is where auditory feedback increments gave users a feedback sense affecting their proprioception-related balance. Tangible outcomes from intervention, such as producing one's own digital paintings (printed screenshots of the monitored screen, where a user is in control of their own image generation and colour through threshold dynamics of motion), have been positively received. Such studies resulted in exhibitions, for example where 'paintings' created within workshops were mounted on a wall. Two results related to efficacy were identified: (1) the sense of pride, self-value and achievement immediately after creating the artefact (see Fig. 16.5, upper right—workshop participant showing off a created artefact with the exhibition behind), and (2) ownership. In the case of ownership, the author, as workshop leader, met an institution leader (whose clients—intellectually challenged elderly—had attended the workshop) four months after the workshops. The leader told how, on returning to the workshop site (a cultural figurehead in Oporto, Portugal, namely Casa da Música) to collect their

16 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy …


'paintings' from the exhibition wall, all clients—even though the works were abstract and 'blobby' representations—knew their own created artefact. As a result, this institute arranged a show-and-tell session for families and friends where each 'painter' shared their creation and the story of creating it, which was filled with joy. The leaders said they were astounded by the stories they heard: whilst they had suspected a positive experience from the recognition of one's own 'paintings', they had not expected the embodiment effect of the creative process that they heard from their clients. It was reported as an emotion-filled evening of presentations by the intellectually challenged elders. Digital tools that can empower such creative expression also have impact for supporting staff; for example, Fig. 16.2 shows a profoundly dysfunctional individual wearing a headset assemblage apparatus to paint with a traditional brush, paint and canvas. As can be seen, a protective apron is worn and, underneath the wheelchair where he is seated, the floor is covered to ease cleaning. Digital painting via movement, while acknowledged as differing, offers new opportunities for end-users—in this case including those creating the painting as well as attending care staff.

Fig. 16.2 Traditional painting for a profoundly disabled participant at an institute—illustrating potential benefits of digital painting in such situations to prevent spillages and participant discomfort when compared to traditional means—see Fig. 16.5, i.e. upper inserted Pictures-in-Picture (PiP) © Martins with permission


16.4 Fieldwork

Throughout the second half of the 1990s the author's fieldwork studied potentials in rehabilitation using his bespoke interactive system, with invisible sensing technologies mapped to direct and immediate feedback selectable from auditory, visual and robotic apparatus. A consistent, clear outcome from sessions was a difference in users' level of participation, mood, and engagement with session tasks when they had earlier been to the local water facility: awareness was heightened alongside improved wakefulness. Reflecting on these outcomes, the concept of incorporating water—its feel against flesh alongside its buoyancy attributes—into the research interventions with responsive multimedia was conceived. The following is a brief hypothesis on the related stimulus. The somatosensory system, as an aspect of the human sensory nervous system, includes the epidermis—the outer layer of skin—which, in the context of this study, when submerged in water is stimulated typically to result in a feel-good experience. Contact with water in the aquatic environment activates sensory receptors associated with the epidermis that send signals along connected sensory nerves to the spinal cord, where neural processing takes place, with further processing after relay to the brain. This water-contact stimulus typically includes temperature change (via thermoreceptors—though pools for special-needs participants are controlled to a higher temperature than public pools) and activation of the corresponding mechanoreceptors. In this research, such somatosensory transduction resulting from the aquatic environment is thus related to afferent-efferent neural feedback loop closure towards meaningful interactions for participants, where causal cycles of action-interactions are hydro-motivated.

16.5 Hydrotherapy (with Innate Multimedia-Driven Causal Cycles of Action-Interactions)

The Austrian Vincenz Priessnitz is acknowledged by some as the inventor of the water-cure or hydrotherapy (formerly known as hydropathy—see [34, p. 3586]) or hydrotherapeutics [27, p. 153]—see also [31]. The term "spa" is popularly traced to the Latin phrase salus per aquam, meaning "healing through water". Use of aquatic environments is typically considered part of alternative medicine (particularly naturopathy), occupational therapy, and physiotherapy, involving the use of water for pain relief and treatment. A broad range of approaches and methods take advantage of the physical properties of water, such as temperature and pressure, for therapeutic purposes: to stimulate blood circulation and treat the symptoms of certain diseases11 (see

11 https://en.wikipedia.org/wiki/Hydrotherapy.


also “Hydrotherapy—What is it and why aren’t we doing it?” International SPA Association. Kansas. October 2009).12

16.6 Aquatic and Virtual 'Immersion' (Pun Intended)

In closing this introduction to the design concept—and before sharing the design set-up used in the PoC and plans for the next phases—it is worth considering the common term 'immersion' from both aquatic and Virtual Reality perspectives, by citing [22], whose definition would seem to strike common ground between the two concepts highlighted in this text:

Yet immersion viewed from a human standpoint is achieved once our physiological, creative, intuitive, intellectual, and imaginative functions are engaged and transcend basic disruptions and distractions. This immersion is not the escape offered by cinema or by the CAVE environment for 3D computer graphics, but is a multifaceted and active human experience of connection and communication that involves both internal and external dimensions. Kozel [22, p. 145].

Reflecting upon this, the human (participant) experience of connection and communication is posited as relating to the proposed (re)habilitation-motivated interaction cycles hypothesised to be achieved from the multimedia stimuli in this design. Immersion herein refers to the submersion of the participant's torso in the water, where stimulus to the skin is experienced. A design achieves afferent-efferent neural feedback, according to the larger body of work (readers are referred to the author's publications listed in footnote 1), when the matching of data sourcing and mapping to selected content is optimised such that the participant is immersed in the experience of interactions—a state where conscious intent of motion is transcended, such that feedback responding to motion input drives the subsequent input, which sequentially drives corresponding feedback as cycles of action-interaction. However, beyond this, technical challenges are still apparent. One outcome from the PoC was that when a limb is inside the silhouette established in the camera-sensed planar FOV, it loses interaction definition—akin to tracking occlusion problems. In a non-aquatic environment, contemporary solutions have used depth-of-field cameras to source data from the Z-axis of depth; this handles the case where a tracked limb is in front of the tracked body (silhouette), unlike occlusion, where it is behind. Such issues are proposed to be addressed in the next phase of AquAbilitation research.

12 https://experienceispa.com/ispamedia/news/item/hydrotherapy-what-is-it-and-why-aren-t-we-doing-it.


16.7 Set-Up of PoC

The PoC was realized within a real-life (not simulated) location: a special pool facility affiliated to Lund University Hospital in Lund, Sweden. The pool was used by a cross-section of end-users in the region diagnosed as 'handicapped' (a term accepted in Scandinavia). The principle and goals behind the PoC case study were presented to pool staff and networked therapists prior to the testing, in order to gain permission to use the pool and to invite attendance. Ethical approvals were also in place. A goal was to demonstrate feasibility of the idea with an in-situ preliminary test of the work-in-progress as a single case study and to receive input from attending therapists and pool staff. In the PoC, a non-waterproof camera was used, necessitating an open-top container for the wired connection and weights to submerge it and hold it static. This has since been updated with a waterproof camera and a bespoke metal tripod with hooks to which diving lead weights can be attached, enabling placement on the pool floor. Further testing is needed to establish whether the same camera can serve both the computer software algorithms and programmed content (bespoke content, i.e. body painting, music-making and games) as well as commercial game platform content (as used in related work with acquired-brain-injured and other participants). A wired connection is still deemed necessary. The next section introduces the software used in the case study PoC.

16.8 Software Examples for Non-Aquatic Movement Tracking-Environments (Typically Dance)

The EyesWeb project was conceived to develop a modular system for the real-time analysis of body movement and gesture, built for interactive dance and music [11, 12]. The software targeted different environments, such as dancers on stage, actors in theatre, or visitors to an art museum. The author visited the EyesWeb team at the University of Genoa at the turn of the century to introduce his research investigating use of movement tracking to empower people with handicap—primarily motoric dysfunction. A European project titled CAREHERE13 was developed and funded. The research around the modular system considers aspects of movement, intention, and effort (e.g. [23]). Whilst these theories were primarily conceived in relation to dance choreography, the examination of characteristics of movement, intention, and effort within observed human movement—without focusing on a particular kind of movement or expression—was interpreted by the author as applicable to persons having motoric dysfunction and the related training by therapists. Thus, the modular

13 CAREHERE European Project IST-2001-32729: Creating Aesthetically Resonant Environments for the Handicapped, Elderly and Rehabilitation—https://www.bristol.ac.uk/carehere/Postprojectreflections.html.


Fig. 16.3 EyesWeb algorithm image with bounding box (upper right, surrounding the silhouette and responding to its dynamics) and motion cues (Contraction Index (CI); Quantity of Movement (QoM) through the threshold of motion/non-motion segmentation; and kinematics cues) extracted from Brooks and Petersson [8, 9]; for more see the EyesWeb project documentation and [12]

system seemed suitable to adopt alongside other software being used at the time—primarily the Cycling74 Max modular system for MIDI14 sensing. In EyesWeb, camera sensing is a main input (Fig. 16.3). Within an aquatic environment, traditionally used sensor apparatus—i.e. infrared and ultrasound sensing technologies, as implemented in earlier research in the larger body of work1—will not work submerged. Thus, camera sourcing of participant movement was tested in the PoC (Fig. 16.5).
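As an illustration of the kind of motion cues EyesWeb extracts (Fig. 16.3), the following sketch computes two commonly described cues from binary silhouette frames. It is a simplified, hypothetical rendering of Quantity of Movement (here, the fraction of pixels whose silhouette state changed between frames) and Contraction Index (silhouette area relative to its bounding box), not the EyesWeb implementation itself.

```python
def quantity_of_motion(prev, curr):
    """QoM sketch: fraction of pixels whose silhouette state changed
    between two frames. Frames are 2-D lists of 0/1 silhouette values
    (a hypothetical, simplified representation)."""
    changed = sum(p != c for pr, cr in zip(prev, curr) for p, c in zip(pr, cr))
    return changed / (len(curr) * len(curr[0]))

def contraction_index(frame):
    """CI sketch: silhouette area divided by its bounding-box area.
    Values near 1.0 suggest a contracted posture filling the box;
    lower values suggest extended limbs."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v]
    if not pts:
        return 0.0
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    box = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return len(pts) / box
```

In a live system such cues would be computed per video frame after background extraction and then mapped to feedback parameters.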

16.9 Techniques—for Example with EyesWeb and EyeCon Software

Camera-sourcing techniques for tracking a participant's movement for interactions will be as tested outside of an aquatic environment in the associated holistic larger body of work. The PoC test used the EyesWeb start-frame buffer background extraction—see also Brooks and Petersson [8, 9], from which Fig. 16.3 herein is extracted. The reader can find additional and improved detail on the technique in [12], viewing the cited Figure 2 'tendency to movement' and 'equilibrium' images and accompanying text.

14 MIDI = Musical Instrument Digital Interface, an industry-standard music technology protocol that connects products.


Fig. 16.4 EyeCon control showing a drawn dynamic 'Field' (green box around the background dancer) and a series of 'Touchlines' around the foremost dancer's body. Various mappings of data triggers are available in sub-windows, e.g. field and lines (upper left) mapped to sounds—upper right showing pixel threshold adjustment and sensitivity control © Weiss with permission

Another camera-tracking software used in the holistic larger body of work is EyeCon15 by Frieder Weiss,16 of the dance company Palindrome. Within this software a video signal is fed into the computer and the image appears in the main window of the program (see Fig. 16.4). As illustrated, lines, fields or other elements can be drawn as a layer over the source video image. If a participant in the camera field of view (FOV) moves and touches one of the drawn elements, an event is triggered which can be selectively mapped to multimedia feedback. The software can also measure the amount of movement that happens within a field. In such computer-vision software a common need is to differentiate the participant from the background, to enable focus on tracking the participant. Typically, an initial snapshot picture of the empty space informs the software that this is the reference, such that differences (i.e. the person moving) activate/trigger events.

15 https://eyecon.palindrome.de.
16 Frieder Weiss, author of EyeCon and Kalypso, video motion sensing programs especially designed for use with dance, music and computer art.


In EyesWeb, background subtraction is achieved via a first-frame buffer extraction of the background. In EyeCon, a reference image is captured manually via a button in the control window—see Fig. 16.4. For more on the background subtraction technique via the brightness/chromaticity distortion method, i.e. without a green/blue chroma-coloured background, see e.g. Horprasert et al. [21].
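The reference-frame subtraction idea can be sketched minimally as follows. This is a deliberate simplification to grey-level differencing (not the brightness/chromaticity distortion model of Horprasert et al., and not either program's actual code); the threshold value is illustrative.

```python
def foreground_mask(reference, live, thresh=30):
    """Per-pixel background subtraction sketch: mark a pixel as foreground (1)
    when its brightness differs from the stored empty-scene reference frame
    by more than `thresh`. Frames are 2-D lists of 0-255 grey values
    (a simplified stand-in for real video frames)."""
    return [[1 if abs(l - r) > thresh else 0
             for r, l in zip(ref_row, live_row)]
            for ref_row, live_row in zip(reference, live)]
```

The resulting binary mask is what downstream steps (silhouette cues, touchline triggers) would operate on.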

16.10 Lighting

Being camera-based/computer-vision software means that lighting is important. The technique of pixel thresholding compares a video picture (a live feed in this case) with the stored reference background picture. The software looks at the brightness value of each individual pixel in the live foreground feed and calculates the brightness difference from the same pixel in the background. If the difference on any of the pixels (e.g. along a line) is higher than the 'pixel threshold', then the software assumes that the touchline is triggered. Touchlines and a field are shown in the EyeCon control window (Fig. 16.4). Only EyesWeb was tested in the PoC. Both programs have programmable adjustments for all aspects, and beyond EyesWeb, EyeCon (though at time of writing a dated software) or similar software is planned to be introduced for second-level testing. Notably, in the PoC testing the participant had to move out of the camera FOV in order to create an empty scene for the snapshot reference buffer image (see the background extraction notes in this text). There were also issues with noise in the FOV when water-surface movements were detected. Such issues are noted towards next-level research. Whilst buoyancy can offer support in making movements that may otherwise be more challenging for the participant, it can also be the case that, to establish a state of stability (aligned with the [23] description), certain participants may need to 'work' harder (increase effort—aligned with the [23] description) to maintain posture. This aligns with tasks of stillness—i.e. areas (zones, lines or fields) drawn around the body (e.g. [10], and such as illustrated by the Touchlines in Fig. 16.4)—such that the handicapped participant's task is not to trigger but rather to control posture 'stillness' so as to keep the feedback multimedia silent. Numerous tasks for incremental participant challenges can be established by a creative intervention session-facilitator in this way.
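The touchline trigger and the inverse 'stillness' task can be sketched as follows. This is a hypothetical simplification (grey-level frames and a list of pixel coordinates per drawn element are assumed; this is neither EyeCon's nor EyesWeb's actual implementation).

```python
def line_triggered(reference, live, line_pixels, pixel_threshold=30):
    """Return True when any pixel on a drawn 'touchline' differs in brightness
    from the stored background reference by more than pixel_threshold."""
    return any(abs(live[r][c] - reference[r][c]) > pixel_threshold
               for r, c in line_pixels)

def stillness_score(reference, live, zone_pixels, pixel_threshold=30):
    """Inverse task sketch: fraction of zone pixels left untriggered, i.e. a
    posture-stillness measure where the participant's goal is to keep the
    multimedia feedback silent."""
    quiet = sum(abs(live[r][c] - reference[r][c]) <= pixel_threshold
                for r, c in zone_pixels)
    return quiet / len(zone_pixels)
```

A facilitator could raise or lower `pixel_threshold`, or redraw the zones, to set incremental challenges as described above.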

16.11 Projected Image Versus HMD

Figure 16.5 illustrates an AquAbilitation environment where, simplistically, a participant is in a pool (as in Fig. 16.6—pictures from the PoC case study); a camera sources


Fig. 16.5 Illustration of set-up with participant in water up to the chest (can be with additional buoyancy aids or harness support [not illustrated]—see Fig. 16.2): various computer software techniques are tested to enable monitored "body painting"—see also Fig. 16.6 showing PoC testing and the harness for pool access and training (as proposed for next phases of this research)

Fig. 16.6 (Left) PoC testing with end-user and buoyancy aids—camera set-up shown in lower-left image (open-top container, weighted). (Right) Harness support [with electric rail assembly above] to aid participants into the pool and support intervention


the submerged body; the images are processed by computer software into selected audio-visual stimuli, which are presented to the participant (in near-zero-delay real time) as direct and immediate feedback to the feed-forward input. HMD use is not illustrated as this is planned for subsequent testing. However, a 3D-printed (thus waterproof) HMD has been proposed to mount a smartphone capable of Virtual Reality stereoscopic display and of transmitting head motions back to the computer for 'tracking' of the Virtual Reality content. A hypothesis aligned to this plan is that wearing a HMD in an aquatic environment may be challenging for some participants. This is especially important as the concept targets motivation through experiences associated with such VR topics as presence, flow and engagement; thus, it is essential that the participant can relax into the experience without concern for safety, well-being, and comfort—these being a given of what should be in place at the start of any testing. Therefore, the testing in the PoC and next phase will be constrained to the projected visualisation level as illustrated, with the HMD incorporated at a later date.

16.12 Conclusions

In conclusion, it is posited that this contribution addresses the call for chapters by reporting design and PoC testing, beyond simulation, of an alternative set-up to supplement traditional interventions in (re)habilitation where a focus is on patient/participant experience, treatments, and therapies. Technologies for inclusive well-being are implemented so as to realise new opportunities in the field. Aligned with related literature, e.g. [30], the PoC testing plans for two systems in therapeutic settings. Rand et al. (ibid) tested a projected system (GestureTek©) and another based upon a head-mounted display (HMD). As in the Rand exploration with non-handicapped testers, similar was tested in this PoC. In this explorative research a difference is made between a projected system as a Virtual Interactive Space (VIS—[4]) and Virtual Reality (the planned waterproof HMD and smartphone explorative research in an aquatic environment). This environment is considered to offer opportunities for benefit in (re)habilitation where a person is supported to be safe and secure depending on profile and needs. The concept targets participants experiencing the feel of water against naked flesh, alongside buoyancy attributes, to stimulate and aid movement that would otherwise be constrained through dysfunction. The concept was considered a positive enhancement and next step in the goal of creating compendia to supplement traditional intervention tools, a goal that grew from non-aquatic intervention sessions, i.e. the larger body of work referred to herein. Research indicates that too few institutional water facilities for the handicapped exist (a term used in this text because the research is based in Denmark where this is the used term—despite its questionable political correctness internationally), let alone facilities incorporating a harness device as shown in this text (Fig. 16.6—lower right). Such a harness can not only aid staff in easing an end-user into an aquatic environment such as a heated pool for (re)habilitation, but can also support an end-user in an upright


position where he/she feels safe enough to explore an interactive environment: in this case, one where a submerged camera set-up captures the user's underwater motions and, through computer software, manipulates them to directly present audio-visual feedback aligned to the sourced movements (Fig. 16.5 illustration). The concept aligns with harness use in non-aquatic Virtual Reality rehabilitation—typically in gait training, as shown in [3] [see in the cited publication "Figure 1. The −VE Mobility Simulator: (a) general view"].

16.13 Summary

In this research the audio-visuals are presented via a poolside large projection screen or monitor and a sound system, such that a user can see and hear the interactions and is motivated to further move and interact in response to the stimuli. In this way, afferent-efferent neural feedback loop closure is attained within an environment that is buoyant—aiding restricted/delimited motions—and a stimulant for the body through contact with the water, e.g. somatosensory system stimulation. In presenting this exploratory research in its initial stages (albeit the proof-of-concept feasibility work was conducted originally in the first years of the twenty-first century, with further phases remaining in need of external funding to realise), it is pertinent to state that, in considering a form of non-formal creative expression and play in (re)habilitation as a supplement to traditional intervention, the target is not to claim robustness of investigation nor any statistically relevant findings. The target of the research is to continually create solutions to explore that can make a difference for people with challenges in functional physical ability. In this body of work these people typically range from the most profoundly handicapped along a continuum to those more able. Research designs are observational, built upon prior experiences, and generally case-study based, using mixed-methods analysis though primarily qualitative due to end-user differences and specific profiles. The term 'case study' is used herein lightly, without the depth of inquiry typically associated with the term. In presenting media such as visual stimulus as direct and immediate real-time feedback for motivating participant movement within a created environment, the author posits that his position aligns with the mature Cézanne, who "had no designs on the field of vision except to uncover the designs he saw in it". In other words, the created environments offer an observation of innate human designs and intentions that act as a conduit from the participant, through the invisible space, into the responsive media, and back into the participant in ever-evolving cycles of causal action-interactions. Thus, in this way, afferent-efferent neural feedback loop closure is achieved towards technological tools for next-generation therapists who target inclusive well-being. Relatedly, [40] reflected on how play is considered important to a child's development. When a child is playing, a potential developmental zone is created—referred to as the Zone of Proximal Development (ZPD), which is defined as the distance between the actual


level of development—determined through a child's own way of solving problems—and the potential level of development—defined through the supplemental guidance of an adult [39]. In the context of the research presented herein—where both children and adults are targeted as participants, and where typically profoundly disabled adults may have the mental age of a child—the design of the interactive environment, with feedback stimuli that respond directly and immediately to participant input, offers an interactive play space where one can make music or create/manipulate visuals through movement that is buoyantly supported whilst feeling good with water surrounding the body: the head is immersed in the creating/playing interactions, the body is immersed in the water. Alongside, a facilitating 'More Knowledgeable Other' (MKO [39]) promotes the creating and playing through interactions with the participant, using the interactive multimedia as a vehicle for the activities, e.g. challenging specific tasks of participant feed-forward motion input to stimuli feedback. This activity targets creating a development opportunity for nuanced progress under the targeted (re)habilitation outcomes. Further, [2] refers to interactions such as those predicted to be afforded by this design and PoC test as instrumental exploration, motivated and learned by cause and effect, and surprise exploration, awakened by pure and simple novelty. The instrumental exploration links to surprise exploration achieved from the interactive environment's empowerment, offering relations as forms of motivation due to the novelty [2]. Such novelty can often be frowned upon in research evaluations; however, in the context of this design, PoC, and larger body of work, novelty is a targeted ongoing aspect that aligns with an emergent model titled Zone of Optimised Motivation (ZOOM) of incremental task challenges within intervention to continually create novel experiences from the interactions—in other words, a balancing that matches the participant's profile [7].

16.14 Further Challenges, Critique, and Reflections Toward Future Research

Besides attaining the funding to research the concept further, there are challenges to address. In observations of pool use in (re)habilitation it is evident that many handicapped participants enjoy floating horizontally on the surface of the water with buoyancy aids rather than being supported in an upright position. In such a supported upright position there may be a need for diving weights on the lower limbs—however, these were not required by the PoC test participant in the pool. It can also be a consideration to include an overhead camera for tracking movement whilst floating on one's back, with a corresponding ceiling projection or other monitoring means in place to show feedback. In the context of the PoC it should be noted that the assessments and evaluations by attending therapists and pool staff concerned concept potentials, whereas evaluations in the holistic body of work consider movement action potentials and related human


linkages as a foundational unit of analysis. Thus, the PoC evaluations as cited should be regarded accordingly as relating to a non-formal exploration. This PoC test did not have the opportunity to implement a Head Mounted Display (HMD) instead of the monitoring apparatus used. Increased immersion in the feedback is a distinct possibility with HMD use, and thus could be a hypothesis of future research explorations. However, under such a hypothesis, such use would likely need to be restricted to short periods due to the discomfort of wearing. This reflection is especially in respect of the early explorations in the holistic larger body of work, where around 1999 Barco active-shutter glasses were considered by handicapped participants so uncomfortable that they shook their heads to refuse wearing them, preferring instead to experience the visualisations in mono. Such glasses at the time were heavy and one-size-fits-all (which they did not). Linked to such exploration, details of a participant's optical health—whether they can use a HMD to experience stereo visualisations—are important to include in pre-session preparation to ascertain fit to the participant. Discussions with optometrists suggest this is apparently rarely undertaken. There is also a potential that wearing a HMD may socially isolate a session facilitator (the MKO, [39]) and hinder observational interpretation of the participant's emotional well-being in session activities. As presented in this text, towards establishing answers to these questions, use of a waterproof HMD has been discussed as a 3D-printed device17 mounting a waterproof mobile smartphone capable of stereo VR. It is a hypothesis that a system with a Virtual Reality Head Mounted Display (HMD) may be too claustrophobic for some participants or otherwise not optimal, and projected VR preferred. Because these fields of technologies (HMDs and smartphones) advance at a rapid pace (e.g. see [33]), it is planned that when funding is realised an optimal set-up of HMD and smartphone will be researched according to budget and fit to need. A final challenge to mention is that not all facilities have such a pool plus projection screen/monitor space as an inbuilt facility—what can be considered an optimal environment, where air movement can be regulated to prevent damaging effects of the aquatic environment such as corrosion of motorised or other equipment18—e.g. see Hotell Tylösand spa and pool with projection screen lowered (Fig. 16.7). It is a vision of this research to explore towards realising and increasing adoption/creation of such facilities that can benefit the targeted communities of users in (re)habilitation—or rather AquAbilitation, a term coined and originated in this work. A related consideration is how such designs for creating and playing (aligned to entertainment and learning) can be used for benefit beyond the (re)habilitation posited herein—in other words, installed in exclusive villas, used as exercise and party pools enabling owners and guests to self-create real-time projected paintings from their submerged body motion, as well as to archive prints of created images

17 E.g. https://www.aniwaa.com/blog/best-3d-printed-vr-headset-for-smartphones/.
18 See Pool Water Treatment Advisory Group https://www.pwtag.org/pool-temperatures-december-2010/.

16 AquAbilitation: ‘Virtual Interactive Space’ (VIS) with Buoyancy …



Fig. 16.7 a + b Hotell Tylösand, Halmstad, Sweden, showing retractable projection screen as an ideal setting for AquAbilitation in physical training or with audience entertainment mode. a ©Kicki Norman, b ©Pamela.se [both with permission]


A. L. Brooks

for their art collection, this being contemporary visual art realised from movement training. Figure 16.7 illustrates an ideal set-up of a hotel facility with a projection screen of significant size to offer suitable visual feedback for a participant undertaking AquAbilitation. Currently the research has available a heavy sixty-inch monitor (previously used as a video-conferencing screen) with HDMI input; a 3 m × 2 m Altuglas screen and projector; and a sound system. The optimal pool and spa facility at Hotell Tylösand, Halmstad, Sweden has been visited to view possibilities and, due to the hotel's association with an art gallery, contact has been made towards partnering in a future funded project. A later iteration of the research design, and further elaboration of the system, is the positioning of additional video cameras around the participant to record from both sides and behind the body so as to monitor and measure movements. Later still, into the future, a bespoke pool facility with built-in cameras, wireless signal transmission, optimal multimedia monitoring means, harness participant support, etc., is envisaged as an accessible and inclusive research resource to advance the field of technologies for inclusive well-being.

16.15 Closing Summary

In summary, there are many challenges involved in the design herein posited as AquAbilitation with VIS/VR stimulus. These challenges align with numerous assessments and evaluations of the holistic body of work that has developed and evolved over recent decades towards realising compendia of interactive systems for supplementing traditional (re)habilitation. Additionally, it is complex to decipher immediate impact (let alone impact transcending to activities of daily living [ADL], which is always a challenge in this work); the same applies to any generalising of theoretical frames relating emotion and bodily responses, due to idiosyncratic participant differences. For example, it can be questioned within which theoretical framework any outcomes are pertinent to discuss as innate to the research. The James-Lange Theory (emotions evoked following a bodily response to a stimulus, e.g. [13]), the Cannon-Bard Theory (a bodily response and an emotion occur at the same time following a stimulus, e.g. [16]), or even the Schachter-Singer Two-Factor Theory (where an emotion is the result of an interpretative label applied to a bodily response to a stimulus [35]) may be applicable. Alternatively, as posited herein, a participant may have an innate embedded pre-emotion related to pre-awareness of interaction potentials associated with motion-cues. These are acted upon to activate initial bodily input within a sensing zone, which sequentially activates a selected multimedia response (e.g. in the form of visual or auditory feedback) that, when sensed and perceived as a direct and immediate result of the participant's input, evokes a sense of self-agency related to self-efficacy that stimulates the emotional human psyche. This basic human linkage relative to causality seemingly happens without the need for any so-called Emotional Intelligence (EI) to link reasoning into


the equation (see e.g. [26]). This suggests that closure of the afferent-efferent neural feedback loop does not require engaged cognition, but rather operates at an unconscious or sub-conscious level of interaction, with baseline cognition unifying emotion directly with sensing/perception but not directly with conscious thought: thus positing a potential action-to-perception-to-action causal cycle without cognitive input. Hence the claim in this body of work for optimised systems, matched to end-users as participants, that achieve closure of their afferent-efferent neural feedback loop. This position is posited as misaligned with contemporary psychology literature. For example, [37] suggests "growing evidence that emotion is represented at multiple levels of our sensory systems and infuses perception, attention, learning, and memory", positing thought as innate to emotional representations:

…primary function of emotional representations is to produce unified emotion, perception, and thought (e.g., "That is a good thing") rather than discrete and isolated psychological events (e.g., "That is a thing. I feel good"). The emergent view suggests ways in which emotion operates as a fundamental feature of cognition, by design ensuring that emotional outcomes are the central object of perception, thought, and action. Todd et al. [37]

Whilst the author is neither a psychologist nor a therapist, the argument put forward is based upon decades of experience from the creation and use of interactive systems within (re)habilitation, where participants' cognitive functioning attributes can be questioned with regard to their ability to engage thought in such achieved human afferent-efferent neural loop closures. A closing point is to offer such environments, be it this posited PoC AquAbilitation environment or others created to date, for further investigation with such expert research partners. In such therapeutic well-being programmes, it is herein posited as a future research hypothesis that an AquAbilitation session immediately preceding an intervention session would result in maximum benefit for both end-users. This is considering one end-user as participant, in respect of stimulated motivation and engagement (predicted to be higher than without the AquAbilitation session), and the second end-user as session facilitator, due to the participant's higher arousal and wakefulness state (from the AquAbilitation session) enabling higher goals to be reached within intervention towards realising advances in nuance of progress (microdevelopment). A case of "watch this space"…

References

1. Bateson, G.: A theory of play and fantasy. In: Bruner, J.S., Jolly, A., Sylva, K. (eds.) Play: Its Role in Development and Evolution, pp. 119–129. Penguin Books (1976)
2. Berlyne, D.E.: Novelty and curiosity as determinants of exploratory behaviour. Br. J. Psychol. 41, 68–80 (1950)
3. Boian, R.F., Burdea, G.C., Deutsch, J.E., Winter, S.H.: Street crossing using a virtual environment mobility simulator. In: Proceedings of IWVR 2004, Lausanne, Switzerland (2004). Available at https://ti.rutgers.edu/publications/papers/2004_iwvr_boian.pdf


4. Brooks, A.L.: Virtual Interactive Space (V.I.S.) as a movement capture interface tool giving multimedia feedback for treatment and analysis. Science Links Japan (1999)
5. Brooks, A.L., Camurri, A., Canagarajah, N., Hasselblad, S.: Interaction with shapes and sounds as a therapy for special needs and rehabilitation. In: Sharkey, P., Sik Lányi, C., Standen, P. (eds.) 4th International Conference on Disability, Virtual Reality and Associated Technologies, pp. 205–212. University of Reading Press (2002)
6. Brooks, A.L.: Enhanced gesture capture in virtual interactive space (VIS). Dig. Creativity 16(1), 43–53 (2005)
7. Brooks, A.L.: Intelligent decision-support in virtual reality healthcare & rehabilitation. Stud. Comput. Intell. 326, 143–169 (2011)
8. Brooks, A., Petersson, E.: Recursive reflection and learning in raw data video analysis of interactive 'play' environments for special needs health care. In: Healthcom 2005: The 7th International Workshop on Enterprise Networking and Computing in Healthcare Industry, Busan, Korea, IEEE/Korea Multimedia Society, pp. 83–87 (2005)
9. Brooks, A.L., Petersson, E.: Play therapy utilizing the Sony EyeToy®. In: Presence 2005: The Eighth International Workshop on Presence (2005)
10. Brooks, A.L., Petersson, E.: Stillness design attributes in non-formal rehabilitation. In: CADE2007: Computers in Art Design and Education, Perth, Australia, pp. 36–44 (2007)
11. Camurri, A.: Interactive dance/music systems. In: Proceedings of the 1995 International Computer Music Conference, pp. 245–252. International Computer Music Association (1995)
12. Camurri, A., Hashimoto, S., Ricchetti, M., Ricci, A., Suzuki, K., Trocca, R., Volpe, G.: EyesWeb: toward gesture and affect recognition in interactive dance and music systems. Comput. Music J. (MIT) 24(1), 57–69 (2000)
13. Coleman, A.E., Snarey, J.: James-Lange theory of emotion. In: Goldstein, S., Naglieri, J.A. (eds.) Encyclopedia of Child Behavior and Development. Springer (2011)
14. Csikszentmihalyi, M.: Flow: The Psychology of Optimal Experience. Harper Perennial (1991)
15. Csikszentmihalyi, M.: Creativity: Flow and the Psychology of Discovery and Invention. Harper Perennial (1996)
16. Dror, O.E.: The Cannon-Bard thalamic theory of emotions: a brief genealogy and reappraisal. Emotion Rev. 6(1), 13–20 (2013)
17. Galvin, J., McDonald, R., Catroppa, C., Anderson, V.: Does intervention using virtual reality improve upper limb function in children with neurological impairment: a systematic review of the evidence. Brain Inj. 25(5), 435–442 (2011)
18. Glegg, M.N., Tatla, S.K., Holsti, L.: The GestureTek virtual reality system in rehabilitation: a scoping review. Disabil. Rehabil. Assist. Technol. 9(2), 89–111 (2014)
19. Hasselblad, S., Petersson, E., Brooks, A.L.: Empowered interaction through creativity. Dig. Creativity 18(2), 89–98 (2007)
20. Holbrook, M.B.: Consumer Research: Introspective Essays on the Study of Consumption. Sage Publications (1995)
21. Horprasert, T., Harwood, D., Davis, L.S.: A statistical approach for real-time robust background subtraction and shadow detection. In: Proceedings of the IEEE Frame-Rate Applications Workshop, Kerkyra, Greece (aka Chalidabhongse) (1999)
22. Kozel, S.: Closer: Performance, Technologies, Phenomenology. MIT Press (2007)
23. Laban, R.: Modern Educational Dance. Macdonald and Evans (1963)
24. Leont'ev, A.N.: Problems in the Development of the Mind. Progress Publishers (1982)
25. Lisina, M.L.: Communication and Psychological Development from Birth to School Age (Danish). Sputnik Press (1989)
26. Mayer, J.D., Roberts, R.D., Barsade, S.G.: Human abilities: emotional intelligence. Annu. Rev. Psychol. 59, 507–536 (2008)
27. Metcalfe, R.L.: Life of Vincent Priessnitz, Founder of Hydropathy. Richmond Hill Press (1898). Available https://archive.org/details/lifeofvincentpri00metciala/page/n6/mode/2up
28. Olofsson, B.K.: Play for Life (Swedish). HLS Förlag (1987)
29. Petersson, E., Bengtsson, J.: Encouraging co-operation as participation: the beginning of a beautiful friendship? European Community, Innovation Programme, Transfer of Innovation Methods IPS-2000-00113 (2004)


30. Rand, D., Kizony, R., Feintuch, U., Katz, N., Josman, N., Rizzo, A., Weiss, P.: Comparison of two VR platforms for virtual reality rehabilitation: video capture versus HMD. Presence 14(2), 147–160 (2005)
31. Shew, J.: The Water-Cure Manual. Fowlers and Wells (1852). Available https://archive.org/details/watercuremanual00shewgoog/page/n14/mode/2up
32. Sokolowski, J.A., Banks, C.M.: Principles of Modeling and Simulation. Wiley (2009)
33. Steinicke, F.: Being Really Virtual: Immersive Natives and the Future of Virtual Reality. Springer (2016)
34. Stevenson, A. (ed.): Definition of water cure. In: Shorter Oxford English Dictionary, vol. 2: N–Z, 6th edn., p. 3586. Oxford University Press (2007)
35. Sullivan, L.E.: Two-factor theory of emotion. In: The SAGE Glossary of the Social and Behavioral Sciences, p. 524. SAGE Publications (2009)
36. Sutherland, I.E.: The ultimate display. In: Proceedings of the IFIP Congress, pp. 506–508 (1965)
37. Todd, R.M., Miskovic, V., Chikazoe, J., Anderson, A.K.: Emotional objectivity: neural representations of emotions and their interaction with cognition. Annu. Rev. Psychol. 71, 25–48 (2020)
38. Brooks, A.L., Sorensen, C.: Communication Method and Apparatus. United States Patent US6893407 (2005). USPTO, available https://www.google.com/patents/US6893407
39. Vygotsky, L.S.: Mind in Society: The Development of Higher Psychological Processes. Harvard University Press (1978)
40. Vygotsky, L.S.: The genesis of higher mental functions. In: Wertsch, J.V. (ed.) The Concept of Activity in Soviet Psychology, pp. 144–188. M.E. Sharpe (1981)

Chapter 17

Interactive Multisensory VibroAcoustic Therapeutic Intervention (iMVATi)

Anthony Lewis Brooks

Abstract This chapter introduces two case studies that exemplify how interactive visualisations were introduced to supplement an interactive vibroacoustic therapeutic intervention setup for adolescents diagnosed as profoundly disabled, each having individual dysfunctional conditions. The hypothesis behind this research on multisensory stimuli intervention aligns with how humans differ in needs, desires, and preferences, and it is posited towards optimising selectable feedback stimuli within intervention targeting inclusive well-being. The studies were associated with a European funded research project (https://www.bristol.ac.uk/carehere), with end-users overall being handicapped and/or elderly and/or undertaking rehabilitation, in which the author coordinated the Swedish partner's research and user studies, his research having been the catalyst responsible for gaining the project. Both case studies took place in a school for special needs in Landskrona municipality, Sweden; they were conducted as part of the day-to-day activities of the school rather than being laboratory-based.

Keywords Vibroacoustic therapy · Multimodal stimuli · Modes of therapeutic interaction · Virtual interactive space (VIS) · Profound and multiple learning disabled (PMLD)

17.1 Introduction

This chapter contributes an overview of two case studies illustrating various technologies for inclusive well-being, where the focus was on establishing a 'treatment' intervention environment concept that was modular and flexible, offering adaptable and tailored multisensory feedback stimuli responding to participant input. Both participants are teenagers diagnosed as profound and multiple learning disabled (PMLD).

A. L. Brooks (B) Aalborg University, Aalborg, Denmark e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_17


Interactive feedback stimuli are selectable multimedia that respond to a person's input (feedforward) movement sensed within what is coined Virtual Interactive Space (VIS; Brooks 1999). These stimuli typically include interactive auditory and/or visual environments, for example (serious) games, virtual reality, music making (typically improvisation as real-time composition), digital painting (abstract), visual manipulation of geometric patterns, and more. In these case studies a vibroacoustic setup was incorporated towards realising vibroacoustic therapeutic intervention in an interactive form, i.e. where the participants generate their own audio-visual plus vibratory stimulus within VIS environments. Related literature cited includes a body of work titled 'Sound Therapy', elaborated by Swingler [59] and Ellis (1995–2004), wherein the approach to intervention aligns with this author's experiences, beliefs, and positioned argument. The setup incorporated interactive multisensory vibroacoustic therapeutic intervention according to staff decisions based on knowledge of each participant and comprehension of their likely response to the additional stimulus of visuals. This means that a participant's feedback incorporated auditory and visual stimuli synchronous with vibratory haptic/tactile stimulus, all empowered for self-triggering/self-manipulation/self-control towards promoting a sense of achievement, self-agency, and efficacy. This research was catalyst to national and international (European) funded projects within rehabilitation, with end-user participants across ranges of age, diagnosis, and situation, including handicapped, elderly, and/or undertaking rehabilitation.1

17.2 Biofeedback

A point of departure for this chapter is biofeedback. The holistic body of work that led to the European project mentioned in the opening of this chapter historically includes research on biofeedback systems. These are systems that typically use electrodes positioned on parts of a human body to generate signals indicating bodily function and change. As well as informing, via data on bodily function, on indications of the participant's experience and related emotion (e.g. heartbeat, brainwaves, galvanic skin response…), the same sensed data can be mapped to generate direct feedback stimuli, for example in response to an aroused state, to keep a participant aroused or to de-stress them. This research has been referred to as a form of biofeedback in that, under a mixed methods approach, both qualitative and quantitative analysis can be implemented. Data sensed from a human body can be mapped to trigger and effect feedback stimuli that in turn can affect the participant; the affected participant reacts to the stimuli by subsequently generating data that again triggers and effects. This causality is referred to as closure of the human afferent-efferent neural feedback loop, and is seen

1 https://www.bristol.ac.uk/carehere.


as a powerful motivator within (re)habilitation and situations associated with well-being. An emergent model of optimised motivation from the research is reported in Brooks and Petersson [11]. Aligned to the emergent model is that correlations between sensed data can indicate differences in reactions to specific stimuli, aiding setup design and pre-sets. A contemporary example of biofeedback is where miniature cameras are positioned inside a Head Mounted Display (HMD) to generate videos of which aspect of a created visualisation is being looked at and, at the same time, what the person's emotional reaction to it is.2 Within the headset this can be via additional miniature cameras that capture pupil dilation, and other signals can additionally be sourced, such as those mentioned above (heartbeat, galvanic skin response, brainwaves…). With such a setup, a designer of a (virtual/augmented) experience can research human responses to a media creation and adjust accordingly, subsequent designs thus being informed by a user's specific experience. In this chapter, HMDs were not used with the two PMLD participants, so that full facial reactions were observable to indicate emotions associated with the experience of the multisensory stimuli. In line with inclusive well-being, participants in the applied research to date span ages (children, adults, aged) and ranges of (dys)functional ability and diagnosis. Thus, a catalyst of the body of work is as a supplement for traditional intervention targeting impact in (re)habilitation. The work has also researched outside of (re)habilitation, where the VIS multisensory environments, including the biofeedback setup, have been used in the context of installation and performance art. From these contexts, learnings have been impactful for the (re)habilitation and inclusive well-being aspects as well as the multisensory multimodality aspects. In a similar fashion, the healthcare activities inform the art context.
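The causal loop described above (sense bodily data, map it to stimulus parameters, let the stimulated participant's reaction yield new data) can be sketched in a few lines. This is an illustrative sketch only, not code from the cited projects; the function names, the heart-rate signal, the target value, and the mapping rule are all hypothetical assumptions.

```python
# Minimal biofeedback-mapping sketch (hypothetical names and values):
# a sensed bodily signal, here heart rate, is mapped to a feedback-
# stimulus parameter, e.g. the level of a de-stressing soundscape.

def map_to_stimulus(heart_rate_bpm, target_bpm=70.0):
    """Map sensed arousal to a stimulus level in [0, 1].

    The further the signal drifts from the (assumed) resting target,
    the stronger the calming stimulus fed back to the participant.
    """
    deviation = abs(heart_rate_bpm - target_bpm)
    return min(1.0, deviation / 50.0)


def feedback_step(heart_rate_bpm):
    """One pass of the afferent-efferent loop: sense -> map -> stimulate.

    The participant's reaction to the returned stimulus level yields
    the next sensed value, closing the loop.
    """
    return {"stimulus_level": map_to_stimulus(heart_rate_bpm)}
```

In a session such a step would run continuously, and the same pattern applies to any of the other signals mentioned above (galvanic skin response, brainwaves, etc.).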
This chapter, alongside presenting the two case studies with participant end-users, additionally comments on the thinking behind 'VIS-based multisensory interactive vibroacoustic therapeutic intervention' and related practical implementations of such empowering technologies to realise meaningful experiences for end-users, as found in this research. Input from noteworthy luminaries in the field is cited extensively for clarity of association to this work and to position it in the field. 'End-users' is a term used in this field to include more than the participant/patient/client, who in this research is considered central. The term additionally refers to session facilitators, who may be therapists, individual carers of participants, or other healthcare professionals. Family members and friends can also be referred to as end-users in the context of the participant's shared systemic experience, for example where co-playing a game such that social interactions are targeted. In other words, the term encompasses all those who are using the system at a given time. The next section introduces the multisensory setup.

2 https://pupil-labs.com/products/vr-ar/.


17.3 Multisensory Stimulus: Sound, Sound Therapy, Music Therapy, Vibroacoustic Intervention

Raghu [51] informs how:

Sound is a form of energy produced by vibrations caused by movement of particles. Sound can travel through solids (such as metal, wood, membranes), liquids (water) and gases (air). The sound vibrations that reach our ear are produced by the movement of particles in the air surrounding the source of sound. The movement or vibration of particles produces waves of sound. Sound waves are longitudinal and travel in the direction of propagation of vibrations. The pitch of sound is related directly to its frequency, which is given by the number of vibrations or cycles per second. The higher the pitch of sound, the higher is its frequency, and the lower the pitch, the lower is its frequency. Human ear can hear sounds of frequencies ranging from 20 – 20,000 cycles per second (or Hertz – Hz). Sound waves can be visually seen and studied using 'Chladni' plates…

Sound is everywhere. There is perpetual movement and action in the world around us, and this produces a variety of sounds, such as those coming from Nature, from animals, those generated by humans in the form of speech or music, those that are generated by vehicles, machines, gadgets that are used for comfort, leisure and convenience. What is interesting or important about Sound? Sound is an integral part of our lives. Whether we like it or not, the vibrations of these sounds reach us, not only through the hearing sense, but also by coming into contact with the physical body. The sound vibrations can affect us either positively or negatively, entering into our being, via the physical, mental and emotional realms, thereby affecting our consciousness as a whole. Therefore, while it is important, it would also be interesting, to know more about the nature of sound, how it affects us, and in what way we can harness it positively and try to reduce its negative impact on us. Raghu [51, p. 75].

There is abundant literature on using sonic/audio (and/or visual) responsive feedback in (re)habilitation, including the author's archive covering publications, projects and activities spanning multiple decades, accessible at https://vbn.aau.dk/da/persons/103302. These responsive feedback stimuli are established within a created modular environment that can be tailored to an individual profile. The conceptual environment is referred to as Virtual Interactive Space (VIS) and was first published in 1999 at the World Congress for Physical Therapy (WCPT) in Yokohama, Japan. There is also much literature on vibroacoustic therapeutic intervention (see next section), where various terms are used, such as Vibroacoustic Therapy (VAT), e.g. [50], Vibroacoustic Music (VAM), e.g. [64], and Vibroacoustic Sound Therapy (VAST) by Ellis (2004). The two case studies presented herein are cautiously positioned in relation to this literature. This is stated since VAT and VAM both use specific low-frequency auditory impulses (30–120 Hz, or musically approximately B0–B2), either stand-alone or mixed within music; both are reported within the context of the 'Music Therapy' discipline. Aligned, [43] offers in-depth reflections on emotional music therapy relations across pre-recorded music and musical improvisation. Ellis' (2004) VAST study, and related research, does not argue for a position in the music therapy literature; rather it argues against such inclusion.


This is potentially why Ellis' (2004) study does not state specifically what is used in the vibroacoustic sound therapy cases apart from a Soundbeam™, a chair built with audio speakers inside, and a music recording (with no detail on the content of either the MIDI or the recording being focused upon low frequencies). The two case studies presented herein, as well as the author's holistic work overall, are positioned more in alignment with Sound Therapy [20, 25, 59] (Ellis 1995) than with Music Therapy approaches to vibroacoustics, e.g. [50, 64]. This is because in the case studies there was no focus on restricting the tonal range of audio to 120 Hz and below using solely sine-wave tones (both participants enjoyed a range of sounds and effects), and no melodic or harmonic form was targeted. Also, the author is aware of Sound Therapy's work with visual stimuli aligned to Soundbeam triggering of audio; thus the 'thinking' on multisensory stimuli aligns with Ellis and the Soundbeam personnel. In the next section [20–25, 27, 28, 59] are sourced extensively in elaborating on the Soundbeam and the Sound Therapy approach.
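The 30–120 Hz band used in VAT/VAM can be related to note names via the standard equal-temperament MIDI-note formula; the short sketch below (not from any cited system) confirms the text's "approximately B0–B2" correspondence, since B0 is MIDI note 23 and B2 is MIDI note 47.

```python
def midi_to_freq(midi_note):
    """Equal-temperament frequency of a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

# B0 (MIDI 23) is ~30.87 Hz and B2 (MIDI 47) is ~123.47 Hz,
# so the pair brackets the 30-120 Hz band cited for VAT/VAM.
B0_HZ = midi_to_freq(23)
B2_HZ = midi_to_freq(47)
```

The same formula also shows why an octave corresponds to a doubling of frequency: each step of 12 MIDI notes multiplies the result by 2.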

17.4 Soundbeam and Sound Therapy

Soundbeam3 is a device using linear ultrasonic sensing of movement to trigger MIDI notes (typically spanning a wide range of frequencies depending on the MIDI patch selected and the range of notes programmed). It has been a leading product in the field for approximately thirty years, with units sold globally and a network of therapist users. Ellis in his research [20, 25] (1995, 2004) primarily used Soundbeam with MIDI whole-tone scale sounds. Thus, he does not classify his research specifically as music therapy: see [59], who explains Ellis' approach and Music/Sound Therapy eloquently:

In traditional music therapy, the less the child is able to say something with sound because of a physical or cognitive disability, the heavier becomes the therapist's responsibility for empathy and interpretation. The main focus and engine for the mood and meaning of the music which is happening is on the therapist, and this creative and interpretative role is increasingly shifted away from the child with more profound levels of disability. Consequently, as the liberating potential of musical expression increases, it becomes correspondingly less achievable. This allocation of creative 'power' may have no clinical or therapeutic rationale, it may simply result from what is physically possible. Swingler [59, np].

In justifying use of the Soundbeam he adds:

The experience of initiation is central to the success of Soundbeam, especially for individuals with profound disabilities. If one's overall experience of life is essentially passive, it may be difficult to develop any concept of 'selfhood', any idea of oneself as a separate individual. What Soundbeam offers, perhaps for the first time and regardless of the individual's degree of immobility, is the power to make something happen. This is the vital experience of "that was me!", which can function as the foundation stone for further learning and interaction. This use of sound as the source of motivation is an extremely simple but crucially important application of the technology; it is impossible to overstate its value. Swingler [59, np].

3 https://www.soundbeam.co.uk.

Swingler [59] elaborates on how Ellis [20, 25] (1995), one of the most prominent Soundbeam researchers, established a systematic long-term evaluation of Soundbeam's potential for children and adults with disabilities. In sharing the technique of use he states that the Soundbeam device (the sensing ultrasonic beam) "is positioned so that as soon as the child begins to move an interesting sound is triggered, motivating further movement and, eventually, radically enhanced posture, balance and trunk control." Due to individual differences and handicap profiles, quantitative measures are rarely implemented, though given contemporary computer vision technologies and related software it is possible to measure using automatic extraction of motion cues: what Camurri et al. (2011) relate as Quantity of Motion (QoM), including contraction and expansion (CI), kinematic cues, and more (see pp. 25–27; also [11]). Also relevant to the author's research is how Swingler [59] states: "All of this is accomplished in parallel with a strong sense of fun and achievement. For the child, the therapeutic dimension of what is happening is irrelevant". Importantly for this chapter author's research, the phrase "for the child" indicates that the targeted therapeutic intervention is considered a facilitator layer of knowledge that need not be conveyed, as it could demotivate if targeted formal 'clinical' goals in a session were not achieved. Swingler [59] further posits how in Ellis' use of Soundbeam:

….Sound itself is the medium of interchange… This approach contrasts with traditional models of music therapy, with its emphasis on 'treatment', direct intervention and imposition of external stimuli determined by an outside agent. Even where a music therapist may claim to be 'responding' to a patient's music, this is a personal response on the part of the therapist.
Often the therapist uses, or moves towards, a traditionally based musical language comprised of melody, harmony and rhythm, so limiting the soundscape and genre of ‘musical’ discourse. The ‘patient’ or ‘client’ is viewed in a clinical way, with a condition which needs to be treated or ameliorated. There are clearly defined goals with these treatments, with success measured according to how effective the treatment has been in terms of the clinical or medical condition. The modus operandi of these approaches is essentially from the outside -in, with an emphasis on clinical intervention rather than independent learning. Swingler [59, np].
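A concrete, purely illustrative reading of such a setup: an ultrasonic beam reports the distance of a moving limb, and equal zones along the beam are mapped to ascending whole-tone steps. All parameters below (beam length, zone count, base note) are hypothetical assumptions for the sketch, not Soundbeam's actual specification.

```python
def whole_tone_note(distance_m, beam_length_m=2.0, base_note=48, n_zones=12):
    """Map a position along a sensing beam to a whole-tone-scale MIDI note.

    The beam is divided into n_zones equal zones; each zone further
    from the sensor steps the scale up by a whole tone (2 semitones),
    in the spirit of Ellis' whole-tone Soundbeam programming.
    """
    clamped = max(0.0, min(distance_m, beam_length_m))
    zone = min(n_zones - 1, int(clamped / beam_length_m * n_zones))
    return base_note + 2 * zone
```

Any movement, however small, thus yields an immediate and predictable sonic change, which is the "that was me!" initiation experience Swingler describes.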

In considering the composition of the human body,4 of which approximately sixty percent is water, it is not surprising that sonic frequencies have an effect on the movement of its inner structures. As compositional structures such as cells, atoms and molecules are agitated by the sonic waves, increased interactions and communications take place as a result of deep cellular stimulation, which some claim results in harmonious and healthy resonances being restored to the body.5 Raghu [51] informs on tone relationships:

4 https://en.wikipedia.org/wiki/Composition_of_the_human_body.
5 https://www.vitalhealthcare207.com/index.php?p=504881.


Sound vibrations can come in contact physically through the body and have an effect on our consciousness at the mental, emotional and spiritual levels. Sounds that are musical can be categorized as consonant sounds that are pleasant, and dissonant sounds that are unpleasant or not so pleasant. Musical sounds are comprised of notes in increasing or decreasing order of pitch (frequency). The interval between notes can give rise to consonance and dissonance. Example, an interval of an octave -a range of seven notes - is said to be consonant, whereas an interval between adjacent notes can be dissonant. These are studied by experimenting with musical notes and intervals, their visual patterns and their effect on consciousness. While consonant intervals can cause happiness, joy, courage or calmness, dissonant intervals can cause tension, anger, fear or sadness, thereby affecting the emotional aspect of consciousness. Raghu [51, p. 75].

Here we can look at Ellis’ decision to use whole tone scale intervals and issues such as beats, resonance and entrainment. Raghu [51] informed on the topic of ‘beats’ that:

When two (or more) sounds are produced having a frequency difference of less than about 20 or 30 Hz, you will hear “beats.” The frequency of the beats will be at the difference frequency. If the frequency difference is larger than about 20 or 30 Hz, a tone is usually perceived rather than distinct beats. Raghu [51, p. 78].
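Raghu's description of beats can be illustrated numerically: summing two sinusoids whose frequencies differ by a few hertz produces an amplitude envelope that pulses at the difference frequency. A minimal sketch (the function name and sample rate are illustrative, not from any system described in this chapter):

```python
import math

def beat_envelope(f1, f2, duration=1.0, rate=8000):
    """Sum two equal-amplitude sine tones and return the samples.

    When |f1 - f2| is below roughly 20-30 Hz, the ear hears a single
    tone pulsing at the difference frequency rather than two tones.
    """
    return [math.sin(2 * math.pi * f1 * n / rate) +
            math.sin(2 * math.pi * f2 * n / rate)
            for n in range(int(duration * rate))]

beat_freq = abs(440 - 444)       # A4 against a slightly sharp A4
samples = beat_envelope(440, 444)
print(beat_freq)                 # pulsing heard: 4 beats per second
```

The summed signal never exceeds twice the amplitude of one tone; the audible "beating" is simply that combined amplitude swelling and fading at |f1 − f2| Hz.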

On the topic of ‘resonance’ [51] posited how:

A musical instrument can be forced into vibrating at one of its harmonics (with one of its standing wave patterns) if another interconnected object pushes it with one of those frequencies. This is known as resonance - when one object vibrating at the same natural frequency of a second object forces that second object into vibrational motion. The word resonance comes from Latin and means to “resound” - to sound out together with a loud sound. Resonance is a common phenomenon of sound production in musical instruments. Raghu [51, p. 78].

On the subject of ‘entrainment’ [51] shares how:

This involves changing the natural vibrational frequencies of an object and replacing them with different vibrational frequencies of another object, thereby actively changing the vibrations of one object to that of another object. Entrainment is considered as an active method, whereas resonance is considered as a passive method. (Healing Sounds, n.d.). Raghu [51, p. 78].

Sound qualities such as ‘beat’, ‘resonance’ and ‘entrainment’, as pointed out by [51], align with the author’s holistic body of work titled SoundScapes [8] and the positioning behind the topic of Aesthetic Resonant Environments (see also [10, 14, 21–28, 35,…]), where variations of the term are elaborated by different authors but with an aligned common thread of meaning. Acknowledged by Raghu [51] regarding musical/bichordal intervals and effect is that:

One of the reasons why listening to music is so healing for us, is due to the power of musical intervals. A musical interval is created when one note is played with another note. The interval can be created by playing two notes together, or one after the other. When two notes are played together the interval has a stronger effect on us. The frequencies of the two notes

332

A. L. Brooks

of the interval create a mathematical ratio that affects the body in different ways. When we listen to all the intervals in the musical scale it is profoundly healing for our body and our mind. Pythagoras discovered that the ratios of the musical intervals were found in nature, the planets and constellations. (Simon, H., n.d). Raghu [51, p. 78].
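The mathematical ratios that the quotation refers to follow directly from note frequencies. In equal temperament a note n semitones above a reference has a frequency ratio of 2^(n/12), so an octave is exactly 2:1 and a perfect fifth is approximately 3:2. A small illustration (the helper function is hypothetical, for demonstration only):

```python
def interval_ratio(semitones):
    """Frequency ratio of an equal-tempered interval of the given size."""
    return 2 ** (semitones / 12)

print(interval_ratio(12))           # octave: exactly 2.0
print(round(interval_ratio(7), 4))  # perfect fifth: 1.4983, close to 3/2
print(440 * interval_ratio(7))      # E5 above A4 (440 Hz), roughly 659 Hz
```

The near-miss between 1.4983 and the just ratio 3/2 = 1.5 is the well-known compromise of equal temperament; the simple whole-number ratios Pythagoras observed belong to just intonation.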

Ellis’ research [20, 25] tended to use whole tone scales, a choice that may align with the observation that the notion of each musical interval having a unique psychological effect has not been fully determined (e.g. [44, p. 309]), and that affect may differ across cultures and situations [45]. Further, [16] offers findings from a factor analysis of bichords and music focused upon three aspects: (1) emotional evaluation, (2) activity, and (3) potency, where factors 1 and 2 proved most significant in relation to interval discrimination (p. 4). This study [16] also points to gender differences aligned with different intensities of emotional affect, something not discussed in the two case studies in this chapter, where one participant was male and one female. This section has focused on the auditory stimulus; the next section introduces the selected visuals.

17.5 Multisensory Stimulus: Visuals—Case Studies 1 and 2

In the case studies presented, two forms of interactive visualisation were predominantly used. The first interactive visual system used MIDI messages controlling a software synthesiser having three oscillators. Each oscillator could be selectively mapped to control channels on [a] an additive colour wheel, i.e. (1) Red, (2) Green, and (3) Blue, where values 0.00; 0.00; 0.00 equate to black and 1.00; 1.00; 1.00 equate to white; or [b] an alternative colour wheel controlling channels of (1) Hue, (2) Saturation, and (3) Value (or intensity or brightness), where white and black are added to the colour to adjust it. Patterns were programmed into the synthesiser so that geometric shapes are generated that are then ‘painted’ by adjustments to three MIDI continuous controllers (CC), where each is assigned to control one oscillator (1, 2, or 3). A continuous controller is a specific MIDI message capable of transmitting a range of values, usually 0–127—images as fully illustrated in Fig. 17.1 (on left wall) and Fig. 17.2. A video illustrating hand gestures of a choir conductor creating the images with three infrared sensors is available at https://www.youtube.com/watch?v=65gAT_RAfvU. The second interactive visualisation system used camera capture of the participant, where the image was processed in a software algorithm (Eyesweb6) to reflect a selectable threshold of movement quantity and quality that generated a form of ‘blobby’, ‘abstract’ digital painting. The participant could select which Soundbeam (ultrasound beam) he moved within to generate selected sounds and where the same

6 https://www.infomus.org/eyesweb_ita.php.


Fig. 17.1 Author (right) setting up a multisensory interactive vibroacoustic therapeutic intervention system for female participant M, who is lying on a vibroacoustic Soundbox™. This has embedded speakers that produce tactile stimulus for her experience of auditory feedback responding to movement, which M activates by her motions within three volumetric infrared sensing spaces. Her attending mother (next to the open door) looks on, with one staff member arranging a video camera and another positioned to observe. Visualizations (geometric shapes in this case) that M triggers through her movement can be seen projected onto the left wall facing M. A microphone is directed at M to capture session utterances that are processed in an effect unit. The computer workstation at the rear of the room is where all data/stimuli are sourced and mapped to the visual synthesiser software, with output to the projector

motions generated the digital painting image (Fig. 17.3). Examples of images from this research are shown in Fig. 17.4.
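The CC-to-colour mapping described for the first visual system can be sketched as follows: each 0–127 CC value is normalised to 0.00–1.00 per channel, for either the additive RGB wheel or the HSV alternative (converted here via the standard-library `colorsys` module; function names are illustrative, not from the original system):

```python
import colorsys

def cc_to_rgb(cc1, cc2, cc3):
    """Map three MIDI continuous controllers (0-127) to additive RGB channels."""
    return tuple(round(cc / 127.0, 2) for cc in (cc1, cc2, cc3))

def cc_to_hsv_rgb(cc1, cc2, cc3):
    """Map three CCs to Hue, Saturation, Value, then convert to RGB for display."""
    h, s, v = (cc / 127.0 for cc in (cc1, cc2, cc3))
    return tuple(round(c, 2) for c in colorsys.hsv_to_rgb(h, s, v))

print(cc_to_rgb(0, 0, 0))        # (0.0, 0.0, 0.0) -> black
print(cc_to_rgb(127, 127, 127))  # (1.0, 1.0, 1.0) -> white
```

In a live setup each of the three sensor zones would feed one CC, so a gesture in one zone shifts exactly one colour channel, which is what allows the participant to "paint" the generated shapes.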

17.6 Multisensory Stimulus: Tactile/Haptic = Vibroacoustic Therapeutic Intervention

Vibroacoustic therapeutic intervention has been practiced under different titles by an array of people for many years, in different forms and in differing settings, for a variety of patient profiles and conditions, e.g. physical, emotional, mental. Music therapy has been conducted primarily by musicians, such that “the dominant approach to the field has been subjective and founded in artistic and literary traditions” [64, p. 14]. Punkanen and Ala-Ruona [50], in their article abstract in the journal Music and Medicine, state how:


Fig. 17.2 Movement by end-users (participants with handicap) within three infrared sensor zones that are mapped to different MIDI CC channels controlling colours that ‘paint’ the different generated geometric images. Purposefully, the participant is positioned ‘inside’ the image (if possible) for a heightened sense of immersion (see [10])

Fig. 17.3 (Left) Male participant R seated upon Soundbox with original camera image upper right on screen; first stage of processing in Eyesweb algorithm—lower right on screen; Digital painting (upper left): (Right) Soundbox with desk and a variety of switches. Behind desk are two Soundbeams and central video camera. Altuglas Black and White projection screen behind [36]


Fig. 17.4 Examples of ‘digital painting’ through movement in this research (see [7])

Vibroacoustic therapy (VAT), traditionally considered to be a physical and receptive type of music therapy intervention, uses pulsed, sinusoidal, low-frequency sound on a specially designed bed or chair. Today VAT is viewed as a multimodal approach, whereby the therapist works with the client’s physiological and psychological experiences, incorporating a mind– body approach. Punkanen and Ala-Ruona [50, p. 128].

The VAT concept is credited to Olav Skille and Juliette Alvin, from 1968, in relation to the development of a music therapy model. Evolution of the model is again credited (ibid.) to Skille, from his 1982 definition of VAT as being:

…the use of sinusoidal, low-frequency (30–120 Hz), rhythmical sound–pressure waves mixed with music for therapeutic purposes/considering/ “low-frequency sound massage” would assist in the reduction of pain and other stress-related symptoms. Punkanen and Ala-Ruona [50, p. 128].
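Skille's definition of a pulsed, sinusoidal, low-frequency signal can be sketched as an amplitude-modulated sine wave: a carrier in the 30–120 Hz VAT band, with a slow modulator providing the pulsing. The parameter values below (40 Hz carrier, 0.5 Hz pulse) are illustrative only, not a clinical prescription:

```python
import math

def vat_signal(carrier_hz=40.0, pulse_hz=0.5, duration=2.0, rate=1000):
    """Pulsed sinusoidal low-frequency signal: a carrier in the 30-120 Hz
    band, amplitude-modulated by a slow 'pulse' envelope in [0, 1]."""
    samples = []
    for n in range(int(duration * rate)):
        t = n / rate
        envelope = 0.5 * (1 + math.sin(2 * math.pi * pulse_hz * t))  # slow pulsing
        samples.append(envelope * math.sin(2 * math.pi * carrier_hz * t))
    return samples

sig = vat_signal()
print(len(sig))  # 2000 samples: 2 s at 1 kHz
```

In practice such a signal would be mixed with music and routed to the transducers embedded in the treatment bed or chair; this sketch only shows the signal shape the definition describes.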

A range of literature points to favourable outcomes from vibroacoustic therapeutic intervention, for example with patients having fibromyalgia [47], children with profound and multiple learning difficulties and the elderly in long-term residential care [28], spasticity and motor function in children and adults with cerebral palsy [40], and more. Skille is reported to have initially focused VAT on a single amplitude-modulated sinusoidal sound, later incorporating music; at around the same time, a research professor from Aalborg University, Denmark, Tony Wigram, mixed music and low-frequency sound, which he named “vibroacoustic music” (VAM) (see [64]). However, these definitions would seem to overlap and confuse, as both VAT and VAM apparently use similar content, i.e. single amplitude-modulated sinusoidal sound plus music. Subsequently, Ellis [28] coined the term vibroacoustic sound therapy from the use of whole tone scales of sounds routed to a vibroacoustic chamber or chair; there was thus no melody, harmony or structure as one would relate to musical form (see elsewhere in this chapter for more on Ellis’ ‘Sound Therapy’). Whilst literature points to benefits from vibrations impacting the human body (see elsewhere in this chapter), care is needed when using vibroacoustic systems


as whole-body vibration can cause weariness, digestive troubles, headache, imbalance and tremor during or shortly after exposure. Special Interest Groups (SIGs) such as VIBRAC, the Skille-Lehikoinen Centre for Vibroacoustic Therapy and Research, which was founded in 2012 and is managed by the Eino Roiha Foundation under the University of Jyväskylä, Finland, instruct on best practice. Unfortunately, the VIBRAC site’s last update is seemingly 2016 as of writing, suggesting a lack of activity. However, this comment may be premature, as renewed activity and networking were apparently initiated (2019–2020) following the author’s contact with the leadership, though the Covid-19 situation prevented the VIBRAC SIG from meeting at a conference in Boston, USA. It is anticipated that this group will reinvigorate the field and collaborate with this author to advise on best practice and tools, and to explore the interactive vibroacoustic therapeutic intervention posited herein. The next section introduces VIBRAC and the field.

17.7 VIBRAC and Review of the Field

A dedicated vibroacoustic therapy facility, the VIBRAC Skille-Lehikoinen Centre for Vibroacoustic Therapy and Research, was established in 2012 in Jyväskylä, Finland, under the Eino Roiha Foundation. The centre is recognised as a development, training, and research centre for Vibroacoustic Therapy (VAT). In research reported at the site it is stated how “effects and benefits of VAT have been originally linked to high muscle tone and reduction in spasticity.”7 The site informs on an array of conditions where positive treatments have been reported; however, it also makes clear that:

The typical shortcomings in most studies relates to design, small sample sizes, and poorly described interventions which are not based on best clinical practices, as well as the inability to find applicable and sufficiently sensitive measurement tools. Future research should focus more on improving the practices and reporting of VAT, and studying the effects of the most relevant clinical interventions and procedures for the clinical groups which seem to benefit most from this particular intervention. Special attention should also be given to the measurement tools used in VAT studies. https://www.vibrac.fi

In line with this statement, it can be considered how the research presented herein, the two case studies, aligns with such shortcomings. However, the goal of this work was not to treat or measure clinically. Rather, it was to study potentials to supplement auditory stimulus with additional stimuli, from a position argued to align with individual differences, preferences, needs and desires. This position is argued in line with Ellis’ contributions ([20, 59]; 1995, 2004) as quoted elsewhere in this chapter.

7 https://www.vibrac.fi/content/research-vibroacoustic-therapy.


17.8 Conclusion

Two case studies are presented to exemplify the concept of multisensory interactive vibroacoustic intervention in a VIS environment. The participants were teenagers of similar age and similar condition, one male and one female. Invisible sensors using ultrasonic and infrared technologies captured participant movements that were mapped to auditory, visual and tactile feedback, all experienced as synchronous. Evaluations by involved staff and family were highly positive. Sound boxes such as those used in these case studies offer opportunities to stimulate participants with severe dysfunction to have rewarding and enjoyable experiences that, whilst being fun, can also lead to formal therapeutic outputs. In this work, less spasming, increased contact with staff, a happier disposition and further beneficial outcomes are reported (e.g. see [8, 20–26, 59]; Ellis 2004). These two case studies did not use specific low-frequency content as in VAT or VAM. Audio content was rather across the whole range of frequencies, and typically the chromatic scale was used (as opposed to Ellis’ use of the whole tone scale). This choice is justified in that the chromatic scale offered higher sensitivity/resolution (more notes in the invisible sensing space), challenging the participant to trigger with smaller motions and thus increasing the precision of gestural control. The use of interactive environments where tones are composed through gesture relates to Stokowski, who in 1932 predicted “a time when musicians would be able to compose directly into TONE, not on paper. In other words, we would be working directly with sound itself rather than with the symbols used to represent the results of imagined combinations of sound” [59].
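The resolution argument above can be made concrete: dividing the same sensing range into chromatic steps yields twice as many trigger zones per octave as whole-tone steps, so smaller motions suffice to change note. A sketch using MIDI note numbers (the mapping function is illustrative, not the system's actual implementation):

```python
def position_to_note(pos, base_note=60, semitone_step=1, zones=12):
    """Map a normalised sensor position (0.0-1.0) to a MIDI note.

    semitone_step=1 gives a chromatic scale (12 notes per octave);
    semitone_step=2 gives a whole tone scale (6 notes per octave).
    """
    zone = min(int(pos * zones), zones - 1)
    return base_note + zone * semitone_step

# Sweep the same ten positions through one octave of sensing space:
chromatic = [position_to_note(p / 10, zones=12) for p in range(10)]
whole_tone = [position_to_note(p / 10, semitone_step=2, zones=6) for p in range(10)]
print(len(set(chromatic)), len(set(whole_tone)))  # chromatic reaches more distinct notes (10 vs 6)
```

The same physical sweep thus crosses more note boundaries under the chromatic mapping, which is the sense in which it "challenges increased precision of gestural control".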

17.9 Future Research in Interactive Vibroacoustic Therapeutic Intervention

The author anticipates gaining additional knowledge of best practice in this field, alongside developing skills and competences associated with tools and practice, toward increased research to test against VAT and further music therapy interventions (e.g. [43]). This means specific research on content feedback stimuli to ascertain differences aligned with the literature cited herein, for example the use of a specific range of frequencies and their mapping to sensors. A stumbling block over the years has been that the research has not received funding specific to the vibratory work, which has left it almost stagnant for a number of years. However, once funding is in place, the next plans (after discussion with experts at the VIBRAC SIG on best practice, health and ethics) involve adding to the modularity aspect to test different sound boxes (vibroacoustic chambers) with larger speakers to increase the amplitude of vibrations. This plan involves using specialist bass-frequency speakers in different combinations, primarily testing 10-, 12-, 15- and 18-inch units, in combinations of 2, 4, 6, 8 and 12 speakers. The speakers will be positioned on their backs


in a bespoke assembly with a support board resting on top for the participant to lie on. Combinations are planned to be fed from different amplifiers in order to compare digital versus valve (tube) based amplification of sound. Testing of this aspect is planned to include blind and deaf participants, to study any differences in the tactile stimulus felt between the analogue and digital amplifications and to test whether outcomes impact PMLD tests.

The systems are also planned to test additional devices to trigger and manipulate the auditory stimulus. These include music floor pedals to further empower and enhance self-control over the sound ([6], figure 9), specifically the Moog Taurus III Foot Pedal Apparatus, an analogue monophonic synthesizer with 13 large velocity-sensitive wooden church-organ-style pedals which, due to their size, can be played with appendages other than the feet. The Moog Taurus III sounds are generated by two sawtooth oscillators and a multi-waveform low-frequency oscillator, and it has a 24 dB/oct resonant low-pass filter to emphasize low frequencies in the selected patches for increased tactile feedback in active vibroacoustic situations. The Moog also has an arpeggiator that can play back patterns of sounds from a pedal press. Guitar/bass effect pedals are also planned to be incorporated, to additionally enable a participant to ‘effect’ parameters of the sound that is experienced; this can also be a second participant, so that sounds are ‘manipulated’ together. These effects will range from subtle compression, chorus, reverberation, and echo to more distinct changes such as distortion, envelope follower, octave splitter, and more. Additional interfaces such as ultrasound sensor grids, touch strips, cross-hair tracking, and rubber electronic drum pads are also planned to be tested attached to sound boxes.

With so many interfaces available, alongside software to manipulate content, there is a wide gamut to research under this subject that deserves increased attention.

17.10 Postscript

In closing, it is pertinent to reflect on an experience from April 2008, from a workshop that the author conducted at ‘Casa da Música’ (CDM), Oporto, Portugal [7]. The workshop took the form of an interactive installation/environment built in the concert house’s rehearsal room for the national symphony orchestra (approximate dimensions: 238 m² floor area, 20 m high), which notably had a wooden sprung floor for players to feel the music through their feet. A large ‘multiroom’ interactive multisensory setup was designed by the author and built with the technical crew. Large sub-woofer speakers were distributed around the room, to impact the sprung floor with maximum vibrations from music generated by motions in the spaces. The workshop was titled “Ao Alcance de Todos Música” (Within Everyone’s Reach). Attendees came from regional special needs institutes, schools and hospitals, as invited by the CDM education department, which organised and arranged the annual event. Attendees were across ages and handicaps, and workshop attendance groups were of about twenty persons. One group attended from a local institute for the deaf. All attendees were asked to remove their shoes at the entrance.


The memorable experience was when a young boy, about eight years of age, who could not hear, found the “hot-spot” of the sensors such that he was controlling audio and visuals with minuscule movements. Other attendees stopped to watch him as he started shaking with positive emotion while exploring what he had found. The author instructed all to lie on the floor and watch the boy. The experience was enlightening and illustrated the potentials of interactive vibroacoustics in therapeutic contexts with multisensory stimulus options. Such modular system setups allow mixing and matching of components to an individual participant’s profile (e.g. his/her preferences, needs and desires to enjoy a fun, entertaining episode) aligned with the therapeutic output (the formal outcome targeted by the therapist/facilitator for the participant), thus enabling a tailorable biofeedback experience of the self with developmental potential.

Acknowledgements Stefan Hasselblad, Special Teacher/Carer; children, parents and staff at Emaljskolan, Emaljgatan 1C, Landskrona 26143, Sweden; Casa da Música, Oporto, Portugal, staff and attending workshop participants 2006–2008 inclusive; Soundbeam personnel, Bristol, UK; Eyesweb project personnel, Genoa, Italy; VIBRAC Skille-Lehikoinen Centre for Vibroacoustic Therapy and Research, Finland; Professor Phil Ellis and Dr Lieselotte van Leeuwen, whose work resulted in a Centre of Excellence in Teaching and Learning and a regional Sound Therapy Centre at Sunderland, UK; Professor Rolf Gehlhaar; Professor Jens Arnspang. Finally, thanks to the authors I have taken the liberty of quoting extensively in this chapter’s citations; it is done with good intention, so as not to change meanings in the original texts through my paraphrasing. Credits accompany each cited quote to acknowledge your contribution; here I say thank you again.

Bibliography

1. Ahonen, H., Deek, P., Kroeker, J.: Low frequency sound treatment promoting physical and emotional relaxation qualitative study. Int. J. Psychosoc. Rehabil. 17(1), 45–58 (2012)
2. Bartel, L.R., Chen, R.E.W., Alain, C., Ross, B.: Vibroacoustic stimulation and brain oscillation: from basic research to clinical application. Music Med. 9(3), 153–166 (2017)
3. Bergström-Isacsson, M., Lagerkvist, B., Holck, U., Gold, C.: Neurophysiological responses to music and vibroacoustic stimuli in Rett syndrome (2014). https://www.ncbi.nlm.nih.gov/pubmed/24691354
4. Boyd-Brewer, C., McCaffrey, R.: Vibroacoustic sound therapy improves pain management and more. Holist. Nurs. Pract. 18(3), 111–118 (2004)
5. Brooks, A.L.: Patent US6893407 - Communication method and apparatus (family of 6 patents) (2000)
6. Brooks, A.L.: Enhanced gesture capture in virtual interactive space (VIS). Dig. Creativity 16(1), 43–53 (2005)
7. Brooks, A.L.: Ao Alcance de Todos Música: Tecnologia e Necessidades Especiais, Casa da Música. In: Proceedings of 7th ICDVRAT with ArtAbilitation, Maia, Portugal, 2008. Reading University, UK (2008). https://vbn.aau.dk/ws/portalfiles/portal/41580679/Porto_Workshop_2008_paper.pdf
8. Brooks, A.L.: Intelligent decision-support in virtual reality healthcare & rehabilitation. Stud. Comput. Intell. 326, 143–169 (2011). https://doi.org/10.1007/978-3-642-16095-0


9. Brooks, A.L.: Human computer confluence in rehabilitation: digital media plasticity and human performance plasticity. In: Stephanidis, C., Antona, M. (eds.) Universal Access in Human-Computer Interaction: Applications and Services for Quality of Life. 7th International Conference, UAHCI 2013, Held as Part of HCI International 2013, Las Vegas, NV, USA, July 21–26, 2013, Proceedings, Part III. Lecture Notes in Computer Science, vol. 8011, pp. 436–445. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39194-1_51
10. Brooks, A.L., Camurri, A., Canagarajah, N., Hasselblad, S.: Interaction with shapes and sounds as a therapy for special needs and rehabilitation. In: Sharkey, P., Sik Lányi, C., Standen, P. (eds.) 4th International Conference on Disability, Virtual Reality and Associated Technologies, pp. 205–212. University of Reading Press (2002)
11. Brooks, A., Petersson, E.: Recursive reflection and learning in raw data video analysis of interactive ‘play’ environments for special needs health care. In: Healthcom: 7th International Workshop on Enterprise Networking and Computing in Healthcare Industry, Busan, Korea, IEEE/Korea Multimedia Society, pp. 83–87 (2005)
12. Brooks, A.L., Petersson, E.: Facilitators’ intervention variance and outcome influence when using video games with fibromyalgia patients. In: Duffy, V.G. (ed.) Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management: Healthcare and Safety of the Environment and Transport. Lecture Notes in Computer Science, vol. 8025, pp. 163–172. Springer, Berlin, Heidelberg (2013)
13. Campbell, E.A., Hynynen, J., Ala-Ruona, E.: Vibroacoustic treatment for chronic pain and mood disorders in a specialized healthcare setting. Music Med. 3(9), 187–197 (2017)
14. Camurri, A., Mazzarino, B., Volpe, G., Morasso, P., Priano, F., Re, C.: Application of multimedia techniques in the physical rehabilitation of Parkinson’s patients. J. Visual. Comput. Anim. 14(5), 269–278 (2003)
15. Clements-Cortes, A., Ahonen, H., Evans, M., Freedman, M., Bartel, L.: Short-term effects of rhythmic sensory stimulation in Alzheimer’s disease: an exploratory pilot study. J. Alzheimers Dis. 52, 651–660 (2016)
16. Costa, M., Ricci Bitti, P.E., Bonfiglioli, L.: Psychological connotations of harmonic musical intervals. Soc. Res. Psychol. Music Music Educ. 28, 4–22 (2000)
17. Csikszentmihalyi, M.: Flow: The Psychology of Optimal Experience. Harper & Row (1990)
18. Daprati, E., Sirigu, A., Pradat-Diehl, P., Franck, N., Jeannerod, M.: Recognition of self-produced movement in a case of severe neglect. Neurocase 6, 477–486 (2000)
19. Ellis, P.: Touching sound—connections on a creative spiral. Int. J. Educ. Comput. 5, 127–132 (1989)
20. Ellis, P.: Special sounds for special needs: towards the development of a sound therapy. In: International Society for Music Education Conference Proceedings (1994)
21. Ellis, P.: Developing abilities in children with special needs—a new approach. Children Soc. 9(4), 64–79 (1995a)
22. Ellis, P.: Incidental music: a case study in the development of sound therapy. Br. J. Music Educ. 12, 59–70 (1995b)
23. Ellis, P.: ‘Sound Therapy’ in Primary Music Today, 3. Peacock Press (1995c)
24. Ellis, P.: ‘Sound Therapy’ in Special Children, pp. 36–39 (1995d)
25. Ellis, P.: ‘Incidental Music’ video with booklet ‘Sound Therapy: The Music of Sound’. Soundbeam Project (1996)
26. Ellis, P.: The music of sound: a new approach for children with severe and profound and multiple learning difficulties. Br. J. Music Educ. 14(2), 173–186 (1997)
27. Ellis, P.: Caress—an endearing touch. In: Developing New Technologies for Young Children, pp. 113–137. Trentham Books (2004)
28. Ellis, P.: Vibroacoustic sound therapy: case studies with children with profound and multiple learning difficulties and the elderly in long-term residential care. Stud. Health Technol. Inform. 103, 36–42 (2004b). https://www.ncbi.nlm.nih.gov/pubmed/15747903
29. Fischer, K., Bidell, T.: Dynamic development of psychological structures in action and thought. In: Handbook of Child Psychology, vol. 1: Theoretical Models of Human Development, pp. 467–561. Wiley (1998)


30. Gardner, H.: Frames of Mind: The Theory of Multiple Intelligences. Basic Books (1983)
31. Granott, N.: Patterns of interaction in the co-construction of knowledge: separate minds, joint effort, and weird creatures. In: Wozniak, R., Fischer, K. (eds.) The Jean Piaget Symposium Series. Development in Context: Acting and Thinking in Specific Environments, pp. 183–207. Lawrence Erlbaum Associates (1993)
32. Granott, N., Parziale, J.: Microdevelopment: Transition Processes in Development. Cambridge University Press (2002)
33. Grocke, D.E., Wigram, T.: Receptive Methods in Music Therapy: Techniques and Clinical Applications for Music Therapy Clinicians, Educators and Students. Jessica Kingsley Publishers (2007)
34. Hagedorn, D.K., Holm, E.: Effects of traditional physical training and visual computer feedback training in frail elderly patients. A randomized intervention study. Eur. J. Phys. Rehabil. Med. 46(2), 159–168 (2010)
35. Hagman, G.: The Artist’s Mind. Routledge (2010)
36. Hasselblad, S., Petersson, E., Brooks, A.L.: Empowered interaction through creativity. Dig. Creativity 18(2), 89–98 (2007)
37. Huntley, H.E.: The Divine Proportion: A Study in Mathematical Beauty. Dover Press (1970)
38. Hynynen, J., Aralinna, V., Räty, R., Ala-Ruona, E.: Vibroacoustic treatment protocol at Seinäjoki Central Hospital. Music Med. 9(3), 184–186 (2017)
39. Jeannerod, M.: The mechanism of self-recognition in humans. Behav. Brain Res. 142, 1–15 (2003)
40. Kantor, J., Kantorová, L., Marečková, J., Peng, D., Vilímek, Z.: Potential of vibroacoustic therapy in persons with cerebral palsy: an advanced narrative review. Int. J. Environ. Res. Public Health 16(20), 3940 (2019). https://doi.org/10.3390/ijerph16203940
41. Katušić, A., Mejaški-Bošnjak, V.: Effects of vibrotactile stimulation on the control of muscle tone and movement facilitation in children with cerebral injury. Collegium Antropol. 35(1), 57–63 (2011)
42. King, L.K., Almeida, Q.J., Ahonen, H.: Short-term effects of vibration therapy on motor impairments in Parkinson’s disease. NeuroRehabilitation 25(4), 297–306 (2009)
43. Magee, W.: Singing My Life: Playing My Self. Ph.D. thesis (1998). https://etheses.whiterose.ac.uk/6021/1/299542_VOL1.pdf
44. Maher, T.F.: A rigorous test of the proposition that musical intervals have different psychological effects. Am. J. Psychol. 93(2), 309–327 (1980)
45. Moore, S.: Interval size and affect: an ethnomusicological perspective. Emp. Musicol. Rev. 7(3–4) (2012)
46. Mortensen, J., Kristensen, L.Q., Brooks, E.P., Brooks, A.L.: Women with fibromyalgia’s experience with three motion-controlled video game consoles and indicators of symptom severity and performance of activities of daily living. Disab. Rehabil. Assist. Technol. 10(1), 61–66 (2015). https://doi.org/10.3109/17483107.2013.836687
47. Naghdi, L., Ahonen, H., Macario, P., Bartel, L.: The effect of low frequency sound stimulation on patients with fibromyalgia: a clinical study. Pain Res. Manage. 20(1), 21–27 (2015)
48. Park, J.M., Park, S., Jee, Y.S.: Rehabilitation program combined with local vibroacoustics improves psychophysiological conditions in patients with ACL reconstruction. Medicina (Kaunas) 55(10) (2019). https://www.ncbi.nlm.nih.gov/pubmed/31574964
49. Pimentel, K., Teixeira, K.: Virtual Reality: Through the New Looking Glass. McGraw-Hill (1995)
50. Punkanen, M., Ala-Ruona, E.: Contemporary vibroacoustic therapy: perspectives on clinical practice, research, and training. Music Med. 4(3), 128–135 (2012)
51. Raghu, M.: A study to explore the effects of sound vibrations on consciousness. Int. J. Soc. Work Hum. Serv. Pract. 6(3), 75–88 (2018)
52. Rüütel, E., Vinkel, I., Eelmäe, P.: The effect of short-term vibroacoustic treatment on spasticity and perceived health condition of patients with spinal cord and brain injuries. Music Med. 9(3), 202–208 (2017)


53. Rüütel, E.: The psychophysiological effects of music and vibroacoustic stimulation. Nordic J. Music Therapy 11(1), 16–26 (2002)
54. Rüütel, E., Vinkel, I.: Vibro-acoustic therapy – research at Tallinn University. In: Prstačić, M. (ed.) Art and Science in Life Potential Development, pp. 42–44. Croatian Psychosocial Oncology Association; Croatian Association for Sophrology, Creative Therapies and Arts-Expressive Therapies; Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia (2011). https://www.vinkelheli.com/wp-content/uploads/2017/01/RyytelE.-Vinkel-I._2011_VAT-Research-at-TLU.pdf
55. Rüütel, E., Vinkel, I., Laanetu, M.: Vibroacoustic therapy and development of a new device: a pilot study in the health resort environment (2018). https://www.hrpub.org/download/20180830/UJPH2-17611332.pdf
56. Skille, O.: Vibroacoustic therapy. Music Therapy 8(1), 61–67 (1989)
57. Skille, O., Wigram, T.: The effects of music, vocalization and vibration on brain and muscle tissue: studies in vibroacoustic therapy. In: Wigram, T., Saperston, B., West, R. (eds.) The Art & Science of Music Therapy: A Handbook. Harwood Academic, Chur, Switzerland (1995)
58. Staud, R., Robinson, M.E., Goldman, C.T., Price, D.D.: Attenuation of experimental pain by vibro-tactile stimulation in patients with chronic local or widespread musculoskeletal pain. Eur. J. Pain 15(8), 836–842 (2011)
59. Swingler, T.: “That Was Me!”: applications of the Soundbeam MIDI controller as a key to creative communication, learning, independence and joy. In: CSUN98 Technology and Persons with Disabilities Conference (1998). https://www.dinf.ne.jp/doc/english/Us_Eu/conf/csun_98/csun98_163.html
60. Tononi, G.: Integrated information theory of consciousness: an updated account. Arch. Ital. Biol. 150, 290–326 (2012)
61. Tononi, G.: Integrated information theory. Scholarpedia 10(1), 4164 (2015). https://www.scholarpedia.org/article/Integrated_information_theory
62. Wigram, T.: The effect of VA therapy on multiply handicapped adults with high muscle tone and spasticity. In: Wigram, T., Dileo, C. (eds.) Music Vibration and Health, pp. 57–68. Jeffrey Books (1997)
63. Wigram, T., Pedersen, I.N., Bonde, L.O.: A Comprehensive Guide to Music Therapy: Theory, Clinical Practice, Research and Training. Jessica Kingsley (2002)
64. Wigram, T., Saperston, B., West, R.: Art & Science of Music Therapy: A Handbook. Routledge (1995/2013)
65. Yan, Z.: Measuring Microdevelopment of Understanding the VMS-SAS Structure: A Developmental Scale Pilot. Harvard University (1998)
66. Yan, Z.: Dynamic Analysis of Microdevelopment in Learning a Computer Program. Doctoral dissertation, Harvard Graduate School of Education (2000)
67. Yan, Z., Fischer, K.: Always under construction: dynamic variations in adult cognitive microdevelopment. Hum. Dev. 45, 141–160 (2002)

Part III

Health and Well-Being

Chapter 18

Health and Well-Being

Anthony Lewis Brooks

Abstract The third part of this volume is titled Health and Well-Being. It covers topics ranging from disability and technology, through to a new technology referred to as Electrorganic and its use in music therapy. Autism is the focus of two chapters. The part also includes creative work from Mexico, where an exhibition reflects on death. The final chapter reports on cinematic virtual reality and sonic interaction design.

Keywords Disabled and technology · Electrorganic · Music therapy · Autism · Death · Exhibition · Installation · Cinematic virtual reality · Sonic interaction design

18.1 Introduction

The book contents are segmented into four parts, with chapters assigned to each. Specifically, Part 1: Gaming, VR, and Immersive Technologies for Education/Training; Part 2: VR/Technologies for Rehabilitation; Part 3: Health and Well-Being; and Part 4: Design and Development. This third part is themed Health and Well-Being and includes chapters on (1) Current trends in technology and wellness for people with disabilities: An analysis of benefit and risk, (2) Electrorganic technology for inclusive well-being in music therapy, (3) Interactive multimedia: A take on traditional Day of the Dead altars, (4) Designing an accessible interface with and for children with Autism Spectrum Disorder, and (5) Combining cinematic virtual reality and sonic interaction design in exposure therapy for children with autism. This chapter represents a focused, and sometimes extended, 'minuscule review of the field' by introducing the chapters in this third part of the volume on 'Health and Well-Being'. Each paper's

A. L. Brooks (B)
Aalborg University, Aalborg, Denmark
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_18


author(s) are acknowledged, their source text having been used to create these snippets that overview the chapters and introduce them to the readership. The following sections introduce the chapters and authors.

18.1.1 Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk [1]

In their chapter titled 'Current trends in technology and wellness for people with disabilities: An analysis of benefit and risk', Hung Jen Kuo, affiliated with California State University, Los Angeles; Connie Sung, affiliated with Michigan State University; Nigel Newbutt, affiliated with the University of the West of England; Yurgos Politis, affiliated with University College Dublin; and Nigel Robb, affiliated with the University of Tokyo, cover a gamut of technology application areas. Taking inspiration from director Robert Zemeckis' "Back to the Future" film and its technology focus, the chapter introduces sections on 'Technology as Daily Routine'; 'Technology for Mainstreaming Assistive Device'; 'Technology for Education and Employment'; 'Technology for Service Delivery'; 'Technology for Social Interaction and Recreation'; 'Assistive Technology Being Abandoned'; 'Technology as Ethical Concerns'; and 'Technology as Social Disincentive'. In their conclusions, the authors relate to service industries needing to catch up with the technologies, positing that:

Technology bears tremendous potential in enhancing people's quality of life. The advancement of technology has made the impossible possible for many. Particularly, assistive technology has made education and employment more accessible to persons with disabilities. However, whereas its positive impact is apparent, technology has its own challenges as well. These challenges may not directly involve the functionalities of technology. For example, users' preferences and potential threats to confidentiality and social isolation are not necessarily considered by device designers. As such, while celebrating technology advancement, it is also critical to think beyond just the devices and their functions. After all, it is the users' experiences that are the most important.

18.1.2 Electrorganic Technology for Inclusive Well-being in Music Therapy [2]

The potential for inclusive well-being in the field of music therapy of a new technology referred to as Electrorganic is central to the next chapter, authored by Anthony Lewis Brooks, affiliated with Aalborg University, and Carl Boland, who works for ATV in Japan. The chapter presents a contemporary and original musical instrument proposed for use in music therapy, namely ATV Corporation's electrorganic aFrame. It reports on initial aFrame intervention testing by music therapists as part of the second phase of


research. This follows over six months of proof-of-concept trials questioning reactions across a range of contemporary musical instruments and their potential use in music therapy, wherein the aFrame was the preferred device among testers. As the name suggests, the aFrame is modelled on a traditional frame drum and appears to be aimed at skilled hand percussionists, with the appeal of applying natural playing techniques to an electronic instrument. What makes the aFrame unique is the combination of sensor array and electronic sound module that generates a richly expressive palette of sounds beyond the scope of other electronic percussion instruments. The posited hypotheses concern the applied potential of the aFrame in the field of music therapy. The chapter technically elaborates on the aFrame alongside an explanation of the electrorganic concept behind the realisation of the instrument. Initial critique and reflections from secondary tests are informed by two Danish music therapists. The chapter details the basic technical aspects and music-making affordances of the aFrame electrorganic musical instrument. It informs on the background of the realisation of the device and suggests the engineering complexity behind what can be played as a simple musical instrument. Highlighted is how the aFrame experience aligns with that of a traditional acoustic musical instrument, a frame drum. Through this, a hypothesis originating in this research is how it can contribute to a discipline focused upon music therapeutic intervention. The chapter builds incrementally upon trials reported throughout 2019 that focused on hands-on testing by professionals and possible end-users across profiles of function and age (as reported in Brooks 2020).
It builds upon an earlier short article that briefly introduced the first author's concept behind the explorative study, which focused on sharing music therapists' first reactions from hands-on experience of both MIDI controllers and the electrorganic aFrame. The chapter details the aFrame, which was found to be preferred by music therapists, musicians, and teachers, as reported earlier (ibid.). Expert input by the second author elaborates on the history and detail of the aFrame. Literature on the use of the aFrame, or any other similarly conceived electrorganic musical instrument (if such exist), in music therapy was not discovered because, to the authors' knowledge, electrorganic instruments such as the aFrame have never been used in music therapy anywhere in the world. There is no literature that can be cited to argue such use, and thus this work in progress is seen as avant-garde in advancing the field to explore such new opportunities for therapists. The authors predict that there will be numerous experiments and explorations within the testing phase of the studies with the electrorganic aFrame so as to determine best-fit scenarios, while working towards developing an implementation and training protocol to support music therapists in practice. This is envisaged to begin in Denmark and from there to collaborate internationally with interested researchers and practitioners. The team behind this work-in-progress report on the initial phases of the study is looking positively towards producing future reports on the proof of concept and feasibility, which they anticipate as potentially disrupting the field in a positive manner. Subsequent publications will thus report on use in the field and the development of use-methods as applicable.


The chapter states how it is worthy of mention that, in instances where end-users may not be optimally stimulated via the auditory channel, the audio signals output from the aFrame can be routed and processed by a visual synthesiser to generate audio-visual correspondences that may stimulate an end-user's visual channel. The rich soundscapes generated by the aFrame lend the instrument to an exciting potential pairing with a visual synthesiser. The chapter closes with the authors positing that future work will include analysis built upon the first phases of therapist-based (practice-evaluation) input as reported herein. Accordingly, the authors will seek to evolve their research objectives with the aim of maximising benefits to end-users in ways that are inclusive of their various creative endeavours, whether it be performing and/or composing music, or just finding a way to relax. As reflected in the chapter's related research, evidence is reported in the literature of how non-formal, enjoyable and fun recreational and leisure activities can have under-layers that target formal therapeutic benefit. It is clear that technological solutions can enable more tailoring and adaptation to specific individual needs, requirements and preferences in order to motivate activities. Additionally, such solutions are predicted to contribute to increasing accessibility and improving inclusion whilst offering 'measurable' outcomes, if that is the targeted outcome associated with end-user benefit.

18.1.3 Interactive Multimedia: A Take on Traditional Day of the Dead Altars [3]

The next chapter is by a team of co-authors/creators from the Architecture, Design and Art Institute, Ciudad Juárez Autonomous University, Ciudad Juárez, Chihuahua, México, namely Ramón Iván Barraza Castillo, Alejandra de la Torre Rodríguez, Rogelio Baquier Orozco, Gloria Olivia Rodríguez Garay, Silvia Husted Ramos, and Martha Patricia Álvarez Chávez. The chapter opens with an introductory statement positioning their creative work. They state: 'Society's perception of death varies depending on the culture, context, and traditions. For many countries it is a taboo topic, something that should not be talked about. For the people of México death has a different connotation; it is still a tragic event and one that entails grieving for the loss.' In this text, the authors present the creation of a traditional and technologically enhanced Mexican Day of the Dead altar. The authors offer a detailed view of the entire process, from the conception of the idea, identification and classification of narrative elements, and construction of the offering based on an interactive multimedia user-experience model, through the inner workings, to the construction, installation, and exhibition. The altar was presented and evaluated during a mass public event in the Mexican city of Juárez during a celebration of the Day of the Dead. The idea behind the project is to enhance this centuries-old tradition with a non-invasive approach to technology, infusing a non-linear narrative experience that connects with the user and promotes spiritual well-being. The focus of this work was to re-tell the


life story of the deceased through the offering's items, without losing the value and essence of a traditional altar. With the use of different capacitive materials, it was possible for the co-creators to craft unobtrusive sensors that passed as regular items found on an offering. The authors inform how the inherent mysticism that surrounds the festivity, combined with the atmospheric sound and lighting effects, the aromas of the food, and burning copal, all orchestrated by the hardware and software setup, made for an interesting experience that was enjoyed by hundreds. The authors reflect that there are several implications to their approach. The first is that spectators are no longer just that: they become part of the experience, as they are given a level of control whereby they can decide when and how the information is presented, thus breaking with the linearity of the narration. Second, the elements of the offering, loaded with special meanings, can now properly convey the story behind them and not pass inadvertently. Third, the authors report how participants were invested in the experience; they felt the connection with the person to whom the altar was dedicated because they knew their history, their experience, and their legacy; it is not left to interpretation or to how well they knew the person. To conclude, the authors describe how the Day of the Dead is a tradition filled with syncretism; it embodies the feeling of a nation towards life and death. Mexican people truly believe and find comfort in knowing that, at least for one night a year, their deceased family and friends can come back and rejoice with them. That is why so much effort and care goes into building the altars and offerings to welcome and honor them. This connection with death brings peace of mind, resignation, and wellness to the bereaved.
Though the authors collected information about the end-user experience, it was not in the interest of the study to apply a Technology Acceptance Model (TAM) validation. The survey was aimed at answering the question of whether the reading and narrative of the altar were changed. The authors state how the data showed that the inclusion of technology and interaction did not alter the message, but rather how it was delivered. In stating future iterations of the exhibit, the authors hint at the inclusion of Augmented and/or Virtual Reality, which might lead to an in-depth TAM analysis.

18.1.4 Implementing Co-design Practices for the Development of a Museum Interface for Autistic Children [4]

The three authors, namely Dimitra Magkafa, Nigel Newbutt, and Mark Palmer, are all affiliated with the University of the West of England, in Bristol, United Kingdom. The chapter opens with a definition and prevalence figures of autism via citations from the American Psychiatric Association [5, 6], the Centers for Disease Control and Prevention [7], and the UK Department of Health [8]. The chapter posits how technology-based programs can improve the lives of autistic people. A reflection on the literature suggests that over the last two decades the emergence of interactive technologies


for children with autism has increased, with the goals of improving the lives of autistic people and teaching important skills. The authors state that, to design accessible programs that address target groups' needs, participatory design (PD) approaches are of core importance. The chapter focuses on technology co-design with three groups of autistic pupils within the context of participatory design, often alongside typically developing pupils. The research focused on the process rather than the outcomes of the co-design. This approach helped to gain insights into participants' experiences, to support ideation, and to inform the design of a museum-based application. The aim was to develop an accessible interface that allowed users to have an engaging experience in a museum environment. The stages of the design cycle of the interface are described. In this study, the authors consider the value of co-design with autistic participants, as it contributes positively to acquiring knowledge. The design activities (to design and develop a touchscreen-based application, mounted on the museum walls, to support autistic children) enabled active participation of the autistic groups and gave them a voice to express themselves, which strengthened their agency in the process. The authors report how the conclusions point out that the analysis of the co-design sessions and the perspectives of teachers provided insights into the factors that influenced the outcome of the co-design practices. These included building rapport, creativity, suitable environments, and the use of visual means. This process resulted in the development of a framework to help the researchers coordinate the co-design sessions. The work illustrates that the children's involvement at various stages appeared to be valuable in facilitating the design process as well as refining the interface. The children felt empowered to uncover their creativity and contribute through idea generation.
The authors inform that one way to accomplish this is by incorporating continuous support appropriate for the target group and structuring the environment and the activities according to the children's needs. Usability of the platform was also tested, whereby the authors suggest the use of a child-centred approach is important to enable users to have their own say. The authors question to what extent autistic children's role can be considered valuable in the user-centred process. This also entails re-thinking the roles in final decision-making. The research results, the authors inform, suggest several directions for future research, and these are elaborated. One reflection was how Frauenberger et al. [9] highlight that a narrative story and sensory exploration through different techniques contributed to effective participation, which aligns with other work in the field internationally. With their approach, an extended understanding of the needs of autistic pupils was obtained by the researchers/authors while ensuring their active involvement in the design process. Finally, the work reports some invaluable insights and can serve as guidance for future research in co-developing technology for autistic users.


18.1.5 Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism [10]

The authors of the next chapter are Lars Andersen, Nicklas Andersen, Ali Adjorlu, and Stefania Serafin, who are all from the Copenhagen campus of Aalborg University in Denmark. The title of the chapter is 'Combining cinematic virtual reality and sonic interaction design in exposure therapy for children with autism'. The chapter presents a preliminary study whose goal was to investigate the benefits of cinematic virtual reality combined with sonic interaction design in exposure therapy for autistic children. In the work, a setup was built for two players, one child and one guardian, who together could interact virtually during a children's concert. Results of an evaluation test in a school for children with special needs show the potential of VR for exposure therapy. The prototype used interactive instruments, in combination with the social possibility of having both a child and their guardian inside the same VR, to enhance a virtual concert scenario. A qualitative evaluation approach was conducted. The study provided insight into children's motivation with novel technology, though with the necessity of a readiness phase to give the children enough space to become comfortable with a new experience. The interactive objects attracted too much focus from the children, as they were not optimised well enough. However, observations showed that the interactive objects provided the participants with a fun and playful experience, which fosters motivation and readiness in the children. This chapter, alongside others in this book, illustrates the potential of researching technologies for inclusive well-being, such as virtual reality, and the potential impact at a societal level.

18.2 Conclusions

This chapter reviews aspects associated with disability and technology; Electrorganic instrument use in music therapy; autism; and cinematic virtual reality and sonic interaction design. It also presents a brief overview of a creative work reflecting on the Mexican perspective on death. It is anticipated that scholars and students will be inspired and motivated by these contributions to the field of 'Technologies of Inclusive Well-Being' towards inquiring further into the topics. The fourth and final part of this book follows the five chapters in this third part; it is themed 'Design and Development'. Enjoy.

Acknowledgements Acknowledgements are to the authors of the chapters in this part of the book. Their contribution is cited in each review snippet and also in the reference list to support reader cross-reference. However, the references are without page numbers as these were not known at the time of writing. Further information will be available at the Springer site for the book/chapter.


References

1. Kuo, H.J., Sung, C., Newbutt, N., Politis, Y., Robb, N.: Current trends in technology and wellness for people with disabilities: an analysis of benefit and risk. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
2. Brooks, A.L., Boland, C.J.: Electrorganic technology for inclusive wellbeing in music therapy. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
3. Castillo, R., Rodríguez, A., Orozco, R., Garay, G., Ramos, S., Chávez, M.: Interactive multimedia: a take on traditional day of the dead altars. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
4. Magkafa, D., Newbutt, N., Palmer, M.: Implementing co-design practices for the development of a museum interface for autistic children. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
5. American Psychiatric Association: Autism Spectrum Disorder (2013a). https://www.psychiatry.org/patients-families/autism/what-is-autism-spectrum-disorder
6. American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th edn. American Psychiatric Publishing, Washington, DC/London (2013b)
7. Centers for Disease Control and Prevention: Autism Spectrum Disorder (ASD) (2014). https://www.cdc.gov/ncbddd/autism/facts.html
8. Department of Health: Progress in implementing the 2010 Adult Autism Strategy (July 2012). https://www.nao.org.uk/report/memorandum-progress-in-implementing-the-2010-adult-autism-strategy/
9. Frauenberger, C., Good, J., Alcorn, A., Pain, H.: Supporting the design contributions of children with autism spectrum conditions. In: Proceedings of the 11th International Conference on Interaction Design and Children, pp. 134–143. ACM (2012)
10. Andersen, L., Andersen, N., Adjorlu, A., Serafin, S.: Combining cinematic virtual reality and sonic interaction design in exposure therapy for children with autism. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)

Chapter 19

Current Trends in Technology and Wellness for People with Disabilities: An Analysis of Benefit and Risk

Hung Jen Kuo, Connie Sung, Nigel Newbutt, Yurgos Politis, and Nigel Robb

Abstract The advancement of modern technology has changed our lives. Amongst many influences, assistive technology is particularly impactful for individuals with disabilities. These technologies have made the impossible possible. Specifically, functions such as the built-in accessibility features of a cellphone have mainstreamed accommodation and moved one step closer to universal design. The mainstreaming of these technologies is critical, since it can lower the cost of devices and reduce the stigma associated with using specialized devices. In addition, technology has also helped improve many aspects of the quality of life of individuals with disabilities, such as educational attainment, employment participation, and social interaction and recreation. However, the advancement of technology does not come without consequences. In fact, whereas the benefits are indisputable, there are potential risks associated with the technology evolution. These threats include the high abandonment rate of assistive technology, confidentiality concerns associated with distance counseling, and potential social isolation from over-reliance on Internet communication. It is worth noting that assistive technology itself cannot be beneficial or helpful for individuals with disabilities without considerable services delivered by professionals. While the functions of technology continue to evolve, services have to catch up so that these potential risks can be unraveled.

H. J. Kuo (B) · C. Sung
Michigan State University, East Lansing, USA
e-mail: [email protected]
C. Sung e-mail: [email protected]

N. Newbutt
University of Florida, Gainesville, USA
e-mail: [email protected]

Y. Politis
Technological University Dublin, Dublin, Ireland
e-mail: [email protected]

N. Robb
University of Tokyo, Tokyo, Japan
e-mail: [email protected]

© Springer Nature Switzerland AG 2021 A. L. Brooks et al.
(eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_19


Keywords Disability · Technology · Assistive technology · Ethics · Distance counseling · Social media

19.1 Introduction: Technology as Daily Routine

In 1985, a famous science-fiction movie, "Back to the Future", directed by Robert Zemeckis, received a great amount of attention from its audience and was one of the best-selling films of the time. In particular, Part II of the trilogy depicted the main character travelling through time to the year 2015, just a few years before the publication of this chapter. One of the main reasons for the success of the film is that it offered the audience a chance to peek into the future. The film introduced many futuristic ideas, such as flying cars, levitating skateboards, fingerprint readers as a replacement for credit-card payment, video conferencing, self-drying clothes, and self-tying shoes, amongst many more. BBC News [1] published an article titled "Back to the Future II: What did it get right and wrong?" to examine the predictions made by the movie. Some of the predictions were quite accurate, such that we can see those technologies today; others are still to be invented. However, regardless of whether any specific device prediction was accurate, the movie certainly hit the bullseye on one aspect: technology has become a daily routine for us. Technology has become part of our life today. According to a study conducted by the Pew Research Center [2], 68% of US adults had a smartphone in 2015, compared to 35% in 2011. When adding age into the equation, the smartphone possession rate for adults aged between 18 and 29 was as high as 86% in 2015. A similarly high possession rate can be found for computers: it is estimated that 78% of US adults under 30 years of age own at least a desktop or a laptop computer. The mainstreaming of technology has created many possibilities.
For example, software applications and Internet services have enabled many opportunities that would not otherwise be possible: emails and text messages have successfully transformed physical letters into electronic form and dominate the methods of communication amongst family, friends and professionals [3, 4]. This not only shortens the distance between senders and recipients but also expands what a letter can do (e.g., pictures and videos included in the email/text). Without a doubt, the advancement of technology has brought convenience and created new possibilities for our lives. Similar benefits can also be seen in the world of rehabilitation and people with disabilities. While technology does not equate to assistive technology, the advancement and mainstreaming of modern technology have made many accommodation devices much more accessible and affordable, which in turn has enhanced the quality of life of people with disabilities [5–7]. However, just as in many other aspects of our lives, where there are benefits, there are potential risks as well. Risks associated with the use of technology are especially alarming when it comes to users with disabilities. For example, in a study of 2775 young adults, Lopez-Fernandez et al. [4] reported over-dependence on mobile devices in young adults. This over-reliance, or addiction according


to Samaha and Hawi [8], has led to potential negative consequences such as excessive use with loss of control, use in socially inappropriate/dangerous situations, and functional/behavioral impairments [9]. In this chapter, we discuss the benefits and risks of using technology for people with disabilities. Taking time out from technology-based communications and screen time may be beneficial and allow us to maintain and enhance our physical interactions with those around us, who may get to know us at a much deeper level of understanding. Finding a balance between technology use and users' emotional well-being and mental health is worth pursuing. There is no denying the benefits of our technological world, but the balance needs to be there so that we are emotionally and mentally fit enough to enjoy those benefits.

19.2 Benefits

19.2.1 Technology for Mainstreaming Assistive Device

According to the Assistive Technology Act of 2004, assistive technology (AT) is defined as "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of individuals with disabilities." As Connor et al. [10] noted, the use of the word "technology" in assistive technology has become blurry and may be used to represent any device, including high-tech (e.g., computer and mobile device) and low-tech (e.g., pencil grip and magnifying glasses) equipment. In particular, the advancement of mainstream technology has blurred the line between a device that is "assistive" and one that is not. For instance, with the accessibility functions embedded in mainstream smartphones (i.e., Apple iOS and Android), people with disabilities may be able to enjoy the convenience a cellphone offers as much as anyone without a disability. For example, the accessibility features on a smartphone have made applications more accessible for individuals with visual impairments [11]. As Rose et al. [12] aptly pointed out, AT and universal design are two sides of the same coin, with technology and disability at their core. The only major difference between the two is that assistive technology is traditionally developed just for people with disabilities. With the evolution and advancement of modern technology and the rise of disability rights and public awareness, the line separating the two has become thinner and thinner. This evolution retrospectively serves as a strong force that normalizes the use of assistive technology. This mainstreaming is particularly important, as the use of assistive technology has been considered stigma-provoking [13].


H. J. Kuo et al.

19.2.2 Technology for Education and Employment

AT has substantially changed the lives of people with disabilities. A plethora of evidence has shown the positive influence of AT on an individual's educational attainment and labor force participation [5, 14, 15]. In educational settings, Parette and Stoner [16] describe assistive technology as efficient in promoting children's (a) attending behaviors, (b) understanding of and compliance with rules of appropriate behavior, and (c) communication skills. For attending behaviors, when teachers leverage multimedia and technology-enhanced instructional designs such as Microsoft PowerPoint and Boardmaker, students tend to have a longer attention span. When teaching socially appropriate behaviors, technology-incorporated curricula such as visual aids and video modeling tend to make it easier for students with cognitive disabilities to understand what teachers are trying to convey. As for communication skills, the use of software applications such as instant messaging has diversified the methods of communication and thus greatly helps children with communication challenges. Similarly, Zilz and Pang [17] argued that technology can promote the inclusiveness of the learning environment. When implemented as worksite accommodations, AT also plays an important role in supporting people with disabilities in performing the essential functions and tasks of a job. Since the passage of the Americans with Disabilities Act [18], employment for people with disabilities and the use of accommodations have become increasingly important.
The legislation requires "an employer with 15 or more employees to provide reasonable accommodations for individuals with disabilities, unless it would cause undue hardship." While job accommodations can take many forms (e.g., workstation restructuring and flexible working hours), AT may also be utilized when needed (e.g., ergonomic chairs, task reminders, and computer screen readers). Langton and Ramseur [14] state that AT should be considered one of the most important strategies for successfully accommodating job tasks. In a study analyzing the U.S. national vocational rehabilitation database, Sprong et al. [19] reported that the use of AT significantly contributes to employment outcomes. In addition, in a survey study conducted in California, Yeager et al. [20] found that although the general employment rate for people with disabilities remains low, those who are currently working tend to make significant use of AT. While promising, the use of AT is not without concerns. Yeager et al. [20] noted that although more research is needed to understand why some individuals fail to request AT and job accommodations, they speculated that it may be due to concerns about AT costs. Similar concerns were reported by Inge et al. [21], whose focus group study found that participating employers raised numerous questions regarding the cost of AT devices. Addressing these cost concerns, Simpson et al. [22] found that the cost of AT devices is no greater than that of other work-related accommodations. The low cost and high effectiveness of AT are consistent with a number of studies [20, 21]. Specifically, Inge et al. [21] reported that among 100 AT devices used by the research participants, the average cost of a device was only $112.35. As such, AT could be the answer to the low employment rate for people with disabilities.

19 Current Trends in Technology and Wellness for People …

19.2.3 Technology for Service Delivery

While technology is powerful and effective when used for job accommodations, it has much more to offer individuals with disabilities. In fact, technology has also been used as a means of delivering rehabilitation services. For instance, as technology and the Internet became more accessible, distance counseling (telecounseling) became more popular as well. As Riemer-Reiss [23] described, one of the biggest challenges faced by vocational rehabilitation counselors is that a significant number of clients live in rural areas where accessing services may not be possible. This is especially alarming because people living in rural areas may also hold "minority within minority" status and may be even more in need of additional support. Distance counseling thus becomes a viable solution that breaks geographic constraints. As described by Teufel-Prida et al. [24], the demand for technology-assisted counseling has grown rapidly, and the advantages that technology offers make it appealing to the profession. For example, the use of email and telephone systems has helped practitioners manage large caseloads and expedite communication, so that clients do not have to wait in long queues to meet with case managers. This is particularly helpful when the conversation is short and serves a "check-in" purpose. In addition, email is superior to traditional telephone communication in that it is asynchronous, allowing practitioners to take their time organizing their responses to clients. Most importantly, email conversations are automatically documented and can be referenced in future planning. With video chat and video conferencing tools becoming more readily available, true telecounseling has become an option for clients as well.
Two-way video communication allows counselors to read clients' verbal communications and non-verbal behaviors, which is not possible with traditional telephone services. As described by Lannin et al. [25], online counseling can also provide an additional layer of anonymity for clients who have concerns about traditional face-to-face counseling services.

19.2.4 Technology for Social Interaction and Recreation

New and changing technology is built into nearly every part of daily living and has had a profound impact on the way people live, not only in education and employment but also in recreation and social participation, which are essential elements for promoting health and quality of life. Disabilities due to physical/sensory impairments (e.g., vision/hearing loss, spinal cord injury, amputation), cognitive/mental limitations (e.g., intellectual disability, dementia, schizophrenia), social communication deficits (e.g., autism spectrum disorder, social anxiety), and/or neurological conditions (e.g., brain injury, stroke, multiple sclerosis) can present barriers to an individual's social and community participation. One common challenge encountered by many people with disabilities is the inability to engage in leisure activities independently [26–28]. This apparent inability may be largely due to difficulties in reaching and operating the devices typically used to access leisure activities (e.g., televisions, computers, sports equipment, and musical instruments). Finding meaningful avenues for quality of life, recreation, and leisure activities is a key part of working toward the best outcomes for individuals with disabilities [29]. The advancement and proliferation of AT have been useful in eliminating barriers to social participation and assisting individuals with disabilities in accessing a variety of recreational and leisure activities (e.g., sports, play activities, arts) by simply adding cues or modifying readily available equipment to accommodate their various needs [30]. AT ranges from high-tech to low-tech devices, including but not limited to adapted/adjustable equipment, specially designed equipment, electronic aids, computer-facilitated activities, and online/virtual experiences. Examples of AT for recreation and leisure include adapted sporting equipment (e.g., switch-adapted fishing rods, adapted golf clubs and golf carts, adapted waterski/wakeboard equipment, adapted ski/snowboard equipment, pool lifts, adjustable-height basketball hoops, soccer balls with rattle pods), image-stabilization binoculars, adapted gardening tools, adapted board games, adapted playing cards and cardholders, switch-adapted digital cameras, switch-adapted toys, wheelchair-accessible tents, and specially designed musical instruments.
Besides adaptive equipment, the computer is such a versatile tool that its benefits apply across the spectrum of recreation and leisure activities. Computer and video games are popular, age-appropriate recreational choices that are often easily accessible to individuals with disabilities of different ages. A wide variety of off-the-shelf interactive computer software [31, 32], mobile technology [33], video models [34, 35], and virtual reality devices/programs [36, 37] are available that can be used to teach skills and/or provide social support interventions with well-documented benefits. For example, children with autism can develop meaningful relationships, fine motor skills, and critical thinking abilities by playing video games [38]. Digital gaming can also facilitate social connections for persons with disabilities [39]. Movement-sensitive robotic systems are also available that can guide motivating and meaningful physical exercise and provide ongoing supervision to people with mobility limitations [40], and promote social interactions for individuals with social communication difficulties [41–43]. Further, touchscreens and interactive whiteboards offer different modes of access and more physical involvement in computing. Much mainstream AT, such as the smartphones many people use daily, has been shown to have the potential to help students with disabilities increase their independence and leisure options [44–47]. Compared to older hand-held devices (e.g., PDAs, SGDs), touch-screen devices (e.g., smartphones, tablets) are more accessible and in accord with the principles of universal design [47]. They are portable, can be used in many environments, are relatively inexpensive, have much longer battery life than older devices, and support an extremely large number of installable applications [48, 49]. Moreover, because mainstream AT is increasingly difficult to distinguish from AT specifically designed to assist people with disabilities, its use is more inclusive and less stigmatizing. In terms of web-based technology, a recent study by the Pew Research Center [50] reported that between 2005 and 2015, the percentage of adults using social media skyrocketed from 7 to 65%, with usage rates among young adults aged 18 to 29 increasing from 12 to 90% in that period. The Internet and social media offer news, information, online learning, civic participation, health care, social networking, entertainment, and more. Creating meaningful relationships is often about sharing our lives with others, and technology allows us to do so through photos, videos, text, and music. The Internet and social networking sites can form powerful online communities that provide invaluable social connectedness and leisure pursuits. Individuals with disabilities can chat, share interests, play games, and stay in touch with their peers with and without disabilities on the web. Studies have shown that persons with disabilities, like those without, can gain many benefits from the Internet, including people with motor, speech, visual, or hearing impairments [51], physical disabilities [52], mental illness [53], and intellectual disabilities [54, 55]. Recreation activities can be carefully adapted to integrate digital media in order to facilitate greater inclusion and accommodate varying ability levels. The benefits of digital media for persons with disabilities include greater social interaction, connectedness, participation in mutual support groups, and access to information [51, 53, 55, 56].
A recent review found that digital media and social media use by people with intellectual disabilities has the potential to provide positive social and emotional experiences in the areas of friendship, development of social identity and self-esteem, and enjoyment [57]. Additional benefits include increased opportunities for education, creativity, learning, communication, and civic engagement [58]. Similarly, online support groups can help families of children with disabilities gain knowledge and decrease feelings of isolation [59]. In fact, engaging in recreation online has been shown to enhance social connections and perceived social support and to supplement offline leisure engagement, not only for youths but for older adults as well [60, 61]. It also contributes to self-preservation and serves as an opportunity for self-discovery and growth [62]. Further, AT developments greatly expand communication options for older adults with mobility limitations, with positive effects on well-being [63]. In turn, AT not only increases connectivity, strengthens social interactions, enhances social relations, and improves access to information, but also helps raise living standards, improves self-esteem, provides health benefits, and enhances inclusion and well-being [64]. Indeed, AT has contributed to the cohesion and persistence of communities [65] and has fostered a sense of empowerment for individuals with disabilities [62, 66]. In a society where people have become quite mobile, and family and friends are often geographically separated, technology is a convenient tool to reduce the "distance" and bring people "closer". Finally, virtual worlds allow people with disabilities to experience activities and assume other characters in ways not tied to their own limitations, providing good practice and valuable freedom [67].

19.3 Risks

19.3.1 Abandonment of Assistive Technology

The impact of assistive technologies can be seen in many aspects of people's lives, such as promoting educational attainment [68], increasing employment rates [19], and enhancing overall quality of life [69]. Because of these promising effects, various AT devices have been created to accommodate different functional limitations. Whereas AT can assist with the physical needs of individuals with mobility disabilities [70], it can also be effective in enhancing individuals' cognitive capacity [71]. For example, audio recorders, digital to-do lists, and integrated cellphone calendars help individuals manage complicated daily tasks. With the expansion of its applications, AT has become popular and has attracted much attention among rehabilitation practitioners [19]. In addition, as discussed earlier in this chapter, the advance of modern technology has allowed the mainstreaming of assistive equipment, which in turn lowers costs and increases acceptance [13]. Contradicting the traditional myth that AT devices impose a significant financial burden on both individuals with disabilities and employers, the mainstreaming of AT devices and applications has changed the world of accommodation such that, in most cases, AT is no longer an "undue hardship" [72]. While the evolution of AT has been encouraging and promising, it is not a magic wand; AT has its own challenges as well. For example, Scherer [73] divides the concept of AT into (a) AT devices and (b) AT services. AT devices comprise hardware (e.g., adjustable chairs and tables) and software (e.g., screen readers) that can be used to enhance the functioning of individuals with disabilities. AT services comprise procedures that ensure an appropriate match between the device and the person. In particular, services may include AT assessment, selection, customization, training, implementation, and ongoing support [10].
For AT to be implemented successfully, the service is just as important as, if not more important than, the device itself. In fact, Connor et al. [10] emphasized that AT services may not have kept pace with the evolution of AT devices. In a study conducted by Phillips and Zhao [74], 29.3% of AT users abandoned their devices shortly after implementation. A similar abandonment rate was reported by Cushman and Scherer [75]. The high abandonment rate is particularly alarming, as noted by Kuo [76], because the consequences include extra financial burden on individuals with disabilities and on the rehabilitation system, as well as diminished faith in AT solutions. When analyzing the factors behind abandonment, Phillips and Zhao [74] found that while some factors related to the device itself (i.e., changing needs, or the AT device no longer being helpful), the most salient reasons were associated with the AT service. For example, Phillips and Zhao identified the lack of consideration of the user's opinion in selecting AT devices as one of the most commonly mentioned reasons. In other words, whereas the function of the AT may have accurately addressed the person's physical needs, the user's psychological needs were not met, and these should be part of the equation. Phillips and Zhao concluded that AT services should be improved to include more consumer involvement. More than two decades have passed since Phillips and Zhao pointed out this challenge. Although the situation has improved, Sugawara et al. [77] argued that the abandonment issue remains (19.38%) and requires more attention to users' perceptions. The call for better AT services is not unique to Phillips and Zhao [74] and Sugawara et al. [77]. In fact, Connor et al. [10] also argued that AT services should be emphasized in pre-service training for rehabilitation practitioners. Recognizing that AT service is complicated and that different rehabilitation professions are equipped to address different parts of it (e.g., the rehabilitation engineer designs and develops the AT device, the medical practitioner examines medical needs, and the rehabilitation counselor and psychologist address consumers' psychological needs), Connor et al. [10] proposed a systematic way to coordinate services among experts so as to deliver comprehensive support addressing the consumer's biopsychosocial needs. Specifically, the flow of Connor et al.'s [10] multidisciplinary team approach to AT service can be seen in Fig. 19.1. Whereas the details of the model are beyond the scope of the current chapter, its main concept is to invite rehabilitation practitioners to think beyond the device and focus more attention on AT service delivery and multidisciplinary collaboration.

19.3.2 Technology as Ethical Concerns

As previously described, technology has been used to expand the means of service delivery. As Centore and Milacci [78] described, the history of telephone counseling can be traced back as early as 1953, when the first telephone suicide prevention program was offered in London, UK. Since then, technology has enhanced counseling service delivery in many ways. For example, database-driven case management systems have been useful for tracking clients' progress, online communication systems have made conversations between client and counselor more effective, and computerized assessment has made scoring and interpretation more efficient. Although there are many potential benefits associated with distance counseling, there are concerns as well. For example, while the Internet offers greater flexibility and efficiency in the communication between client and counselor, it is also vulnerable to hacking and Internet instability [79]. In fact, confidentiality issues and data breaches have been major concerns in distance counseling [80]. Additionally, Centore and Milacci [78] reported

Fig. 19.1 Biopsychosocial model of AT/AE team collaboration. Depicts the iterative, collaborative process addressing the medical and psychosocial needs of clients within a temporal framework. MD = medical doctor, OT = occupational therapy, PT = physical therapy, RN = registered nurse, RC = rehabilitation counselor, RPsy = rehabilitation psychologist, SLP = speech language pathologist




that counselors, in general, do not feel prepared to fulfill their ethical duties when providing distance counseling. This could be especially troublesome considering that distance counseling will inevitably grow in popularity and that clients in geographically disadvantaged areas, who may need this type of service most, will be served by counselors who do not feel ready to provide it. If current training for counselors does not improve, they will be forced to deliver a service for which they are underprepared, raising yet another type of ethical concern. Fortunately, increasing efforts have been made to ensure that counselors are aware of the potential threats of distance counseling [78]. Most professional counseling organizations, such as the American Psychological Association (APA), the American Counseling Association (ACA), the National Board for Certified Counselors (NBCC), and the Commission on Rehabilitation Counselor Certification (CRCC), have included distance counseling in their ethical practice guidelines. This speaks volumes: the use of distance counseling, and the ethical concerns associated with it, should not be overlooked.

19.3.3 Technology as Social Disincentive

It is undeniable that technology and the Internet are among the most important achievements of modern society. They have revolutionized daily life, eliminating distances and offering immediate, easy access to information and communication. With the continuous development of new AT and the popularity of the Internet, one would think that these tools could be used to meet people all over the globe, understand other cultures, effectively communicate and connect with others, and establish and maintain social relationships, as well as help people become more socially adept through tools such as social media, e-mail, instant messaging, video chat, discussion boards, online gaming, and websites. Positive aspects of technology include being able to speak more freely online (finding one's voice/community), learning and knowledge gains, communication and engagement with others, and creative exploration. Technology has had a profound impact on social participation; however, simply sharing common interests and pursuits with people through technology does not necessarily have a positive impact on social skills and relationship development [81]. AT developments are fundamentally changing the ways in which we experience social relations, and may impact health and well-being accordingly. The Pew Research Center [82] revealed that 88% of US adults, and 99% of those aged 18–29, use the Internet. As of 2017, approximately 95% of American adults have a cell phone and 77% a smartphone. On a typical day, American teenagers (13- to 17-year-olds) spend an average of six and a half hours, and tweens an average of four and a half hours, on screen media. While social media and social networking sites (e.g., Facebook, Twitter, Instagram, LinkedIn, Tumblr, YouTube, Snapchat) are powerful tools for connecting with people and disseminating information in virtual communities, the behaviors, experiences, and events encountered in social media use may be positive or negative, healthy or unhealthy, and normal or problematic. In fact, many people maintain an abundance of relationships through technology, but sometimes the quantity of these associations leaves people feeling qualitatively empty. Weiser [83] studied the reasons and goals for using the Internet among adolescents and found two main functions, labeled socio-affective regulation (i.e., a social or affiliative orientation toward Internet use) and goods-and-information acquisition (i.e., a utilitarian or practical orientation toward Internet use). The results further showed that psychological well-being (i.e., loneliness, depression, and life satisfaction) was negatively related to Internet use driven by socio-affective regulation (e.g., meeting new people or looking for romance) but positively related to Internet use driven by goods-and-information acquisition (e.g., searching for goods or staying well informed). Thus, despite the potential benefits offered by the Internet and social media, concerns about excessive use have arisen, such as changes in mood, promotion of sedentary lifestyles, withdrawal from other activities, and severely impaired sleep patterns [84]. Problematic social media behaviors range from disinhibition and the posting of ill-advised photos to more extreme examples such as cyberbullying, Internet pornography, online grooming through social networks, cybersuicide, Internet addiction and social isolation, cyber racism, and other destructive or addictive behaviors [85–87]. Disinhibition of behaviors may be encouraged by the ability to post anonymously [88]. van den Eijnden et al.
[89] explained that frequent online communication may displace valuable everyday social interaction with family and friends, which has negative implications for users' psychosocial well-being. As a result, online communication would particularly relate to depression when it involves weak-tie relationships (e.g., strangers and acquaintances) as opposed to strong-tie relationships (e.g., close friends and family members). Other studies (e.g., [90, 91]) have also observed that more time spent on social media is associated with an increased risk of fatigue, stress, loneliness, depression, and social isolation. While AT is beneficial in meeting the complex needs of individuals with disabilities, efforts to ensure safe Internet and social media use are critical, as unsafe use could pose unintended risks to people with disabilities and thereby hinder progress toward their social inclusion. Some have asserted that, due partly to physical distance and perceived anonymity, online communication may more easily evoke verbal aggression than face-to-face communication, and this verbal aggression may particularly target socially vulnerable populations. Others have raised concerns about how some digital AT advances could affect people's well-being by causing them to be distracted, overly stressed, and increasingly isolated [92]. More recent studies have raised concerns about social phenomena accompanying the widespread use of the Internet and social media. For instance, a recent systematic review by Caton and Chapman [57] found that safeguarding concerns, cyber-language and cyber-etiquette, literacy and communication challenges, and problems with accessibility, such as lack of appropriate equipment, were preventing people with intellectual disability from effectively using social media.



In addition, Holmes and O'Loughlin [93] revealed that individuals with intellectual disability had encountered cyberbullying, including unwanted messages, personal remarks about their appearance and activities, and unwanted sharing of private and personal information online, as well as financial and sexual threats arising from cyber-language and cyber-etiquette issues. It is apparent that AT has the potential to enhance or harm a person's social skills and social life, especially for those with disabilities. Perhaps the greatest danger in the web-based/mobile technology revolution, however, is that excitement over new AT will result in an isolated focus on the AT alone, to the neglect of the true end goal: communication and relationships. While there are many considerations when it comes to using AT in social contexts among people with disabilities, the key is to critically analyze how AT affects individuals socially. Does AT help the individual build positive, meaningful relationships, or does it hinder this process? Is he/she better able to communicate, listen, and share because of AT? Does he/she use AT to improve existing relationships and build new ones? These are the critical questions regarding AT and social development. It is vital that we engage critically in discussions of how AT is best implicated in social interaction and recreation, and remain attentive to inappropriate use of AT among individuals with disabilities. In sum, AT can have a major impact on an individual's well-being and satisfaction. While many believe that online communication can overcome the barriers encountered by people with disabilities and improve social participation, the full potential of AT to enhance the social inclusion and well-being of people with disabilities is yet to be realized, despite years of evidence supporting that potential.
The overall quality of life of people with disabilities and their social inclusion goals should not be compromised by the use of AT. Service providers and users should therefore remember that AT is simply a tool; there is no inherent value in the procurement or operation of the tool in and of itself, only in its power to facilitate effective communication and increase participation in society. Hence, a complex array of factors must be taken into account if meaningful access to new AT is to be provided, including physical, digital, and social resources and relationships; content and language; literacy and education; and community and institutional structures, as well as ensuring that content is applicable to the lives of individuals with disabilities and their communities [94].

19.4 Conclusion

Technology holds tremendous potential for enhancing people's quality of life. Its advancement has made the impossible possible for many. In particular, assistive technology has made education and employment more accessible to persons with disabilities. However, whereas its positive impact is apparent, technology has its own challenges as well. These challenges may not directly involve the functionality of the technology: for example, users' preferences, and potential threats such as breaches of confidentiality and social isolation, are not necessarily considered by device designers. As such, while celebrating technological advancement, it is also critical to think beyond the devices and their functions. After all, it is the users' experience that matters most.


Chapter 20

Electrorganic Technology for Inclusive Well-being in Music Therapy

Anthony Lewis Brooks and Carl Boland

Abstract This chapter presents a contemporary and original musical instrument proposed for use in music therapy: ATV Corporation's electrorganic aFrame. The chapter reports on initial aFrame intervention testing by music therapists as part of the second phase of research. This follows over six months of proof-of-concept trials questioning reactions across a range of contemporary musical instruments and their potential use in music therapy, in which the aFrame was the device preferred by testers. As the name suggests, the aFrame is modeled on a traditional frame drum and appears to be aimed at skilled hand percussionists, with the appeal of applying natural playing techniques to an electronic instrument. What makes the aFrame unique is its combination of sensor array and electronic sound module, which generates a richly expressive palette of sounds beyond the scope of other electronic percussion instruments. The posited hypotheses concern the applied potentials of the aFrame in the field of music therapy. The chapter technically elaborates on the aFrame alongside an explanation of the electrorganic concept behind the realisation of the instrument. Initial critique and reflections from secondary tests are informed by two Danish music therapists.

Keywords Electrorganic · Music · Therapy · Creativity · Differently-abled

20.1 Introduction and Background

The authors are both musicians of many years' experience who are, or have been, university educators and researchers with periodic vocations within the digital media and creative industries. They are both aware of advances across the music industries over recent years.

A. L. Brooks (B), Aalborg University, Aalborg, Denmark; e-mail: [email protected]
C. Boland, ATV Corporation, Hamamatsu, Japan; e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_20


Within these profiles, a common goal is to realize societal impact through music toward benefitting the differently-abled. In pursuit of such a goal, a central aim of the work presented herein is to explore opportunities for supporting music therapists, principally to find ways to facilitate optimisations of their practice interventions. One way to approach this is by engaging with therapists directly and encouraging them to consider supplementing their arsenals of traditional instrumentation with contemporary instruments. Such instruments—most of which are electronic to some degree—were hypothesised at the start of this research and linked to particular affordance criteria; specifically, instruments that hold the potential of increased accessibility, inclusion, and opportunities for offering wider creative self-expression, alongside raising the user's sense of self-agency, self-efficacy, self-achievement, and success.

Following preliminary testing by music therapists (and others) over several hands-on playing trials, a preference was found in favour of an electrorganic instrument when compared to a selection of purely digital devices, e.g. MIDI instruments (MIDI, Musical Instrument Digital Interface, is a musical instrument signal protocol; see https://www.midi.org). The electrorganic instrument in question is the 'aFrame', and this chapter elaborates on the instrument and initial therapeutic responses following testing adoption.
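The distinction above between an electrorganic instrument and "purely digital" MIDI instruments can be made concrete: in MIDI, every playing gesture is quantised into short discrete byte messages, with the nuance of a strike reduced to a single 7-bit velocity value. The sketch below is an illustrative aid (not part of the chapter's research); it builds a standard three-byte Note On message as defined by the MIDI 1.0 specification.

```python
def note_on(channel, note, velocity):
    """Build a standard 3-byte MIDI Note On message.

    channel: 0-15, note: 0-127, velocity: 0-127.
    Status byte is 0x90 OR'ed with the channel number.
    """
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("out-of-range MIDI value")
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 1 at moderate velocity.
# Velocity is the only expressive dimension of the strike itself
# (128 discrete steps), one reason nuance is coarser than on an
# acoustic instrument.
msg = note_on(0, 60, 100)
```

The point is architectural rather than evaluative: a MIDI interface transmits discrete events to a separate sound module, whereas the aFrame, as described later in this chapter, processes the continuous acoustic signal of the playing surface itself.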

20.2 Music and Music Therapy

MacDonald et al. [22] posited how music has been implicated as a therapeutic agent in vast swathes of contemporary research studies, thereby reflecting that it is only recently that researchers have begun to explore and understand the positive effects that music can have on our well-being, across a range of cultures and musical genres. Contributions in the cited edited publication (ibid.) make clear a relationship between music, health, and well-being, whilst questioning scientific evidence and the lack of robust theoretical frameworks alongside empirical observations and methodological issues concerning the effects of musical interventions on health-related processes. Bonde's [1] earlier review of the health and music literature relatedly informs a similar narrative.

Small [24] referred to 'music' as a verb rather than as a noun. He considered music an action relating to the activity, and not a self-standing entity or object. In defining the term 'Musicking', Small referred 'to music' as a performative action: of playing, relating to an act of dancing, or otherwise passively listening, as such affording a means for participants to explore, assert, and acclaim their identities in whatever preferred way (ibid.). Small further posited that the challenge for music educators was not to churn out more professional musicians, but instead to "provide that kind of social context for informal as well as formal musical interaction that leads to real development and to the musicalizing of the society as a whole" (ibid., p. 208); see also Cohen [5].


Stige [25] informs that health musicking was originally developed in a discussion of music therapy theory (p. 183), and accordingly expounded on how the discipline of music therapy could be defined as the study and learning of relationships between music and health [25, p. 198]. From an anthropological perspective, music therapy has been defined as a field of study in which integrative perspectives on health-related music practices could be developed [19]. In this context, the research herein considers how accessible and inclusive development of music therapy can enable therapists to offer clients increased therapeutic potentials and possibilities resulting from their own creative expression and rewarding experiences. This occurs via therapists, and the discipline in general, being more open to exploring new interfaces for musical expression aligned to the development and musicalizing of society as a whole.

In summing up this brief introduction, suffice it to state that this chapter does not attempt to establish formal scientific evidence, a theoretical framework, or even empirical observations and a method to question the effects of musical interventions on health-related processes associated with use of the aFrame instrument. It instead focuses upon introducing the background and technical aspects of the aFrame, as well as sharing early-tester music therapists' initial field responses associated with a 'Musicking'-related framework [1, 5, 22, 24, 25].

20.3 Technology Empowered Musical Expression in Therapeutic Settings

It has been observed in therapeutic settings how novel forms of creative and musical expression empowered by accessible technologies can offer a more fun, playful, sociable, and enjoyable—and thus entertaining—experience when supplementing traditional intervention strategies in (re)habilitation across a wide range of end-user profiles. Such interactions have also been found to be beneficial to clients' physical and mental outcomes as targeted by therapists (e.g. [2, 6–12, 14, 15, 22]). Such motivated usage of technology has been reported to result in increased engagement with the facilitator/therapist in training activities, thereby leading to improved treatment-program compliance. These benefits are aligned with what has been termed 'Aesthetic Resonance' (e.g. see [2, 4, 6–11, 14, 15, 17, 18]). Aesthetic resonance is elaborated and defined later in this chapter (see also [3]), as it is directly associated with the aFrame musical instrument, which is the main focus of this contribution to the field of music therapy.

Such technology-enhanced interactions can be tailored, adaptive, and selectable to a client's profile of preference, need, and therapeutic requirement, leading toward optimized patient experiences across the sessions of a treatment program [2]. Brooks (e.g. [2]) details in the holistic research associated with this study (SoundScapes) how sensor-based technical devices that control auditory stimulus have been invented and self-created (e.g. in line with [14, 15]), whilst other devices in the field

have been adapted for implementation from commercial apparatus (e.g. in line with [6, 9]). Notably, Gehlhaar, Ellis, and this chapter's authors are all musicians. Both approaches to apparatus exploration in this field have resulted in positive outcomes, evaluated by professional therapists as offering benefit for clients with dysfunction across diagnosis, age, and targeted therapeutic outcome (see also third-party reports directly associated with the research herein, e.g. [13, 16, 21]). A contributing factor is therefore posited to be knowledge of technologies aligned with comprehension of human activity associated with music-making. Additionally, the emotional and empathetic understanding associated with music-making is considered to contribute to results questioning human performance where end-users are differently-abled/handicapped.

20.4 Alternative Musical Instruments and the aFrame in Music Therapy

Contemporary literature on electroacoustic music and alternative 'instruments' is abundant, with the topic being central to such events as the International Conference on New Interfaces for Musical Expression (NIME), whose annual conference proceedings are freely available online (https://www.nime.org) [23]. Within such archives reside numerous inventions of alternative means for a human to perform music-making.

The first author's concept of using the aFrame as an alternative musical instrument in therapeutic situations was first presented at the International Conference on Arts and Technology, Interactivity and Game Creation (ArtsIT; see https://artsit.org), hosted in Braga, Portugal, in 2018. However, the presentation was unable to showcase a live demonstration of the actual physical instrument, which would have been ideal for enabling audience members to have hands-on experiences of the device. Throughout 2019, numerous trials were nevertheless undertaken in Denmark with physical units to test the idea of the aFrame in this field. Testing was designed so that music therapists could compare selected contemporary MIDI-based digital musical instruments with the aFrame.

Notable were the positive evaluations received from many of the over six hundred international music therapy delegates, from across an array of countries, who had hands-on experience of the aFrame at exhibition demonstrations given at the European Music Therapy Conference (EMTC 2019; https://www.musictherapy.aau.dk/emtc19/). The event was hosted at the prestigious Musikkens Hus (The House of Music; https://en.musikkenshus.dk/musikkens-hus/profil/om-musikkens-hus/) in the centre of Aalborg, Denmark's fourth-largest city. At this event delegates were invited to attend a large exhibition stand set up outside the main auditoriums, giving them access to test various contemporary musical instruments—mostly digital/MIDI (see [3]). As anticipated, the aFrame (a non-MIDI device) was the preferred device in delegate testing.

Interviews with those music therapy delegates who participated revealed that their preferences for the aFrame centred on its playability, which was familiar and similar to that of a traditional frame drum. A key appeal of the instrument was that it offered a large spectrum of interesting musical sounds that could be easily selected, explored, and modified by the players and, if desired, saved as a user preset sound for later recall. Other trials involving music therapist testing (also professional musicians and teachers) yielded similarly positive responses. At the end of 2019, two professional music therapists were contacted to initiate a loan of instruments for testing in their practices, to inform feasibility from field use. Critique and reflections from these therapists are elaborated on near the end of this chapter.

20.5 Musicality and Nuances of Expression Typically, musicality in playing an acoustic instrument has dependencies on an underlying accomplished technique; a core set of rigorously practiced and mastered abilities that enable a virtuosic musician to control nuances of expression that connect to intended communications and meanings during the performance of a musical work. Music Therapists are typically accomplished musicians who apply their skills within therapeutic situations to benefit others’ well-being. A background context to this chapter’s research asserts that Music Therapy education in Denmark is primarily focused upon acoustic/electric instruments, and/or vocals. Interviews with Danish music therapists strong suggest that digital MIDIbased instruments are rarely used in practice, although some therapists do use them on occasion. This suggestion could align with their trained musician sensibilities and preferences for fine, low-tolerance nuance of ‘real-time’ musical expression, that being in contrast to the relatively uncertain expressive quality and artificiality of MIDI-based instruments, noting that response latency and jitter between the interface and a sound generating module are negatively evident when compared to an acoustic instrument (see [3]). It was hypothesized early in the research that by demoing the aFrame instrument at The European Music Therapy Congress, thus exhibiting its ‘acoustic/electric’ (direct and immediate) related playability properties alongside its ‘sonic capability differences’, in comparison to other instruments that were MIDI-based, and also by allowing hands-on experience for attending therapists, potential future disruption and adoption of the aFrame could be indicated within the discipline. Thus, given the relative similarity to the attributes of an acoustic percussion framedrum instrument, as typically used in the field, the electrorganic aFrame was further


A. L. Brooks and C. Boland

hypothesized as a bridging mechanism that would appease those sceptical therapists who have a delimited, traditional perspective and approach. Further,⁶ and importantly, acoustic instruments typically appease players with a specific natural timbral feedback—a resonance (e.g. distinct character, reverberation, colouring, quality… ‘play feel’) that is difficult to articulate precisely in written words. The aFrame engineers have utilised such timbral qualities and mixed them with equalisation properties associated with performance attributes, subsequently processing the expressive acoustic sound-source input to achieve a simply wonderful (electro-)organic playing experience, one which offers what has been referred to as ‘Aesthetic Resonance’ (see elsewhere in this chapter). Aesthetic Resonance relates to human qualities posited in the therapeutic field whereby a sense of self-agency and self-efficacy, as afforded by a human performance (playing) experience, is achieved. In line with research focused upon Aesthetic Resonance, this contribution is considered a work in progress toward a long-term research investigation by professional music therapists in Denmark and, ultimately, internationally. As far as the authors are aware, this is the first time this particular electrorganic instrument, as illustrated in the following sections, has been proposed and tested in a music therapy or therapeutic context. Technical details of the instrument and its playability attributes are expanded on in the following sections.

20.6 ATV Electrorganic aFrame

Manufactured by ATV Corporation, the ‘electrorganic aFrame’ was the last electronic musical instrument developed under the guidance of the late Ikutaro Kakehashi⁷—a music industry pioneer synonymous with the introduction of the MIDI standard. The design ethos that drove the aFrame’s development aimed at the creation of an electronic percussion instrument that would be uniquely expressive in response to the full range of gestures used by the performer—including palm muting, pressure-articulated pitch bends, and frictional (rubbing) techniques. The technical realization of this bold idea immediately confronted the limitations of the PCM sample-triggering technologies used in most commercially available electronic percussion instruments. Consequently, the design team’s innovative solution focused on developing a sophisticated new DSP approach to processing signals from two contact microphones, positioned at the centre of the playing surface and in contact with the bamboo frame (see Fig. 20.1).

⁶ The first author was a professional contra-bassist, thus versed in acoustic instrument playing experiences.
⁷ Also the founder of the Roland Corporation and ATV Corporation.

20 Electrorganic Technology for Inclusive Well-being in Music Therapy


Fig. 20.1 (Left) The aFrame’s two contact microphone locations looking from the front—Bamboo wooden frame supports the polycarbonate playing surface (grey)—(Right) rear view showing electronic sound module location (images with permission of ATV Corporation)

This signal processing approach, later termed Adaptive Timbre Technology (ATT), is further supplemented with expressive control input from a pressure sensor positioned behind the textured polycarbonate playing surface.

20.7 Adaptive Timbre Technology

The aFrame’s sound design framework is built on the structural concept of a Tone—a signal-processing patch with separate instrument and effect components. Within an aFrame project, instruments and effects can be freely mixed to create new tones. An instrument is made up of four independently programmable timbre layers and generative processes, termed Main, Sub, Extra and Dry. Of these, the Main and Sub timbre layers are the principal applications of Adaptive Timbre EQ Technology. The Adaptive Timbre EQ works by processing the mixed contact-microphone signals through a series of parallel band-pass filters. The end user may select up to 32 filters (overtones) for input-signal processing. The centre frequency of each filter is determined by a selected overtone model—a function that expresses the mathematical relationships of harmonic and inharmonic partials in a sound spectrum. The generative product of this parallel filtering is a new spectrum of band-limited signals that preserves certain frequency content of the input (contact-microphone) signals. The combined filtered signals that form the processed spectrum are then pitch-shifted and tuned to a fundamental frequency in order to create usable musical notes. It is also possible to generate note sequences in certain musical keys and scales by varying the pressure applied to the playing surface while tapping on it. This form of pressure-controlled musical note generation within a key and scale is used in many of the aFrame’s multi-layered tones.
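The parallel-filtering idea behind the Adaptive Timbre EQ can be sketched in a few lines of code. The following is an illustrative reconstruction only, not ATV’s proprietary DSP: it passes one input signal through a bank of band-pass filters whose centre frequencies follow a simple harmonic overtone model and sums the band-limited outputs. The function names, the biquad filter design, and the Q value are the authors of this sketch’s own assumptions.

```python
import math

def bandpass_biquad(x, fc, q, fs):
    """Apply one biquad band-pass filter (0 dB peak gain) to the signal x."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha          # band-pass numerator
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:                              # direct-form I recursion
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

def overtone_filter_bank(x, f0, n_filters, fs, q=30.0):
    """Sum of parallel band-pass filters centred on a harmonic overtone model.

    Centre frequencies are integer multiples of f0 (a purely harmonic model);
    an inharmonic model would substitute a different rule for the centres.
    """
    out = [0.0] * len(x)
    for k in range(1, n_filters + 1):
        fc = k * f0
        if fc >= fs / 2:                      # skip partials above Nyquist
            break
        for i, v in enumerate(bandpass_biquad(x, fc, q, fs)):
            out[i] += v
    return out
```

Feeding an impulse (a ‘tap’) through such a bank yields a signal whose energy clusters around the selected partials, which the real instrument then pitch-shifts and tunes to a fundamental.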


The remaining two timbre layers that make up a tone’s instrument component combine a simple two-DCO percussion synthesizer patch based on X-FM synthesis (Extra layer) with the ‘dry’ signals picked up by the contact microphones (Dry layer). All four of the timbre layers that comprise an instrument tone can be flexibly mixed, panned, and processed through a selected effect. Many of the preset aFrame tones use some type of spatial effect (Delay + Reverb) that can also be expressively controlled by varying pressure on the playing surface. The aFrame also implements a binaural 3D spatialisation effect (Space Z, Space R) to create more immersive soundscapes.

The aFrame’s electronic sound module is fixed on the rear of the instrument between two bamboo supports (see Fig. 20.1, right image). Connections on the unit include a headphone output (stereo mini-jack); 2 × line-out 1/4″ mono jacks; a DC 5-V power input/AC adapter (supplied); a USB micro type B connector (battery and computer connectivity); and a MicroSD card slot for storing aFrame project and tone data. The aFrame also has two onboard memory slots for storing project data. An aFrame project organizes the structure of tones (instrument + effect combinations) in groups (A, B, C, D, A′, B′, C′, D′). A project can utilize up to 80 unique instruments and effects that can be freely combined. Each group can store up to a maximum of 40 tones, although 10 is typical for most projects. The weight of the unit is 1.6 kg (3 lb 8.5 oz) and the dimensions are 380 (high) × 380 (wide) × 44 (deep) millimetres, i.e. 15 × 15 × 1.75 in. To support mobility, a UNC camera-adapter (female) mount is positioned in the centre of the unit’s back panel (see Fig. 20.1, right image; clearly marked in Fig. 20.2), allowing a strap to be attached so the unit can be worn around the player’s neck.
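The project/group/tone organisation described above can be expressed as a small data model. This is a sketch for illustration, not ATV’s actual data format: the class names, the interpretation of the 80-item limit as a per-project cap on unique instruments and effects, and the validation logic are assumptions.

```python
from dataclasses import dataclass, field

GROUP_NAMES = ["A", "B", "C", "D", "A'", "B'", "C'", "D'"]
MAX_TONES_PER_GROUP = 40          # documented per-group cap
MAX_INSTRUMENTS_AND_EFFECTS = 80  # assumed: cap on unique items per project

@dataclass
class Tone:
    """A tone pairs one instrument with one effect."""
    instrument: str
    effect: str

@dataclass
class Project:
    """Organises tones into eight named groups, enforcing the documented caps."""
    groups: dict = field(default_factory=lambda: {g: [] for g in GROUP_NAMES})

    def add_tone(self, group, tone):
        if group not in self.groups:
            raise KeyError(f"unknown group {group!r}")
        if len(self.groups[group]) >= MAX_TONES_PER_GROUP:
            raise ValueError(f"group {group} is full ({MAX_TONES_PER_GROUP} tones)")
        # Count unique instruments and effects already used across the project.
        used = {t.instrument for ts in self.groups.values() for t in ts}
        used |= {t.effect for ts in self.groups.values() for t in ts}
        if len(used | {tone.instrument, tone.effect}) > MAX_INSTRUMENTS_AND_EFFECTS:
            raise ValueError("project exceeds 80 unique instruments/effects")
        self.groups[group].append(tone)
```

A preset such as the ‘Harmo Drum’ shown in Fig. 20.2 would, in this sketch, be one `Tone` stored at position 1 of group A.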
Bundles are also retailed with the aFrame, including a padded carry case and a velcro-attachable battery pack that eliminates the need for a power cable in performance. The aFrame back panel has a number of readily accessible sound controls, including sensitivity knobs for the ‘edge’ and ‘centre’ contact-microphone sensors, i.e. piezo microphones. The back panel is shown in more detail in Fig. 20.2. An optional footswitch, connected to the aFrame via USB, can be used to switch sequentially between groups and between tones within groups. This useful extension avoids the need to pause during a performance in order to switch tones using the group navigation buttons positioned on the back panel (see Fig. 20.2). This is considered an essential addition in the context of use with the differently abled (as in music therapy), especially those end-users with limited ability to physically turn the aFrame over in order to access the controls that change tones. In such a context, i.e. working with those with limited strength/mobility/dexterity, the empowered ability to self-change/control what is played is important from an efficacy perspective. Even with a footswitch, it is speculated that it may be necessary to enable another form of change control for those end-users without the limb strength to press the buttons.
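The footswitch’s sequential navigation can likewise be sketched as a simple selector. This is speculative behaviour modelling, not ATV firmware: in this sketch a single press advances to the next tone and wraps into the next group when the current group is exhausted.

```python
class ToneSelector:
    """Steps sequentially through tones, wrapping into the next group,
    mimicking footswitch navigation without the back-panel buttons."""

    def __init__(self, groups):
        # groups: ordered list of (group_name, [tone_name, ...]) pairs
        self.groups = [g for g in groups if g[1]]  # skip empty groups
        self.g = 0  # current group index
        self.t = 0  # current tone index within the group

    def current(self):
        name, tones = self.groups[self.g]
        return name, tones[self.t]

    def press(self):
        """One footswitch press: advance to the next tone, wrapping groups."""
        _, tones = self.groups[self.g]
        self.t += 1
        if self.t >= len(tones):
            self.t = 0
            self.g = (self.g + 1) % len(self.groups)
        return self.current()
```

Mapped to an adaptive switch requiring only light pressure, the same cycling logic would give a client single-action control over tone changes.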


Fig. 20.2 aFrame back panel—electronic sound module with display showing instrument/sound name as Harmo Drum, group A, tone number-01, and number of tones in group/10 (with permission ATV Corporation) (see Fig. 20.1 for position within frame)

In use with the optional battery, the footswitch requires an additional cable so that the battery feeds the footswitch (USB to power cable) and the footswitch feeds the aFrame (USB to USB).


20.8 The Electrorganic aFrame in Use

The design of the electrorganic ATV aFrame reflects the ‘Artware’ philosophy of Ikutaro Kakehashi [20]. This design philosophy directed an exceptional group of engineers working on the project, including Ikuo Kakehashi—a professional percussionist and sound designer for the aFrame. Artware refers to the infusion of human and artistic sensibilities: it is only when Artware is combined with hardware and software that machines become true musical instruments [20]. More background information about the development of the aFrame can be accessed online at https://aframe.jp/story/. The instrument’s development benefitted greatly from the input of select musicians, and since its release it has been embraced by a number of master percussionists, as illustrated by the variety of published videos online that demonstrate different setups and methods of performing (see aFrame #001 https://youtu.be/T2iyF1UV2f4 “ATV aFrame”). Such methods also include examples of using additional external pedals to process sounds in real time, as well as other improvisational strategies, such as performing with self-created layers of multiple loops. The unit can be played via strikes to the bamboo frame or to the textured playing surface (drumhead). ATV recommends that sticks are not used, but the instrument’s sensitive contact microphones ensure that the sounds of percussive gestures made by the hands, fingers, and nails are captured in fine detail for further processing. Beyond what many would use as percussion tools, those with a more experimental bent may explore a wide selection of methods to initiate textured input sounds, including small motor-driven devices, inducers, soft drum brushes, woollen mallets, etc.
The aFrame provides access to a multitude of preset tones that the user can play and manipulate, using a variety of gestural means to articulate the sounds (dynamic playing at the edge and centre, applying pressure to the surface, muting, etc.). Each tone is programmed to respond in a unique way to the player’s input, so that it represents an entirely new sound world to explore. This explorative experience underpins the authors’ hypothesis of usefulness within therapeutic contexts where social performing, fun, and enjoyment are typically paramount. Furthermore, because the aFrame is equipped with a headphone output, it was posited that use by/with patients in hospital beds has great potential in music therapy, because disturbance to others would be minimized. Alternatively, the collective use of the aFrame in drum circles, to explore communal melodic, harmonic, and rhythmic percussion playing, has been proposed for trials across kindergartens/pre-schools, high schools, universities, care homes for the elderly, establishments for the differently-abled, and many other contexts. Whilst these differing contexts are speculated to be receptive to this electrorganic instrument, it is within the domain of music therapy that the authors posit the aFrame’s potential to have the most impact, because this context provides a suitably adaptable means to safely and optimally position the device for accessibility and inclusion for all.

20.9 European Music Therapy Conference (EMTC), Aalborg, Denmark 2019⁸ (See Brooks [3])

In line with the hypothesis that music therapists could benefit from the use of the aFrame electrorganic instrument in their practice, two units were showcased at the European Music Therapy Conference (EMTC) 2019, hosted at the 20,000-square-metre Musikkens Hus⁹ Aalborg (The House of Music), an architectural gem overlooking the Limfjord (the body of water that cuts through Jutland) and the musical gathering point for Northern Denmark. The first author established an exhibition for the EMTC 2019 event as a collaboration between two departments of Aalborg University, namely the departments of (1) Learning and Philosophy, and (2) Architecture and Design: Media Technology (CREATE). Alongside this, a separate presentation of the device was given as a session in the programme on ‘Aesthetic Resonance’ (see elsewhere in this chapter), which aligned with the theme of the event, ‘Fields of Resonance’. The collaboration (between university departments 1 and 2) operates as an outreach project under the direction of Xlab at Aalborg University. Xlab is an experience, experiment, and exploration laboratory located at the university’s main campus outside the city. Xlab focuses on researching professional development aligned to children’s creative engagement with technologies. The Xlab complex runs in-house workshops for teachers and instructors across education levels, with academic research staff in attendance. Children (and teachers) typically attend Xlab to engage in workshops aimed at facilitating testing, creativity, and play with the latest interactive technologies. Activities, led by team members, cover diverse areas such as robotics programming, claymation, music performance and composition, Virtual Reality, and more.
The Xlab exhibition¹⁰ at EMTC 2019 aimed to present attendees with several contemporary alternative instruments (many MIDI-based¹¹) for informal testing, first exposure, and evaluation. A special focus, however, was placed on showcasing the aFrame units, given the Xlab team’s belief in the instrument’s potential applications in music therapy contexts, and because they are non-MIDI devices (seemingly a preference among Danish music therapists). The exhibition was designed to allow hands-on experiences and comparisons, and to promote debate and discussion followed by interviews.

A further purpose of the exhibition was to challenge the preconceptions of organizers and attendees who typically delimit their practice to a preference for working with traditional instruments (according to interviews). This uncertainty and resistance towards technology found among certain therapists is speculated to arise from limited access to new technology in their professional domains, coupled with a lack of suitable training and technical support. The exhibition aimed to offer such access and to help sceptical attendees overcome their technophobic reticence towards alternative instruments. The exhibition exchanges resulted in keen interest among attending music therapists in acquiring further information, which was followed up by mailing out further details on the aFrame and invitations for further contact.

Further public exhibitions of the aFrame were undertaken at the Danish Science Festival in October where, for its three-day duration, the Xlab team operated a large stand in the foyer of the Aalborg University city campus, with academic staff once again promoting hands-on testing and collecting informal public feedback and interviews. Attendees were notably across a wider age range and occupationally more diverse, beyond solely music therapists:

The ambition of the Science Festival was to kindle a spark and feed the curiosity of young people who want to know more. (translated from Danish)

⁸ https://www.musictherapy.aau.dk/emtc19/.
⁹ https://en.musikkenshus.dk/musikkens-hus/profil/om-musikkens-hus/.
¹⁰ See more on the exhibition and outcomes in Brooks [3].
¹¹ MIDI (Musical Instrument Digital Interface) is a musical instrument signal protocol—see https://www.midi.org.

In addition to the public exhibition at the Danish Science Festival, the first author hosted a workshop at one of the main Musikkens Hus¹² performance stages/auditoriums that was well attended by the public (including therapists, musicians, and their families). National television and radio stations covered the events, and numerous external partners were involved in promoting the festival. The aFrame was subsequently showcased at a regional educators’ day attended by approximately 450 teaching staff and school children. Another exhibition was held at the 8th International Conference ArtsIT (Arts and Technology, Interactivity and Game Creation), co-located with the 3rd International Conference DLI (Design, Learning and Innovation), hosted over three days in November 2019. These two international conferences are affiliated with the European Alliance for Innovation (EAI) and were steered and organised by the first author alongside the director of Xlab, Professor Eva Brooks. Thus, the large foyer at the Aalborg University city campus was again the venue, off which were three adjoining rooms for conference presentations. The foyer was also where all breaks and lunches were provided, thus proving an optimal area for offering hands-on demonstrations. Again, national television and radio covered the events. On all occasions to date, evaluations of the potential of the electrorganic aFrame across therapeutic and educational contexts were positive.

What seems clear from interviews is that, in terms of the ‘tools of the trade’, music therapy in Denmark (and other countries) is struggling to stay in sync with a technologically-enabled society. This is unfortunate given the obvious practice benefits that even a limited adoption of new music technology might offer the therapist, such as alternative play potentials supported by technical apparatus using different interfaces that open up new possibilities for accessibility and inclusion, e.g. through instruments being played by breath, head movement, etc., as offered by non-traditional instruments/devices (e.g. see the NIME literature cited elsewhere in this chapter [23]). Following such positive evaluations, and towards informing and giving music therapists access to potential new ‘tools of the trade’, an approach was made to invite professional music therapists to undertake trials with their clients. Two female music therapists accepted inclusion in the explorative study, one in private practice and one in an Aalborg municipality environment. The next section presents a report on the initial phase of the collaboration.

¹² https://en.musikkenshus.dk/musikkens-hus/profil/om-musikkens-hus/.

20.10 Proof of Concept and Feasibility Trials in Practice

To further the inquiry into the potentials for music therapists, following the numerous rounds of public exposure, testing, and evaluation overviewed in the previous sections, two ATV aFrame units were, in February 2020, loaned to two professional music therapists in Aalborg, Denmark. A (non-formal) goal was discussed whereby the therapists are asked to keep a diary of experiences—good and bad—as part of their ongoing routine of work with the instrument. They are also asked to explore the instrument themselves, alone and away from clients, so as to build the expertise needed to transition it into their practices. A goal of this is for the therapists to reflect, critique, and discuss the potential for the aFrame to impact well-being and quality of life via inclusion in their music therapy practices. On first handing over the two aFrames at one of the therapists’ locations, one therapist, who had attended EMTC 2019 and had tried the aFrame at the exhibition, was very excited; the other, who had not experienced the instrument previously (but had been informed by the first therapist), had a certain trepidation but was also very positive and excited about participating in the study. Upon initial testing, the first reactions and evaluations from both therapists have been positive, with a great deal of interest and curiosity in the instrument being reported by a variety of their end-users across profiles in ‘real’ sessions (many profoundly dysfunctional). Findings point to how, for those clients who are wheelchair-bound, the size of the aFrame has presented practical challenges for the therapists in mounting and positioning the unit in a manner that enables safe (without dropping) access to play. The second author, a technical support engineer for the aFrame, has advised the music therapists and researchers against tripod support due to the size and weight of the aFrame.
The team are considering a solution that will fit onto a wheelchair table to support the electrorganic instrument—possibly a form of bean bag that can be flexibly shaped to offer support in different positions, thereby providing secure, safe placement and thus optimal operation as best-fit. As a further step aimed at preventing accidental dropping of the unit, a ‘camera’ strap has been purchased for each unit and will be delivered to the music therapists to attach to each aFrame, offering placement around a player’s neck. Furthermore, to facilitate the end-user’s engagement and ability to ‘self-control’ tone selections—which otherwise requires removing the unit from its support, turning it over, changing the selected parameters, turning it back, and reaffixing it to its support—the team are currently discussing and evaluating how best to use the footswitch unit that is sold as an add-on purchase for the aFrame (see Fig. 20.3, left). This targets (1) music therapist operation, and (2) optimised use according to an end-user’s functional abilities. In the case of (1), the music therapist can of course physically turn the unit over and change parameters on the rear; however, this can disrupt ‘conduit’ contact between participants within a session—though it can also be a ‘play’ aspect, where a change is made by the therapist in order to observe the client’s reaction. Thus, in the holistic body of work behind this study, a flexible and modular system approach is targeted to tailor and adapt to individual profiles toward optimised experiences. This approach also empowers therapists to be creative in their experimental interventions. To date, two switches of the kind typically associated with handicapped end-users, which enable light pressure to trigger a change, have been tested to good effect (see Fig. 20.3, right). The purpose of the additional switches is to promote a sense of self-ownership whereby the client, as much as possible, is empowered to self-change parameters. This is again toward increasing the sense of self-agency and self-efficacy for the client toward optimising experience.

Fig. 20.3 (Left) Optional aFrame switch = bank and patch change of sounds (with permission ATV Corporation); (right) supplemented with two adaptive switches, one on each side, for easier operation by users having hand weaknesses. These are input–output devices that allow individuals with physical disabilities to independently activate devices such as the aFrame
Another discussed topic is a play context involving multiple clients, where one performs on the aFrame whilst another changes sounds (see the next section for how this may function). Beyond this feedback, the impact of the Covid-19 pandemic in Denmark prevented extended trials and limited live music therapy sessions. The music therapists informed the team that they took the aFrame instruments home to spend time exploring the tone banks, with the intention of developing their competence so as to be able to transfer this to their practices.

20.10.1 Next Steps—A Speculation

ATV Corporation has developed a downloadable companion software editor for the aFrame (Fig. 20.4). This tool allows fine-tuning of parameters to modify, program, and organise aFrame tones on a computer screen, as opposed to working physically with the aFrame’s rear-panel controls and small LED screen (via the electronic sound module’s knobs and buttons). The editor software connects to the aFrame via wired USB. Persons (clients) diagnosed as handicapped, who are profoundly physically dysfunctional, are empowered to use personal computers via differently functioning technological controllers. These controllers include ‘alternative mouse devices’ where, for example, small motions sensed by an eye-movement device with a ‘dwell click’ function are used by severely handicapped end-users. In instances where a person with such profound dysfunction wishes to be included in a music-making session, it may be possible to empower that person to change and manipulate sounds whilst another person physically performs on the aFrame instrument. This builds upon prior multiple-user research where, for example, 5-V music rocker pedals and other physical controllers are used to directly manipulate audio signals or, via translators (to MIDI or to DMX 512), to manipulate digital signals that affect audio signals (or other devices impacting other feedback stimuli) (e.g. see [2]).

Fig. 20.4 aFrame software editor and patch changer (with permission ATV Corporation)

20.11 Conclusion

This chapter has detailed the basic technical aspects and music-making affordances of the aFrame electrorganic musical instrument. It describes the background to the realisation of the device and indicates the engineering complexity behind what can be played as a simple musical instrument. Highlighted is how the aFrame experience aligns with that of a traditional acoustic musical instrument—a frame drum. Through this, a hypothesis originating in this research is that the instrument can contribute to a discipline focused upon music therapeutic intervention. This chapter builds incrementally upon reported trials throughout 2019 that focused on hands-on testing by professionals and possible end-users across profiles of function and age (as reported in [3]). It builds upon an earlier short article that briefly introduced the first author’s concept behind the explorative study, with a focus upon sharing music therapists’ first reactions from hands-on experience of both MIDI controllers and the electrorganic aFrame. This chapter details the aFrame, which was found to be preferred by music therapists, musicians, and teachers, as reported earlier (ibid.). Expert input by the second author elaborates on the history and detail of the aFrame. Literature on the use of the aFrame, or any other similarly conceived electrorganic musical instruments (if they exist), in music therapy was not discovered; to the authors’ knowledge, electroacoustic instruments such as the aFrame have never before been used in music therapy anywhere in the world. There is no literature that can be cited to argue such use, and thus this work in progress is seen as avant-garde in advancing the field to explore such new opportunities for therapists.
It is predicted that there will be numerous experiments and explorations within the testing phase of the studies with the electrorganic aFrame, so as to determine best-fit scenarios, while working towards developing an implementation and training protocol to support music therapists in practice. This is envisaged to begin in Denmark and from there to extend to international collaboration with interested researchers and practitioners. The team behind this work in progress is excited to report on the initial phases of the study and looks positively towards producing future reports on the proof of concept and feasibility that are anticipated as potentially disrupting the field in a positive manner. Subsequent publications will thus report on use in the field and the development of use-methods as applicable. It is worthy of mention that, in instances where end-users may not be optimally stimulated via the auditory channel, the audio signals output from the aFrame can be routed to and processed by a visual synthesiser to generate audio-visual correspondences that may stimulate an end-user’s visual channel. The rich soundscapes generated by the aFrame lend the instrument to an exciting potential pairing with a visual synthesizer.


Interested researchers or therapists (music- or otherwise) are welcome to contact either author for further information or uptake leading towards a similar study. To close, it is posited that future work will include analysis built upon the first phases of therapist-based (practice-evaluation) input as reported herein. Accordingly, the authors will seek to evolve their research objectives with the aim of maximizing benefits to end-users in ways that are inclusive of their various creative endeavours, whether it be performing and/or composing music, or just finding a way to relax. As reflected in this chapter’s related research, evidence is reported in the literature of how non-formal, enjoyable, and fun recreational and leisure activities can have underlayers that target formal therapeutic benefit. It is clear that technological solutions can enable more tailoring and adaptation to specific individual needs, requirements, and preferences in order to motivate activities. Additionally, such solutions can increase accessibility and improve inclusion whilst offering ‘measurable’ outcomes, if that is the targeted outcome associated with end-user benefit. Accepting the obvious need to support the individuated goals of therapy across cases, the authors will always remain focused on a more general goal: to find ways to improve the quality of life of people (individuals and communities) through interactive technologies, particularly for those considered differently-abled. In this spirit, the authors consider the electrorganic aFrame to be an inclusive musical instrument that holds vast potential to elevate the well-being of those who use it.

References

1. Bonde, L.O.: Music and Health. An Annotated Bibliography (2008). Retrieved 29 Nov 2009, from: https://www.nmh.no/Senter_for_musikk_og_helse/Litteratur/66817
2. Brooks, A.L.: SoundScapes: The Evolution of a Concept, Apparatus, and Method where Ludic Engagement in Virtual Interactive Space is a Supplemental Tool for Therapeutic Motivation. Institut for Arkitektur og Medieteknologi, AD:MT, vol. 57 (2011)
3. Brooks, A.L.: Shifting Boundaries in Music Therapy. Digital Creativity: Shifting Boundaries: Practices and Theories, Arts and Technologies (2020) (in press)
4. Camurri, A., Mazzarino, B., Volpe, G., Morasso, P., Priano, F., Re, C.: Application of multimedia techniques in the physical rehabilitation of Parkinson’s patients. J. Visual. Comput. Animat. 14(5), 269–278 (2003)
5. Cohen, M.L.: Christopher Small’s concept of musicking: toward a theory for choral singing pedagogy in prison contexts. Ph.D. thesis. Music Education and Music Therapy, the Graduate School of the University of Kansas. UMI Number: 3277678, USA (2007)
6. Ellis, P.: Special sounds for special needs: towards the development of a sound therapy. In: Musical Connections, Tradition and Change, pp. 201–206. The International Society for Music Education, ISME (1994)
7. Ellis, P.: Developing abilities in children with special needs: a new approach. Child. Soc. 9(4), 64–79 (1995)
8. Ellis, P.: Layered analysis: a video-based qualitative research tool to support the development of a new approach for children with special needs. Bull. Counc. Res. Music Educ. 130, 65–74 (1996)
9. Ellis, P.: The music of sound: a new approach for children with severe and profound and multiple learning difficulties. Br. J. Music Educ. 14(2), 173–186 (1997)


10. Ellis, P.: Caress—an endearing touch. In: Siraj-Blatchford, J. (ed.) Developing New Technologies for Young Children, pp. 113–137. Trentham Books (2004)
11. Ellis, P.: Moving sound. In: MacLachlan, M., Gallagher, P. (eds.) Enabling Technologies in Rehabilitation: Body Image and Body Function, pp. 59–75. Churchill Livingstone (2004)
12. Ellis, P., Van Leeuwen, L.: Living sound: human interaction and children with autism. Reson. Music Educ. Ther. Med. 6, 33–55 (2002)
13. Falkenberg, S.: Letter to Anthony Lewis Brooks, 23 November—in Brooks 2011, Ph.D., see above (1999)
14. Gehlhaar, R.: SOUND=SPACE, the interactive musical environment. Contemp. Music Rev. Live Electron. 6(1), 59–72 (1991)
15. Gehlhaar, R.: Telephone conversation with Anthony Lewis Brooks, 23 June—in Brooks 2011, Ph.D., see above (2005)
16. Hagedorn, D.K., Holm, E.: Effects of traditional physical training and visual computer feedback training in frail elderly patients. A randomized intervention study. Eur. J. Phys. Rehabil. Med. 46(2), 159–168 (2010). https://www.ncbi.nlm.nih.gov/pubmed/20485221
17. Hagman, G.: Aesthetic Experience: Beauty, Creativity, and the Search for the Ideal. Rodopi (2005)
18. Hagman, G.: The musician and the creative process. Am. Acad. Psychoanal. 33, 97–118 (2005)
19. Janzen, J.M.: Theories of music in African ngoma healing. In: Gouk, P. (ed.) Musical Healing in Cultural Contexts, pp. 46–66. Ashgate Publishing (2000)
20. Kakehashi, I.: An Age Without Samples: Originality and Creativity in the Digital World. Hal Leonard (2017)
21. Lyon, E.B.: Design af et system til træning af hjerneskadede patienter [Design of a system for training brain-injured patients]. IT University, Copenhagen, Denmark (2002)
22. MacDonald, R., Kreutz, G., Mitchell, L.: Music, Health, and Wellbeing. Oxford University Press (2012)
23. NIME: The International Conference on New Interfaces for Musical Expression (2019). https://www.nime.org/archives/
24. Small, C.: Musicking: The Meanings of Performing and Listening. University Press of New England (1998)
25. Stige, B.: Health musicking: a perspective on music and health as action and performance. In: MacDonald, R., Kreutz, G., Mitchell, L. (eds.) Music, Health, and Wellbeing, pp. 183–195. Oxford University Press (2012)

Chapter 21

Interactive Multimedia: A Take on Traditional Day of the Dead Altars

Ramón Iván Barraza Castillo, Alejandra Lucía De la Torre Rodríguez, Rogelio Baquier Orozco, Gloria Olivia Rodríguez Garay, Silvia Husted Ramos, and Martha Patricia Álvarez Chávez

Abstract This chapter presents the creation of a traditional and technologically enhanced Mexican Day of the Dead altar. The authors offer a detailed view of the entire process: from the conception of the idea, the identification and classification of narrative elements, and the construction of the offering based on an interactive multimedia user experience model, to the inner workings as well as the construction, installation, and exhibition. The altar was presented and evaluated during a mass public event in the Mexican city of Ciudad Juárez during a celebration of the Day of the Dead. The idea behind this project is to enhance this centuries-old tradition with a non-invasive approach to technology, infusing a non-linear narrative experience that connects with the user and promotes spiritual well-being.

Keywords Day of the Dead celebration · Arduino · Interactive multimedia · Non-linear narrative · User experience · User interface

R. I. Barraza Castillo · A. L. De la Torre Rodríguez (B) · R. Baquier Orozco · G. O. Rodríguez Garay · S. Husted Ramos · M. P. Álvarez Chávez
Architecture, Design and Art Institute, Ciudad Juárez Autonomous University, Ciudad Juárez, Chihuahua, México
e-mail: [email protected]

R. I. Barraza Castillo
e-mail: [email protected]

R. Baquier Orozco
e-mail: [email protected]

G. O. Rodríguez Garay
e-mail: [email protected]

S. Husted Ramos
e-mail: [email protected]

M. P. Álvarez Chávez
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_21


21.1 Introduction

Society’s perception of death varies depending on the culture, context, and traditions. In many countries it is a taboo topic, something that should not be talked about. For the people of México, death has a different connotation; it is still a tragic event, one that entails grieving for the loss, but as writer Octavio Paz once said, “The Mexican frequents death, mocks her, caresses her, sleeps with her, celebrates her; she is one of his favorite toys and his most permanent love” [1]. According to Walter et al., “Death is irreducibly physical, but it is also social. Getting frail or terminally ill and then dying disrupts social networks; bereavement entails a restructuring of social engagement, with both the living and the dead” [2], and thus the practice of remembering and mourning is also changing. A study presented by O’Rourke, Spitzberg, and Hannawa [3] suggests that the benefits of participating in funeral ceremonies include receiving support and comfort, as well as the public expression of emotions such as anguish, sadness, pain, loss, and regret, contributing to an overall state of well-being. History has taught us that humans have an intrinsic need to socialize and create bonds with others, noticeable in our need to pass on information through stories; some anthropologists say that storytelling is essential to existence, that it is what makes us human [4]. According to Moore, “telling stories is a universal and fundamental human activity” [5], one that not only serves entertainment purposes but also helps interpret and transfer experiences and knowledge. In her work, Pimentel states that “narrative transcends not only generic and modal boundaries but semiotics, since the narrative can be seen in different mediums and meanings” [6]. She further states that a “story is an abstraction, a construction of reading, such abstraction is capable of being transmitted by other means of representation and meaning” [6].
What this means is that a story can transcend the medium; it is not bound to traditional forms such as literature, the performing arts, or painting, to name a few. In a broader sense, any object can transmit a message to a receiver if it is placed in a certain context. Technology is the driving force behind many human innovations; it permeates almost every part of our everyday life, and storytelling has evolved along with it. From traditional printed books to radio and cinema, mankind has adapted the way it conveys a story. Not only that: digital media has allowed a different scenario, where the reader is no longer just a bystander and can now interact with, create, or change the narrative. Even though people can still enjoy the pleasures of reading a printed book, some e-book readers now offer users not only the possibility of having hundreds if not thousands of books available to them anywhere they go, but also of listening to them as audiobooks. They also provide contextual information regarding a character, topic, or writer with a few taps on the screen. These changes make for a non-linear narrative, and yet its structure is preserved. Folklore and traditional celebrations are one of the most important ways to keep stories alive, as they are told and represented from generation to generation. The


term tradition, as defined by Verti, is “the simplest, flattest way of communicating or transmitting cultural and artistic values and manifestations over time; it is also the simplest and most direct way of making history” [7]. México is known to be rich in these kinds of traditions; its people are sociable beings, loaded with cultural nuances ranging from the wide gastronomy, odd sweets, music, dance, games, and toys to the festivities that mark the Mexican calendar. Implicit in these traditions is a process of organization of colors, textures, rhythms, and flavors that form a representative whole. This chapter presents an interactive multimedia narrative experience of the Day of the Dead celebration; by incorporating non-invasive technology, the authors create a non-linear way to tell and honor the life story of the departed.

21.2 Day of the Dead

Perhaps the best-known example of a traditional Mexican celebration is that of Día de Muertos, whose roots can be traced to the pre-Hispanic Mexican civilizations, where the act of dying was the beginning of a journey to Mictlán, the kingdom of the fleshless dead or underworld, also called Xiomoayan, a term that the Spanish translated as hell. This trip lasted four days. Upon arrival at their destination, the traveler offered gifts to the lords of Mictlán: Mictlantecuhtli (lord of the dead) and his companion Mictecacíhuatl (the lady of the inhabitants of the underworld). They would then send him to one of nine regions, where the deceased remained for a four-year trial period before continuing his life in Mictlán and thus reaching the top level, which was the place of his eternal rest, called the obsidian of the dead [8].

After the Spanish conquest, the celebration merged with the European Christian festivities of All Saints’ Day and All Souls’ Day. It begins on November 1st in memory of the souls of the infants that passed away, and it ends on November 2nd with families gathering to remember and welcome the souls of their elder loved ones. It was inscribed in the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO in 2008 [9]. Central to the celebration are the altars; these offerings are dedicated to an individual as a way to welcome his or her soul back to the world of the living. Family and friends of the deceased believe that during that day the spirit returns from the dead, joins them in celebration, and helps comfort them for their loss. According to Rodríguez, “an altar is the representation of the vision that an entire people have about death, and how in the allegory it leads in its meaning to different implicit themes and represents them harmonically within a single view” [8]. Despite the solemn tone that surrounds the celebration, technology has found its way into altars; the use of controlled lights, screens, projectors, fog machines, and other practical special effects, as well as music and other recorded sounds that help with the atmosphere, is becoming popular, especially on the so-called display altars. However, such technology is rarely seen in personal home offerings.


21.3 Literature Review

The Day of the Dead celebration has been the subject of multiple studies, but the vast majority tend to approach the matter from a heritage and identity perspective. That is not to say there have not been reports regarding technology-enhanced exhibitions. Furthermore, there is not much research involving the creation of altars with inclusion in mind, that is, experiences that can be enjoyed in different ways and that evoke diverse reactions and mental states in the spectators. Also, the design, creation, and sharing of the offering can be used as a therapeutic exercise with positive implications for the mental health of both the creator and the viewers.

21.3.1 Technology-Enhanced Exhibitions

In 2012 Mexico City’s local government issued a call to participate in a six-project initiative called Ciudad Intervenida, a project that brought together six art directors with their respective animation studios; the task was to intervene in some of the most emblematic spaces of Mexico City and reinterpret them. One of these interventions was Santolo from Llamarada Studio [10]; it consisted of a video mapping showcase at the largest and most important cemetery in the city, the Dolores burial grounds. Using the gravestones as projection surfaces, they cast a short animation that included dancing masked human figures and other creatures inspired by the Day of the Dead motif.

Altar Ego is a project created by Howie Katz, who describes it as an interactive computer-driven altar in which the viewers themselves become the person being memorialized [11]. The mechanics behind this exhibition rely on the user logging into their Facebook account; the system then gathers information based on the user’s interactions, reactions, and sharing behavior on the social media platform, as well as other services, to fill a predefined webpage template that in turn gets projected onto blank items on an altar.

In 2014 Studio Chirika, through the ChaMeshiJi project, set out to build a Day of the Dead altar for everyone. The proposal of the project director, Japanese filmmaker YupicaYukkunn, consisted of an installation called Encuentros-Reencuentros, “an exhibition that allows the conception of space between the similarity and the difference of two traditions connected by the same motive: death” [12]. In addition to uniting the Mexican and Japanese cultures, it invited the general public to send a photo of a loved one so that it could be projected onto rice wafers that hung throughout the altar, and thus celebrate both the Day of the Dead and O-bon.
The Mexican city of Querétaro, through its tourism agency, requested the creation of an installation called Altar Monumental to attract tourism and keep traditions alive. According to its creators, to achieve this “we seek to design an empathetic altar, which allows us to see it from any point, has a traditional design and colonial ornamentation, and includes video projections on mylar with animations depicting


Day of the Dead motifs” [13]. From the perspective of the authors, this was a fusion of tradition and spectacle.

A study by Rodríguez, Caillahua-Castillo, Delgado-Valenzuela, Zhou, and Andrade was developed “to find out how the students’ experience using the AR prototype affects the learning and appreciation of the cultural value of the Day of the Dead altar tradition” [14]. An Android application was created and the users were given a tablet to explore an altar that was decorated with AR markers; according to their findings, the authors “observed that the AR prototype enabled students to navigate freely through the altar’s elements while interacting with their peers to discuss information discovered through the AR” [14].

Interactive books, games, and even educational material that aids teachers who might not have the skills or time to develop mobile applications are also present in the literature. One example is the tool created by Mercado [15] to teach about Mexican–American cultural traditions. In this scenario, however, the user does not interact with a physical altar but rather taps on objects on a tablet; in these cases, technology takes over and the fundamental nature of the traditional altar is lost, since all physical elements are reduced to graphics on a screen.

21.3.2 Exhibitions, Interventions, and Mental Well-being

Multimedia exhibitions and interventions have been documented to aid psychological well-being. A study in Italy by Testoni et al. reveals the psychological effects on middle school children that took part in a Death Education program, an intervention to “address existential issues and enhance the meaning of life through positive intentions for the future and reflection on mortality” [16]. They found that engaging with films, workgroup activities, photovoice, and psychodrama could decrease difficulty in describing one’s feelings and reduce externally oriented thinking, with an important positive impact on resilience and psychological well-being [16].

As part of his master’s thesis, Peter Treagan created the project Altar States, which according to his description “is an interactive tech-art exhibit showcasing the cross-pollination of the visionary experience and ancestral worldviews within the global movement of transformational Festival culture” [17]. Though not really a Day of the Dead altar, it explores the intersection between art and technology to create an immersive multisensory environment.

In 2014, Marius Ursache pitched the idea of “Skype with the dead” after pondering what happens to avatars on social platforms when the person passes away. Soon after, he launched a website for what was supposed to become Eternime, “an avatar that will eventually become your digital alter ego, your immortal bits-and-bytes clone” [18]. According to the author, the idea was partly driven by the loss of his grandmother, who struggled with Alzheimer’s; he was left with only his memories and a few pictures to remember her by and cope with her parting. He felt “frustrated when I realized that my grandmother’s life story (she was almost 90 when she passed away)—full of struggle, joy, love, desperation, and faith—left behind only a few


photos and memories. Everything else was lost forever” [18]. Although this project is currently on hold, the idea is not only to perpetuate the memory of a person as a chatbot, but also to help with the psychological aspects and closure needed to deal with the loss of a loved one.

A qualitative study conducted by Krause and Bastida [19] indicates that older Mexican Americans may experience an enhancement in their overall well-being by keeping contact with the dead, as this may reduce death-related anxiety levels. According to the study, the contact might be visual, physical, or sometimes indirect (ambient noise, dreams, or one-way communication). Whichever the case, the participants of the study said that this contact facilitates the grieving process, reassures them that they will be reunited with their families, and opens the possibility of contacting those who are left behind when they die.

In the article published by Olguin and Martinez [20], they look at the process of creating Day of the Dead offerings as a group therapy resource for older adults. Their findings show that “The impact of the offering ritual can be seen at different levels: a sensory-perceptual level, an intrapsychic level, and an interpersonal level, which are in turn in a constant relationship” [20], meaning it had an important emotional and psychological effect on the participants’ welfare.

The literature review showcased different approaches to enhancing Day of the Dead altars through technology; unfortunately, the integration is either too simple or obtrusive, or goes overboard with too many elements that detract from the focus of the altar. The rest of this chapter focuses on creating a discreet interactive multimedia interface to tell the story of the departed, without drastically changing or interfering with the traditional elements of the altar.

21.4 Method

To create the interactive multimedia altar, a four-phase multi-tier methodology was followed. Figure 21.1 shows each of the phases along with the steps needed to complete them. This is required because it is not just a matter of constructing the physical installation; every step of the process must be thoughtfully planned and executed to work seamlessly. A more in-depth explanation is presented in the following sections.

Fig. 21.1 Neoaltar four-phase creation process (phases: Traditional altars, Narrative elements, Interactivity and user experience, Neoaltar installation; with steps including Meaning, Structure, Elements, Classification, Analysis and identification, Model, Hardware, and Software)

21.4.1 Traditional Altars

Death has always intrigued and fascinated mankind; almost every culture around the globe at some point made offerings to its dead relatives, ancestors, or gods. Evidence of this can be found from ancient Egypt, Greece, and China to the Nordic countries. It should come as no surprise that in the pre-Hispanic cultures of America this was also a common theme. In the case of México, what makes it so special is that death is not feared, but celebrated, welcomed, and revered. The vision of death is a mixture of festivity, solemnity, religious beliefs, and even humor. As in many other rituals, there are physical elements and signs associated with the Day of the Dead. The altar and its offerings are probably the most evident ones in the celebration.

21.4.1.1 Meaning

Traditional altars were offerings to honor different pre-Hispanic deities, such as those of earth, rain, water, agriculture, and death. After the Spanish conquest, the meaning had to change to honor the departed instead of the pagan gods, as these did not resonate well with the newly instated religion. The altar as a tangible element follows a predefined set of rules for its construction. Altars can range from a simple structure to an elaborate installation but are always decorated with personal objects, favorite foods, and other elements in memory of the departed.

21.4.1.2 Structure

The construction of the altar can vary depending on several factors, mainly whether it is a personal offering, as seen in Fig. 21.2a (usually built inside the house of one of the family members), or, as in Fig. 21.2b, meant for a public event or exhibition. Regardless of this, the altar still maintains its essence as a multi-level structure, where each of the levels represents a step toward reaching the place of eternal rest. According to Marín, “the sense of the Mexican order imposes on the altar—within its diversity—a common aspect: the perfect arrangement and symmetry” [21]. According to Denis Rodríguez et al. [8], there are three different types of altars based on the number of levels. The simplest one consists of just two tiers that signify earth and heaven; a three-level variant adds to these two planes of existence the concept of purgatory, introduced by the Spanish


Fig. 21.2 a Home offering (Reproduced from INPI [22]) b Public event offering (Reproduced from [23])

Table 21.1 Seven-level altar structure and what they represent

First: The topmost level represents heaven and includes a picture of the saint the deceased was devoted to
Second: Representing purgatory, it includes a reference to the souls that inhabit it and that will free the soul of the deceased
Third: It symbolizes purification; salt should be placed on this level to aid with the process
Fourth: The Eucharist or Holy Communion. Instead of sacramental bread, a special sweet loaf known as the bread of the dead is used
Fifth: The last meal is symbolized by the deceased’s favorite dishes, fruits, and beverages
Sixth: To honor the memory of the departed, a picture is placed at this level
Seventh: The bottommost level represents the earth. Fruits and seeds are arranged in the shape of a cross; sometimes flowers and incense are used as well

through the Christian Catholic faith, as an intermediate state the deceased has to undergo before he or she may enter heaven. The third and final type is the seven-level altar, the pinnacle of Mexican tradition; it represents the steps needed to achieve eternal rest, and it is said to also relate to the voyage to reach the Aztec underworld Mictlán. Table 21.1 lists each tier and a short description of what it represents.

21.4.1.3 Elements

The structure of the altar itself is just part of the complexity of the celebration; every element found on each level has a meaning and a reason to be there. The offering is that colorful ritual where the individual and the community are represented with their gift; it is a sacred act, but it can also be profane: popular tradition is the symbiosis of sacred devotion and profane practice.


Table 21.2 Indispensable elements and their meaning in a traditional Day of the Dead offering

Water: As in many cultures, it is seen as the source of life; it is offered to the souls to quench their thirst after their long journey and to strengthen their return
Salt: Used as the element of purification
Candles: Serve as beacons to guide the souls to their old home
Copal and incense: Used to cleanse the environment of evil spirits so that the soul can enter the house without any danger
Flowers: The Zempoalxóchitl flower is the symbol of the festivity; its colorful petals decorate and fill the place with a pleasant aroma during the stay of the soul
Bread: It symbolizes the Holy Communion, the act of sharing between the living and the dead
Golletes and sugar canes: Another type of bread that is hoisted on sugar canes, associated with pre-Hispanic sacrifices; it alludes to the impaled skulls of defeated enemies
Izcuintle: The dog that helps souls cross the Chiconauhuapan river, the last step to reach Mictlán. It is mostly seen on altars devoted to infants, as a toy for them to play with when they arrive
Petate: A bedroll made from the woven fibers of the palm tree; it serves both as a resting place for the souls and as the tablecloth for the banquet

According to México’s National Institute of Indigenous Peoples, the meaning of the offering is “to share bread, salt, fruits, culinary delicacies, water, and wine with the departed. Is being close to our dead to dialogue with their memory, with their life. Is the reunion with a ritual that summons memory” [22]. Table 21.2 shows a list of the essential elements that must be part of any offering; if one of them is missing, the spiritual charm that surrounds this religious heritage is partially lost.
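The rule that every essential element must be present can be made concrete with a small completeness check. The sketch below is purely illustrative: the element names are taken from Table 21.2, but the helper itself is a hypothetical aid, not part of the chapter's software.

```python
# Hypothetical sketch: validating an offering against the essential
# elements of Table 21.2. The helper is an illustration only.

ESSENTIAL_ELEMENTS = {
    "water", "salt", "candles", "copal and incense", "flowers",
    "bread", "golletes and sugar canes", "izcuintle", "petate",
}

def missing_elements(offering):
    """Return the essential elements absent from a given offering."""
    return ESSENTIAL_ELEMENTS - {item.lower() for item in offering}

# Example: an offering that lacks salt and a petate
offering = {"Water", "Candles", "Copal and incense", "Flowers",
            "Bread", "Golletes and sugar canes", "Izcuintle"}
print(sorted(missing_elements(offering)))  # ['petate', 'salt']
```

An empty result would mean the offering preserves the full spiritual charm described above.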

21.4.2 Narrative Elements

Though the concept of narrative is commonly associated with writing, it is not constrained to it; it can be found in radio, movies, videogames, and other mediums. In the book Mediatic Narratives, author Rincón states that narration “is a process by which an audiovisual work suggests to a spectator the steps that lead him to complete a story, to understand what is told” [24]. This is in line with what Pimentel [6] said several years before in regard to a story being an abstraction and a construction of reading.


In a broader sense, reading can apply not just to words but to the interpretation of any visual work, such as a painting, picture, sculpture, or, in the case of the altar, the offering. By looking at the items and elements, the spectator can infer how each relates to the person being honored. This is what Pimentel refers to as the diegetic world, a place that is created and whose inhabitants exist and interact only when the reader pieces together all the elements the author provided. As previously stated, traditional altars can be classified as either personal offerings or public installations. In the former, family members are the ones that choose which items to place along with the essential elements; close relatives know the life story of the person being honored and thus do not require much context as to why those items were selected. In the latter, the altar is usually devoted to a public or important personality, which means not everyone that sees the installation fully understands why certain items are displayed along with the usual elements.

21.4.2.1 Classification

The second step towards building an interactive experience was to classify which items are commonly placed along with the essential elements and why. Around 30 altar creators were interviewed and asked what messages they were trying to transmit through the altar and which elements they thought conveyed the message. With this information, a classification was devised following the methodology that Vladimir Propp used in his book Morphology of the Folktale. He first isolated parts of the tales according to the actions of the characters; once he managed to separate them, he compared them with other folktales and formed a morphological work, that is, a description of the parts of the story, how they interact and develop the plot, and how they can be compared with other stories [25]. The classification yielded 11 elements that the authors refer to as the main narrative functions of the altar. These items communicate a specific message and, combined with the other functions, make up the narrative structure that is offered to the spectator for interpretation. Table 21.3 shows the list of items and what they are used for.

21.4.2.2 Analysis and Identification

After the classification of the narrative functions, the authors visited three venues on November 2nd, 2016, and analyzed 24 altars. The data showed that the creators of the offerings used a wide array of elements to help the viewer understand the life story of the deceased. Table 21.4 shows a matrix of the altars and the narrative functions that each one incorporated. Figure 21.3 shows an altar dedicated to Mexican singer-songwriter José Alfredo Jiménez and the analysis of the available narrative functions using the proposed classification method; it possesses 10 of the 11 functions (F_N_V_OP_B_ap_r_e_SoP_fr).
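Shorthand signatures such as F_N_V_OP_B_ap_r_e_SoP_fr can be generated mechanically from the set of narrative functions an altar possesses. A minimal sketch follows, assuming a fixed canonical ordering of the function IDs; the helper itself is hypothetical, not the authors' software.

```python
# Illustrative encoding of an analyzed altar as a signature of its
# narrative functions. The IDs are the authors' function IDs; the
# canonical ordering and helper are assumptions for illustration.

FUNCTION_IDS = ["F", "N", "V", "OP", "A", "B", "ap", "r", "e", "SoP", "fr"]

def altar_signature(present):
    """Join the IDs of the narrative functions present, in canonical order."""
    return "_".join(fid for fid in FUNCTION_IDS if fid in present)

# The José Alfredo Jiménez altar has 10 of the 11 functions (Food, A, absent)
jimenez = {"F", "N", "V", "OP", "B", "ap", "r", "e", "SoP", "fr"}
print(altar_signature(jimenez))  # F_N_V_OP_B_ap_r_e_SoP_fr
```

Such signatures make it easy to compare altars at a glance, in the spirit of Propp's morphological comparison of folktales.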


Table 21.3 Main narrative functions in an altar (function ID, name, purpose, and typical item)

F (Picture): How he/she looked, what type of person he/she was, and hints at what he/she did. Item: framed photograph
V (Clothing): How he/she used to dress; also hints at his/her profession. Item: set of clothing
ap (Personal item): Helps to easily identify the person or an important event in his/her life. Item: a specific object associated with the person
OP (Object of Profession): Describes what his/her occupation was in life. Item: an object that represents an occupation
A (Food): Shows what his/her favorite dish was or how it relates to an important event in his/her life. Item: prepared dish
B (Beverage): Shows what his/her favorite drink was or how it relates to an important event in his/her life. Item: glass or bottle
N (Name): Identifies the person being honored. Item: name written in sawdust or flowers on a mat
r (Bio): A short bio with memorable dates, events, and important acts. Item: written letter
fr (Quotes): Remembers his/her trajectory, accomplishments, or famous words. Item: signboard
s/SoP (Sound/Sound of Profession): Enhances the environmental setting. Item: ambient sound or music
e (Set design): Sets up the overall theme of the altar. Item: set elements

21.4.3 Interactivity and User Experience

The concept of interactivity applies to many contexts. According to the Oxford Reference, it is “A dynamic and reciprocal communicative relationship between a user and a computerized media device where each new action is contingent on a previous action” [26]. Jenkins [27] emphasizes the difference between interactivity and participation, terms that are often used interchangeably; for Jenkins, “interactivity refers to the ways in which new technologies have been designed to better respond to consumer reaction” [27]. Another interesting approach to the term is presented by Smuts [28],

Table 21.4 Altar item analysis matrix: for each of the 24 altars analyzed, the matrix marks which of the narrative functions (Picture, Clothing, Personal item, Profession, Food, Beverage, Name, Bio, Music, Quotes, Set design) were present


Fig. 21.3 Analysis of an altar, Picture (F), Clothing (V), Personal item (ap), Object of Profession (OP), Beverage (B), Name (N), Bio (r), Quotes (fr), Set design (e), Sound (SoP)

he states that "something is interactive if and only if it (1) is responsive, (2) does not completely control, (3) is not completely controlled, and (4) does not respond in a completely random fashion" [28]. From these definitions, interactivity must respond to the user; in the case of the proposed multimedia altar, this response is information about the departed. Scolari and March [29] call this type of interactive system an information visualization system, whose main characteristic is the integration of dynamic representations and actions in the same environment. This means that the system represents information alongside the actions of the user within the same place or environment. In the case of the


multimedia altar, the information is presented through sensors embedded in the traditional context itself. For Zhang [30], the concept of immersion is key in an interactive system; he describes it as the ability to transport users to another world through their senses and perception, making them believe they are physically in another place. However, the multimedia altar described in this chapter does not aim to recreate this type of immersion; it strives to integrate the traditional setting with interactive multimedia technology. The idea is not to turn the altar into a museum-type exhibit, but to keep it as close as possible to a traditional offering.

21.4.3.1 Model

User Experience (UX) is the sum of all the interactions a user has with a product; these interactions are not limited to digital interfaces but also include physical ones. Therefore, a product should be designed with the user experience in mind both during and after its interaction. According to Morville [31], seven facets make up UX; he calls this the user experience honeycomb, and in it states that a product should be useful, usable, desirable, findable, accessible, credible, and valuable. There are different UX models and evaluation methods to choose from when building a product; the authors opted for the one proposed by Garrett [32], which, though originally conceived for website development, can be applied to any product design. Figure 21.4 shows the five planes of the model, starting from an abstract view at the bottom and ending with the concrete implementation at the top. The five planes must work together, since each depends on the previous one to create a successful user experience, and each plane establishes what it needs to successfully create the one that follows.

21.4.4 Altar Installation

The final step in the process was to bring together everything done in the previous phases: design and build the physical structure of the altar; procure all of the items for the offering; acquire, experiment with, and prototype different hardware and software configurations to achieve the desired result; and build the final version of the altar for exhibition and evaluation during the most important Day of the Dead event in the city.

21.4.4.1 Structure

The authors chose to dedicate the altar to Mexican painter Magdalena Carmen Frida Kahlo y Calderón, best known as Frida Kahlo. She is internationally recognized not only for being married to famous muralist Diego Rivera but for her self-portraits


Fig. 21.4 The five planes of user experience. (Reproduced from Garrett [32])

and nationalist style. Yet, famous and recognized as she is, there is still much misinformation and misunderstanding regarding aspects of her life. Considering the size of the structure and accessibility concerns, a three-level altar was planned and designed following Garrett's five-plane model, whilst keeping the traditional elements that viewers have come to expect from an altar; Fig. 21.5 shows how each plane was implemented in the installation. Figure 21.6 is a rendered view of the distribution of the altar offering. All the touch sensors can be found on level one; they are not regular push buttons or touch screens but are instead embedded in the narrative-function items such as the picture, food, and beverage. Level two contains all the essential elements of an altar, such as water, salt, candles, and copal. For aesthetic and practical reasons, level three takes a smaller triangular shape so as not to interfere with the projection area.

21.4.4.2 Hardware

The hardware setup starts with a computer as the main controller; all the audio, images, video, animations, music, and sound effects are stored here. The computer connects via USB to an Arduino UNO microcontroller, which in turn connects to an Adafruit MPR121 12-Key Capacitive Touch Sensor Breakout board. A second


Fig. 21.5 Altar implementation of Garrett's five-plane design:
• Strategy: develop an enjoyable interactive multimedia experience around an altar to express the life story of the deceased in a nonlinear narrative; create a connection with the participants and get them emotionally invested in the Day of the Dead tradition.
• Scope: narrative functions checklist, covering the physical aspect of each item, its behavior, and how it fits into the offering.
• Structure: flow diagrams that show how users can interact with the offering and how it will react to them.
• Skeleton: sketches and renderings of the elements and their distribution on the altar; no navigation design is needed.
• Surface: all the essential offering elements, as well as the narrative functions concealed as items made of different capacitive materials.

Fig. 21.6 Rendered view of the altar and offering (Reproduced from Baquier et al. [33])


Fig. 21.7 Simplified connection diagram

USB connection runs from the computer to a DMX controller that links to a 27-channel RGB DMX decoder; this is used to control LED strips that light up and guide the user to the designated projection area when one of the sensors is touched. Figure 21.7 shows a simplified connection diagram of the components. Instead of using the analog I/O pins available on the Arduino and having to write the sensing and filtering routines, the Adafruit capacitive touch sensor breakout board was selected. This board includes 12 individual capacitive touchpads and works over the I2C protocol, so it is easy to connect to any microcontroller. Figure 21.8 illustrates how to wire the breakout board to the Arduino. Seven of the twelve available input pads were used; they connect to items representing the main narrative functions. When a user touches one of these items, a capacitance variation is detected by the microcontroller, which is constantly reading the sensors; it then reports this change to the application running on the computer, which evaluates it and acts accordingly. Using concealed jumper cables and alligator clips, the seven items were connected to the breakout board. Table 21.5 summarizes how each sensor was created, its narrative function, and what it represents on the altar.


Fig. 21.8 Capacitive sensor board connection diagram

Table 21.5 Capacitive sensor specifications

Narrative function        | Item and meaning                                          | Capacitive material used
Picture (F)               | A framed picture of Frida                                 | Metal frame
Object of Profession (OP) | Miniature easel with a self-portrait on a canvas          | Bare Conductive paint on the canvas
Quotes (fr)               | Frida's diary                                             | Flexible ITO (indium tin oxide) coated PET film
Personal item (ap)        | Miniature wheelchair, alluding to Frida's accident        | Aluminum foil tape covering the wheelchair
Clothing (V)              | Blouse embroidered with Mexican motifs                    | Stainless steel medium conductive thread
Beverage (B)              | Wrapped bottle of tequila                                 | Pressure-sensitive conductive sheet (Velostat)
Food (A)                  | Watermelons, alluding to Frida's recurrent painting theme | None; the watermelon is already a conductive item

21.4.4.3 Software

Using the Arduino Integrated Development Environment (IDE), a sketch was coded to interface with the breakout board and continuously check the state of the inputs. It is also responsible for communicating with the application running on the PC through a USB connection.


On the PC side, an application written in vvvv manages communication with the Arduino board, DMX controller, video projector, and speakers. According to its website, "vvvv is a hybrid visual/textual live-programming environment for easy prototyping and development. It is designed to facilitate the handling of large media environments with physical interfaces, real-time motion graphics, audio, and video that can interact with many users simultaneously" [34]. The toolkit interface consists of gray windows called patches; following the visual programming paradigm, the user drags and drops nodes, and each node connects to others to pass information, process it, and show it through an output node, such as a DirectX renderer. Figure 21.9 illustrates the patch used to connect to the Arduino board: section A configures the RS232 node for serial communication at 9600 baud on COM3; section B takes the output from the serial port and passes it through a series of nodes that process the string response into the corresponding value for each of the sensors; finally, section C consists of a single output node to the next patch.

Fig. 21.9 Patch diagram for RS232 communication to Arduino UNO


The second patch decides whether a response should be triggered, depending on which sensors are active at any given time. Figure 21.10 shows the diagram for six of the seven sensors and how the data is received and processed from the first patch, which has an output node labeled "S > SENSOR"; this means it will send data to any receiving node labeled "R < SENSOR". In section E, a series of nodes can be seen for each of the sensors; their task is to take the incoming stream of data and extract the relevant pin information. The nodes in section F are used as thresholds and are calibrated according to the conductive material used on each item. As before, the final section includes the output nodes that feed other patches. The third patch handles media projection, sound, and LED lights. As observed in Fig. 21.11, the sensor trigger patch in section H feeds the nodes in section I, whose task is to load a multimedia resource and set its scale, position, and intro and outro animation effects. In section J, a single renderer node takes the grouped multimedia output and, according to its configuration parameters, projects it onto the screen. Lastly, Fig. 21.12a depicts an example view of the renderer node with no sensors activated, while Fig. 21.12b shows what the renderer displays when a user is touching the painting item of the altar. Each touch sensor is assigned different multimedia content that elaborates on an aspect of Frida's life and is displayed in a different area of the screen, so that the user can identify which content corresponds to the sensor they are touching.

Fig. 21.10 Patch diagram for the sensor trigger condition


Fig. 21.11 Patch diagram for media resource preparation

To help with this, a bounding box appears around the multimedia content in the same color as the glowing base and light strip of the corresponding sensor.

21.5 Exhibition

Once all the phases were completed, the interactive multimedia altar was put to the test. Even though Day of the Dead is such an important date for the Mexican people, the celebration in the northern states of the country is not really on par with those of the central and southern regions, so the exhibition needed to take place at the most important event in the city. Juarez Autonomous University, through its Institute of Architecture, Design and Arts, hosts the biggest Day of the Dead celebration; it is open to the public and is held every year on November 2nd. The main attraction is the altar competition, in which students from any academic program can team up to create exhibition altars dedicated to local, national, and even international personalities. The authors requested the school's permission to install the proposed interactive offering during the 35th Altares y Tumbas event in November 2017. According to school information, over 10,000 people attended that day [35]. Figure 21.13 shows the completed installation of the altar, including the retro-projected screen, laser-cut panels, environmental illumination, and essential elements such as flowers, candles, salt, bread, and water. It is important to note that the


Fig. 21.12 Renderer node view: a idle state, b painting sensor active

technological elements are not in plain sight; they were meticulously concealed to maintain the essence of a traditional altar. Figure 21.14a shows the first three main narrative functions, found on the left side of the altar: beverage (B), personal item (ap), and clothing (V); Fig. 21.14b displays the remaining four items on the right side: picture (F), quotes (fr), object of profession (OP), and food (A). To understand the purpose of the interactive altar, and not dismiss it as a gimmicky novelty attraction, we must remember what Day of the Dead means to most Mexicans. It is unlike any other celebration: not only is it a deeply rooted


Fig. 21.13 Interactive multimedia altar installation. Reproduced from Baquier et al. [33]

tradition to celebrate death as part of living, but also the bond that links us to our departed loved ones. Throughout the evening, attendees walk around the venue looking at the twenty or more altars on display, ranging from crafty, humorous, and sometimes cheeky to beautiful and elaborate offerings that are a sight to behold. Yet the interaction, or rather the lack of it, is limited to just that: looking. The elements of the offering, so carefully selected to represent the essence of the departed, convey their story, and emotionally connect with the spectator, might go unnoticed by many who are not familiar with the life of the person being honored.

After finding their way from the open space to the room where the interactive altar was being exhibited, people felt unsure of what was going on. Some peeked into the room and left; others went in and were baffled at the sight of the exhibition. Most just stood at the door and stared at the altar as they would any other, trying to piece the elements together and find out who it was dedicated to. They were advised to enter the room, encouraged to approach the altar, and told they could touch elements of the offering. This seemed strange to many of them, as it is an unconventional practice and contrasted with what they had experienced with the rest of the altars, most of which had some sort of barrier to keep people from getting too close and touching anything.


Fig. 21.14 Narrative elements on the altar: a left side, b right side

One of the most interesting parts of the interactive altar is its inclusive design, which allows several users to interact with it concurrently at any given time, making the experience more enjoyable for everyone involved. As a user approaches any of the sensors, the base on which the element rests glows dimly to attract their attention. Figure 21.15 shows three users interacting with the different


Fig. 21.15 Multiple users interacting with the altar at the same time

elements and how the altar accommodates this, displaying three visual elements that convey information individually and a cohesive story in combination. As part of the exhibition, users who interacted with the altar were asked to take part in an online survey. They were briefed on the nature of the data being collected, and participants who consented responded to a questionnaire about the narrative aspects of the experience. Though this chapter will not go into detail about the results of the survey, it is worth mentioning that 120 usable responses were collected and that 116 people concurred that altars are not just ceremonial pieces but are meant to connect with and recount the life of a person. In this sense, it is valuable to note that over 87% of the participants responded that they could identify two or more stories about Kahlo's life and felt a stronger connection to Frida as a person after spending time at the altar.

21.6 Conclusion

This chapter offered a summarized view of the exploration and integration of interactive multimedia technology and the implementation of a user experience model to intervene in the narrative structure of a Day of the Dead altar. The focus was to re-tell the life story of the deceased through the offering's items without losing the value and essence of a traditional altar. With the use of different capacitive materials, it was possible to craft unobtrusive sensors that passed as regular items found in an offering. The inherent mysticism that surrounds the festivity, combined with the atmospheric sound and lighting effects,


aromas of the food, and burning copal, all orchestrated by the hardware and software setup, made for an interesting experience that was enjoyed by hundreds. There are several implications to this approach. First, the spectator is no longer just a spectator: they become part of the experience, in control of when and how the information is presented, breaking with the linearity of the narration. Second, the elements of the offering, loaded with special meanings, can now properly convey the story behind them rather than pass unnoticed. Third, the participants are invested in the experience; they feel a connection with the person to whom the altar is dedicated because they come to know that person's history, experience, and legacy, rather than leaving it to interpretation or to how well they knew the person. Day of the Dead is a tradition filled with syncretism; it embodies the feeling of a nation towards life and death. Mexican people truly believe and find comfort in knowing that, at least for one night a year, their deceased family and friends can come back and rejoice with them. That is why so much effort and care go into building the altars and offerings to welcome and honor them. This connection with death brings peace of mind, resignation, and wellness to the bereaved. Though the authors collected information about the end-user experience, it was not in the interest of the study to apply a Technology Acceptance Model (TAM) validation at this point. The survey was aimed at answering the question of whether the reading and narrative of the altar had changed. The data showed that the message was not altered by the inclusion of technology and interaction, but rather how it was delivered. Future iterations of the exhibit, such as the inclusion of Augmented and/or Virtual Reality, might lead to an in-depth TAM analysis.

References

1. Paz, O.: El Laberinto de la soledad. Fondo de Cultura Económica, Mexico City (2004)
2. Walter, T., Hourizi, R., Moncur, W., Pitsillides, S.: Does the internet change how we die and mourn? Overview and analysis. Omega J. Death Dying 64(4), 275–302 (2011). https://doi.org/10.2190/OM.64.4.a
3. O'Rourke, T., Spitzberg, B., Hannawa, A.: The good funeral: toward an understanding of funeral participation and satisfaction. Death Stud. 35(8), 729–750 (2011)
4. Rose, F.: The art of immersion: why do we tell stories? Wired Bus. 3(8), 11 (2011)
5. Moore, S.G.: Some things are better left unsaid: how word of mouth influences the storyteller. J. Consum. Res. 38(6), 1140–1154 (2012)
6. Pimentel, L.A.: El relato en perspectiva: estudio de teoría narrativa. Siglo XXI (1998)
7. Sebastián, V.: Tradiciones Mexicanas. Diana, México D.F. (1991)
8. Denis Rodríguez, P.B., Andrés Hermida Moreno, P., Huesca Méndez, J.: El altar de muertos: origen y significado en México. Revista de Divulgación Científica y Tecnológica de La Universidad Veracruzana 25(1), 1–7 (2012)
9. UNESCO: Las fiestas indígenas dedicadas a los muertos - patrimonio inmaterial - Sector de Cultura - UNESCO. Retrieved 1 Oct 2018, from https://ich.unesco.org/es/RL/las-fiestas-indigenas-dedicadas-a-los-muertos-00054 (2008)
10. Llamarada: Panteon de Dolores. Retrieved from https://vimeo.com/ciudadintervenida (2012)
11. Katz, H.: Altar Ego. Retrieved 1 Oct 2018, from http://howiekatzart.com/artsite/art/large-sculpture-and-installation/altar-ego/ (2012)
12. ChaMeshiJi: De arroz me como un taco Encuentros-Reencuentros. Retrieved from http://www.chameshiji.com/2014-encuentros-reencuentros-2014.html (2014)
13. Primo, F., Leal, I., Steck, C.: Monumental altar de muertos. Retrieved from http://fernandosarvide.blogspot.com/2015/12/monumental-altar-de-muertos-queretaro.html (2015)
14. Rodríguez, M.D., Caillahua-Castillo, K., Delgado-Valenzuela, H.R., Zhou, Y.H., Andrade, Á.G.: Enhancing the children's learning experience of Mexican traditions through augmented reality. In: Multidisciplinary Digital Publishing Institute Proceedings, vol. 31, p. 16. Toledo (2019)
15. Mercado, A.Y.: Multicultural Educational Digital Game: A Report on the Importance of Creating Digital Cultural Games and an Analysis of a Mexican-American Cultural Game. The University of Texas at Austin (2018)
16. Testoni, I., Tronca, E., Biancalani, G., Ronconi, L., Calapai, G.: Beyond the wall: death education at middle school as suicide prevention. Int. J. Environ. Res. Pub. Health 17(7) (2020). https://doi.org/10.3390/ijerph17072398
17. Treagan, P.: Altar States: Spirit Worlds and Transformational Experiences. California State University, Chico (2019)
18. Ursache, M.: The journey to digital immortality. Retrieved 29 May 2020, from https://medium.com/@mariusursache/the-journey-to-digital-immortality-33fcbd79949 (2015)
19. Krause, N., Bastida, E.: Exploring the interface between religion and contact with the dead among older Mexican Americans. Rev. Relig. Res. 51(1), 5–20 (2010). Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/21399735
20. Olguin, F.Q., Martinez, K.I.L.: Day of the Dead offering as an art psychotherapy strategy for older adults. Psicología y Salud 27(1), 127–136 (2017). Retrieved from http://psicologiaysalud.uv.mx/index.php/psicysalud/article/view/2443
21. Marín, F.J.R.: ¿Instalación, performance o celebración tradicional? Sincretismo cultural en el altar de muertos mexicano. Isla De Arriarán: Revista Cultural Y Científica 28, 327–338 (2006)
22. INPI: ¿Conoces el significado de los elementos de una ofrenda de Día de Muertos? Retrieved 7 May 2020, from https://www.gob.mx/inpi/articulos/conoces-el-significado-de-los-elementos-de-una-ofrenda-de-dia-de-muertos (2019)
23. UACJ: Altares y Tumbas 2019. Retrieved 7 May 2020, from https://comunica.uacj.mx/altares-y-tumbas-2019 (2019)
24. Rincón, O.: Narrativas mediáticas: O cómo se cuenta la sociedad de entretenimiento, vol. 23. Editorial Gedisa (2006)
25. Propp, V.: Morfología del cuento, vol. 31. Ediciones Akal (1998)
26. Oxford Reference: Interactivity - Oxford Reference. Retrieved 10 May 2020, from https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100006404 (2019)
27. Jenkins, H.: Convergence Culture: La cultura de la convergencia de los medios de comunicación. Grupo Planeta (2008)
28. Smuts, A.: What is interactivity? J. Aesthetic Educ. (2009). https://doi.org/10.1353/jae.0.0062
29. Scolari, C.A., March, J.M.: Hacia una taxonomía de los regímenes de info-visualización. Retrieved from https://repositori.upf.edu/handle/10230/27213 (2004)
30. Zhang, C., Hoel, A.S., Perkis, A.: Quality of alternate reality experience and its QoE influencing factors. In: AltMM 2017 - Proceedings of the 2nd International Workshop on Multimedia Alternate Realities, co-located with MM 2017, pp. 3–8. Association for Computing Machinery (2017). https://doi.org/10.1145/3132361.3132365
31. Morville, P.: User experience design. Retrieved 12 May 2020, from http://semanticstudios.com/user_experience_design/ (2004)
32. Garrett, J.J.: The Elements of User Experience: User-Centered Design for the Web and Beyond. Pearson Education (2010)
33. Baquier Orozco, R., Barraza Castillo, R.I., Husted Ramos, S.: Neoaltar: an interactive multimedia Day of the Dead experience. Heliyon 6(2), e03339 (2020). https://doi.org/10.1016/j.heliyon.2020.e03339
34. vvvv group: vvvv - a multipurpose toolkit. Retrieved 15 Oct 2018, from https://vvvv.org/ (2018)
35. UACJ: Celebran la muerte. Gaceta Universitaria, 10. Retrieved from https://issuu.com/gacetauacj/docs/gacetauacj_240 (2018)

Chapter 22

Implementing Co-Design Practices for the Development of a Museum Interface for Autistic Children Dimitra Magkafa, Nigel Newbutt, and Mark Palmer

Abstract Technology-based programs can help in various ways and improve the lives of autistic people. To design accessible programs that address target groups' needs, participatory design approaches are of core importance. This chapter focuses on technology co-design with three groups of autistic pupils within the context of participatory design. This approach helped to gain insights into participants' experiences, to support ideation, and to inform the design of a museum-based application. The aim was to develop an accessible interface that allowed the users to have an engaging experience in a museum environment. The stages of the design cycle of the interface are described. In this study, we consider the value of co-design with autistic participants, as it contributes positively to acquiring knowledge. With our approach, an extended understanding of the needs of autistic pupils was obtained while ensuring their active involvement in the design process. Finally, this work provides some invaluable insights and can serve as guidance for future research in co-developing technology for autistic users.

Keywords Autism · Involvement · Participatory design · Museum-based application · Development process

22.1 Introduction

The Diagnostic and Statistical Manual of Mental Disorders [1] defines autism as a group of complex neurodevelopmental disabilities which can be detected in early childhood. ASD affects how individuals communicate and interact with each other, and is marked by impaired sensory sensitivities and repetitive, stereotyped behaviours. Autism is a spectrum condition with a wide range of symptoms, abilities, and levels of severity, affecting each person differently [2]. According to the Centers for Disease Control and Prevention (CDC), the recorded prevalence of autism

D. Magkafa (B) · N. Newbutt · M. Palmer, The University of the West of England, Bristol, UK. e-mail: [email protected]

© Springer Nature Switzerland AG 2021. A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_22


422

D. Magkafa et al.

was found to be 1 in 56 children [3], whilst in 2008 it was closer to 1 in 88 [2]. In the UK, the latest prevalence rate of autism is 1 in 100 children [4]. Due to the high rates of autism and the challenges associated with daily living, the focus of the research community has shifted towards supporting autistic people across multiple domains. Evidence-based treatment approaches have been designed to better support and improve autistic people's well-being [5]. Several therapies and services have been introduced since the early 1990s to help overcome challenging behaviours, and these have led to positive outcomes [6, 7]. The aim of these therapies is to help autistic people develop greater functional skills that are important for their daily living. As technology has become prominent, technology-based interventions have been developed to target the core impairments of autistic people and help them attain functional independence on a daily basis [8–10]. Their potential to provide a structured and organised visual environment seems particularly effective in addressing complex needs [11]. Although several studies in the area of autism have dealt with the evaluation of technology-based programs [12, 13], little is known about the usability of such digital platforms [14, 15, 17]. In the context of developing interfaces for autistic groups, the way an interface is designed can have an impact on a child's behaviour and attitude when interacting with the platform [18]. A common outcome is that technology programs do not fit autistic users' needs, and these pitfalls can lead to a lack of motivation and engagement [16]. This in turn may cause undesirable behaviours and frustration for the users.
The reasons for this may be attributed to (a) limited inclusive design approaches and best practices informing the design of these technologies [14–17], or (b) a lack of understanding of autistic people's needs and contributions in the design process, as technology has been designed for rather than with autistic participants [19, 20]. As a consequence, this can restrict autistic children's use of technology programs. To tackle these issues, research has highlighted that active involvement in technology development has several advantages for both sides: autistic users and designers [8]. Participatory design (PD) entails the active involvement of end-users in the design process by asking them to contribute their own ideas and to reflect on prototypes [8, 21]. The process aims to generate ideas through iterative design refinement based on users' feedback, needs, and desires [22]. The advantages of this approach lie in the fact that it can identify limitations, enhance functionality, and test the validity of the interface, thus facilitating the development of novel and empathetic technology products. This is of core importance for the successful design of new technologies. Due to the lack of research on the topics of co-development and design requirements for digital services for autistic users, this study seeks to address these gaps. This work describes the design process, drawn from a multidimensional approach, used to conceptualise and design an accessible museum-based app. It also presents the results, in which autistic children and teachers became active agents and contributed to the design of a museum app. In particular, the current study sought to address the following:

22 Implementing Co-Design Practices for the Development …

423

• what factors need to be considered during the co-design process with autistic children, and
• what design implications can be obtained for the design of a museum-game app.

What follows is a brief overview of recent work related to technology-based studies for people with autism. It considers the concept of the PD approach and its role in the design of technology programs for children with ASD. Section 22.2 provides an overview of the project and the development process of a museum-based app and explains how the theoretical and methodological underpinnings were applied in the present project. Finally, the chapter concludes (Sect. 22.3) by reflecting on the themes that emerged from this experience and by considering future implications related to technology design for autistic groups.

22.2 Literature Review

22.2.1 The Emergence of Interactive Technologies for Children with Autism

Over the past twenty years, designers' and researchers' attention has been given to the development and evaluation of software and hardware programs to support people with autism. A great deal of empirical evidence in previous studies has documented the potential of such programs to improve the lives of autistic people and teach important skills. These skills can have a significant impact on performance in school and on other markers of quality of life in autistic groups. Research has highlighted the potential of technology-based programs to enhance communication [23], social skills competence [24], facial expression recognition [25, 26], and literacy acquisition [27, 28], and to teach daily transitions and functional life skills [29]. These studies show that digital environments and computerised learning are popular with and appealing to autistic users for various reasons. The unique affordances of technology, with consistent and predictable environments, might be reinforcing for autistic people because of their desire for sameness and predictable rules [27, 30]. These characteristics could be beneficial by maintaining motivation and engagement in the intervention, thus leading to better learning outcomes. In addition, the visual medium has potential benefits for autistic users, who have been reported to be strong visual thinkers. Rather than processing information through words, autistic individuals are keen on communicating and learning through visual means [31]. The use of visual media via electronic devices includes dynamic features, such as video, audio, and buttons that allow the user to move from one task to the next [32]. All these features are likely to motivate autistic people to respond efficiently and allow engagement through repeated imitation [33].

However, the majority of the studies tend to focus on the outcomes of these programs, whilst details of engaging autistic participants in the co-design process are limited.
However, the majority of studies tend to focus on the outcomes of these programs, whilst details of engaging autistic participants in the co-design process are limited.


D. Magkafa et al.

22.2.2 Research on Co-Designing Technology for Autistic Children

Over the past ten years, the rapid growth of technologies for autistic children has led researchers to consider their input. Researchers have adopted a child-centred approach to designing products, and novel work has produced platforms developed with the involvement of autistic children in the design process [34]. Participatory Design (PD) is seen as a well-evidenced practice for involving end-users and other stakeholders in the design process and for identifying how novel platforms can work in real situations [22]. Druin [35] highlights that the cognitive level of the participants and the level of involvement play a significant role when designing with children. Based on this, Druin [35] identifies four levels of children's involvement in technology research and in the product development process: (1) user, (2) tester, (3) informant and (4) design partner. Each role encompasses a broad range of involvement at differing phases; the child's involvement can range from the minimum, as users following the product's development, to being equal design partners throughout the design process. In the field of autism, this framework has been adopted, and different interactive technology platforms have been developed with children's input. The children's roles have varied from testers [36] to equal design partners [37], while the involvement of indirect stakeholders, such as teachers, parents, and technology practitioners, was considered necessary in those cases where the end-users were less able to communicate with the research team. Emerging research demonstrates the value of autistic children's inclusion and has highlighted the potential of enabling their voices to be heard and of giving them a sense of empowerment within the design process [34, 38]. From a designer's perspective, it is also seen as an approach (a) to identify "the acceptance, ownership and the odds of a successful design" [39; p. 22], and (b) to gather user feedback over the development process [40] and then refine the design through iterative brainstorming and prototyping. Some examples include the projects by [37, 41–45]. These projects were designed by incorporating user-centred techniques and by collaborating with stakeholders, such as parents, autistic children, teachers, and assistive technology practitioners, throughout the process. More specifically, initial approaches to including autistic children in the process were carried out by [41], who co-designed tablet applications to help autistic children improve their social skills. Two groups were included in this study: a group of typically developing children and a group of autistic children worked together on various design activities aimed at developing social skills for both groups. Within the ECHOES project [44], the focus was on the process of the co-design sessions rather than the outcomes, giving details of how the sessions were structured and what implications emerged throughout the process. The design activities enabled active participation by the autistic groups and gave them a voice to express themselves, thereby strengthening their agency in the process. Frauenberger et al. [44] highlight that a narrative story and sensory exploration through different techniques contributed to effective participation.

22 Implementing Co-Design Practices for the Development …

Benton et al. [42, 43] proposed the IDEAS framework, a design approach which scaffolds designers in supporting the effective and creative involvement of participants with ASD. This framework was developed to help researchers plan interactive sessions for people with autism. IDEAS is based on TEACCH (a structured teaching intervention approach) and includes four steps: (1) understanding the culture, (2) tailoring to the individual, (3) structuring the environment, and (4) providing supports. All these steps facilitate the use of PD methods by placing the emphasis on children's abilities and by providing the best environment for them to make their own contributions. Malinverni et al. [35] further investigated how PD activities can help validate initial designs, collect new ideas from autistic children's perspectives, and assess which aspects make children more motivated. To conduct these sessions, various techniques were used, such as (a) a little puppet theatre, (b) causal tables and cut-out images, (c) motor activities, and (d) drawings. In another example, Bossavit and Parsons [37] co-designed an educational game to teach geography, focusing on academic skills. In this project, autistic participants were assigned as design partners throughout the development process and were tasked with creating a game. The iterative design lifecycle enabled the involvement of teachers and children at various stages. The results from these sessions reported the positive aspects of directly involving autistic children in the design of the game. The children's input varied: some were more motivated and actively involved, while others were more reluctant to participate. The participants' contributions and the various techniques used at different stages fostered idea generation and feedback, thus informing the final design of the product.

22.3 Study Design

The present project aims to design and develop a museum-based application for a group of autistic children. The study seeks to examine whether a digital platform, such as a touchscreen-based application, can enable groups of autistic individuals to have an enjoyable experience in a barrier-free environment such as a museum. In doing so, the study's primary aim was to obtain insights into the children's interests and opinions, thereby informing the design of the app. This chapter considers the children's input within the technology design process and discusses the design ideas generated through different activities. The project was theoretically informed by the user-participatory approach, and it addressed the preferences of a case-study group. Cycles of iteration and refinement were undertaken to create the final version of the app, as illustrated in Fig. 22.1.


Fig. 22.1 Stages followed for the development of the interface

22.3.1 Design and Development

22.3.1.1 Participants

For the purpose of the project, a special needs school in Bristol (UK) was recruited to take part in the research. Two classes (Blue class and Green class) were selected


to participate in the co-design activities. In total, 13 children, the teaching staff of those classes who were familiar with the children, and the principal researcher attended the sessions. The children's ages ranged from 10 to 13 years (mean age = 11.9 years, standard deviation = 1.14). Their cognitive and language difficulties varied from significant language and communication issues to well-developed functional speech. None of the students had a physical disability; some had been formally diagnosed with autism following the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria, whilst others had been diagnosed with nonspecific disabilities, such as Foetal Alcohol Syndrome (FAS) (see Table 22.1). The present study was approved by the Ethics Committee, and written parental consent was obtained prior to the study. Table 22.1 presents the demographic characteristics of the participants. To secure their anonymity, pseudonyms were used at all times. Based on Druin's framework for children's involvement [35], the children in the current project had two distinct roles: in the first phase they took the role of informant, and in the second phase the role of tester. In the early stages, the children contributed as informants by giving direction to the design process and providing input. Another group with learning difficulties participated in the role of tester, which is examined in detail below.

Table 22.1 Demographic information of the participants

Group 1 (Blue class)
  Participant   Age  Gender  Status
  P1 Charlie    10   Male    ASC
  P2 Hanna      10   Female  ASC
  P3 Anna       11   Female  ASC
  P4 Tina       12   Female  FAS¹
  P5 Chloe      11   Female  ASC-OCD²
  P6 Bruce      12   Male    ASC-ADHD
  P7 Samira     11   Female  ASC-PDA³

Group 2 (Green class)
  Participant   Age  Gender  Status
  P1 Rob        13   Male    ASC
  P2 Tom        13   Male    ASC
  P3 Matt       13   Male    ADHD
  P4 Jason      13   Male    ASC
  P5 Andrew     13   Male    ADHD
  P6 David      13   Male    ASC

¹ FAS foetal alcohol syndrome. ² OCD obsessive compulsive disorder. ³ PDA pathological demand avoidance.


22.3.2 Stage 1: Discovery

22.3.2.1 Co-Design Activities with Autistic Children

The features of a digital platform are considered to play a key role in developing an accessible interface and capturing the user's interest [17]. In this study, the PD approach was applied to obtain further information about technology use and to evaluate existing best practices, with the purpose of further enhancing our understanding of the children's needs. This approach was chosen in order to develop child-centred and valid interfaces that would align with autistic children's preferences and strengths. The activities were centred on the design of a museum game-based application. To achieve this, the activities took place in the participants' classroom as a class activity. In total, we conducted four participatory sessions, each lasting 40 min. The sessions were led by the principal researcher of the project, whilst a special needs teacher and a teaching assistant were present. When a child was confused or showed little interest in contributing, the principal investigator or the facilitators intervened to regain the child's interest. In each session, our aim was to examine the children's preferences about specific features of the application in order to improve the accessibility of the platform. The participants were encouraged to give their own feedback and reflect on those concepts. The children's input was given (a) by eliciting their feedback through their words and/or behaviour, and (b) by producing their own ideas through sketching low-fidelity prototypes. The role of the children over the course of the sessions involved more than merely being informants. By combining different participatory techniques, such as focus groups via PowerPoint slides (sticky notes), brainstorming (drawing), and low-fidelity prototypes, we were able to capture users' perceptions of the functionality of the interface and so determine the level of difficulty.
Following the co-design sessions, an analysis of the teachers' feedback provided insights into which aspects were perceived as valuable for engaging autistic children.

Focus Group: PowerPoint Slides

The aim of the first session was to examine some functionalities of the platform, as there is a lack of knowledge in the literature for this targeted population. The sessions were structured as follows. Using PowerPoint slides presenting examples of existing applications, the researcher asked the children to help evaluate the features of those applications. The children were provided with sticky notes and asked to give their own opinions: they were requested to write their names and their likes or dislikes on the sticky notes and to place them next to their preferred answer on the whiteboard. The participants were able to communicate and get involved in the activities over the sessions; however, the presence of the teachers was useful for facilitating the process, as well as for helping those who had difficulty understanding some questions. One participant's response regarding how the information content should be represented in the interface was: "We want the simplest way".

Fig. 22.2 A child draws and explains his ideas about the app

Brainstorming: Sketching

The brainstorming session was creativity-oriented, as each participant's task was to draw their own ideas of how their app would look in a paper-based format. Alongside this, visual supports were provided via PowerPoint slides, with examples of how designers sketch out a prototype to generate their ideas. The session was structured by giving the children step-by-step guidelines on how to make their sketches; however, they were free to draw their own ideas and include their own creativity. Blank A3 sheets, cut-out images and art supplies were provided to inspire the children's idea-generation process. These supports helped the children to understand better what they were being asked to do and avoided the confusion of being faced with a completely blank piece of paper with no text or guidance [36]. Within these paper interface prototypes (Fig. 22.2), the participants had to consider the position of the paper-based action buttons (home/menu/back), the types of rewards they would prefer on completing challenges correctly, and the feedback, and to write or draw content that could be included in the interface. According to the field notes, there was a difference in the children's ability to express their own ideas. For example, Hanna and Charlotte were willing to do the activity and smiled every time they completed a step. They were less able to work on their own, and they looked for the researcher's attention and reinforcement to progress to the next step. Although they were able to make progress, they could not generate their own ideas independently owing to their difficulties with spelling. Charlie, Samira, and Chloe were able to work well and independently through the task. They seemed focused and confident regarding the nature of the activity, and they completed it on time. In Green class, all the children were initially involved in making prototypes; however, Andrew decided to withdraw from the activity. Based on the field notes, all the children listened well and seemed focused throughout the session. Some of the participants, such as Rob and David, showed signs of independence and confidence regarding the nature of the activity and made progress through the task to completion. Rob was keen to learn more about the app and the content of the museum:

Rob: "What's M-Shed about?"
Researcher: "M Shed is all about Bristol's history. It tells stories about what happened in the city and what has been found. It tells the story of a dinosaur. It has buses and bikes, and it talks about the Second World War."
Rob: "OK, so because the museums have many objects, we can have a lot of options for the user."

Low-Fidelity Prototypes

In the fourth session, the activity was structured by presenting the combined ideas about the museum game via low-fidelity prototypes. Meanwhile, hard copies of those prototypes were given to those who preferred to work with less intimate or personal contact. The children were invited to provide their answers either verbally or through sticky notes placed next to one of the answers on the whiteboard. The children were shown the low-fidelity prototype of the game and were asked to write or express their preferences about the functionality of the app. During these sessions, the participating children seemed engaged with the task, giving their own input. The children's active role and engagement was shown by them suggesting alternative ideas; for example, both classes preferred to customise the colours. Bruce from Blue class appeared to lack confidence and required prompting and reassurance. Charlie was very engaged and wanted to give further suggestions. He commented:

Charlie: "You have different characters and a search bar, and the user can select one of those (guy, lady or child). On the corner, you can have a little thing with the face of the character."
Researcher: "What if the app provides one character?"
Charlie: "So you can have some other options for the players to select which one they want?"
Researcher: "Like what? For example, different items of clothing to dress up the character?"


Fig. 22.3 Rob’s idea for the museum interface through a PowerPoint presentation

Charlie: "Yes, why not? Or different eye colours. Now they need to scan the barcode, which is around the museum, and then a quiz will pop up."
Researcher: "When you answer the quiz correctly, should the players go to the next level?"
Charlie: "For the next level, you keep doing that over and over again, scanning barcodes, and at the end you solve the mystery."

In Green class, Rob showed an interest in talking further about the interface of the app and referred to the integration of new features that could make the app more engaging for users. Rob gave the following recommendations: (a) short- and long-term goals: "The users need to get two items in order to open the first box and then four to open the golden one"; (b) a multiplayer mode: "This part of the game can be a multiplayer game, so more people could play together"; (c) a non-linear route: "The players can choose which spots to visit"; and (d) personalised features of the main character: "Can we use colouring options for the character?". At the end of the session, Rob was still eager to talk further about the interface, and he expressed the desire to present his ideas in a PowerPoint presentation (three slides), as seen in Fig. 22.3. His presentation showed that he had a strong interest in vehicles. The key points identified in the co-design sessions are summarised in Table 22.2. The participants' preferences and input generated a list of design preferences which informed the present study. Most of the design recommendations confirmed current best practices; however, some novel ideas were reported by the children during the co-design sessions. These include (a) the location of the buttons (at the bottom of the screen, whereas the literature suggests the top), (b) the ability to customise the interface colours, and (c) the integration of game features.

Table 22.2 Interface elements obtained from the participants

Customization:
  – layout (input controls)
  – colour of the interface
  – menu action (either top or bottom)
  – type of font
Accessibility:
  – format of questions via text and sound
  – provision of sound
Game elements:
  – include some instructions
  – cooperative working with others (customizable)
  – feedback through pictures
  – type of feedback: faces
  – rewards through sounds and images
Aesthetics:
  – include visual supports
  – digital tools: animations, images and/or videos

22.3.3 Stage 2: Concept Development

Following the data collection sessions, the research team considered the following question: how can we confirm that the present interface meets the users' needs and captures their interest? Building on the gathered user insights and the prior literature, this phase enabled the team to start generating ideas. A group of undergraduate students (graphic designers and an application developer) joined the research project in order to give shape to the app. The first step was to focus on concept and idea development so as to increase user engagement. According to the literature, an alien scenario is one of the most well-known techniques for creating interesting and appealing content for a product [46]. Regarding the content of What's Bristol, the plot was structured as follows: an alien named Wallis (which in Welsh means "foreigner") has been assigned a mission: to explore Earth, and its first destination is Bristol. Together, Wallis and two players are sent on a scavenger-hunt game exploring different key points and stories of Bristol's past. The first thing that appears in the app is a map of the museum's layout, which guides the users to the spots they should visit. Each time they answer a task correctly, they are given a puzzle piece. Once they have visited all the spots and gathered all the puzzle pieces, the players must collaborate to find the answer to the final challenge. During the game-tour, the players need to work (a) individually, by visiting the spots, and (b) in pairs, by sharing information about the spots visited and solving the last challenge. The devices enable the players to navigate the digital items by providing vocal guidance and relevant information on the screen. After formulating the content of the app, wireframes and paper-based and digital prototypes were developed to mock up the design and interface of the app (Figs. 22.4 and 22.5). This process consisted of discussing ideas and identifying the key principles to be addressed in the interface. Over the course of the brainstorming stage, the teachers were asked to provide feedback on the content and the activities included in the app.

Fig. 22.4 Paper-based prototype with the structure of the app

Fig. 22.5 Digital prototype of the app
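The scavenger-hunt mechanic described in this stage (answer a task at each spot to earn a puzzle piece; collect every piece to unlock the final collaborative challenge) can be sketched as a simple state model. This is a hypothetical illustration only: the class and function names below are invented for this sketch and are not taken from the app's actual implementation.

```python
# Minimal sketch of the scavenger-hunt logic; all names are illustrative,
# not taken from the actual What's Bristol app.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Spot:
    """One museum spot: a task is answered after visiting/scanning it."""
    name: str
    question: str
    answer: str

@dataclass
class GameState:
    spots: list                                 # all spots on the museum map
    pieces: set = field(default_factory=set)    # puzzle pieces collected so far

    def answer_task(self, spot: Spot, response: str) -> bool:
        """A correct answer at a spot awards one puzzle piece."""
        if response.strip().lower() == spot.answer.lower():
            self.pieces.add(spot.name)
            return True
        return False

    def final_challenge_unlocked(self) -> bool:
        """The collaborative final challenge opens once every piece is collected."""
        return len(self.pieces) == len(self.spots)

# Usage: two spots, both answered correctly, unlock the final challenge.
spots = [Spot("Dinosaur", "What was found here?", "fossil"),
         Spot("Transport", "What is on display?", "buses")]
game = GameState(spots)
game.answer_task(spots[0], " Fossil ")
game.answer_task(spots[1], "buses")
print(game.final_challenge_unlocked())
```

The pairing aspect (two players sharing information to solve the last challenge) sits on top of this state and is omitted here for brevity.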


22.3.4 Stage 3: User Testing and Evaluation of the Interface

In order to determine its accessibility and usability, the software was trialled with a different class: a group of autistic children was assigned to test and evaluate a short version of an early prototype interface. The class consisted of six children (n = 6). The testers' role was to elicit insights into user needs that required refinement. The children tested the digital prototype, and the researcher observed and analysed how the participants interacted with the interface. Qualitative data were gathered through a questionnaire about its usability. These techniques of enquiry sought to uncover users' impressions and eagerness to use the platform and to detect any functionality issues in the overall platform. Over the session, the research team provided assistance by prompting the children verbally whenever necessary. Each child was provided with an iPad with the application installed and was asked to interact with the interface. The questionnaire was based on visual representations (e.g., happy and sad faces) and covered issues of accessibility and usability, including navigation, information content, and the integration of media formats, such as the combination of text and sound. Of the six children, four (n = 4) completed and returned the questionnaire, whilst one participant refused to complete it. The feedback from the tester group and the school staff showed that some adjustments to the initial prototype were necessary. Overall, all the participants were able to interact with the interface. One issue that emerged was that the size of the letters was "a bit hard" for one child, as he was not able to read the text clearly. Based on the children's experience, the tasks were found to be quite easy and the size of the buttons "very big" to click on.
Moreover, two (n = 2) participants expressed some design suggestions, for example: "Need more colour variety for the Alien". Meanwhile, in accordance with the teachers' suggestions, a very simple layout and content were preferred.

22.3.5 Stage 4: Redesigning the Platform

Following the children's and teachers' feedback, the interface was redesigned by the team, with ideation and brainstorming informing one another. The goal of this phase was iteration and concept refinement until a more polished version was achieved. A testing session with the design team was also conducted in the museum environment. Figure 22.6 presents some screenshots of the app.


Fig. 22.6 Screenshots of the final version of the interface

22.4 Discussion

The user-participatory approach, its techniques, and the collaborative partnership with the teachers led to the design and development of an interface to support autistic users within the walls of a museum. The co-design activities and feedback described have had a direct impact not only on the design of the app, but also on the researchers' understanding of designing for and with children with ASD. While PD efforts often focus on specific elements of a piece of software, it is important to reflect more generally on the process and identify which aspects were seen as valuable in applying a user-centred framework for autistic children. Through a participatory-centred approach, a framework was obtained as a reflective practice. The framework, presented in the form of a diagram (Fig. 22.7), illustrates that this study focused initially on identifying best practice in technology design for autistic children. Co-design activities were organised with a group of autistic children, with teachers present, to involve them at the early stages of the design process. The analysis of the data led to the identification of five factors that can positively influence the experience of co-design practice for autistic children. During the co-design sessions, the children's roles, abilities and level of support influenced the design process. Issues around the appropriate use of language and the understanding of the children's abilities were encountered at the beginning of the sessions. The main themes that emerged from the co-design sessions and the development process gave us some insights, as outlined below.

Fig. 22.7 Framework capturing the process and articulating the factors for co-designing with autistic groups

22.4.1 Engagement and Children's Input Based on Their Abilities

The present study provides empirical evidence that the PD approach was a well-chosen practice for incorporating users' input into the design process. As reported, social interactions can be challenging and problematic for autistic people [41]. In line with previous studies [37, 42], it was noted in this research that most of the children were able to express their needs and preferences, and they enjoyed the freedom of offering feedback and building upon their thoughts and ideas. The brainstorming and low-fidelity sessions produced a number of drawings and sticky-note texts that needed careful consideration in order to retrieve information and inform the design of the app. The proposed designs demonstrated the children's initial interest and engagement in contributing to the design process. The difference in involvement in the activities was subject to each child's cognitive abilities and personal interests. It was observed that some children engaged independently with the task, working for a long time in a meaningful way, while others needed reassurance and reinforcement. This is consistent with the teachers' views. Teacher 3 commented: "For the higher-ability ones [children], it was really an enjoyable experience, while the others needed additional support to complete the tasks. It is a very tricky class to work with these levels." For those who needed additional support and reinforcement, the level of behaviour was perceived positively, as they seemed to behave and listen well throughout the sessions. Indicative of this was that only one child dropped out of the brainstorming session. As a reflection of their interest, the students engaged in the sessions by either asking related questions or suggesting ideas they had about new design features, such as short- and long-term goals and additional accessibility features. Within this context, we identified several factors that supported the children's active participation.

22.4.2 Building Rapport

The strategy of establishing positive relationships can be considered an important variable and seems to influence a child's behaviour when task demands are given. As Frauenberger et al. [44] point out, "strong and lasting relationships are the foundation that participatory work requires to flourish". In this study, the children's familiarity with the researcher prior to the onset of the sessions was considered an important factor in scaffolding their active involvement and engagement. As Teacher 1 put it, "your attendance did help a lot and did work quite well". This relationship allowed the children to feel comfortable with the presence of the researcher in advance and helped them become familiar with the nature of the project. The rapport phase was also seen as important for reducing the children's anxiety and for facilitating their communication throughout the technology design process. Becoming a familiar figure in the class before the study led both groups to increase their self-confidence during the design activities, and thus to offer their own ideas as part of the activities. According to the literature, the idea that child participants' opinions matter as much as the ideas of other stakeholders is the foundation of the decision-making process [47]. The principle that the children's opinions were important was a key factor in achieving an equal balance between the researcher and the participants. This endeavour gave the children a legitimate voice in the design and scaffolded decision-making power, as they expected that the agreed-upon ideas would be included in the final design, and this made them feel valued.

22.4.3 Individuals

It was observed that the children preferred different ways of expressing their opinions, either verbally or through sticky notes, depending on their abilities and skills. The researcher's role was to ensure that the children were free to express themselves and to provide them with multiple materials and resources to help them represent their opinions in their preferred way. In line with this, the adjustment of language levels was seen to have a positive impact on the delivery of the sessions. Teacher 1 commented that the co-design activities "worked quite well, it got better as they went on, you got the feel of the level of the children, you adjusted the language". As children with autism learn effectively using visual supports, the incorporation of visual means may have encouraged the children's communication and consequently their participation. Based on the study's findings, the use of visual supports [42] proved valuable for sparking the children's interest and their subsequent involvement. Visual supports, such as screenshots of existing applications and guidelines on how the children could sketch out their own ideas, were provided, and these proved useful during the process. Based on the teachers' comments, the visual aids were viewed as appropriate and necessary for allowing the children to carry out the tasks successfully and for bringing the workshops closer to their abilities. One teacher commented: "The sessions became better as you started using the visual aids and adjusted the speech language level". In fact, children with ASD may need additional support to be involved in design activities [42] and to understand better the nature of the activities. Research suggests that using images as aids to memory helps children to understand and to predict what is intended to happen [48, 49].

22.4.4 Suitable Environments

It was observed that one driver of the children's increased level of participation was being in a familiar environment. Structuring the activities within the group's main classroom as a class activity, together with the absence of electronic devices such as desktop computers, contributed to keeping the children focused and reinforcing their performance, thus supporting an initial engagement with an unfamiliar situation. Considering the difficulties autistic persons encounter in transition situations [50], the participatory activities in their classroom were found to increase their comfort level gradually from one session to another.

22.4.5 Creativity Potentials

Another key theme which arose from this study was the concept of creativity. Creativity is a core resource in participatory processes and occurs when proper methods and environments enable the participants to interact with and explore new situations [36]. In the case of children with neurodiverse difficulties, creativity is viewed as a powerful means of structuring successful and effective PD sessions [51]. In this study, the results are rather encouraging: with an appropriate balance between positive support and freedom, the children were able to provide valuable input and to act as contributors to the design of the app. During the co-design sessions, a number of variables enabled the children to unfold their creative potential and reduce their anxiety. These included drawing, the surrounding environment (i.e., the children's classroom), and features such as free space (in the

22 Implementing Co-Design Practices for the Development …

439

third and fourth sessions) and appropriate prompts, which provided opportunities to entice the participants to develop their own ideas. The literature supports the idea that drawing is an appropriate way for children to express their thoughts fully and creatively [52]. As reported, the children performed several actions, such as using art-based supplies for brainstorming, drawing a model of their ideal app, and giving free rein to their imagination by writing down their thoughts and ideas. This process of engagement can be characterized as creative acts based on the children's own interests. Consistent with the literature, this research found that providing accessible tools, such as basic art supplies and hands-on art activities [35], can motivate participants to uncover their creative potential. Through these activities, the children felt secure and confident enough to initiate interaction and conversation or to draw content. The children were able to set down their thoughts within the paper-based template. Using the self-expression template as part of the design process gave the children context and structure to direct their ideas. An interesting aspect of this process was that all of the children were able to write or draw on the piece of paper. However, one limitation was that the templates were small, which might have prevented some of the children from writing detailed descriptions or adding a detailed drawing. In addition, the feedback (direct or indirect) obtained through this approach revealed that some of the children's ideas were either not feasible or not understandable. Nevertheless, valuable insights were gained into the children's interests regarding potential museum themes and the topics that attracted them.

22.4.6 Teachers' Involvement

These findings also highlight the significant contribution of other stakeholders, such as teachers, to the process. Benton and Johnson [38] note that when teachers and/or carers are part of the design process, they can help the sessions progress by observing the children's activities and intervening effectively when necessary. In accordance with previous studies, we observed in some cases the children's distraction and frustration in response to our requests, and their need for individualized guidance. Further, Guha et al. [53] point out that the presence of adults encourages children to generate their own ideas. In terms of idea generation, some of the children were not able to start brainstorming without prompts from the facilitators or the researcher. These results confirm that, as teachers are more aware of the children's difficulties and/or interests, their active role as facilitators in the design process can be supportive and useful in ensuring positive outcomes.


22.5 Conclusion

This project was undertaken to delineate the design and development of a touchscreen-based application to support autistic children within the museum. Towards this goal, the study focused on the process: two groups of autistic children were involved in the design of the interface to obtain a realistic picture of their needs and desires. To uncover these, a user-led participatory approach informed the design process by enabling the participants to give their own input. To maximize the usability of the interface, it was important to identify and generate practical design insights suitable for a group of autistic children. Integrating different techniques contributed to the participants' initial engagement and generated feelings of enthusiasm. Through these activities, some children were confident expressing themselves verbally and described their own design ideas on which features could be incorporated in the interface. The analysis of the co-design sessions and the perspectives of the teachers provided insights into the factors that influence the outcome of the co-design practice, such as building rapport, creativity, suitable environments and the use of visual means. This process resulted in the development of a framework to help researchers coordinate co-design sessions. The present work advocates that the children's involvement at various stages was valuable both in facilitating the design process and in refining the interface. The children felt empowered to uncover their creativity and to contribute through idea generation. One way to accomplish this is by incorporating continuous support appropriate for the target group and structuring the environment and the activities according to the children's needs. The platform's usability has also been tested. Our experience suggests that a child-centred approach is important in enabling the users to have their own say.
However, the diversity of opinions regarding the usability of the interface and the relatively limited sample do not allow us to make generalizations. This raises an important issue: to what extent can autistic children's role be considered valuable in the user-centred process? It also entails re-thinking where the limits lie between the researcher's and the autistic children's roles in the final decision-making. The results of this study suggest several directions for future research. The child-orientated approach that this work followed has resulted in the development of a framework that can have an impact on co-design practice. In future studies, it will be interesting to build upon these results, examine the proposed framework, and work out recommendations for other aspects that could be added. Through this work, design features of the museum interface were extracted, focusing on aspects such as customization, accessibility, game elements, and aesthetics. Building upon this, future work might use the outcomes of this study to create technology-driven museum activities in order to further validate the effectiveness of the design elements identified here. Additional research is required to improve the interface's usability. This study followed a systematic approach based on an iterative reflective process to improve the usability of a museum interface. However, future studies may adopt this approach in order to consider more thoroughly the needs and


expectations of autistic children through a non-linear approach. The use of technology platforms by autistic users has the potential to enhance their quality of life. However, it is important to ensure that end-users are included in the decision-making process of a technology-driven product. Future researchers should focus on the process and satisfy autistic children's needs through continuous improvement of the user interface. In turn, this approach stands as an essential priority in the effort to develop inclusive digital services tailored to their needs.

References

1. American Psychiatric Association: Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th edn. American Psychiatric Publishing, Washington, DC/London (2013)
2. American Psychiatric Association: Autism Spectrum Disorder. Available from: https://www.psychiatry.org/patients-families/autism/what-is-autism-spectrum-disorder (2013)
3. Centers for Disease Control and Prevention: Autism Spectrum Disorder (ASD). Available from: https://www.cdc.gov/ncbddd/autism/facts.html (2014)
4. Department of Health: Progress in Implementing the 2010 Adult Autism Strategy. Available from: https://www.nao.org.uk/report/memorandum-progress-in-implementing-the-2010-adult-autism-strategy/ (2012)
5. Zwaigenbaum, L., Bryson, S., Lord, C., Rogers, S., Carter, A., Carver, L., Chawarska, K., Constantino, J., Dawson, G., Dobkins, K., Fein, D.: Clinical assessment and management of toddlers with suspected autism spectrum disorder: insights from studies of high-risk infants. Pediatrics 123(5), 1383–1391 (2009)
6. Howlin, P.: Practitioner review: psychological and educational treatments for autism. J. Child Psychol. Psychiatry 39(3), 307–322 (1998)
7. Charman, T., Baird, G.: Practitioner review: diagnosis of autism spectrum disorder in 2- and 3-year-old children. J. Child Psychol. Psychiatry 43(3), 289–305 (2002)
8. Parsons, S., Cobb, S.: Reflections on the role of the 'users': challenges in a multi-disciplinary context of learner-centred design for children on the autism spectrum. Int. J. Res. Method Edu. 37(4), 421–441 (2014)
9. Ganz, J.B.: AAC interventions for individuals with autism spectrum disorders: state of the science and future research directions. Augmentative Altern. Commun. 31(3), 203–214 (2015)
10. Ramdoss, S., Mulloy, A., Lang, R., O'Reilly, M., Sigafoos, J., Lancioni, G., Didden, R., El Zein, F.: Use of computer-based interventions to improve literacy skills in students with autism spectrum disorders: a systematic review. Res. Autism Spectr. Disord. 5(4), 1306–1318 (2011)
11. Murdock, L., Ganz, J., Crittendon, J.: Use of an iPad play story to increase play dialogue of preschoolers with autism spectrum disorders. J. Autism Dev. Disord. 43(9), 2174–2189 (2013)
12. Kagohara, D., Van der Meer, L., Ramdoss, S.S., O'Reilly, M., Lancioni, G., Davis, T., Rispoli, M., Lang, R., Marschik, P., Sutherland, D., Green, V., Sigafoos, J.: Using iPods and iPads in teaching programs for individuals with developmental disabilities: a systematic review. Res. Dev. Disabil. 34(1), 147–156 (2013)
13. Parsons, S.: Authenticity in virtual reality for assessment and intervention in autism: a conceptual review. Edu. Res. Rev. 19, 138–157 (2016)
14. Davis, M., Dautenhahn, K., Powell, S., Nehaniv, C.: Guidelines for researchers and practitioners designing software and software trials for children with autism. J. Assistive Technol. 4(1), 38–44 (2010)
15. Putnam, C., Chong, L.: Software and technologies designed for people with autism: what do users want? In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 3–10 (2008)


16. Ploog, B., Scharf, A., Nelson, D., Brooks, P.: Use of computer-assisted technologies (CAT) to enhance social, communicative and language development in children with autism spectrum disorders. J. Autism Dev. Disord. 43, 301–322 (2013)
17. Millen, L., Edlin-White, R., Cobb, S.: The development of educational collaborative virtual environments for children with autism. In: Proceedings of the 5th Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT 2010), pp. 1–7 (2010)
18. Rogers, Y., Sharp, H., Preece, J.: Interaction Design: Beyond Human-Computer Interaction, 3rd edn. Wiley, Chichester (2011)
19. Brown, D.J., McHugh, D., Standen, P., Evett, L., Shopland, N., Battersby, S.: Designing location-based learning experiences for people with intellectual disabilities and additional sensory impairments. Comput. Educ. 56(1), 11–20 (2011)
20. Friedman, M.G., Bryen, D.N.: Web accessibility design recommendations for people with cognitive disabilities. Technol. Disabil. 19(4), 205–212 (2007)
21. Spinuzzi, C.: The methodology of participatory design. Tech. Commun. 52(2), 163–174 (2005)
22. Muller, M.J., Druin, A.: Participatory design: the third space in HCI. In: Jacko, J. (ed.) The Human-Computer Interaction Handbook, 3rd edn., pp. 273–291. Taylor & Francis, New York (2011)
23. Kagohara, D., Sigafoos, J., Achmadi, D., O'Reilly, M., Lancioni, G.: Teaching children with autism spectrum disorders to check the spelling of words. Res. Autism Spectr. Disord. 6, 304–310 (2012)
24. Parsons, S., Mitchell, P., Leonard, A.: The use and understanding of virtual environments by adolescents with autism spectrum disorders. J. Autism Dev. Disord. 34, 449–466 (2004)
25. Silver, M., Oakes, P.: Evaluation of a new computer intervention to teach people with autism or Asperger syndrome to recognize and predict emotions in others. Autism 5(3) (2001)
26. Gay, V., Leijdekkers, P.: Design of emotion-aware mobile apps for autistic children. Health Technol. 4(1), 21–26 (2014)
27. Moore, M., Calvert, S.: Brief report: vocabulary acquisition for children with autism: teacher or computer instruction. J. Autism Dev. Disord. 30(4), 359–362 (2000)
28. Bosseler, A., Massaro, D.W.: Development and evaluation of a computer-animated tutor for vocabulary and language learning for children with autism. J. Autism Dev. Disord. 33, 653–672 (2003)
29. Cihak, D., Fahrenkrog, C., Ayres, K., Smith, C.: The use of video modeling via a video iPod and a system of least prompts to improve transitional behaviours for students with autism spectrum disorders in the general education classroom. J. Positive Behav. Interv. 12, 103–115 (2010)
30. Ganz, J.B., Hong, E.R., Goodwyn, F.D.: Effectiveness of the PECS Phase III app and choice between the app and traditional PECS among preschoolers with ASD. Res. Autism Spectr. Disord. 7, 973–983 (2013)
31. Parsons, S., Cobb, S.: State-of-the-art of virtual reality technologies for children on the autism spectrum. Euro. J. Spec. Needs Edu. 26(3), 355–366 (2011)
32. Stromer, R., Kimball, J.W., Kinney, E.M., Taylor, B.A.: Activity schedules, computer technology, and teaching children with autism spectrum disorders. Focus Autism Dev. Disabil. 21(1), 14–24 (2006)
33. Bernard-Opitz, V., Sriram, N., Nakhoda-Sapuan, S.: Enhancing social problem solving in children with autism and normal children through computer-assisted instruction. J. Autism Dev. Disord. 31(4), 377–384 (2001)
34. Fails, J.A., Guha, M.L., Druin, A.: Methods and techniques for involving children in the design of new technology for children. Found. Trends Hum. Comput. Interact. 6(2), 85–166 (2012)
35. Druin, A.: The role of children in the design of new technology. Behav. Inf. Technol. 21(1), 1–25 (2002)
36. Frauenberger, C., Good, J., Alcorn, A., Pain, H.: Conversing through and about technologies: design critique as an opportunity to engage children with autism and broaden researcher perspectives. Int. J. Child-Comput. Interact. 1(2), 38–49 (2013)
37. Bossavit, B., Parsons, S.: Designing an educational game for and with teenagers with high functioning autism. In: Proceedings of the 14th Participatory Design Conference: Full Papers (1), pp. 11–20. ACM (2016)


38. Benton, L., Johnson, H.: Widening participation in technology design: a review of the involvement of children with special educational needs and disabilities. Int. J. Child-Comput. Interact. 3(4), 23–40 (2015)
39. Frauenberger, C., Good, J., Keay-Bright, W.: Designing technology for children with special needs—bridging perspectives through participatory design. CoDesign 7(3), 1–28 (2011)
40. Spiel, K., Malinverni, L., Good, J., Frauenberger, C.: Participatory evaluation with autistic children. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 5755–5766. ACM (2017)
41. Hourcade, J.P., Bullock-Rest, N.E., Hansen, T.E.: Multi-touch tablet applications and activities to enhance the social skills of children with autism spectrum disorders. Pers. Ubiquit. Comput. 16, 157–168 (2012)
42. Benton, L., Johnson, H., Ashwin, E., Brosnan, M., Grawemeyer, B.: Developing IDEAS: supporting children with autism within a participatory design team. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2599–2608 (2012)
43. Benton, L., Vasalou, A., Khaled, R., Johnson, H., Gooch, D.: Diversity for design: a framework for involving neurodiverse children in the technology design process. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3747–3756. ACM (2014)
44. Frauenberger, C., Good, J., Alcorn, A., Pain, H.: Supporting the design contributions of children with autism spectrum conditions. In: Proceedings of the 11th International Conference on Interaction Design and Children, pp. 134–143. ACM (2012)
45. Malinverni, L., Mora-Guiard, J., Padillo, V., Mairena, M., Hervás, A., Pares, N.: Participatory design strategies to enhance the creative contribution of children with special needs. In: Proceedings of the 2014 Conference on Interaction Design and Children, pp. 85–94. ACM (2014)
46. Dindler, C., Eriksson, E., Iversen, O.S., Lykke-Olesen, A., Ludvigsen, M.: Mission from Mars: a method for exploring user requirements for children in narrative space. In: Proceedings of Interaction Design and Children: Toward a More Expansive View of Technology and Children's Activities, pp. 40–47 (2005)
47. McNally, B., Guha, M.L., Mauriello, M.L., Druin, A.: Children's perspectives on ethical issues surrounding their past involvement on a participatory design team. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 3595–3606 (2016)
48. Banda, D.R., Grimmett, E., Hart, S.L.: Activity schedules: helping students with autism spectrum disorders in general education classrooms manage transition issues. Teach. Except. Child. 41(4), 16–22 (2009)
49. Thelen, P., Klifman, T.: Using daily transition strategies to support all children. YC Young Child. 66(4), 92–98 (2011)
50. Campbell, P.H., Milbourne, S.A., Kennedy, A.A.: Cara's Kit for Toddlers: Creating Adaptations for Routines and Activities. Paul H. Brookes Publishing Company (2012)
51. Makhaeva, J., Frauenberger, C., Spiel, K.: Creating creative spaces for co-designing with autistic children: the concept of a Handlungsspielraum. In: Proceedings of the 14th Participatory Design Conference: Full Papers (1), pp. 51–60. ACM (2016)
52. Guha, M., Druin, A., Chipman, G., Fails, J., Simms, S., Farber, A.: Mixing ideas: a new technique for working with young children as design partners. In: Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, Maryland, USA, 1–3 June 2004, pp. 35–42. ACM (2004)
53. Guha, M.L., Druin, A., Fails, J.A.: Designing with and for children with special needs: an inclusionary model. In: Proceedings of the International Conference on Interaction Design and Children, pp. 61–64 (2008)

Chapter 23

Combining Cinematic Virtual Reality and Sonic Interaction Design in Exposure Therapy for Children with Autism Lars Andersen, Nicklas Andersen, Ali Adjorlu, and Stefania Serafin

Abstract This chapter presents a preliminary study whose goal is to investigate the benefits of cinematic virtual reality combined with sonic interaction design in exposure therapy for autistic children. A setup was built for two players, one child and one guardian, who together could interact virtually during a children's concert. The results of an evaluation test in a school for children with special needs show the potential of VR for exposure therapy.

Keywords Virtual reality · Sonic interaction design · Autism

23.1 Introduction

Exposure therapy using virtual reality (VR) can systematically confront patients with their feared stimuli rather than through exposure in vivo (i.e., carried out in real-life situations) or imaginal exposure (i.e., carried out through imagination) [1]. Currently, Denmark and the rest of the world are seeing an increasing prevalence of children being diagnosed with autism spectrum disorders (ASD) [2]. ASD is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and by repetitive behavior in the individuals diagnosed with it [3]. It would therefore be advantageous to study whether having the children use VR could have potential for exposure therapy sessions.

L. Andersen (B) · N. Andersen · A. Adjorlu · S. Serafin
Aalborg University Copenhagen, København, Denmark
e-mail: [email protected]
N. Andersen
e-mail: [email protected]
A. Adjorlu
e-mail: [email protected]
S. Serafin
e-mail: [email protected]
© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_23


446

L. Andersen et al.

While there is no established framework for implementing such a combination, there have been previous attempts to explore new solutions to exposure therapy and training scenarios. Kandalaft et al. [4] tested social interaction with young adults (ages 18–26) in different scenarios using the online virtual world "Second Life" [5]. Using an existing platform that simulates reality made it easier to develop more sophisticated scenarios; however, the feeling of realism was diminished by the use of keyboard and mouse. Rothbaum et al. [6] treated Vietnam veterans diagnosed with PTSD, where the participants went through a helicopter ride in a war zone, simulated in a 3D VR environment controlled through a joystick. Lastly, Stupar et al. created 360° videos of three different scenarios used to train against public speaking anxiety [7]. With a modest audience in a lecture hall, the participants could practice at their own pace. As VR equipment continues to evolve, it is now possible to utilize natural sensorimotor contingencies to give subjects a greater sense of presence. By designing a prototype that allows for the social presence illusion, the co-presence illusion, and communicative salience, teachers can be inside the virtual environment at the same time as the children receiving exposure therapy. The present study proposes an exposure therapy solution with immersive VR. The prototype was designed to use 360° video with ambisonic audio, recorded at a children's concert rehearsal, to present a real-life scenario as the stimulus. The virtual environment supports two players (e.g., a child and a teacher) who can see each other through avatars and play instruments together using natural sensorimotor contingencies (SC). This chapter presents an exploratory approach to how VR can be used as an exposure therapy method for children diagnosed with ASD and what to consider when designing such an experience.
The prototype was evaluated through a three-phase model covering a Readiness, Action, and Progression phase in a therapy session. A discussion of the solution was derived from qualitatively evaluating the prototype with children diagnosed with autism spectrum disorders, their guardians, and a psychologist.

23.2 State of the Art

In autism spectrum disorder (ASD), one's life is influenced by difficulties with social interaction, general communication, repetitive behaviours and limited interests [3, 8, 9]. VR offers an interesting perspective for use within therapy and is gaining recognition especially with regard to phobia and anxiety therapy [8, 10, 11]. Exposure therapy is one of the more widely practiced methods for treating social anxieties and phobias: the patient is exposed to a certain stressful situation regularly within a given time frame in order to observe a change in effect [8, 10]. Even if there is no significant difference in efficacy between exposure therapy using VR and classical evidence-based interventions [8, 10], there are still strong arguments for why VR as a technology can be efficient for exposure therapy.

23 Combining Cinematic Virtual Reality and Sonic Interaction …

447

Kim and Rizzo [11] performed a SWOT analysis of VR therapy (VRT) and rehabilitation in 2005, as there were already promising and encouraging results from the use of VR in therapy. The strengths of VRT consist of a controllable environment and risk-free performance. This corresponds to a more recent quantitative meta-analysis of VRT in anxiety disorders by Opriş et al. [10] and another review of VRT and how it applies to different disorders by Adjorlu [8]. A stronger result can be achieved by adding gamification for children with autism, in order to keep them focused and make the experience feel more playful than actual therapy [8, 11]. Gamification [12] is the practice of implementing game elements, such as points, levels and the like. It is, however, important not to overextend and overuse the game elements, as they can take over and shift the focus, which in turn will diminish the desired effect of the scenario. Furthermore, there is the possibility to collect video data, from both an in-game and an outside perspective, which can be used to correlate body behaviour with the willingness to engage in the scenario [11].

There are certainly weaknesses present within the field of VRT. There is also a personal and specific aspect to each disorder and phobia that requires specialized exposure, and since there is no methodology or tooling to configure and tailor each scenario to the individual, the cost and labor involved are quite high [11]. The amount of wires and gadgets involved can also have a negative impact: if they are too much in the way, they can distract and change the behaviour of the participant, and ultimately the overall therapy [11]. There can also be side effects from using VR in general, such as cybersickness. Cybersickness is a general concern in all VR applications and shares similarities with motion sickness, arising when there is conflict in the sensory system [11].
There should therefore be careful consideration and investigation of the different available hardware, and consistency testing throughout the development of the product, to minimize the risk of inducing cybersickness in the participants. While the equipment itself can have a striking effect on some participants, because the technology is new and they are not accustomed to it, the key observation is that the progress and outcome are the same as in traditional exposure therapy [1, 13]. This project utilizes VR-based exposure therapy to help individuals suffering from social anxiety. The intervention was evaluated in an environment familiar to the participants via the Oculus Rift HMD. In the following sections, we describe the design and implementation of the intervention.

23.3 Design

When players put on the head-mounted display (HMD), they are presented with a small local space (see Fig. 23.1). The space includes a table with three instruments upon it, and a rug that appears to be flying. If a player looks right or left, they will see the avatar of the other player, who is standing on an identical space. The two players can pick up the instruments and play them. They are able to see every hand and head movement the other player makes, as well as which instrument they


Fig. 23.1 The three instruments available to play in the VE

are holding. The area beyond the local space is plain grey, except for the space behind the players, which contains a black, starry half-sphere. When the spacebar is pressed, the 360° video and audio begin to play. The recorded material shows a children's concert at DR Koncerthuset, which takes approximately five minutes to finish. The children are also able to go through each song at their own pace. If it becomes too much for the child, he/she can take off the HMD, and the video and audio can be paused externally until they feel ready to continue.
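The start/pause flow just described amounts to a small state machine driven by the facilitator's keyboard. The chapter does not specify the implementation, so the sketch below is only illustrative: the class name, the `"p"` pause key, and the state labels are our assumptions, not the study's actual code.

```python
import threading


class SessionController:
    """Hypothetical sketch of the externally controlled playback described
    above: the facilitator starts the 360° video with the spacebar and can
    pause/resume it at any time while the child takes a break."""

    def __init__(self):
        self.state = "idle"        # idle -> playing <-> paused
        self._lock = threading.Lock()

    def on_key(self, key):
        """Handle a key press from the facilitator's keyboard and
        return the resulting playback state."""
        with self._lock:
            if key == "space" and self.state == "idle":
                self.state = "playing"   # start 360° video + ambisonic audio
            elif key == "p" and self.state == "playing":
                self.state = "paused"    # child has taken off the HMD
            elif key == "p" and self.state == "paused":
                self.state = "playing"   # child is ready to continue
        return self.state
```

Keeping the transition logic in one place makes it easy to pause from outside the headset without the child having to operate anything themselves.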

23.3.1 Space

The challenge with implementing 3D objects in a 360° video environment is that the 3D objects do not seem to be part of the 360° world. Many conditions can be adjusted in order to trick the viewer. Two important techniques when working on a 2D screen are replicating the light angle and intensity, and having the 3D objects cast shadows on transparent geometry imitating the real objects in the video. However, with 3D objects the viewing angle changes slightly all the time as the head's position changes in world space. Since the video sits stationary around the player and does not change with the position of the head as the 3D objects do, a mismatch between the objects and the video can easily arise for the viewer. To overcome this challenge, which could break the viewer's sense of presence in the virtual environment, we deliberately created two different spaces: one is the 360° video surrounding the viewer, and the other is a local space in which the viewer can walk around and interact (see Fig. 23.2). The local space, as seen in Fig. 23.2, is deliberately shown as its own space and serves several design purposes. For example, since children with ASD often pinpoint and focus on everything that does not match together perfectly [14], it fits the implementation better to make the 3D more cartoonish than hyperrealistic. Having a local space that is purely cartoonish 3D, and a world space that is a realistic video, will have the two stand in juxtaposition to each other


Fig. 23.2 The local space where the avatar is placed

and help the children more easily distinguish the two from each other, so they will not have to focus on the mismatch. The local space still tries to replicate the lights and shadows from the video, so that the viewer feels the two spaces belong together. To avoid making the player feel that they should be able to reach the floor seen in the video material, the local space is stylized as a flying carpet. This makes the player feel more grounded and helps diminish VR sickness [15].

23.3.1.1 Instruments

According to previous research, it helps people with ASD to focus if they are allowed to interact and play while in a therapeutic scenario. It was therefore necessary to design interactivity that would be easy to use and give feedback when used. As the virtual experience revolves around a concert, and the children we are working with have difficulty participating in a morning assembly that involves music, we give the children virtual instruments so they can play along with the music from the concert or play around on their own, as seen in Fig. 23.3. The instruments have to be easy to use, so that anyone who picks them up can make some kind of rhythm without any prior knowledge of the specific instrument. It would therefore not be preferable to use the instruments played in the Big Band, such as piano and trumpet. The chosen instruments (as seen in Fig. 23.2) are maracas, a tambourine and a triangle; three different instruments are used for variety. To replicate the feeling of using such instruments in real life, they each behave differently. The maracas react to angular velocity and have a couple of sounds that play at a slightly lower or higher volume each time, for variation; a higher threshold applies for triggering a sound when the maracas move backwards. The tambourine also reacts to angular velocity, much like the maracas, and likewise plays two different sounds at different volumes. However, it has an additional feature, as it also plays if the tambourine

L. Andersen et al.

Fig. 23.3 The test setup at Behandlingsskolerne

collides with the off-hand. When picking up the triangle, a beater is spawned in the off-hand. The beater reacts much the same as the tambourine, playing a sound when it collides with the triangle. To help players understand that they have dropped an instrument, a small particle cloud pops up at the hand that was holding it, as well as at the instrument's dedicated table location.
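The triggering behaviour described above can be sketched as follows. This is a minimal illustration, not the authors' Unity implementation; the class name, threshold values and the backward-motion check are assumptions made for the sketch:

```python
import random


class Maracas:
    """Sketch of the angular-velocity-triggered instrument described in the
    text (illustrative only; thresholds and names are assumptions)."""

    FORWARD_THRESHOLD = 2.0   # angular speed (rad/s) needed to trigger a sound
    BACKWARD_THRESHOLD = 3.5  # higher threshold when moving backwards

    def __init__(self, sounds):
        self.sounds = sounds  # pool of shake-sound clips

    def update(self, angular_speed, moving_backwards=False):
        """Per-frame check: returns (clip, volume) when a sound fires, else None."""
        threshold = self.BACKWARD_THRESHOLD if moving_backwards else self.FORWARD_THRESHOLD
        if angular_speed < threshold:
            return None
        clip = random.choice(self.sounds)         # vary which sample plays
        volume = 1.0 + random.uniform(-0.1, 0.1)  # slightly lower/higher volume
        return clip, volume
```

The tambourine could reuse the same logic with an extra collision check against the off-hand, and the triangle would trigger purely on beater-to-triangle collisions.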

23.3.2 Multiplayer

The implementation supports two players inside the same virtual environment. This allows a child to be inside the virtual environment together with anyone of their choosing, be it a friend, guardian, parent or pedagogue. The players are able to use the Oculus hands to wave to each other and play the instruments together. The second player sees exactly the same video as player one. During an initial meeting, the teachers from the school for children with autism where the present study was tested suggested that the children would not feel enough presence if the two avatars were too far away from each other. The avatars are therefore spaced a few meters apart (as seen in Fig. 23.2, right side). The mesh of the avatar has had its arms and legs removed to further enhance the cartoonish aesthetic and to limit the amount of work needed to make the limbs function in a believable way.

23 Combining Cinematic Virtual Reality and Sonic Interaction …

23.4 Recording Session

The recording of the rehearsals took place on the 13th and 14th of March 2018 at DR Koncerthuset in Studio 2. We were granted access to the studio early in the morning and set up the tripods with a Garmin Virb 360 camera and a Sennheiser Ambeo VR Microphone, extending the cords from the microphone to the Zoom F4 mixer, which allowed us to sit in the far background to start and stop the recordings. Arriving in the morning allowed us to set everything up before the musicians, performers, director and producer showed up. Before they began their rehearsals, the project was quickly introduced to explain why they were being recorded.

23.5 Evaluation

The test was conducted on the 8th of May 2018 at Behandlingsskolen in Vanlose, Copenhagen. 12 children diagnosed with ASD and social anxiety and 4 guardians participated in the test. One child who was scheduled to participate declined after entering the room where the test was conducted and left immediately. An additional test was conducted on the 23rd of May 2018 at Aalborg University Copenhagen in Sydhavnen, Copenhagen. 2 psychologists specialized in children and young adults with anxieties participated in the test; they went into the virtual environment together, without a child as the secondary player. As shown in Fig. 23.3, the two stations were placed right next to each other, in the same arrangement as they have inside the virtual environment. This was to avoid any position and placement confusion for the children, and to give them a reference point. The Oculus Rift on the left was set to a height of 150 cm and was therefore used by the children and young adults, while the one on the right was set to a height of 180 cm and used by the guardian.

23.5.1 Setup

The setup of the testing area consisted of two Oculus Rifts with controllers and sensors, two computers and screens with the necessary cables, and a tablet with the questionnaire. The setup can be seen in Fig. 23.3. As mentioned, the project was set to explore whether interactive 3D objects within a 360 video, combined with multiplayer capability, make way for a new form of exposure therapy. The objective was to triangulate the evaluation of the product from the participants, the participants' parents, and their respective guardian at the school or their therapist. A semi-structured interview was conducted to gather data for triangulation; the interviews shared similar themes and categories, but had questions specific to each participant's role.


23.5.2 Target Group and Sampling

Since the recording captured at DR Koncerthuset was a rehearsal of a children's franchise known as Cirkus Summarum, which has an official target group of 0–7 year-olds, with a Big Band that ensures loud music and performances, we believed it would also suit children a little older. Additionally, children diagnosed with autism tend to be socially less mature than typically developing children [16]. The young adults could therefore be part of our target group as well. The target group thus consists of children and young adults diagnosed with autism and affected by social anxiety. At a treatment facility, guardians are attached to every child or young adult, and they are additionally part of the target group. Their only requirement is to be a guardian or therapist and to participate in the test together with their respective child or young adult. Purposive sampling was a necessity at the facility Behandlingsskolerne, where the product was tested, since they provided the test participants. We presented them with the necessary requirements and they picked out a group that matched. While this did not yield a large number of participants, it ensured that the participants met the specific requirements.

23.5.2.1 Quantitative and Qualitative Data Collection

The quantitative and qualitative methods were created based on the three phases that participants go through in exposure therapy sessions: Readiness, Action/Interaction and Progression/Integration. This model originated from [17] and was developed by Søren Benedikt Pedersen, a clinic director and licensed psychologist at the Cool Kids children and young adults therapy clinic. It is a theoretical model, rooted in many years of clinical and practical work and inspired by other therapists' knowledge and experience.

1. Readiness considers how motivated and ready the participant is and why. This is important, especially for autistic kids, as it can have a snowball effect throughout the entire test. Additionally, it helps to gauge the level of anxiety and establish some sort of baseline.
2. Action/Interaction takes place during the actual testing. The idea is to describe what feelings the participant experiences during the test and why. However, interference can interrupt the flow of the exposure, so the number of interruptions should be considered carefully.
3. Progression/Integration is an evaluation of the scenario after the test. This investigates the progression of the therapy as a whole, and the integration of the scenario into the test participant's world view.


23.5.3 Evaluating the Children

The quantitative evaluation was conducted with the autistic children and young adults through a questionnaire with scales ranging from 1 to 10, shown with smileys instead of numbers. The scales were positively reinforced, ranging from less good to very good, as children and young adults diagnosed with autism have a tendency to be overly affected by negative statements. Questions in the questionnaire were also given a positive twist for the same reason. It would, for example, not be acceptable to ask: "How unsafe do you feel?". Instead it should be: "How safe do you feel?". The questionnaire revolves mostly around the Readiness and Progression/Integration phases, so as not to interrupt the scenario. However, two questions did ask about presence and co-presence. It should be noted that a guardian was playing together with a child or young adult at the same time, and the Action/Interaction phase was explored more thoroughly in the semi-structured interview conducted with the guardian. The Readiness phase is divided into two parts: how ready the participant is to attend a musical event, such as a concert, and play together with the orchestra; and how motivated they are to use IVR. When the participants finally put on the HMD, they are asked an additional two questions regarding how comfortable they are with the HMD on, and how ready they are for the scenario. Two questions are asked during the Action/Interaction phase: how much they feel the presence of the orchestra, presented through the video, and how much they feel the presence of the other person, presented through a 3D avatar. These questions were asked to explore whether the participants actually perceive the orchestra and the other person as part of the VE they are inhabiting.
In the last phase, Progression/Integration, the questionnaire explores the overall impression of the scenario and whether they at any point felt sick or nervous during the experience. This provided us some insight into the quality of the scenario and the integration of 3D objects into a video. The last two questions investigate whether it made them less sensitive to the pressure of social anxiety in musical settings and whether the scenario felt more manageable with another person present. The questionnaire had to be limited to as few questions as possible: given that the attention span of children is very short, their motivation for answering questions drops drastically if there are many. They might further try to please the interviewer by scoring the questions to what they think is the interviewer's satisfaction. Furthermore, answering many questions would put a lot of pressure on the children and young adults, as they would feel they are in the "hot seat".


Questionnaire (English version):

Before:
1. How do you experience other musical contexts?
2. How much would you like to play with an orchestra in real life?
3. How comfortable would you feel with playing instruments with an orchestra?
4. How motivated are you for trying VR?

During:
5. How safe do you feel by having VR-glasses on?
6. How comfortable are you with having the glasses on?
7. How ready are you to start the game?
8. How much do you feel being in the presence of the musicians?

After:
9. How was the experience with playing instruments with the orchestra?
10. Did you feel nervous while playing?
11. Did you feel uncomfortable while playing?
12. Do you think it would be easier to be in front of an orchestra now that you have tried it?
13. How did you feel experiencing the concert with another person?
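The phase grouping of the thirteen questions can be captured in a small mapping, convenient when scoring responses per phase. This is a sketch; the boundaries simply follow the Before/During/After markers in the questionnaire:

```python
# Phase boundaries per the questionnaire: Q1-4 before, Q5-8 during, Q9-13 after.
PHASES = {
    "Before": range(1, 5),
    "During": range(5, 9),
    "After": range(9, 14),
}


def phase_of(question_number):
    """Return the exposure-therapy phase a question number belongs to."""
    for phase, numbers in PHASES.items():
        if question_number in numbers:
            return phase
    raise ValueError(f"unknown question number: {question_number}")
```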

23.5.4 Evaluating the Guardians

The qualitative method consisted of the Microsoft Desirability Toolkit test [18], followed by a semi-structured interview. This was only conducted with the guardians who participated with their assigned kids or young adults. In the Microsoft Desirability Toolkit test, the participant chooses any number of words out of 118, then reduces the selection to only five. The idea is that the words should reflect what they have experienced, thereby providing the researchers with a starting point for an open discussion based on the chosen words. In this project, the word list was reduced from 118 to 36, split equally between positively and negatively charged words, but the method otherwise stays the same. This is to ensure that the participant can look over every word and consider each equally; 118 words can be an extensive amount to take in. The semi-structured interview was divided into two sections: the experience, and the potential. The experience section explores how the kids' and young adults' experience was, as well as the guardians' own, making it possible to correlate the findings with the quantitative questionnaire. It then covers what it was like to be with the children inside the scenario, which can provide insight into whether the product gives additional control and information to the guardian. Lastly, it covers whether the product itself can be accepted by the children and why, as they are the ones the product is built for. The potential section investigated the potential of the product in terms of therapy, whether the product meets the guardians' and kids' needs, possibilities for comfort, motivations, what a guardian needs in a product like this, and what is needed in the future. These are all important questions, as the project is set to explore whether the combination of video and 3D interactivity can prove its worth within the context of exposure therapy.


Table 23.1 The number of participants who picked each word from the 33 possibilities

Picked by 5 participants: Fun
Picked by 4: Innovative
Picked by 3: Stimulating
Picked by 2: Difficult, Creative
Picked by 1: Useful, approachable, simplistic, complex, collaborative, stressful, overwhelming

23.5.5 Microsoft Desirability Toolkit

At the beginning of the interview, the participants were asked to pick out the words that described their experience with the prototype with regard to its use case. Table 23.1 shows how many times each word was picked across all participants. The words listed under 1 are unique and were only chosen once, while the word fun was picked five times. All other available words were not chosen by any of the participants. Participants who had chosen only positive words were asked for a negative one; this produced the words simplistic and annoying (bug related). Simplistic is in this context described by the participant as: few interaction possibilities, and the look of the 3D environment compared to the video. With the word annoying, the participant meant that the prototype had bug issues with the interactive objects.
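The tallying behind Table 23.1 is straightforward to reproduce. A sketch follows; `tally_word_picks` is a hypothetical helper, not part of the study's tooling:

```python
from collections import Counter


def tally_word_picks(picks_per_participant):
    """Count how many participants chose each word; each participant
    contributes a word at most once (the Toolkit caps picks at five)."""
    counts = Counter()
    for words in picks_per_participant:
        counts.update(set(words))  # set() guards against duplicate picks
    return counts
```

With the study's data this yields counts such as fun: 5 and innovative: 4, as in Table 23.1.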

23.5.5.1 Node-Based Analysis

NVivo 12 was used to construct nodes (categories) based on quotes from the interview transcriptions. The nodes were formed from the words of the Microsoft Desirability Toolkit, as well as from keywords that stood out in the participants' statements. The nodes were established through several rounds of analysing the transcriptions to find the right keyword, context, connection and definition. The nodes are neither positively nor negatively charged, as they can contain all statements regarding a theme. Figure 23.4 shows a chart of the nodes, which consist of the themes derived from the interviews. The chart shows the number of times each theme is referred to in the transcriptions of all the interviews. Some of the nodes have sub-nodes; these are what the themes are most closely related to and can serve as the category.


Fig. 23.4 Nodes created with the Microsoft Desirability Toolkit

23.6 Ethical Issues

The most considerable complication in the project was that we were restricted from conducting the tests on the autistic children and young adults ourselves, as disclosed to us by the testing location's administration. We were likewise not allowed to observe or record the test sessions. This meant that we had to instruct the guardians and staff on how to conduct the test with the children and young adults without our supervision. This also led to the decision to use only a quantitative questionnaire, as both the guardians and the children were used to that format from previous experience.

23.7 Conclusion

The aim of this study was to explore how a virtual reality experience could be beneficial as an exposure therapy method for children diagnosed with ASD and social anxiety. The prototype used interactive instruments in combination with the social possibility of having both a child and their guardian inside the same VR, to enhance a virtual concert scenario. A qualitative approach was conducted and evaluated. The study has provided insight into children's motivation with novel technology, though


with the necessity of a readiness phase to give the children enough space to become comfortable with a new experience. The interactive objects drew too much of the children's focus, as they were not optimized well enough. However, observations showed that the interactive objects provided the participants with a fun and playful experience, which fosters motivation and readiness in the children.

References

1. Morina, N., Ijntema, H., Meyerbröker, K., Emmelkamp, P.M.: Can virtual reality exposure therapy gains be generalized to real-life? A meta-analysis of studies applying behavioral assessments. Behav. Res. Ther. 74, 18–24 (2015)
2. Hansen, S.N., Schendel, D.E., Parner, E.T.: Explaining the increase in the prevalence of autism spectrum disorders: the proportion attributable to changes in reporting practices. JAMA Pediatr. 169(1), 56–62 (2015)
3. American Psychiatric Association et al.: Diagnostic and statistical manual of mental disorders
4. Kandalaft, M.R., Didehbani, N., Krawczyk, D.C., Allen, T.T., Chapman, S.B.: Virtual reality social cognition training for young adults with high-functioning autism. J. Autism Dev. Disord. 43(1), 34–44 (2013)
5. Second Life: Official site. https://secondlife.com/ (2018). Last retrieved 30 May 2018
6. Rothbaum, B.O., Hodges, L.F., Ready, D., Graap, K., Alarcon, R.D.: Virtual reality exposure therapy for Vietnam veterans with posttraumatic stress disorder. J. Clin. Psychiatry (2001)
7. Stupar-Rutenfrans, S., Ketelaars, L.E., van Gisbergen, M.S.: Beat the fear of public speaking: mobile 360 video virtual reality exposure training in home environment reduces public speaking anxiety. Cyberpsychol. Behav. Soc. Netw. 20(10), 624–633 (2017)
8. Adjorlu, A.: Virtual reality therapy. In: Encyclopedia of Computer Graphics and Games (ECGG). Springer (2018)
9. Parsons, S.: Authenticity in virtual reality for assessment and intervention in autism: a conceptual review. Educ. Res. Rev. 19, 138–157 (2016)
10. Opriş, D., Pintea, S., García-Palacios, A., Botella, C., Szamosközi, Ş., David, D.: Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis. Depression Anxiety 29(2), 85–93 (2012)
11. Rizzo, A.S., Kim, G.J.: A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence Teleoperators Virtual Environ. 14(2), 119–146 (2005)
12. Deterding, S., Dixon, D., Khaled, R., Nacke, L.: From game design elements to gamefulness: defining gamification. In: Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, pp. 9–15. ACM (2011)
13. Anderson, P.L., Edwards, S.M., Goodnight, J.R.: Virtual reality and exposure group therapy for social anxiety disorder: results from a 4–6 year follow-up. Cogn. Ther. Res. 41(2), 230–236 (2017)
14. Greenaway, R., Howlin, P.: Dysfunctional attitudes and perfectionism and their relationship to anxious and depressive symptoms in boys with autism spectrum disorders. J. Autism Dev. Disord. 40(10), 1179–1187 (2010)
15. Porcino, T.M., Clua, E., Trevisan, D., Vasconcelos, C.N., Valente, L.: Minimizing cyber sickness in head mounted display systems: design guidelines and applications. In: 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–6. IEEE (2017)
16. Peterson, C.C., Slaughter, V.P., Paynter, J.: Social maturity and theory of mind in typically developing children and those on the autism spectrum. J. Child Psychol. Psychiatry 48(12), 1243–1250 (2007)


17. Duarte, A.: Kropsorienteret tilgang til: Børn ramt af stress og traumer - sådan tuner du ind på børn! [Body-oriented approach to children affected by stress and trauma: how to tune in to children!] Workshop at Park Hotel, Middelfart (2018). Last retrieved 30 May 2018
18. Benedek, J., Miner, T.: Measuring desirability: new methods for evaluating desirability in a usability lab setting. Proc. Usability Prof. Assoc. 2003(8–12), 57 (2002)

Part IV

Design and Development

Chapter 24

Design and Development

Anthony Lewis Brooks

Abstract This chapter introduces the fourth and closing part of the book. The number of chapters likewise amounts to four, and they include texts on people with special needs such as those diagnosed with autism and cognitive disabilities, as well as people with visual impairment. Board game accessibility, immersive gaming, novel game interaction design and serious games for cognition are addressed in the chapters.

Keywords Design · Development · Disability · Inclusion · Autism · Cognition · Games accessibility · Cloud-based avatar

24.1 Introduction

The book contents are segmented into four parts, with chapters assigned to each. Specifically, Part 1: Gaming, VR, and Immersive Technologies for Education/Training; Part 2: VR/Technologies for Rehabilitation; Part 3: Health and Well-Being; and Part 4: Design and Development. This fourth and final part is themed 'Design and Development' and includes chapters on (1) Participatory technology design for autism and cognitive disabilities: a narrative overview of issues and techniques, (2) Exploring current board games' accessibility efforts for persons with visual impairment, (3) An extensible cloud-based avatar: implementation and evaluation, and (4) Frontiers of immersive gaming technology: a survey of novel game interaction design and serious games for cognition. This chapter represents a focused, and sometimes extended, 'minuscule review of the field' by introducing the chapters in this closing part of the volume on 'Design and Development'. Each chapter's author(s) are acknowledged through the use of their source texts to create these introductory overview snippets. The following sections introduce the chapters and authors.

A. L. Brooks (B) Aalborg University, Aalborg, Denmark e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_24


24.1.1 Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques [1]

The authoring team of the chapter titled 'Participatory technology design for autism and cognitive disabilities: a narrative overview of issues and techniques' is truly international: Nigel Robb, affiliated with The University of Tokyo, Tokyo, Japan; Bryan Boyle, affiliated with University College Cork, Ireland; Yurgos Politis, affiliated with Michigan State University and University College Dublin, Dublin, Ireland; Nigel Newbutt, affiliated with the University of the West of England, Bristol, UK; Hung-Jen Kuo, affiliated with California State University, Los Angeles, USA; and Connie Sung, affiliated with Michigan State University, Michigan, USA. This chapter presents Participatory Design (PD) as referring to the involvement of users in the design and development process. The authors note how this differs from instances where those identified as eventual users of a technology are involved in the development process only as testers or informants. PD aims to transform this user involvement, allowing users to contribute ideas and influence the design of the resulting software, product, or service. PD is therefore not simply motivated by, or concerned with, the extent to which user involvement may improve the quality of technology; it is also concerned with the ethical and political dimensions of increased user participation. The chapter highlights how this combined focus on the quality of outcome and on empowerment through participation in the design process has attracted the interest of developers of technology for people with autism and other neurodevelopmental disabilities (NDDs).
In addition to the obvious benefit of improving software quality (e.g., through awareness of the unique requirements of such populations), researchers now also recognize that PD may help to improve inclusion and well-being for people with NDDs. Indeed, it is recognized that PD can help ensure the rights of these individuals, both in terms of full societal participation and access to the technology that supports such participation. The authors state that research on PD with people with NDDs has been carried out increasingly in recent years. However, they convey that much of the collective knowledge is based on a disparate body of research, with different researchers focused on different aspects of the field. As a result, there is little in the way of concrete, practical advice for designers who wish to use PD with people with NDDs. In this chapter, the authors provide a narrative review focused on the practical aspects of previous PD projects, with the goal of providing the novice PD researcher and designer with an overview of what techniques they might use and what challenges they might face when embarking on a PD project. The authors explain why a sense of ownership is important when autistic users participate in designing artefacts in ways that allow shared decision making and the contribution of ideas reflecting the lived experience of the co-designers (see [2]). This includes psychological ownership [3, 4], in which individuals gain an emotional connection, e.g., by feeling responsibility; this is an important way of increasing involvement, which may lead to


better self-expression, which is undoubtedly important in PD. Among the techniques discussed in this chapter, the authors note additional recommendations that may facilitate a sense of ownership: for example, preserving co-designers' original drawings and writing, keeping co-designers involved through reports and presentations [2], and preserving co-designers' creative input throughout the project, for example by allowing them to modify low-fidelity prototypes [5]. In this chapter the authors provide a narrative overview of issues to consider, and techniques to use, during participatory design processes with people with autism and cognitive disabilities. In doing so, they aim to supplement the existing literature in this area with a focus on practical techniques that the novice designer can utilize. An awareness of issues to consider when embarking on a participatory design process is also included. The authors point out that one possible issue with published research on participatory design is a tension between the need to communicate and describe practical techniques that can be learned and applied by designers across a range of areas, and the traditional aims of a scientific research paper, which will inevitably be focused on the generation of original research findings. While research on participatory design is of course necessary and extremely important, it is also valuable to provide details of practical techniques to facilitate participatory design in a way that encourages more widespread uptake of these approaches. Raising awareness of and familiarity with participatory design, and thus increasing its application, will undoubtedly benefit people with disabilities in terms of their inclusion in decision-making and, by extension, society.

24.1.2 Exploring Current Board Games' Accessibility Efforts for Persons with Visual Impairment [6]

The authors Frederico Da Rocha Tomé Filho, Bill Kapralos and Pejman Mirza-Babaei are all affiliated with Ontario Tech University in Oshawa, Ontario, Canada. This chapter starts by reflecting on the resurgence of traditional board games, noting that persons who are visually impaired remain limited in their access to board games because gameplay information tends to be visual. The authors then discuss issues related to board game accessibility and question the efforts that have been conducted in the field. They explore related fields, such as accessible video games and immersive technologies, presenting a variety of approaches employed to enable visual accessibility. They also discuss the strengths, weaknesses, and reach of the different approaches, and the current research gap in the field of board game accessibility. To some extent this chapter presents a literature review focused upon (a) Accessible Digital Games, (b) Accessible Board Games, (c) Games Accessibility Guidelines, and (d) Immersive Technologies (VR and AR) and related work. However, it could be noted that this review covers only 14 works published between 2004 and 2017, which are reflected in the text.


In concluding, the authors correctly point out that the topic of board game accessibility is rarely discussed and explored in academia. The inherent accessibility barriers of the activity prevent many persons with visual impairment worldwide from being involved in an activity that could otherwise yield great benefits to them. The insightful (pun intended) discussion in this chapter points out that this can be an interesting field to explore in future work, possibly with an extensive literature review to extend what has been started herein.

24.1.3 An Extensible Cloud-Based Avatar: Implementation and Evaluation [7]

Authors Enas Altarawneh, Michael Jenkin and I. Scott MacKenzie are all affiliated with the Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, in Toronto, Ontario, Canada. Building upon their recent works, "Is putting a face on an interactive robot worthwhile?" and "Leveraging cloud-based tools to talk with robots" (as cited in the chapter), the authors explain that a common issue in human–robot interaction is that a naive user expects an intelligent, human-like conversational experience. They further note that recent advances have enabled such experiences through cloud-based infrastructure; however, this is not currently possible on most mobile robots due to the need to access cloud-based (remote) AI technology. The chapter describes a toolkit that supports interactive avatars using cloud-based resources for human–robot interaction. The authors elaborate on how the toolkit deals with communication and rendering latency through parallelization and mechanisms that obscure delays, and how it can be used to put an interactive face on a mobile robot. The chapter shares a user study comparing human–robot interaction using text, audio, a realistic avatar, and a simplistic cartoon avatar, questioning their effectiveness and usefulness. The chapter notes that more work is required to advance this research and that, given the increased pressure on human–human interactions, the preferences associated with robots in human–robot interaction are an important aspect to investigate from a societal perspective.

24.1.4 Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition [8]

The authors, namely Samantha N. Stahlke, Josh D. Bellyk, Owen R. Meier, Pejman Mirza-Babaei and Bill Kapralos, are all from Ontario Tech University, in Oshawa, Ontario, Canada. This chapter explores beyond traditional game inputs


such as keyboard, joystick, or gamepad/handset controller toward more contemporary apparatus such as motion controllers, eye tracking, and brain-computer interfaces. The research goals of the work reported in this chapter were stated by the authors as being toward their own design and development of games, exploring the use of brain-mediated controls and eye tracking, and questioning how to improve digital game accessibility and immersion. A list of consumer devices (from 2017) is given in the chapter, reflecting how head mounted displays (HMDs) are becoming available and affordable for the public. A focus of the authors was to review the history of the field of (serious) game technologies using brain-computer interfaces (BCI), electroencephalography (EEG) and eye tracking; a table informs of the authors' choices of search terminology and how they prioritized results aligned to their available testing apparatus. Following the review of the history and application of BCI technology, the authors examine previous work using EEG and eye tracking for games, and they also include a section on games suitable for such interfaces. There are innate challenges in using 'natural interfaces' for gameplay, especially unwanted noise due to the low signal-to-noise ratio (SNR) of a BCI; the act of using a BCI can itself introduce additional noise through associated cognitive processes, such as multitasking, attention, and conflict monitoring, and can thus also be considered a potential distractor. A challenge in such research is how fast the technology and systems change, so the usefulness of third-party reviews from decades ago in assisting avant-garde, state-of-the-art design can be questioned.
This is posited because, for several years, eye trackers and pupil-dilation cameras inside head-mounted displays (HMDs) have been available to inform game designers of a player's emotional responses to what is seen, so that an iterative design process can be optimized. That said, it is important to gain comprehension and deep insight into what has been achieved previously and the innate challenges (what works and what doesn't) that developers needed to overcome, as this gives a foundation for modern-day designing that can reflect advances (e.g., sensors, miniaturization, processing speeds, etc.). The chapter is not clear on the demographics and profiles of the eventual specific game players being designed for, i.e., the inclusive well-being aspect of the volume (although "games for rehabilitation" is mentioned, the review cites a study involving individuals with ADHD, i.e., attention deficit hyperactivity disorder (a study by Alchalcabi, Eddin and Shirmohammadi 2017; "individuals incapable of using traditional input devices"), and "users with potential motor impairments" are also mentioned) [9]. In the chapter the authors share outcome guidelines toward their future work. With corporations such as Microsoft producing apparatus such as their 'Xbox Adaptive Controller', with support from companies such as Logitech, who sell an 'Adaptive Gaming Kit' (a collection of 12 buttons and triggers designed to work with Microsoft's Xbox Adaptive Controller to make gaming more accessible to people with disabilities), and with communities around the topic of accessible games being highly active, the future looks bright for Technologies for Inclusive Well-Being.


A. L. Brooks

24.2 Conclusions

In concluding this fourth and final part of the book, which has presented a brief introductory review of each chapter and its author(s), positioning texts are resourced directly and with paraphrasing so as not to lose meaning. The four chapters herein include texts associated with exploring technologies for people with special needs, such as those diagnosed with autism and cognitive disabilities, as well as people with visual impairment. Additionally, aligned to design and development, board games and their accessibility, immersive gaming and design, and novel game interaction and serious games for cognition are (micro)reviewed. It is anticipated that scholars and students will be inspired and motivated by these contributions to the field of Technologies for Inclusive Well-Being to inquire further into these topics.

Acknowledgements Acknowledgements are due to the authors of the chapters in this part of the book. Their contributions are cited in each review snippet and also in the reference list, to support reader cross-referencing and to identify how the chapters of this part are formulated with total respect for the original texts. The texts are cited in this way so as not to divert from their meaning and to promote readership. However, to be clear, the references are without page numbers as these are not known at this time of writing. Further information will be available at the Springer site for the book/chapter.

References

1. Robb, N., Boyle, B., Politis, Y., Newbutt, N., Kuo, H.J., Sung, C.: Participatory technology design for autism and cognitive disabilities: a narrative overview of issues and techniques. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
2. Rijn, H.v.: Meaningful encounters: explorative studies about designers learning from children with autism. Doctoral thesis (2012). https://doi.org/10.4233/uuid:978fbd25-eb26-4306-bebc-5e2770538c5a
3. Beggan, J.K.: On the social nature of nonsocial perception: the mere ownership effect. J. Pers. Soc. Psychol. 62(2), 229–237 (1992)
4. Wang, Q., Battocchi, A., Graziola, I., Pianesi, F., Tomasini, D., Zancanaro, M., Nass, C.: The role of psychological ownership and ownership markers in collaborative working environment. In: Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 225–232. ACM (2006)
5. Karpova, A., Culén, A.: Challenges in designing an app for a special education class. In: Proceedings of the IADIS International Conference on Interfaces and Human-Computer Interaction 2013, pp. 95–102 (2013)
6. Filho, F., Kapralos, B., Mirza-Babaei, P.: Exploring current board games' accessibility efforts for persons with visual impairment. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
7. Altarawneh, E., Jenkin, M., MacKenzie, I.: An extensible cloud-based avatar: implementation and evaluation. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)


8. Stahlke, S.N., Bellyk, J.D., Meier, O.R., Mirza-Babaei, P., Kapralos, B.: Frontiers of immersive gaming technology: a survey of novel game interaction design and serious games for cognition. In: Brooks, A.L., Brahnam, S., Kapralos, B., Nakajima, A., Tyerman, J., Jain, L.C. (eds.) Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Springer Intelligent Systems Reference Library, vol. 196 (2021)
9. Alchalcabi, A.E., Eddin, A.N., Shirmohammadi, S.: More attention, less deficit: wearable EEG-based serious game for focus improvement. In: 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH). IEEE (2017). https://ieeexplore.ieee.org/abstract/document/7939288
10. Altarawneh, E., Jenkin, M., MacKenzie, I.S.: Is putting a face on an interactive robot worthwhile? In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (n.d.)
11. Altarawneh, E., Jenkin, M.: Leveraging cloud-based tools to talk with robots. In: Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO) (n.d.)

Chapter 25

Participatory Technology Design for Autism and Cognitive Disabilities: A Narrative Overview of Issues and Techniques

Nigel Robb, Bryan Boyle, Yurgos Politis, Nigel Newbutt, Hung Jen Kuo, and Connie Sung

N. Robb
University of Tokyo, Tokyo, Japan
e-mail: [email protected]

B. Boyle
University College Cork, Cork, Ireland
e-mail: [email protected]

Y. Politis
Technological University Dublin, Dublin, Ireland
e-mail: [email protected]

N. Newbutt (B)
University of Florida, Gainesville, Florida, USA
e-mail: [email protected]

H. J. Kuo · C. Sung
Michigan State University, East Lansing, Michigan, USA
e-mail: [email protected]

C. Sung
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_25

Abstract Participatory design (PD) refers to the involvement of users in the design and development process. Those who may be identified as eventual users of such technology are often involved in the development process as testers or as informants. However, PD aims to transform this user involvement, allowing users to contribute ideas and influence the design of the resulting software, product, or service. PD therefore is not simply motivated by, or concerned with, the extent to which user involvement may improve the quality of technology. Rather, it is also concerned with the ethical and political dimensions of increased user participation. This combined focus on the quality of outcome and empowerment through participation in the process of design has attracted the interest of developers of technology for people with autism and other neurodevelopmental disabilities (NDDs). In addition to the obvious benefit of improving software quality (e.g., by being aware of the unique requirements of such populations), researchers now also recognise that PD may help to improve


inclusion and well-being for people with NDDs. Indeed, it is recognised that PD can help ensure the rights of these individuals, both in terms of their right to full societal participation and their access to the technology that supports such participation. Research on PD with people with NDDs has been carried out increasingly in recent years. However, much of our collective knowledge is based on a disparate body of research, with different researchers focused on different aspects of the field. As a result, there is little in the way of concrete, practical advice for designers who wish to use PD with people with NDDs. In this chapter, therefore, we provide a narrative review focused on the practical aspects of previous PD projects. Our aim is to provide the novice PD researcher and designer with an overview of what techniques they might use and what challenges they might face when embarking on a PD project.

Keywords Participatory design · Autism · Intellectual disabilities · Neurodevelopmental disabilities · Co-creation · Co-design · Prototyping

25.1 Introduction

25.1.1 Participatory Design

Participatory design (PD) refers to a broad set of practices by which the individuals and groups who traditionally use a piece of technology (i.e., the "users") are involved in the design and development of that technology. PD, also referred to as cooperative design or co-design, is characterised by its efforts to ensure the inclusion of numerous stakeholders ranging in expertise, experience and ability. PD practice aims to contextualize the design process by gathering and interpreting the real, lived experience of these stakeholders and translating it into a set of characteristics that will ensure the future success of a new technology [1, 2]. Participatory design processes are characterised by the democratization of creative input and the sharing of decision making, thus conferring agency and power on the participant.

PD differs from traditional technology design in several ways. Firstly, the end users are considered best placed to suggest how to improve the work processes and are considered experts in the design process. Secondly, the users' perceptions of and feelings regarding the technology are considered as important as the functionality of the technology. Thirdly, the technology is viewed as a process in the context of the environment in which it will be used rather than as a product in isolation [2]. Agency and power are conferred from the professional designer to the user; the user is, in turn, elevated to the role of co-designer through the application of PD methods, techniques and tools.

Participatory approaches to design are characterized by three aspects that define and shape the outcomes and the process: (1) transfer of tacit knowledge; (2) active co-creation and prototyping; and (3) shared decision making. PD offers designers a set of broad methodologies for accessing participants' tacit knowledge, their experiences, and specific requirements, and seeks to bring this to bear on the design process.
Tacit knowledge refers to the implicit, holistic


knowledge that is often difficult for participants to articulate but is of immense value to the design process [3]. PD seeks to ensure that the prospective technology end-user is not just the focus of the design process but also an active contributor throughout.

Co-creation refers to any act of collective creativity that is shared by two or more people and contributes to the design of an artefact. The practice of collective creativity in design has been a constituent of PD since it emerged as an approach to design practice in the 1970s. Providing participants with the appropriate tools with which to express themselves and articulate their unique creativity will facilitate the generation of new ideas and thinking [4].

A defining feature of PD is the sharing of power, which is achieved through an emphasis on the mutual engagement of designers and end-users and a respect for shared decision making. While the focus of user-centred design typically lies on a single model-user as representative, PD seeks to capture the experiences of a broad range of stakeholders through all stages of design [5]. With PD, designers and researchers seek to empower participants through a willingness to share decision making with others. As such, empowerment can be seen to reflect the extent to which design decision-making is devolved or transferred from designer to participant [6]. Furthermore, through shared decision making, PD aims to democratize the process of design. As such, the devolution of decision making must be meaningful, providing participants with true power to influence the direction the design process takes and the final realization of the design outcome [7, 8].

Following in the tradition of user-centred design, PD constitutes a collection of methods and approaches rather than a single methodology. The difference between the two philosophies may be summed up as follows: user-centred design is design for users; PD is design with users [5].
Methods may include design workshops, brainstorming, role-playing scenarios, prototype development, storyboards, and ethnographic techniques such as focus groups, interviews and observation. Sanders and her colleagues suggest organizing such techniques into the following categories: “talking, telling and explaining”, “acting, enacting and playing”, and “making tangible things”, thus summarizing the main purposes of the PD process [9]. Designers rely on a range of techniques and tools that support their engagement with potential end-users of software or products that they are designing. These tools and techniques are largely based on supporting social communication and are used to assist the designer in gathering information about the person, the context they inhabit and their hopes for the technology. Many design techniques and tools help the designer to get close to the end-user with a view to better understanding their needs and preferences. Ultimately the goal of design is to develop a product that accurately reflects these elements.


25.1.2 Participatory Design and Neurodevelopmental Disabilities

The term "neurodevelopmental disabilities" (NDD) refers to a variety of clinically recognized disorders of brain development, which lead to a range of challenges for those affected [10]. This includes Autism Spectrum Disorder and various genetically defined syndromes associated with intellectual disability, such as Down syndrome and Prader-Willi syndrome. Each NDD has a unique profile, although some specific challenges are common across many NDDs; for example, many people with genetically defined syndromes associated with intellectual disability experience deficits in executive functions, the high-level cognitive control processes which we use to organize and plan, among other things [11]. In the European Union alone, it is estimated that there are between 5 and 15 million people with an intellectual disability [12], with a similar estimated number of people with Autism [13]. Ensuring that this large group of people is both included fully in society and has access to technology that is suitable for them is acknowledged in several of the obligations of the United Nations Convention on the Rights of Persons with Disabilities:

To undertake or promote research and development of universally designed goods, services, equipment and facilities … which should require the minimum possible adaptation and the least cost to meet the specific needs of a person with disabilities…

…To undertake or promote research and development of, and to promote the availability and use of new technologies, including information and communications technologies, mobility aids, devices and assistive technologies, suitable for persons with disabilities, giving priority to technologies at an affordable cost [14].

As such, PD represents a potentially important way of ensuring the rights of people with NDDs, through the inclusion of their voices and opinions in the design of the technology that they will use. Early engagement in the design exploration phase gives people with a disability a chance to assert their own preferences and direct which aspects of their lives the envisioned technology will impact. Adopting practices and techniques that support people's creative expression (within the framework of technology design) gives those with disabilities a direct opportunity to concretize their choices and bring them to life. Finally, including people with a disability in the evaluation of prototypes and the final design ensures that opportunities for active choice making are present throughout the entire design cycle.

Designing technology for people with NDDs can be challenging. Understanding their needs, preferences, and how their abilities or disabilities will affect their use of technology requires a specialist, often multi-disciplinary, skillset [15]. Typically, design professionals have only limited experience with people with a disability, particularly those who have substantial difficulties with thought processes and communication. Moreover, existing tools and techniques for collaborative research are of limited value because of their reliance on verbal communication [16]. Although there are some guidance models for designers, the number of participatory design studies involving those with complex disabilities remains small. In many instances, the opportunities people with such challenges have in influencing


the design of the technology they use are often predicated on their intellectual, language and interpersonal skills. In contrast, the merits of engaging those with better language and expressive communication skills have been well demonstrated [17]. Currently, descriptions of techniques used in PD processes are found in a diverse range of literature, across multiple disciplines. While previous excellent reviews [18, 19] have summarized and analysed aspects of this literature, they are unable to provide much detail and guidance to those wishing to embark on a PD process with people with NDDs. Therefore, this chapter provides a narrative overview of some relevant literature. Rather than aiming to be comprehensive and systematic, we instead aim to provide concrete descriptions of techniques that have been used in PD processes with people with NDDs. Ultimately, our aim is to provide practical guidance to facilitate an increase in PD with people with NDDs, particularly emphasizing techniques that may be suitable for individuals with more complex disabilities.

25.2 Transfer of Tacit Knowledge: Communicating the Lived Experience

The early phases of the design of a new product or software, often referred to as the "fuzzy front-end" or "front end of innovation", are characterized by the activities that take place between the time an opportunity is identified and the commencement of formal design activities. These activities focus on: (1) identifying and analyzing opportunities, (2) generating and selecting ideas, and (3) developing concepts [20, 21]. Understanding the lived experience and context of the person can ensure that the solutions developed accurately reflect their needs, preferences and desires. Lived experience may be used to describe the first-hand accounts and impressions of a member of a group or community. It is a representation of the person's experiences, choices and other factors that contribute to their self-perception and actions [22, 23].

In typical design projects, gathering information about the user and the context for their use of technology takes place in the initial or early phases of the development cycle. As such, opportunities for the contributions of people with a disability to impact a design project typically occur during the first phases of the process. There is a broad consensus that designers should draw upon the experiences of the people they design for. This can ensure that the outcome of the design process will match the desires, abilities, needs and preferences of the eventual end-user [24].

Efforts to understand the lived experience of people with a disability present the designer with a unique set of challenges. Designers with limited experience working with people with NDDs may face challenges in gathering such data due to the difficulties with thought processes and communication that such individuals may experience.
Similarly, traditional methods, tools and techniques for such user research are often of limited use because these tools and techniques rely heavily on verbal communication and higher order cognitive skills to engage with identified end-users. Typical


design processes involve identifying the needs, demands and opinions of users, often extracted from interviews and discussion. Thus, the underlying assumption is that the representative user is both willing and able to communicate freely and to transfer knowledge and opinion [25]. As such, the inclusion of developmentally diverse populations such as those with NDDs in the design process is often restricted to those who possess the verbal, cognitive and reasoning skills required to engage pro-actively with designers. For those with more significant disabilities of language or intellectual functioning, the role is more passive, with designers relying on the involvement of others as facilitators, proxies and/or experts [19].

A further challenge for design teams is that of decoding and interpreting the knowledge and expertise of participants. In their efforts to develop a virtual learning environment for social skills training, the team working on the ECHOES project found it difficult to transfer the contributions of participating children into design requirements. Instead, they were required to collectively analyse the contributions of participants to identify their phenomenological intentions [26]. Drawing on the phenomenological philosophy of thinkers such as Heidegger, Merleau-Ponty, and Dewey, the authors aim to understand the experience of using technology as an act that is suffused with the tacit knowledge of the user. In other words, we cannot conceive of the user experience by focusing on the object of the experience alone; we must understand such experience as an interaction between the object and the user, in the context of the user's life and culture. Frauenberger et al. [26] used several design techniques to achieve this.
For example, children with Autism Spectrum Disorder were involved in three design sessions, during which they conceived of, created, and then explained objects which they thought could populate the virtual setting of the learning environment (a magical garden). The authors argue that, through the creation of tangible objects (using craft materials), the design activity is grounded in real experience. In addition, in the third session, the children were asked to explain their objects to a character from the virtual learning environment. In this way, the children's ideas were further contextualised in a setting similar to the end-product. Through these techniques, Frauenberger et al. [26] were able to gain insight into the actual experiences of children with ASD in the context of the virtual learning environment being designed.

25.3 Active Co-creation

The intermediate phases of design are characterized by the creative efforts of the design team to imagine what the proposed product or software looks, feels and acts like. In typical software design projects, it is during this phase that designers begin to assimilate all that they have learnt and understood about the problem in question, the imagined end-users and the context in which it will operate. In co-design projects, this is an opportunity for designers and non-designers to collaborate and communicate with creative purpose, with a view to finding and negotiating a solution and creating a shared understanding of how the final result might look and function. The creative expressions of non-designers can be incorporated into various elements of


the final software interface or may support interaction. Capturing and translating their creative contributions is a significant way of demonstrating their impact on the design process, but it also contributes to their sense of ownership and agency in the project.

Although co-creating technology with those with neurodevelopmental disabilities is considered challenging, it is often this group that stands to benefit the most from active inclusion and contribution to the outcome of design. Limitations in verbal communication skills and a perception that people with a disability have impaired creative abilities mean that designers may be reluctant to engage them directly in co-design activities. Involving those with disabilities more directly in co-creation activities and incorporating these into the software design is likely to have the greatest impact on the design, but is difficult to implement [27]. Providing a platform for design participants to create both visual and auditory content that is successfully incorporated into the design of software provides a tangible, authentic demonstration of their impact on design.

For children with a disability, the most common form of involvement in design projects focusses on the evaluation of potential design options. This limits the impact that a child with NDDs can have on the outcome of design and fails to allow them to bring their creativity to bear [27]. Studies attempting to bridge this gap have used a range of low-fi tools and techniques to give children with autism a chance to draw, fabricate, generate and record ideas. These techniques serve to simplify the process of co-creation and provide children with a degree of agency in the process by providing opportunities to create and contribute elements to the design process.
These techniques remove some of the technical and knowledge barriers often associated with design and value the creation of paper-based and other low-fi artefacts that can be translated later into a final system. These techniques have been modified and refined for use with children in other contexts [28], including children with high-functioning autism. Many of these techniques, however, are bespoke and have evolved in specific contexts with specific groups of children. In seeking to incorporate the creative expression of children with high-functioning autism, researchers have adapted, modified and scaffolded creative open-ended activities to match the children's needs [29]. From these studies, it appears that providing children with disabilities with opportunities to generate creative content in a design project requires three steps: firstly, identifying the correct tools to support children's creative expression; secondly, creating structured activities that support their creativity; and finally, analysing and examining children's creative artefacts to understand their unique meaning.

Malinverni and her colleagues employed a PD approach to support the creative contributions of children with ASD in the development of a Kinect motion-based game for the development of social initiation skills [30]. A key element of authentic participation is the opportunity that children on the autism spectrum have to impact the outcome of the design. Capturing and translating their creative contributions is one significant way of demonstrating their impact on the design process, but it also contributes to their sense of ownership and agency in the project. This work, however, is unclear as to how the ideas were generated and, as with the experience of the ECHOES team, describes the challenges of translating children's ideas into concrete design proposals.


Identifying appropriate co-creation techniques that are tailored to the specific needs of the target population is important. In this context, it is important to be aware that, for individuals with severe and profound cognitive disabilities, experience and self-expression may be partially or completely non-linguistic. The experience of such "Sensory Beings" is not structured with linguistic meaning in the way that most human experience is [31]. Involving such populations in co-creation activities may therefore require a broader view of creative expression, and the techniques and tools used by designers should reflect this. For example, Robb et al. [32] developed and piloted a multi-sensory design technique. Children with a range of cognitive disabilities, including non-verbal children with severe cognitive disabilities, were able to take part in a creative activity incorporating not only visual expression but also tactile, auditory, and olfactory experience. This was achieved by providing children with a range of materials including various textures (e.g., a selection of fabrics, craft papers, and other materials), music (e.g., a selection of sound chips with different music recorded on them), and scents (using small bags filled with scented materials). Children selected objects and attached them to coloured card to create multi-sensory posters. While this was small-scale preliminary work, similar techniques could be used in the design of multi-sensory experiences and living environments, which are increasingly recognised as important for people with certain NDDs (e.g., [33]).

25.4 Making Ideas Tangible: Prototyping

Prototyping is a widely used technique in mainstream development. A prototype is a working, or "enactable" ([34], p. 5), model intended to simulate the final software artefact, so that features may be understood and evaluated. Prototyping balances two competing concerns: firstly, the fact that software production is inevitably limited by financial considerations; and secondly, the fact that software quality is highly important [35]. Prototyping, therefore, is intended to be a cost-effective and efficient way to improve software quality by obtaining feedback early, before extensive resources are utilized. Prototyping is a major component of the many so-called "agile" approaches to software development that are widely used in the software engineering industry [36].

Broadly, prototypes may be of two types: throw-away and evolutionary [36]. Throw-away prototyping involves the creation of artefacts (e.g., drawings of user interfaces which cannot be physically integrated into the final system), while evolutionary prototyping involves creating a basic version (i.e., a software implementation of a working system or some part of the system). Evolutionary prototypes may then be further developed and refined, and they may form part of the final software. It is important to understand that prototyping is a requirements elicitation technique [36]; that is, through prototyping, technology developers learn more about what functions and qualities the final product should have. Since we are concerned in this chapter with techniques to encourage and preserve the contributions of people with NDDs throughout the technology design process (i.e., their contribution to determining the


requirements of the technology), it follows that prototyping techniques are of high importance. Previous research has used a wide variety of prototyping techniques during the participatory design of software and other technology with people with NDDs (see Börjesson et al. [19] for a systematic review). Several themes emerge in this research, representing the challenges and considerations involved.

25.4.1 Prototyping Techniques

Previous work in this area has used a wide variety of approaches, ranging from low-fi throw-away prototypes to high-fi prototypes with extensive features which may then evolve into the final software system. Paper prototyping was used with a varied group of children (including some with learning difficulties) by Brederode et al. [37]. Several studies have used low-fi software implementations (e.g., using Adobe Flash animations) at the prototyping stage. While these are throw-away prototypes, they may nevertheless provide a better approximation of the final system and avoid some of the issues relating to children’s confusion regarding paper and other low-fi prototypes. In this context, it may be useful to consider so-called “Wizard of Oz” techniques [26, 38], in which a mock-up of a user interface is presented to the user, who interacts with it, while a researcher or designer observes the interaction and takes actions to simulate how the proposed final system would react. It is important to realize that certain design situations may limit the extent to which people with NDDs can contribute to the evaluation of early, low-fi prototypes. For example, Hailpern et al. [39] used Task-Centered User Interface Design (TCUID; [40]) in the development of speech visualization software to aid language development in children with autism and speech delays. Although TCUID suggests using low-fi (typically paper) prototypes [40], Hailpern et al. [39] state that children with autism were only involved once the architecture of the system was completed and implemented in Java. This was likely due to multiple factors. For example, the system was designed based on principles of Applied Behaviour Analysis (ABA) and incorporated a sophisticated system for the real-time visualization of sounds produced by a speaker.
Given these specific requirements, and the fact that the intended users are young children with speech difficulties, it is unclear how much they could have contributed to the early design process. This highlights an important issue: given that prototyping is a requirements-gathering process, it must be used appropriately, i.e., in situations where requirements are (a) unknown, and (b) able to be determined from user evaluation of prototypes. Hailpern et al. [39] point out that, by implementing their system in a modular, extensible fashion, using appropriate software design patterns which could facilitate change and scalability, they were still able to make substantial changes to the interface design late in the development process. This allowed them to incorporate feedback obtained from children during the software evaluation sessions, for example by adapting the software to be used on touchscreen devices, which was not originally planned as a feature of the system.
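The kind of flexibility described by Hailpern et al. can be made concrete with a small sketch. This is a hypothetical illustration, not code from their system (which was in Java): the names `InputSource`, `MouseInput`, `TouchInput`, and `Visualizer` are invented here. The point is only that core logic which depends on an input abstraction can accept a new modality, such as touch, late in development without internal change.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of one pattern (dependency on an abstraction) that can
# make late interface changes cheap. Names and events are invented.

class InputSource(ABC):
    """Abstract input channel; concrete sources can be swapped late."""
    @abstractmethod
    def read_event(self) -> dict: ...

class MouseInput(InputSource):
    def read_event(self) -> dict:
        return {"kind": "click", "x": 10, "y": 20}

class TouchInput(InputSource):
    """Added late in development without touching the core logic."""
    def read_event(self) -> dict:
        return {"kind": "tap", "x": 10, "y": 20}

class Visualizer:
    """Core logic depends only on the InputSource abstraction."""
    def __init__(self, source: InputSource):
        self.source = source

    def handle_next(self) -> str:
        event = self.source.read_event()
        return f"render feedback for {event['kind']} at ({event['x']}, {event['y']})"

# Swapping the input modality is a one-line change at the call site:
print(Visualizer(MouseInput()).handle_next())
print(Visualizer(TouchInput()).handle_next())
```

In this arrangement, feedback such as “make it work on a touchscreen” becomes a new `InputSource` subclass rather than a rewrite of the visualizer.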


N. Robb et al.

Multiple iterations of prototype evaluation (at multiple levels of fidelity) may also be required to both fully involve children and obtain the most useful feedback. During the development of a game for children with special educational needs, Karpova and Culén [41] first created a low-fi paper version of their proposed game (note that the concept of the game was based on a prior idea-generation session with the children). In an evaluation session, they considered whether the children both understood and enjoyed the game. This is an example of a situation where a paper prototype may have close similarities with the intended final software: the proposed app was a board game to be used on a touchscreen device, a concept easily conveyed to children via a low-fi paper prototype. This study also highlights how paper prototyping techniques can facilitate children’s creative contributions even at the prototyping phase, by allowing children to select one of several versions and to draw their own additional icons to use in the game. After paper prototyping, the game was implemented as a high-fi prototype which could be played on a touchscreen tablet. During a subsequent evaluation session, important feedback was obtained from children (including one participant from the first prototyping session) about the high-fi prototype. While this still involved a focus on features of the game, the authors primarily report improvements made after this session in terms of software usability and performance [41]. Early evaluation of prototypes by users from the target population is critical. However, difficulties with recruiting children with the disability of interest may lead designers to rely on evaluations by typically developing children. This is problematic for at least two reasons.
Firstly, there is the obvious point that a substantial part of the justification for PD with children with disabilities is to benefit the children directly (e.g., enjoyment, inclusion, well-being). Secondly, it is unlikely that feedback from typically developing children will inform the specific requirements of children with disabilities. This is shown in a paper by Bai et al. [42]: during the development of an augmented reality system to encourage pretend play in children with autism, a “pilot study” was carried out with typically developing children in the same age group as those for whom the software was intended, with no usability problems noted by the authors. The project then proceeded to an empirical evaluation of the system with children with autism, where several usability issues were discovered. The appropriate fidelity of prototypes will depend on the specific project. However, we recommend that prototypes are as low-fi as possible while ensuring that children (a) understand them appropriately, and (b) can give feedback at the correct stage. We also recommend that high-fi prototypes and deliverable software are designed to accommodate change from the outset, recognizing that a flexible and adaptive approach to design and development is often required in PD.
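Before leaving prototyping, the “Wizard of Oz” technique mentioned earlier in this section can be sketched in a few lines. This is a hypothetical illustration (the function name and the scripted responses are invented, and in a live session the wizard would respond in real time rather than from a script): the participant sees only “system” output, which is actually supplied by a hidden researcher, so the proposed behaviour can be evaluated before it is implemented.

```python
# Hypothetical Wizard-of-Oz sketch: each "system" response is really supplied
# by a hidden researcher (the wizard). Responses are pre-scripted here only so
# the flow is deterministic.

def wizard_of_oz_session(user_actions, wizard_responses):
    """Interleave user actions with manually supplied 'system' reactions."""
    transcript = []
    for action, response in zip(user_actions, wizard_responses):
        transcript.append(("user", action))
        transcript.append(("system", response))  # actually from the wizard
    return transcript

log = wizard_of_oz_session(
    ["taps the red icon", "drags the star to the box"],
    ["play chime, enlarge icon", "show reward animation"],
)
```

The transcript produced this way can then feed back into requirements, exactly as with any other prototype evaluation.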

25 Participatory Technology Design for Autism and Cognitive Disabilities …


25.5 Empowerment Through Decision-Making

Considering the empowerment of participants as another key cornerstone of PD, there is a need to examine how such empowerment translates into design practice. The empowerment of participants in PD refers to a disposition and a willingness to share power with others, especially with prospective users, and to let go of control. As such, empowerment can be seen to reflect the extent to which decision-making is devolved or transferred from designer to participant [6]. Furthermore, PD aims to democratize the process of design. The devolution of decision-making must be seen as meaningful, providing participants with true power to influence the direction the design process takes and the final realization of the design outcome [7, 8]. Despite rights enshrined in international legislation, individuals with intellectual impairments often remain excluded from participating in decision-making that directly impacts their lives. Children and young people with intellectual disabilities thus become some of the most severely excluded within an already marginalized group. In many circumstances, the denial of decision-making agency highlights a power imbalance between those with disabilities and those who are placed in a position of responsibility. Too often, well-meaning adults, including policy-makers, service providers, parents and caregivers, make decisions on behalf of those with intellectual disabilities without consulting them about decisions that directly impact their lives. For people with autism, there may be a reluctance among designers to devolve such decision-making and influence to those whom they may consider incapable of such engagement. Much of the recent literature focussed on the participation of people with NDDs in design has sought to develop novel, bespoke techniques to support collaborative decision-making.
However, in a study documenting the experience of developing a Kinect-based game with children with autism, the point was well made that the highly engaged nature of such research makes it difficult to unpick how and why certain design decisions are made [43]. Some studies have focussed on adapting evaluation tools, such as questionnaires or ranking scales, to better match the cognitive and communication skills of those with autism and/or intellectual disabilities. Of note are efforts to decrease the cognitive load that decision-making demands by using visual tools such as “smileyometer” scales [44, 45]. In studies involving children with NDDs, adults familiar with the participating children are used to support the decision-making process [46–48]. In the IDEAS approach to designing with children with autism, decision-making activities during the ideas-generation phase, such as selecting between a range of proposed alternatives, are supported by adults [29]. In a design project with children with more significant communication challenges, feedback was actively sought from parents during evaluation activities; design decisions were then made and acted upon on the basis of this feedback [49]. In many situations, the input from others representing the user extends to “proxy participation”, where designers rely on the decision-making of other stakeholders as representing the choices of the user. This issue will be explored further in a later section.

25.6 The Importance of Setting

Several previous studies note the importance of the environment in which PD activities take place. For example, it is important that co-designers are comfortable and relaxed; for this reason, PD sessions often take place in locations that co-designers are already familiar with, such as their homes or schools [32, 50]. Related to this, several previous studies note the importance of ensuring that children understand both the prototype and the purpose of the evaluation setting. Frauenberger et al. [27] note that children not understanding concepts was a factor in their PD process with children with special needs. Finally, researchers have noted that prototypes and other materials used in PD sessions should be robust and simple where possible. It is important to remember that prototype evaluation will likely be viewed as a play activity by children; indeed, given that many participatory designers are expressly concerned with children’s enjoyment and well-being, it is arguable that play should be encouraged. However, some studies in this area note challenges related to ill-suited equipment, particularly in terms of the robustness of prototypes [27]. One interesting approach to the issue of setting is demonstrated by Nissinen et al. [51], who had children with autism and other special needs evaluate a range of technologies over a period of time as part of a regular club activity (the “EvTech” club). It is possible that, by presenting the activities to children as a clear, structured, regular activity, children were able to give better feedback. This study was not a traditional PD process, in that the children were merely evaluating already-developed technologies (e.g., Microsoft Xbox Kinect games). However, it is feasible that such evaluation sessions could form part of a larger PD process, or that a PD process could be structured in a similar way (i.e., as a “Design Club”).

25.7 Use of Proxies

When designing with young children from populations where communication disabilities are common (e.g., autism), it is perhaps to be expected that designers will rely on feedback from proxies, such as parents, teachers, and care workers. Sampath et al. [52] worked closely with a non-verbal autistic child and his mother during the development of an assistive communication app. Here, it is arguable that the mother plays the role of both proxy and user, as is often the case in such situations [25]. Several changes to the software were made based on the mother’s feedback, and, in a subsequent very small-scale usability test of the resulting software, no usability issues were reported. The advantage of such an approach is that a suitable proxy with sufficient knowledge of the child will be well placed to both recognize and convey the child’s unique requirements. However, we would suggest that designers take care when selecting proxies. While a body of research has shown agreement between proxy- and self-responses in a research setting (e.g., [53]), it is important to remember that the aims of PD are not those of a quantitative research project. Boyd-Graber et al. [46] provide a detailed discussion of issues that may inform the decision to use proxies in a PD context, and how to select them. For example, the authors consider whether advocates with the same disability as the participants (in this case, aphasia) are better suited to acting as proxies than carers and family members who, although they may be more familiar with the users, may not be able to convey the lived experience of the disability to the same extent as an advocate. A third approach, and the one adopted by Boyd-Graber et al. in their work, involves using experts (e.g., speech-language pathologists) as proxies. However, it is generally more common for PD researchers to consult with carers and family members (e.g., [32]). This most likely reflects the motivation of the PD process introduced above: if we are aiming to facilitate a greater involvement of people with disabilities in the design process, such that the end result can reflect their lived experience and genuine, original contribution, then it is likely that we will prefer proxies who have the most familiarity with the participants, even if this comes at the cost of specialist scientific knowledge of the participants’ disability.

25.8 Ownership

Given that PD processes aim to involve users in design in ways that allow shared decision-making and the contribution of ideas that reflect the lived experience of the co-designers, it is important for the participatory designer to consider the concept of ownership. van Rijn and Stappers [50] consider this in detail. Psychological ownership [54, 55], in which individuals form an emotional connection to an artefact (e.g., by feeling responsibility for it), is an important way of increasing involvement and may lead to better self-expression, which is undoubtedly important in PD. In addition to many of the techniques previously discussed in this chapter, we note additional recommendations that may facilitate a state of ownership: for example, preserving co-designers’ original drawings and writing, keeping co-designers involved through reports and presentations [50], and preserving co-designers’ creative input throughout the project, for example by allowing them to modify low-fidelity prototypes [41].

25.9 Conclusion

In this chapter we have provided a narrative overview of issues to consider, and techniques to use, during participatory design processes with people with autism and cognitive disabilities. By doing so, we aim to supplement previous excellent reviews in this area by providing a focus on practical techniques that the novice designer can utilize, and an awareness of issues that should be considered when embarking on a participatory design process. One possible issue with published research on participatory design is that there may be a tension between the need to communicate and describe practical techniques that can be learned and applied by designers across a range of areas, and the traditional aims of a scientific research paper, which will inevitably be focused on the generation of original research findings. While research on participatory design is of course necessary and extremely important, it is also valuable to provide details of practical techniques to facilitate participatory design in a way that will encourage a more widespread uptake of these approaches. Raising awareness and familiarity with participatory design, and thus increasing its application, will undoubtedly benefit people with disabilities in terms of their inclusion in decision-making and, by extension, society. While we therefore recognise that much more careful research is needed to fully understand the specific ways in which participatory design is effective and how it benefits people with disabilities, we also encourage further work which aims simply to provide practical information with the aim of increasing the frequency with which participatory design is applied, as exemplified in this chapter.

References

1. Halskov, K., Hansen, N.B.: The diversity of participatory design research practice at PDC 2002–2012. Int. J. Hum. Comput. Stud. 74, 81–92 (2015)
2. Schuler, D., Namioka, A. (eds.): Participatory Design: Principles and Practices. CRC Press (1993)
3. Spinuzzi, C.: The methodology of participatory design. Techn. Commun. 52(2), 163–174 (2005)
4. Wilkinson, C.R., De Angeli, A.: Applying user centred and participatory design approaches to commercial product development. Des. Stud. 35(6), 614–631 (2014)
5. Sanders, E.B.N.: From user-centered to participatory design approaches. In: Design and the Social Sciences, pp. 1–8. CRC Press (2002). https://doi.org/10.1201/9780203301302.ch1
6. Steen, M.: Virtues in participatory design: cooperation, curiosity, creativity, empowerment and reflexivity. Sci. Eng. Ethics 19(3), 945–962 (2013)
7. Björgvinsson, E., Ehn, P., Hillgren, P.A.: Participatory design and democratizing innovation. In: Proceedings of the 11th Biennial Participatory Design Conference, pp. 41–50. ACM, Nov 2010
8. Shapiro, D., Euchner, J.: Democratizing innovation. Res. Technol. Manag. (2016). https://doi.org/10.1080/08956308.2016.1136980
9. Sanders, E.B.N., Brandt, E., Binder, T.: A framework for organizing the tools and techniques of participatory design. In: Proceedings of the 11th Biennial Participatory Design Conference, pp. 195–198. ACM (2010)
10. Landrigan, P.J., Lambertini, L., Birnbaum, L.S.: A research strategy to discover the environmental causes of autism and neurodevelopmental disabilities. Environ. Health Perspect. 120(7), a258 (2012)
11. Memisevic, H., Sinanovic, O.: Executive function in children with intellectual disability–the effects of sex, level and aetiology of intellectual disability. J. Intellect. Disabil. Res. 58(9), 830–837 (2014)
12. Van Schrojenstein Lantman-de Valk, H., Linehan, C., Kerr, M., Noonan-Walsh, P.: Developing health indicators for people with intellectual disabilities. The method of the Pomona project. J. Intellect. Disabil. Res. 51(6), 427–434 (2007)
13. European Commission: Some elements about the prevalence of autism spectrum disorders (ASD) in the European Union (2005). Retrieved from https://goo.gl/wjPpJt
14. United Nations: Convention on the rights of persons with disabilities (n.d.). Retrieved from https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities/convention-on-the-rights-of-persons-with-disabilities-2.html
15. Porayska-Pomsta, K., Frauenberger, C., Pain, H., Rajendran, G., Smith, T., Menzies, R., Avramides, K., et al.: Developing technology for autism: an interdisciplinary approach. Personal Ubiqui. Comput. 16(2), 117–127 (2012)
16. Van Rijn, H.: Meaningful encounters: explorative studies about designers learning from children with autism. Doctoral Thesis (2012). https://doi.org/10.4233/uuid:978fbd25-eb26-4306-bebc-5e2770538c5a
17. Parsons, S., Millen, L., Garib-Penna, S., Cobb, S.: Participatory design in the development of innovative technologies for children and young people on the autism spectrum: the COSPATIAL project. J. Assist. Technol. 5(1), 29–34 (2011)
18. Benton, L., Johnson, H.: Widening participation in technology design: a review of the involvement of children with special educational needs and disabilities. Int. J. Child-Comput. Interact. 3, 23–40 (2015)
19. Börjesson, P., Barendregt, W., Eriksson, E., Torgersson, O.: Designing technology for and with developmentally diverse children: a systematic literature review. In: Proceedings of the 14th International Conference on Interaction Design and Children, pp. 79–88. ACM (2015)
20. Koen, P., Ajamian, G., Burkart, R., Clamen, A., Davidson, J., D’Amore, R., Wagner, K., et al.: Providing clarity and a common language to the “Fuzzy Front End.” Res. Technol. Manag. 44(2), 46–55 (2001). https://doi.org/10.1080/08956308.2001.11671418
21. Wagner, L., Baureis, D., Warschat, J.: How to develop product-service systems in the fuzzy front end of innovation. Int. J. Technol. Intell. Plann. 8(4), 333–357 (2012). https://doi.org/10.1504/IJTIP.2012.051782
22. Barrow, D.M.: A Phenomenological Study of the Lived Experiences of Parents of Young Children with Autism Receiving Special Education Services. Portland State University (2017). https://doi.org/10.15760/etd.5919
23. DePape, A.M., Lindsay, S.: Lived experiences from the perspective of individuals with autism spectrum disorder. Focus Autism Other Dev. Disabil. 31(1), 60–71 (2016). https://doi.org/10.1177/1088357615587504
24. Visser, F.S.: Bringing the everyday life of people into design. TEZ (2009). 978-90-9024244-6
25. Herriott, R.: The use of proxies: lessons of social co-design for inclusive design for people with cognitive disabilities. J. Accessibil. Design All 5(2), 100–124 (2015)
26. Frauenberger, C., Good, J., Keay-Bright, W.: Phenomenology, a framework for participatory design. In: Proceedings of the 11th Biennial Participatory Design Conference, pp. 187–190. ACM, Nov 2010
27. Frauenberger, C., Good, J., Keay-Bright, W.: Designing technology for children with special needs: bridging perspectives through participatory design. CoDesign 7(1), 1–28 (2011)
28. Guha, M.L., Druin, A., Chipman, G., Fails, J.A., Simms, S., Farber, A.: Mixing ideas: a new technique for working with young children as design partners. In: Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, pp. 35–42. ACM, June 2004
29. Benton, L., Johnson, H., Ashwin, E., Brosnan, M., Grawemeyer, B.: IDEAS: an interface design experience for the autistic spectrum. In: Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems—CHI ’12, p. 2599. ACM Press, New York (2012). https://doi.org/10.1145/2207676.2208650
30. Malinverni, L., Mora-Guiard, J., Padillo, V., Mairena, M., Hervás, A., Pares, N.: Participatory design strategies to enhance the creative contribution of children with special needs. In: Proceedings of the 2014 Conference on Interaction Design and Children, pp. 85–94. ACM, June 2014
31. Grace, J.: Sensory-being for sensory beings: creating entrancing sensory experiences. Routledge, London (2017)

32. Robb, N., Leahy, M., Sung, C., Goodman, L.: Multisensory participatory design for children with special educational needs and disabilities. In: Proceedings of the 2017 Conference on Interaction Design and Children, pp. 490–496. ACM, June 2017
33. Hogg, J., Cavet, J., Lambe, L., Smeddle, M.: The use of ‘Snoezelen’ as multisensory stimulation with people with intellectual disabilities: a review of the research. Res. Dev. Disabil. 22(5), 353–372 (2001)
34. Wood, D.P., Kang, K.C.: A classification and bibliography of software prototyping (1992)
35. Toffolon, C., Dakhli, S.: Software prototyping classification. In: ICEIS (3), pp. 266–271 (2003)
36. Paetsch, F., Eberlein, A., Maurer, F.: Requirements engineering and agile software development. In: Twelfth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2003), pp. 308–313. IEEE, June 2003
37. Brederode, B., Markopoulos, P., Gielen, M., Vermeeren, A., De Ridder, H.: pOwerball: the design of a novel mixed-reality game for children with mixed abilities. In: Proceedings of the 2005 Conference on Interaction Design and Children, pp. 32–39. ACM, June 2005. https://doi.org/10.1145/1109540.1109545
38. Höysniemi, J., Hämäläinen, P., Turkki, L.: Wizard of Oz prototyping of computer vision based action games for children. In: Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, pp. 27–34. ACM, June 2004
39. Hailpern, J., Harris, A., La Botz, R., Birman, B., Karahalios, K.: Designing visualizations to facilitate multisyllabic speech with children with autism and speech delays. In: Proceedings of the Designing Interactive Systems Conference, pp. 126–135. ACM, June 2012
40. Lewis, C., Rieman, J.: Task-Centered User Interface Design: A Practical Introduction (1993)
41. Karpova, A., Culén, A.: Challenges in designing an app for a special education class. In: Proceedings of the IADIS International Conference on Interfaces and Human-Computer Interaction 2013, pp. 95–102 (2013)
42. Bai, Z., Blackwell, A.F., Coulouris, G.: Through the looking glass: pretend play for children with autism. In: 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 49–58. IEEE, Oct 2013. https://doi.org/10.1109/ISMAR.2013.6671763
43. Malinverni, L., Mora-Guiard, J., Padillo, V., Valero, L., Hervás, A., Pares, N.: An inclusive design approach for developing video games for children with autism spectrum disorder. Comput. Hum. Behav. 71, 535–549 (2017). https://doi.org/10.1016/J.CHB.2016.01.018
44. Benton, L., Johnson, H.: Structured approaches to participatory design for children: can targeting the needs of children with autism provide benefits for a broader child population? Instr. Sci. 42(1), 47–65 (2014)
45. Millen, L., Cobb, S., Patel, H., Glover, T.: A collaborative virtual environment for conducting design sessions with students with autism spectrum disorder. Int. J. Child Health Hum. Dev. 7(4), 367–376 (2014). Retrieved from https://search.proquest.com/docview/1655287782/fulltextPDF/79FD29BB21514577PQ/1?accountid=14504
46. Boyd-Graber, J.L., Nikolova, S.S., Moffatt, K.A., Kin, K.C., Lee, J.Y., Mackey, L.W., Klawe, M.M., et al.: Participatory design with proxies: developing a desktop-PDA system to support people with aphasia. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 151–160. ACM, Apr 2006
47. Boyle, B., Arnedillo-Sánchez, I.: Exploring the role of adults in participatory design for children on the autism spectrum. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2016). https://doi.org/10.1007/978-3-319-40409-7_21
48. Shen, S., Doyle-Thomas, K.A.R., Beesley, L., Karmali, A., Williams, L., Tanel, N., McPherson, A.C.: How and why should we engage parents as co-researchers in health research? A scoping review of current practices. Health Expect. (2016). https://doi.org/10.1111/hex.12490
49. Keay-Bright, W.: The reactive colours project: demonstrating participatory and collaborative design methods for the creation of software for autistic children. Design Principles Pract. 1(2), 7–15. Retrieved from https://repository.cardiffmet.ac.uk/dspace/handle/10369/158
50. van Rijn, H., Stappers, P.J.: Expressions of ownership: motivating users in a co-design process. In: Proceedings of the Tenth Anniversary Conference on Participatory Design 2008, pp. 178–181. Indiana University, Oct 2008
51. Nissinen, E., Korhonen, P., Vellonen, V., Kärnä, E., Tukiainen, M.: Children with special needs as evaluators of technologies. In: EdMedia: World Conference on Educational Media and Technology, pp. 1356–1365. Association for the Advancement of Computing in Education (AACE), June 2012
52. Sampath, H., Agarwal, R., Indurkhya, B.: Assistive technology for children with autism: lessons for interaction design. In: Proceedings of the 11th Asia Pacific Conference on Computer Human Interaction, pp. 325–333. ACM, Sept 2013. https://doi.org/10.1145/2525194.2525300
53. Schmidt, S., Power, M., Green, A., Lucas-Carrasco, R., Eser, E., Dragomirecka, E., Fleck, M.: Self and proxy rating of quality of life in adults with intellectual disabilities: results from the DISQOL study. Res. Dev. Disabil. 31(5), 1015–1026 (2010)
54. Beggan, J.K.: On the social nature of nonsocial perception: the mere ownership effect. J. Pers. Soc. Psychol. 62(2), 229 (1992)
55. Wang, Q., Battocchi, A., Graziola, I., Pianesi, F., Tomasini, D., Zancanaro, M., Nass, C.: The role of psychological ownership and ownership markers in collaborative working environment. In: Proceedings of the 8th International Conference on Multimodal Interfaces, pp. 225–232. ACM, Nov 2006

Chapter 26

Exploring Current Board Games’ Accessibility Efforts for Persons with Visual Impairment Frederico Da Rocha Tomé Filho, Bill Kapralos, and Pejman Mirza-Babaei

Abstract Traditional board games have risen in popularity again in recent years, with the activity bringing many positive contributions to its participants. Despite this resurgence, the act of playing such games remains highly inaccessible to persons with visual impairment, since the games often employ visuals alone to communicate gameplay information. In this paper we discuss the issues related to board game accessibility and the efforts that have been conducted in the field, and we explore related fields, such as accessible video games and immersive technologies, presenting a variety of approaches employed to enable visual accessibility. We discuss the strengths, weaknesses and reach of different approaches, and the current research gap in the field of board game accessibility.

Keywords Accessibility · Board games · Visual impairment · Immersive technology

F. Da Rocha Tomé Filho · P. Mirza-Babaei
Ontario Tech University, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada
e-mail: [email protected]

B. Kapralos (B)
maxSIMhealth, Ontario Tech University, Oshawa, ON, Canada
e-mail: [email protected]; [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_26

26.1 Introduction

Despite the massive popularity of the video game industry, traditional tabletop board games (which we define as all non-digital games) have become more popular since the release of the so-called “German-style games”, or Eurogames, in the late 1970s and early 1980s [1]. This genre of games was responsible for the introduction of new types of themes and gameplay mechanics, distinguishing them from traditional abstract strategy games, such as Checkers and Chess, and other popular mass-market games, such as Monopoly and The Game of Life. The most important game of the genre, The Settlers of Catan [2], was initially released in 1995 in Germany and achieved huge success in the North American market, consequently setting the scene for the publishing of other titles from the genre. The Settlers of Catan has sold over 25 million copies worldwide, with a film adaptation in the making by Sony Pictures [3]. The industry is considered to be in its golden age, continuing to grow each year and registering almost $1.2 billion in sales during 2015 in Canada and the United States alone [4]. These games have encouraged not only the businesses that sell them to thrive, but also new related ventures, such as board game cafés like the Canadian Snakes and Lattes, where customers are able to enjoy gaming while eating [5]. Unique board-game-related words and components have also recently been officially recognized and added to English dictionaries, as in the case of the word “Meeple”, a recurrent playing piece in a variety of games [6]. In the past few decades, online board game communities have also emerged, such as BoardGameGeek [7], which contains games’ information, news, and discussion forums, connecting hundreds of thousands of enthusiasts around the world [1]. Board game conventions, such as the Spiel Essen Game Fair and Gen Con, also attract an increasing number of attendees each year, with turnstile numbers as high as 180,000 [8]. Aside from fun and enjoyment, playing tabletop board games has been shown to bring positive and formative contributions to its participants. It has been demonstrated to be an effective tool to improve participants’ communication, problem solving and social skills, as well as to promote effective social interaction between participants [9]. It is also valuable to the psychological, cognitive and social development of young children [1]. In addition, it is worth noting that the act of playing board games has a strong emphasis on the socialization of its participants, with constant interaction between participants who share the same physical environment.
In a study conducted with board game players, the majority of participants considered this social aspect of the activity one of its primary sources of enjoyment [1]. Similar to video games, board games have also transcended their status as pure entertainment, with serious games designed for purposes such as learning, simulation, training, and raising awareness of different topics. For example, the game "CODE: Programming Game Series" teaches children basic programming concepts, such as loops and conditionals, through puzzles [10]. Freedom: The Underground Railroad [11] provides an immersive experience that simulates the early history of the United States in the nineteenth century, where players are part of the abolitionist movement trying to end slavery. The game received an extensive list of awards and was praised for how respectfully it dealt with the topic and how well the gameplay aligned with content typically taught in US schools [12]. The game Meltdown contains game pieces made of ice that melt during play, in order to promote awareness of global warming and its effects [13]. Despite their growing popularity and benefits, playing board games poses many accessibility barriers for some, particularly those with visual impairments. Board games share accessibility issues similar to those found in video games, as most of their gameplay information is presented exclusively through visuals [14]. Due to such characteristics, persons with visual impairment have

26 Exploring Current Board Games’ Accessibility Efforts …


severely limited experiences when playing these games, or are completely unable to play. According to the World Health Organization (WHO), in 2017 the number of persons with visual impairment exceeded 250 million worldwide, and this number may triple as the world population grows older [15]. Visual impairment is an umbrella term that encompasses conditions such as blindness, low vision, and color blindness, which negatively influence people's ability to conduct daily tasks and activities. It can be defined as a functional limitation of vision that cannot be corrected even with glasses or lenses [16]. As discussed earlier, playing traditional board games brings valuable contributions to its participants, many of which are especially pertinent to people with visual impairment. For example, the activity's improvement of social and communication skills is highly beneficial to this group, as persons with visual impairment tend to experience higher levels of social isolation and difficulties in interpersonal relationships [17]. Unfortunately, making games accessible for persons with visual impairment can be a big challenge. One of the biggest difficulties in designing universally accessible games for this audience relates to the varying degrees of visual impairment that people experience. For example, low vision can include characteristics such as tunnel vision, sensitivity to light, blurred or distorted vision, and absence of peripheral vision, among others; legally blind persons exhibit a wide variance of visual acuity; and color blindness may refer to individuals who either have difficulty recognizing, or are completely unable to perceive, specific parts of the color spectrum [15, 16, 18]. Therefore, accessibility solutions must account for the specific needs of individuals, while at the same time allowing gameplay between players with a variety of visual abilities.
In order to achieve true inclusion and benefit the greatest number of individuals, it is also important to explore solutions that are low cost and easy to access, so as not to hinder the reach of the selected approach. Potential solutions for board game accessibility can generally be categorized into two main areas: (a) design changes to games' components and rules; and (b) the use of digital assistive technology that can enable or facilitate play. Design changes to game elements, especially when carried out during the conceptual stages of a game's development, often require low financial cost, as even small decisions such as the choice of specific colors, shapes, typography, patterns, icons, and layout positioning can substantially improve the accessibility of visual elements, and consequently of the game itself [19]. This approach requires assessing each individual game for its specific issues and accessibility barriers before carrying out changes, often resulting in a more optimal solution. Although an initial assessment needs to be conducted for each individual game, solutions can be reused for other games with similar genres, components or mechanisms. While this approach usually does not require high financial costs or apparatus, it does require skills or knowledge of visual accessibility in order to properly identify issues and make adjustments. Although some solutions can be carried over to similar games, these still need to be implemented separately for each game, as the
same set of game components is not shared between different games, and it can thus be time consuming for the end-user to adapt a variety of different games. The second approach, digital assistive technologies, seeks to improve board game accessibility in a variety of ways, such as the design of companion applications that enable specific games or, more generally, facilitate the overall tasks related to playing these games. These solutions employ different technologies, each with specific strengths and weaknesses. One promising direction has been the investigation of immersive technologies. In recent years, immersive technologies, which include Augmented Reality (AR) and Virtual Reality (VR), have become more accessible through the release of cheaper headsets (or head-mounted displays), such as the HTC Vive or Google Glass. Such devices have enabled potential strategies for improving accessibility for users with visual impairment, with approaches tackling both vision enhancement [20] and sensory substitution, such as communicating visual information through audio [21]. Digital assistive technologies may require less responsibility and upfront effort from players to enable board game play, as applications can easily be designed to be compatible with a variety of different games that share similar elements. They can help players by moderating the activity, automating parts of gameplay and facilitating the communication of information, tasks that would otherwise require constant assistance from a participant without visual impairment, or design adjustments. Applications that improve the visualization of game components can also be used to easily enable games that would otherwise be too difficult or tiresome to adjust.
However, overly general digital assistive applications may not seamlessly provide an optimal accessible gameplay experience, as accessibility solutions designed with consideration for the specificities of each game are able to deliver a better and more polished user experience, without interaction hiccups [22]. Unfortunately, research and development of accessible board games for persons with visual impairment under either approach is still very limited. The overarching goal of this work is to investigate and present an overview of current efforts regarding board game accessibility for persons with visual impairment in the aforementioned areas and other closely related fields, discussing limitations of current approaches and potential solutions that can be investigated to enable this audience to actively participate in board gaming, fully experiencing gameplay without constraints.

26.2 Selection Classification

In order to conduct this literature review, we searched academic articles from peer-reviewed conference and journal databases, such as the ACM Digital Library, Google Scholar, and SpringerLink, among others. We sought articles that investigated the topic of board game accessibility, especially those that proposed solutions for the target
audience of persons with any sort of visual impairment (including low vision, legal blindness and color blindness). We also investigated Internet forums and websites that discuss board game accessibility, as the community itself is a huge driving force for the proposal and discussion of accessibility solutions. Unfortunately, as Woods [1] described and as our literature search reiterated, there is limited formal research on the characteristics of board games, and even less on board game accessibility. We found only one study related to board game accessibility that discussed the inclusion of persons with visual impairments. Therefore, we decided to expand the scope of our research to include work conducted in areas that share similar characteristics with board games, or that propose the use of novel technologies to improve visual accessibility and could thus indirectly facilitate board game play. These can ultimately provide insight on approaches that may be translated to the context of board games. Our literature review was organized into the following four categories:

1. Accessible Digital Games
Although the main interface of interaction of this type of game is different, digital games share a collection of characteristics with board games, such as goals, rules, multiplayer play, interaction and a gameplay loop. As previously noted, digital games also face many of the same accessibility issues regarding the communication of gameplay information, as they heavily employ visuals to communicate a variety of gameplay information to players. In this category we discuss approaches employed to make non-accessible digital games playable irrespective of players' visual abilities, as well as games designed from the ground up to be accessible to participants both with and without visual impairments.

2.
Accessible Board Games
Although scarce, we present the literature focused on tabletop board game accessibility for persons with visual impairment. We investigate the different efforts that have been made to render these games accessible, including community-driven strategies and discussions. These efforts include low-tech design adjustments and digital non-immersive assistive technologies.

3. Games Accessibility Guidelines
Accessibility guidelines are an effective way of providing basic structure and guidance on how to identify and handle accessibility issues. They can help designers, publishers or players carry out the design changes needed to enable games. Unlike for digital games, no comprehensive list of accessibility guidelines currently exists that fully explores the specificities of board games. We explore the main collections of digital game accessibility guidelines with the goal of investigating how they can serve as an initial foundation for the creation of board game guidelines.

4. Immersive Technologies (VR and AR) and Related
As discussed above, immersive technologies are among the promising accessible technologies for improving visual accessibility, due to their
current reach in the market and flexibility of features. In this category we investigate efforts related to visual accessibility and board game play that employ technologies such as Virtual and Augmented Reality, with or without head-mounted displays (HMDs). We also discuss some non-immersive systems that make use of similar technologies. Although not all papers discussed in this category focus on board games, they present pertinent efforts to use these technologies to improve visual accessibility, which can ultimately be leveraged to enable board game play. We gathered a total of 14 works published between 2004 and 2017, including academic papers, reports and guideline lists, all of which we discuss in the following sections. We assess the strengths and weaknesses of the approaches explored, their suitability given the variety of visual impairments, and their reach (how feasible these solutions are for the general target audience). We also comment on a variety of community and publisher efforts to improve visual accessibility.

26.3 Accessible Digital Games

When considering games accessibility for persons with visual impairment, the majority of efforts can be found in the digital domain, targeting games available on video game consoles, smartphones and personal computers. While the design and development of accessible digital games is still extremely limited compared with the digital gaming industry as a whole, considerable progress has been achieved in the past few years, with a growing catalogue of games accessible to persons with visual impairment. The design of games for persons with visual impairment gave rise to the genre of "blind-accessible" games called audio games, which replace visual feedback with sound and haptics to communicate information. However, access to mainstream games is still rather scarce, and most accessible games only allow gameplay exclusively between participants with visual impairments, limiting the possibilities for engagement between players with and without visual disabilities. Most research in the field has aimed to close the gap between players with and without visual impairments, or to allow access to popular mass-market titles, in order to achieve more effective inclusion and access to the hobby for this audience. Yuan and Folmer [23] designed the accessible game "Blind Hero", an adapted version of the commercially popular digital rhythm game Guitar Hero, which enabled gameplay for those with visual impairment via a custom-designed glove capable of providing haptic feedback through small pager motors. Each motor was attached to a finger of the glove, buzzing when users should press the buttons corresponding to the gameplay. The game was tested by four participants (two blind, one blindfolded
sighted, and one sighted), with all users able to play the game, considering it fun, and reaching a similar level of performance after continuous play. Gutschmidt et al. [24] developed a hybrid analog-digital adaptation of the puzzle game Sudoku for persons with visual impairment. The approach explored sensory substitution through a tangible haptic display connected to a computer that communicates information to players via touch. The prototype, "BrailleDis 9000", is a tactile display containing rows of dots that can be raised or lowered, supporting features such as vibrations or pulsations and accepting gestures or touch as input. The system was designed to facilitate play for users with visual impairment, allowing them to customize the ways they receive feedback through the tactile display, while at the same time preserving the game's level of challenge and complexity. Rector et al. [25] explored the design of accessible exergames (digital games used for exercise) for persons with visual impairment. They designed a game for yoga learning and practice called "Eyes-Free Yoga", which uses the Microsoft Kinect to track players' poses and to provide audible instructions and feedback. The prototype was tested with 16 participants with visual impairments and was rated positively by most of them. Although the game was praised, it was unable to fully simulate a real yoga class, as limitations of the Kinect made it impossible for the system to provide completely accurate feedback on users' poses. Although the aforementioned studies do not address traditional tabletop board games and their specificities, they provide useful insight into the possibility of designing accessible games via sensory substitution.
In all studies, the majority of participants reported being able to experience the intended gaming experience regardless of their impairment, and that they enjoyed playing these games [23, 25]. These studies demonstrate not only the feasibility of translating common visual feedback into other sensory channels, such as audio and touch, but also that players were able to engage with the games autonomously. Unfortunately, there is limited consideration of participants with lower degrees of visual impairment, such as low vision or color blindness, with most of the solutions explored addressing only the needs of blind persons. Solutions that employed custom or proprietary technology, such as Gutschmidt's BrailleDis 9000 and Yuan's haptic glove, were effective in enabling gameplay for participants with visual impairment. However, these approaches have limited reach, as they require the development of complex and often expensive custom devices, and therefore may not be an ideal mass-market accessibility approach [26]. The design of an assistive system or game that utilizes a more commonly available technology, such as the Microsoft Kinect in the work of Rector et al. [25], constitutes a more feasible step toward bringing such solutions to the general intended audience.
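The sensory-substitution principle behind a system like Blind Hero can be sketched in a few lines: every visual cue is rerouted to a non-visual channel. The mapping below is a hypothetical illustration (the lane names, finger assignments, and event format are our assumptions, not the original implementation):

```python
# Map each visual note lane to the finger motor that should buzz instead.
# These assignments are invented for illustration.
LANE_TO_FINGER = {
    "green": "index",
    "red": "middle",
    "yellow": "ring",
    "blue": "pinky",
}

def substitute(events):
    """Translate a stream of visual lane events into haptic commands."""
    commands = []
    for lane in events:
        finger = LANE_TO_FINGER.get(lane)
        if finger is None:
            continue  # ignore cues that have no haptic mapping
        commands.append(f"buzz:{finger}")
    return commands
```

For example, `substitute(["green", "blue"])` yields `["buzz:index", "buzz:pinky"]`; a real system would forward each command to the corresponding pager motor in time with the music.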
26.4 Accessible Board Games: Community and Industry Efforts

Most of the accessibility efforts to design or adjust tabletop games come from the community of players. Handmade solutions for a variety of popular games are discussed on Internet forums, primarily via the BoardGameGeek website. Users collaboratively work on different types of information [27–30], including:

• Discussion of the level of accessibility found in commercial board games, with detailed descriptions of barriers found in specific games;
• How to play games with players who have some form of visual impairment, without having to adapt these games;
• Players' personal experiences playing with those with visual impairment;
• How to physically adapt game components and rules to make a game more accessible.

The website "Meeple Like Us" [31] is one of the pioneers in formally addressing the issue of accessibility in board games. It provides "accessibility teardowns": reviews of popular board games that assess their level of accessibility for persons with visual, cognitive, and physical impairment. They also comment on barriers for those with communication, emotional and socioeconomic issues. Each game analyzed receives an accessibility score in each of these categories, representing how easily someone with that specific disability would be able to enjoy the game without adjustments or special assistive technology. In recent years, board game publishers have taken first steps towards making their games more accessible, especially with regard to color blindness. For example, the first edition of the game Splendor [32] used color alone to represent resources, making the game unplayable for players who could not differentiate the colors depicted on cards. The addition of iconography to differentiate each color made the game accessible for this audience.
The classic game Uno, published by Mattel, received a colorblind-accessible edition via the addition of small ColorADD [33] icons representing the cards' colors, 46 years after the release of its original edition [34]. Nevertheless, publisher initiatives regarding accessibility are still few and far between, especially considering the number of non-accessible games published every year. Other companies, such as 64 OZ Games [35], have developed "toolkits" as products that make specific board games accessible to blind persons. These kits use Braille and QR code stickers attached to game components, communicating written information through touch and audio, respectively. The constant community discussion of board game accessibility helps to gauge the overall interest of a diversity of users in accessible games, and provides an initial view and understanding of the needs and barriers faced by this audience. The recent efforts by board game publishers to improve visual accessibility may reflect the growing outcry for solutions from community users. While
most solutions from publishers have focused only on the issue of color blindness, these efforts and considerations can be deemed a step in the right direction that may eventually evolve to include practices for persons with other types of visual impairment: for example, graphic design changes that improve elements' size and contrast can already be highly beneficial to persons with low vision. Unfortunately, other than the work by Meeple Like Us, which covers a variety of accessibility issues, there is an absence of formal research on board game accessibility in general, or on board game accessibility for those with visual impairment in particular. Although the community contributes in many different ways, most of the approaches discussed are still rather primitive: little is said about the use of digital technologies or more advanced design techniques. There is also no observable unity among the solutions proposed, and no master recommendation list, as the discussions are scattered across multiple posts and webpages.
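The QR-sticker kits sold by 64 OZ Games suggest a simple companion-app pipeline: decode the sticker on a component, look up that component's text, and speak it aloud. The sketch below simulates the flow with a plain dictionary and a pluggable `speak` callback; a real app would use an actual camera QR decoder and a text-to-speech engine, and the card identifiers and rules text here are invented examples:

```python
# Invented example database mapping decoded QR payloads to rules text.
CARD_TEXT = {
    "deck1/card-07": "Seven of red. May be played on any red card or any seven.",
    "deck1/card-skip": "Skip. The next player loses their turn.",
}

def read_card_aloud(qr_payload, speak=print):
    """Look up the decoded QR payload and 'speak' the card's rules text.

    `speak` stands in for a text-to-speech call; it defaults to print so the
    sketch stays runnable without any audio hardware.
    """
    text = CARD_TEXT.get(qr_payload, "Unknown card.")
    speak(text)
    return text
```

The Braille stickers in the same kits cover the tactile channel, so the two together give both audio and touch access to the same printed information.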

26.5 Game Accessibility Guidelines

One strategy employed to improve the accessibility of games has been the development of guidelines that aid developers in identifying potential accessibility issues while also providing solutions to them, enabling the design of games with lower barriers for those with impairments. For digital games, researchers have sought to build a comprehensive list of accessibility guidelines to serve as an industry standard, similar to the W3C Web accessibility guidelines [36]. This list, the "Game Accessibility Guidelines" (Ellis et al. [37]), is a collaborative living document developed by professionals from the digital game industry and academic researchers, and is one of the prime guideline lists on how to make video games more accessible for persons with motor, cognitive, vision, speech and hearing impairments, providing specific examples and details for each guideline. The list is divided into three main categories: (i) Basic, with easy-to-implement solutions and general techniques; (ii) Intermediate, requiring some planning but benefiting all users irrespective of impairment; and (iii) Advanced, presenting complex adaptations that account for more profound impairments and a more universally accessible design. Other organizations and researchers have also designed their own sets of recommendations on digital game accessibility. Araujo et al. [38] proposed a set of guidelines focused on the specifics of developing audio games for persons with visual impairment, addressing the ability to allow effective gameplay between persons with and without impairment.
The International Game Developers Association (IGDA) prepared an accessibility report on digital games, with statistics derived from surveys assessing the current degree of accessibility in the industry and discussing potential accessibility strategies that can aid the inclusion of players [39]. The "Includification" guide [19] discusses the presence of players with impairments in the community and lists different approaches to allow for a
better inclusion of those players in the hobby of digital gaming. Cheiran and Pimenta [26] grouped and evaluated many of these digital game accessibility guidelines using content analysis in order to build a more concise list, dividing the final set of guidelines into categories based on the W3C's: Perceivable, Operable, Understandable and Robust. There is continuous improvement of the recommendations, best practices and guidelines for accessible digital games, accounting for different devices and technologies. These approaches help designers and developers during the production phase of games, or even after release, with accessibility updates quickly delivered to users at home. Most collections, such as Includification and the IGDA report, are not limited to design and development recommendations; they also aim to raise awareness among the public and game companies about games accessibility, providing statistics and information about users with a variety of impairments and their presence in the market. Although not specific to visual impairment, the AbleGamers charity (founded by Mark Barlet in 2004) employs a combination of technologies, including mouth controllers, eye gaze, and specially customized controllers, to allow people to play video games irrespective of their disability, "bridging the gap between ability and desire" [40]. Brooks [41] provides a more thorough discussion of the AbleGamers charity. Unfortunately, no similar collections of recommendations, guidelines, or best practices exist for accessible board games. Although some of the recommendations found in guidelines for digital games can easily be adjusted to fit the board game context (e.g.
proper use of visuals, such as correct use of color, contrast, element size, etc.), these games have specificities that still need to be directly addressed, such as the materiality of game components, spatiality, and the social aspect of the activity. Discussion of board game accessibility remains scarce and, although it can be found on discussion forums, as noted previously, there is still no formal development of lists or guides on how to tackle the accessibility barriers present in board games.
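One digital-game recommendation that translates directly to printed components is checking color contrast. As a concrete illustration (the formula below is the standard WCAG 2.x one, not taken from any of the guideline collections surveyed above), the contrast between a card's text color and its background can be computed like this:

```python
def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 integers."""
    def channel(c):
        c = c / 255.0
        # Piecewise linearization of the sRGB transfer curve (WCAG 2.x).
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA asks for at least 4.5:1
    for normal-size text, a sensible floor for card faces too."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black text on a white card scores the maximum 21:1, while two mid-tone colors that look distinct to a designer can easily fall below the 4.5:1 floor, which is exactly the failure mode the guidelines warn about.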

26.6 Immersive Technologies (VR and AR) and Related

Immersive technology is an umbrella term that includes technologies such as Virtual Reality (VR), which allows the exploration of high-fidelity three-dimensional computer-generated environments, and Augmented Reality (AR), which augments the physical world by transposing digital elements onto it. Although both technologies make heavy use of visuals, they also employ auditory and haptic feedback to communicate information and increase users' immersion. The recent release of cheaper, commercially available virtual and augmented reality headsets has brought the technology to end-users, and it has been investigated as a promising tool to improve accessibility for a variety of types of
impairment. These headsets have the flexibility of being compatible with a variety of different devices, and they allow applications that focus on visuals, audio, gestures, haptics, movement, or a combination of these. Researchers have been investigating the development of accessible applications for these systems, and their usage as assistive technologies intended to support persons with impairments by facilitating specific tasks. Although we are not aware of any studies focused on the use of these technologies for playing board games, there have been investigations into improving overall accessibility for persons with visual impairment. For this target audience, such technologies have the potential to provide either vision enhancement, which seeks to improve the visualization of elements, or sensory substitution, which replaces visual feedback with alternate sensory channels. Zhao et al. [22] explored using head-mounted displays (HMDs), such as the Oculus Rift, to enhance the vision of persons with low vision. They devised a video see-through (VST) system called ForeSee, which contained customizable video enhancement methods and display views, and evaluated users' experience of conducting daily life tasks with the system. They found that ForeSee was effective for a variety of persons with different types of low vision, with the exception of those whose impairment was either very severe or very slight. The researchers also observed that the ability to mix and customize different enhancements was essential, as different enhancements worked better for different types of users and tasks. Zhao et al. [42] sought to determine the ability of persons with low vision to perceive virtual elements using AR smart glasses. They conducted a series of user tests involving participants with low vision using mainstream commercial AR glasses, the Epson Moverio BT-200.
The tests sought to assess users' ability to perceive the glasses' projected elements (texts, shapes, sizes, contrasts, colors) in two scenarios: walking and stationary. They found that low-vision participants were able to identify the projected elements, and listed characteristics that made elements easier to identify, such as luminance contrast (better than color contrast), white and yellow colors, thick borders, and sans-serif fonts. Maidenbaum et al. [21] explored the use of Sensory Substitution Devices (SSDs) for blind persons in VR environments, in order to assess the potential of this approach for navigational training and the level of immersion experienced by participants in these environments. Blind participants used the SSD EyeMusic, which converts visual image characteristics (including distance, color, and brightness) into different sound instruments, to complete various tasks. The results were highly positive, showing that all blind participants were able to complete all required tasks effectively and reported increasing levels of immersion in the VR environment as the tests progressed. Other related approaches explore the design of systems that use smartphones' cameras and sensors to identify visuals and substitute them with audio. Kacorri et al. [43] discuss the possibility of developing a personal object recognizer app for persons with visual impairment, removing the need for expensive or crowd-powered alternatives. The authors designed a smartphone app that allows users to take
photos of and label different objects, with the app able to process and identify images using an adapted version of Google's Inception image recognition system. The authors found that the biggest challenge for personal object recognizers is ensuring that users are able to properly photograph objects following the system's instructions, as photo consistency strongly affects the system's accuracy in identifying objects. Regardless, the average accuracy in tests conducted by blind participants was 75%, with some blind participants reaching accuracy as high as 92%, close to sighted participants' accuracies of 96.9% and 99.6%. Regal et al. [44] explored the inclusion of persons with visual impairment in brainstorming activities by using tangible cards with near-field communication (NFC). The system developed, named TalkingCards, sought to allow persons with visual impairment to use these cards in a similar fashion to the written cards and post-its used during brainstorming sessions, maintaining the user experience of tactile brainstorming methods. Information was registered on cards via either speech-to-text or audio recorded through a smartphone app. The authors conducted a series of four user tests to assess the system, with results showing that all participants considered it useful and easy to use. Although these studies do not directly address board games, many of the investigated approaches provide valuable insights regarding technologies that have the potential to lower or even remove some accessibility barriers for those with visual impairment. For example, systems such as ForeSee aim to provide an overall enhancement of one's vision and can therefore improve the accessibility of any activity involving visual stimuli, including playing games [22].
While general visual enhancement systems can facilitate a multitude of activities and tasks, such systems can also be designed in a more specialized manner, focusing on the characteristics and scenarios present during board game play. Developing an assistive technology that enables users to more easily conduct gameplay-related tasks contributes to a better and more complete user experience. These approaches can be highly beneficial to those affected by low vision or color blindness, as corrections can be customized and personalized on the user's display. For persons with severe visual impairments, such as those who are legally blind, the use of cameras or sensors, such as NFC, to identify elements and translate them into different sensory modalities constitutes a low-cost approach that has been shown to be effective for communicating information or providing general feedback when conducting tasks. For a variety of board and card games, gameplay could be enabled by simple identification of cards by users, which could, for example, be quickly achieved through audio.

26.7 Conclusions

As we exemplified in the previous sections, the topic of board game accessibility is rarely discussed and explored in academia, lacking in-depth analysis of its


different issues and how to solve them. The activity's inherent accessibility barriers prevent a large number of persons with visual impairment worldwide from being involved in an activity that could otherwise yield great benefits to them. Due to the absence of studies directly related to the field, we presented works from related fields, such as digital games and immersive technologies, for those who have visual impairments. While accessibility is also a work in progress in these fields, substantial progress has already been achieved with regard to technology, guidelines, techniques and awareness. Although there is no single silver bullet for improving visual accessibility, most studies have focused on investigating either sensory substitution (communicating visuals through another sensory system) or visual enhancement (improving and facilitating the visualization of elements). While the approaches explored in related fields mostly focus on digital media devices, they fundamentally seek to solve the same communication problem present in non-digital tabletop board games: the exclusive use of visuals to communicate relevant information. A variety of approaches have been investigated, ranging from low-tech solutions, such as simple graphic design changes, to heavily technological ones, using immersive technologies, image recognition systems and sensors. The results from these studies were discussed in this work and shed light on the strengths, weaknesses and reach of different approaches that seek to improve accessibility. These efforts can serve as a foundation for similar studies, or for technologies that can be investigated, addressing the specificities of tabletop board games and their components.

References

1. Woods, S.: Eurogames: The Design, Culture and Play of Modern European Board Games. McFarland Publishing (2012)
2. Teber, K.: The settlers of Catan (1995). https://boardgamegeek.com/boardgame/13/catan. Accessed 4 Nov 2017
3. Variety: Settlers of Catan movie adaptation in the works at Sony (2017). https://variety.com/2017/film/news/settlers-of-catan-movie-sony-1202587485/. Accessed 4 Nov 2017
4. ICv2: Hobby games market nearly $1.2 billion (2016). https://icv2.com/articles/news/view/35150/hobby-games-market-nearly-1--2-billion. Accessed 4 Nov 2017
5. Mcarthur, J.: The exciting rise of board game cafes | Geek and Sundry (2016). https://geekandsundry.com/the-exciting-rise-of-board-game-cafes/. Accessed 6 July 2018
6. Oxford Dictionaries: Meeple | Definition of meeple in English by Oxford Dictionaries (2015). https://en.oxforddictionaries.com/definition/meeple. Accessed 6 July 2018
7. Solko, D., Alden, S.: BoardGameGeek. https://boardgamegeek.com/. Accessed 19 Jan 2018
8. WAZ: Spielemesse Spiel'17 endet mit Besucherrekord (2017). https://www.waz.de/staedte/essen/spielemesse-spiel-17-endet-mit-besucherrekord-id212384747.html. Accessed 4 Nov 2017
9. Zan, B.: Interpersonal understanding among friends: a case-study of two young boys playing checkers. J. Res. Child. Educ. 10, 114–122 (1996). https://doi.org/10.1080/02568549609594894


10. ThinkFun: //CODE series media—ThinkFun (2017). https://www.thinkfun.com/media-center/code-series/. Accessed 4 Nov 2017
11. Mayer, B.: Freedom: the underground railroad (2012). https://boardgamegeek.com/boardgame/119506/freedom-underground-railroad. Accessed 4 Nov 2017
12. PlayPlayLearn: Freedom: The Underground Railroad | Play Play Learn. https://playplaylearn.com/games/freedom-underground-railroad. Accessed 5 Nov 2017
13. Wired: GEOlino meltdown: a game of cold, hard facts | Wired (2013). https://www.wired.com/2013/03/geolino-meltdown/. Accessed 4 Nov 2017
14. Yuan, B.: Towards generalized accessibility of video games for the visually impaired. ProQuest Dissertations and Theses, 125 (2009)
15. World Health Organization: WHO | Vision impairment and blindness. World Health Organization (2017)
16. Vanderheiden, G., Vanderheiden, K.: Accessible design of consumer products to increase their accessibility to people with disabilities or who are aging (1992)
17. Maia, J.M.D., Del Prette, A., Cordeiro, F.L.: Social skills of visually disabled people. Rev. Bras. Ter. Cogn. 4 (2005)
18. Thylefors, B., Negrel, A.D., Pararajasegaram, R., Dadzie, K.Y.: Global data on blindness. Bull. World Health Organ. 73, 115–121 (1995)
19. Barlet, M.C., Spohn, S.D.: Welcome to includification—actionable game accessibility (2012). https://www.includification.com/. Accessed 4 Nov 2017
20. Peterson, R.C., Wolffsohn, J.S., Rubinstein, M., Lowe, J.: Benefits of electronic vision enhancement systems (EVES) for the visually impaired. Am. J. Ophthalmol. 136, 1129–1135 (2003)
21. Maidenbaum, S., Buchs, G., Abboud, S., et al.: Perception of graphical virtual environments by blind users via sensory substitution. PLoS ONE 11, 1–21 (2016). https://doi.org/10.1371/journal.pone.0147501
22. Zhao, Y., et al.: ForeSee: a customizable head-mounted vision enhancement system for people with low vision. In: ASSETS '15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, pp. 239–249 (2015). https://doi.org/10.1145/2700648.2809865
23. Yuan, B., Folmer, E.: Blind hero: enabling guitar hero for the visually impaired. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 169–176 (2008). https://doi.org/10.1145/1414471.1414503
24. Gutschmidt, R., Schiewe, M., Zinke, F., Jürgensen, H.: Haptic emulation of games. In: Proceedings of the 3rd International Conference on PErvasive Technologies Related to Assistive Environments (PETRA). https://doi.org/10.1145/1839294.1839297
25. Rector, K., Bennett, C.L., Kientz, J.A.: Eyes-free yoga: an exergame using depth cameras for blind & low vision exercise. In: Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '13, pp. 1–8 (2013). https://doi.org/10.1145/2513383.2513392
26. Cheiran, J.F.P., Pimenta, M.S.: "Eu também quero jogar!": reavaliando as práticas e diretrizes de acessibilidade em jogos ["I want to play too!": re-evaluating accessibility practices and guidelines in games]. In: Proceedings of the 10th Brazilian Symposium on Human Factors in Computing Systems and the 5th Latin American Conference on Human-Computer Interaction, pp. 289–297. Brazilian Computer Society (2011)
27. DeFrisco, C.: Color blind images | BoardGameGeek (2007). https://www.boardgamegeek.com/geeklist/20738/color-blind-images. Accessed 4 Nov 2017
28. Hoekstra, Z.: Gaming with the blind: a story | BoardGameGeek (2009). https://boardgamegeek.com/geeklist/48481/gaming-blind-story. Accessed 4 Nov 2017
29. Kunkel, J.: Games for the visually impaired | BoardGameGeek (2005). https://www.boardgamegeek.com/geeklist/9538/games-visually-impaired. Accessed 4 Nov 2017
30. Timanus, E.: Modifying games for the blind (2002). https://www.thegamesjournal.com/articles/GamesForTheBlind.shtml. Accessed 4 Nov 2017
31. Heron, M.J., Belford, P., Reid, H.: Meeple like us—board games reviews and accessibility teardowns (2016). https://meeplelikeus.co.uk/. Accessed 4 Nov 2017


32. André, M.: Splendor (2014). https://boardgamegeek.com/boardgame/148228/splendor. Accessed 4 Nov 2017
33. Neiva, M.: ColorADD (2013). https://www.coloradd.net/. Accessed 4 Nov 2017
34. Kotaku: Uno releases new card design for color blind players (2017). https://kotaku.com/uno-releases-new-card-design-for-color-blind-players-1802266210. Accessed 4 Nov 2017
35. Gibbs, R., Gibbs, E.: 64 Ounce Games. https://www.64ouncegames.com. Accessed 4 Nov 2017
36. Lawton Henry, S., McGee, L., Abou-Zahra, S., et al.: Accessibility—W3C (1994). https://www.w3.org/standards/webdesign/accessibility. Accessed 6 July 2018
37. Ellis, B., Ford-Williams, G., Graham, L., et al.: Game accessibility guidelines | a straightforward reference for inclusive game design (2012). https://gameaccessibilityguidelines.com/. Accessed 4 Nov 2017
38. Araújo, M.C.C., Sánchez, J., Façanha, A.R., et al.: Um Estudo das Recomendações de Acessibilidade para Audiogames Móveis [A study of accessibility recommendations for mobile audiogames]. In: Proceedings of the XIV Brazilian Symposium on Games and Digital Entertainment, pp. 610–617 (2015)
39. IGDA: Game Accessibility White Paper (2004)
40. AbleGamers Charity (2019). https://ablegamers.org/. Accessed 31 Mar 2019
41. Brooks, A.L.: Accessibility: definition, labeling, and CVAA impact. In: Recent Advances in Technologies for Inclusive Well-Being, pp. 283–383. Springer Intelligent Systems Reference Library (2017)
42. Zhao, Y., Hu, M., Hashash, S., Azenkot, S.: Understanding low vision people's visual perception on commercial augmented reality glasses. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '17, pp. 4170–4181 (2017). https://doi.org/10.1145/3025453.3025949
43. Kacorri, H., Kitani, K.M., Bigham, J.P., Asakawa, C.: People with visual impairment training personal object recognizers. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '17, pp. 5839–5849 (2017). https://doi.org/10.1145/3025453.3025899
44. Regal, G., Mattheiss, E., Sellitsch, D., Tscheligi, M.: TalkingCards: using tactile NFC cards for accessible brainstorming. In: Proceedings of the 7th Augmented Human International Conference, AH '16, pp. 1–7 (2016). https://doi.org/10.1145/2875194.2875240

Chapter 27

An Extensible Cloud Based Avatar: Implementation and Evaluation

Enas Altarawneh, Michael Jenkin, and I. Scott MacKenzie

Abstract A common issue in human-robot interaction is that a naive user expects an intelligent human-like conversational experience. Recent advances have enabled such experiences through cloud-based infrastructure; however, this is not currently possible on most mobile robots due to the need to access cloud-based (remote) AI technology. Here we describe a toolkit that supports interactive avatars using cloud-based resources for human-robot interaction. The toolkit deals with communication and rendering latency through parallelization and mechanisms that obscure delays. This technology can be used to put an interactive face on a mobile robot. But does an animated face on a robot actually make the interaction more effective or useful? To answer this question, we conducted a user study comparing human-robot interaction using text, audio, a realistic avatar, and a simplistic cartoon avatar. Although response time was longer for both avatar interfaces (due to increased computation and communication), this had no significant effect on participant satisfaction with the avatar-based interfaces. When asked about general preferences, more participants preferred the audio interface over the text interface, the avatar interfaces over the audio interface, and the realistic avatar interface over the cartoon avatar interface. This chapter includes and expands on material previously published [1, 2].

Keywords Human-robot interaction · Text-to-speech · Speech-to-text · Avatars · Rendering farm · Cloud computing · Parallel processing · Intelligent agent · Artificial intelligence

E. Altarawneh (B) · M. Jenkin · I. Scott MacKenzie EECS, York University, 4700 Keele St., Toronto, ON, Canada e-mail: [email protected] M. Jenkin e-mail: [email protected] I. Scott MacKenzie e-mail: [email protected] © Springer Nature Switzerland AG 2021 A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_27

503

504

E. Altarawneh et al.

27.1 Introduction

Responding to voice commands with an animated avatar requires solutions to the problems of speech understanding and speech generation, understanding the commands given, and rendering an appropriately animated and synchronized avatar. Although cloud-based approaches exist for these tasks, utilizing these remote computational resources introduces unwanted delays, a serious impediment to deployment on an autonomous robot. Here we address this problem. The first part of this chapter describes technical solutions to communication and rendering delays and operational mechanisms to obscure these delays from the user. Given these technical solutions, however, is it actually worthwhile to put a realistic face on a robot? This question is considered in the second part of the chapter in a user study comparing a realistic avatar interface with other available technologies.

27.2 Previous Work

Enabling a talking head to express emotion along with a synchronized utterance, as illustrated in Fig. 27.1, is a challenging problem. There have been previous attempts to create realistic 3D talking heads for intelligent agents and avatars and some have

Fig. 27.1 Talking with an interactive avatar-enabled robot: (a) the avatar; (b) components. Cloud-based systems are used to augment limited on-board computational and rendering resources


Fig. 27.2 A film strip of an animated character. The avatar is animated using control points that synchronize the rendered visual rig to a generated utterance

shown encouraging results (e.g., [3–5]). However, existing systems have not yet achieved the level of realism of their 2D counterparts [5]. Presently, 2D talking heads look more realistic than their 3D counterparts, but they are limited in the range of poses and lighting conditions that can be simulated [5]. A key problem in generating a synthetic 3D avatar is the creation of facial and speech animation. One example is the framework for synthesizing 3D lip-sync speech described by Chen et al. [6]. In another example, the speech signal is classified into different categories of visemes using a neural network [7]; the topology of the neural network is automatically configured using a genetic algorithm. Commercially, a range of software tools, plugins, and add-on solutions aid animators with lip-syncing and facial animation, including CrazyTalk [8] and Faceshift technology [9]. Although a large number of tools exist for speech recognition, speech generation, and avatar animation, including tools that animate an avatar based on the intended utterance, these tasks are computationally expensive. Furthermore, current state-of-the-art approaches often rely on large cloud-based AI technologies or specialized rendering hardware. Cloud-based technology introduces undesirable latency, and specialized rendering hardware is typically not available on an autonomous system. Such constraints have favoured less expensive 2D approaches over 3D ones, even though 2D approaches are generally less expressive. This lack of expressiveness is a concern given that humans utilize a number of verbal and nonverbal signals when communicating (see [10]).

27.3 Building the Avatar

The system described here leverages cloud-based AI technology. There exist a number of different suppliers of cloud-based speech recognition and text-to-speech audio


generation systems. Although the work to date has concentrated on the Google engine [11], operationally we utilize an abstract model of these processes. The development of this abstract toolkit builds upon substantive previous effort in this domain. A standard toolkit for local speech recognition can be found online [11], and Google and others provide toolkits to integrate their recognizers with third-party software (e.g., [11, 12]). The output of this process is a natural language expression as a sequence of words in the recognition language. Similar tools exist for utterance generation. This work employs Google's cloud-based text-to-speech engine [13] to generate the audio layer of the utterance and blends it with an animated avatar to match the response. Rather than transmitting straight English text, the text rendered by the avatar is placed within a structured framework that provides rendering hints for both audio generation and avatar rendering through the Avatar Utterance Markup Language (AUML), a formal language for avatar utterances [2]. This language is an XML representation; it defines rules for encoding a desired output using a textual data format. Every utterance includes an avatar's detailed description, language, spoken words, expressions associated with sub-phrases, and general mood. The goal is to standardize and facilitate the use of available avatars, languages, and expressions. Avatars are 3D puppets properly rigged for animation. We utilize MakeHuman [14], an open source application for designing realistic 3D human characters. MakeHuman provides the ability to manipulate the avatar's age, weight, height, gender, and race. The software also allows for changes in facial details, hair, eyes, skin and clothes. Users can select from a variety of 3D meshes and bone structures for each character. Characters are exported using the Mhx2 rig [15], which enables MakeHuman structures to be imported into the Blender renderer [16].
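As an illustration of how such a structured utterance might be encoded, the sketch below builds an AUML-style XML document with Python's standard library. The element and attribute names here are assumptions for illustration only; the actual AUML schema is defined in [2].

```python
# Sketch of building an AUML-style utterance document. The tag and
# attribute names ("utterance", "avatar", "phrase", "expression", "mood")
# are assumed for illustration; the real AUML schema is given in [2].
import xml.etree.ElementTree as ET

def build_utterance(avatar, language, mood, phrases):
    """phrases: list of (text, expression) pairs; returns XML as a string."""
    root = ET.Element("utterance", mood=mood)
    ET.SubElement(root, "avatar", name=avatar)           # avatar description
    ET.SubElement(root, "language").text = language      # recognition language
    speech = ET.SubElement(root, "speech")
    for text, expression in phrases:
        # each sub-phrase carries its own expression hint for the renderer
        phrase = ET.SubElement(speech, "phrase", expression=expression)
        phrase.text = text
    return ET.tostring(root, encoding="unicode")

doc = build_utterance(
    avatar="maya", language="en-US", mood="friendly",
    phrases=[("Hello there!", "smile"), ("How can I help?", "neutral")],
)
print(doc)
```

A document like this would be consumed twice: once by the text-to-speech stage (spoken words and language) and once by the renderer (expressions and mood).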

Fig. 27.3 The process of lip-syncing spoken words
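The word-timing scheme detailed in Sect. 27.3.1 can be sketched as follows: per-word durations from a trained dictionary are normalized into weights, scaled by the measured duration of the synthesized sentence, and converted to frame markers. Only the 24 fps frame rate comes from the text; the dictionary durations and sentence duration below are hypothetical values for illustration.

```python
# Sketch of the lip-sync word-timing computation of Sect. 27.3.1.
# Dictionary durations t(x) are normalized into weights w(x), scaled by
# the measured sentence duration T_s, and converted to frames at 24 fps.

FPS = 24  # frames per second, as stated in the text

def word_markers(words, duration_dict, sentence_seconds):
    """Return (word, start_frame, frames) for each word in the sentence."""
    total = sum(duration_dict[w] for w in words)
    markers, cursor = [], 0.0
    for w in words:
        weight = duration_dict[w] / total          # w(x) = t(x) / sum_i t(x_i)
        word_seconds = weight * sentence_seconds   # t_s(x) = w(x) * T_s
        start_frame = round(cursor * FPS)          # marker past preceding words
        markers.append((w, start_frame, round(word_seconds * FPS)))
        cursor += word_seconds
    return markers

# Hypothetical dictionary durations (seconds) for a 1.5 s synthesized sentence.
durations = {"hello": 0.6, "there": 0.4, "friend": 0.5}
print(word_markers(["hello", "there", "friend"], durations, 1.5))
```

Each marker tells the renderer at which frame a word's mouth shapes should begin, so the visual rig stays aligned with the generated audio even when the synthesized sentence is faster or slower than the dictionary predicts.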


27.3.1 Lip-Syncing Spoken Words

Spoken words are lip-synced with the audio to provide a realistic utterance, as illustrated in Fig. 27.2. A key requirement is understanding the time indexing of individual events in the utterance. Since we know the text for generating the audio, we use this text to animate the lips. We utilize a dictionary of the sounds in words to compute the timing of events in the utterance. Having prior knowledge of the duration of every possible word (or at least common words) helps to automate realistic lip-syncing and allows us to predict the duration of the resulting audio and video sequences. To obtain the expected duration of utterances we trained our system on the duration of every word in a dictionary using the text-to-speech engine. We assume that the duration t(x) of the spoken word x is independent of its context. This simplifies estimating the duration of spoken phrases. Audio strips generated by a text-to-audio engine are typically embedded in a quiet clip; the result usually includes empty audio at the beginning and the end of the audio strip. An audio clip consists of a constant number of frames (f) per second (typically 24) and the pre- and post-clip residue are of constant duration. To accommodate these effects, the duration of each word is used as a weight for the actual playback time of the word in the lip-sync animation of the sentence. The weight of each word is w(x) = t(x) / Σ_{i=1}^{n} t(x_i). The duration of the word x in the actual sentence, t_s(x), is approximated by the weight of the word multiplied by the actual duration T_s of the sentence: t_s(x) = w(x) · T_s. The word marker in the actual sentence, m(x_s), is the marker for the first frame (f_0) plus the number of frames (NF(d)) in the duration space (d) of every preceding word.

[…]

(p > 0.05, df = 3). Participant likelihood to use the interface in the future.
The means for participant likelihood to use each interface in the future were text (T): 5.54, audio (A): 4.92, cartoon avatar (CA): 5.08, and realistic avatar (RA): 5.33 (see Fig. 27.14a). All interfaces had a relatively high likelihood of future use, and there was no statistically significant difference between interfaces (χ2 = 1.125, p > 0.05, df = 3). Participant perception of the consistency of each interface. The means for participant perception of the consistency of each interface were text (T): 6.96, audio (A): 6.83, cartoon avatar (CA): 6.67, and realistic avatar (RA): 6.79 (see Fig. 27.14b). All interfaces were perceived as highly consistent. There was a statistically significant difference in participants' perception of the consistency of the interfaces (χ2 = 8.825, p < 0.05, df = 3); Conover's F post hoc test revealed that the difference between the text interface and the cartoon avatar interface was statistically significant. Participant perception of the seriousness of each interface. Participants were asked if the questions were responded to in a serious manner and if they found the interfaces to be serious. The means for the perceived seriousness of each interface were text (T): 6.92, audio (A): 6.46, cartoon avatar (CA): 6.08, and realistic avatar (RA): 6.42. All interfaces were perceived as highly serious. There was a statistically significant difference in participant perception of the seriousness of the interfaces (χ2 = 16.746, p < 0.001, df = 3); Conover's F post hoc test revealed that the text interface's pairwise comparisons with the cartoon avatar interface and the realistic avatar interface were statistically significant. How seriously the participants took the interface. Participants were asked if they took the interfaces seriously and if they were asking questions in a serious manner. See Fig.
27.15a. The means of how seriously participants took each interface were text


Fig. 27.15 Mean value for how seriously the participants took the interface (a) and participant preferences between the text- and audio-based interfaces (b)

(T): 6.79, audio (A): 6.50, cartoon avatar (CA): 6.50, and realistic avatar (RA): 6.38. Participants took all interfaces highly seriously, and there was no statistically significant difference in how seriously participants took the interfaces (χ2 = 3.857, p > 0.05, df = 3). Participant preferences between the text-based and audio-based interfaces. Figure 27.15b illustrates the number of participants that selected each level of preference for these two interfaces. Eleven of 24 participants were highly confident in their preference for the audio-based interface over the text-based interface. In total, 8 participants preferred text and 14 preferred audio; two participants had no preference. Participant preferences between avatar-based and audio-based interfaces. Figure 27.16a shows that 9 of 24 participants were highly confident in their preference for the avatar-based interfaces over the audio-based interface. In total, 10 participants preferred the audio-based interface and 14 preferred the avatar-based interfaces. Participant preferences between realistic avatar-based and cartoon avatar-based interfaces. Figure 27.16b shows that 9 of 24 participants were highly confident in their preference for the realistic avatar interface over the cartoon avatar interface. Figure 27.16b also illustrates a potential gender difference in preference.
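The χ2 statistics with df = 3 reported above have the shape produced by a Friedman test over the four interface conditions (for which Conover's post hoc test is the usual follow-up). The chapter does not name its procedure explicitly, so the following reconstruction is an assumption; it shows how such a statistic is computed from per-participant ratings.

```python
# Assumed reconstruction of a Friedman chi-square statistic (df = k - 1)
# over k conditions rated by n participants. Ties within a participant's
# row receive averaged ranks. This is NOT confirmed as the chapter's exact
# procedure; it merely matches the reported df = 3 for four interfaces.

def friedman_chi2(samples):
    """samples: list of per-participant tuples, one rating per condition."""
    n, k = len(samples), len(samples[0])
    rank_sums = [0.0] * k
    for row in samples:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            avg = (i + j) / 2 + 1           # average rank for the tie group
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

With four participants who all rank the four conditions identically, the statistic reaches its maximum of n(k − 1) = 12; identical ratings across all conditions yield 0.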

27.5.3 Discussion

Participants in general expressed a high level of satisfaction with the speed and accuracy of the responses for all interfaces tested. They also found all interfaces to be consistent, easy, and fun to use. Participants indicated that they were likely to use all the presented interfaces again, suggesting that any of these interfaces could be used to develop human-robot interaction systems. The study also requested feedback on the

Fig. 27.16 Participant preferences: (a) between avatar-based and audio-based interfaces; (b) between realistic avatar-based and cartoon avatar-based interfaces

participants' general preferences among types of interfaces. In general, participants preferred the audio interface over the text interface, the avatar interfaces over the audio interface, and the realistic avatar interface over the cartoon avatar interface. Input time with the audio interface was significantly less than with the other interfaces, indicating that participants spoke faster when asking questions using this interface. The audio interface also had a higher query failure rate, possibly as a result of this strategy; Hong and Findlater [30] support our findings by suggesting that faster speech results in more speech recognition errors. There was a significant difference in the audio interface's query failure rate compared to the other interfaces. This may be a result of participants' lack of attentiveness to the display when using audio (A) or the speed with which participants interacted with the audio-only interface. The text interface had the lowest response generation time and the realistic avatar interface had the highest. There was a significant difference between the response generation time of the text interface and the other interfaces. This is to be expected, as the text interface displays the result as text while the other interfaces require additional processing. The text interface had the highest level of participant perception of the accuracy of responses; however, it was not significantly higher than for the avatar interfaces. Participants found the text interface to be more consistent than the cartoon avatar in displaying responses. Participants also found the text interface to be more serious than the avatar interfaces. Although participants showed a high satisfaction level with the responses and the accuracy of the audio interface, this satisfaction level was significantly lower than for the other interfaces. This may be due to the audio interface having a higher query failure rate.
Several studies [31–33] support our finding that errors in interactions lower general satisfaction with robots that communicate through speech. This suggests that interactions requiring a high perceived level of accuracy should report errors only when necessary. Even though participants were less satisfied with the responses and the accuracy of the responses given by the audio interface, and despite its higher error rate, more participants selected the


audio interface when asked to choose, in general, between the text interface and the audio interface. Despite the significantly higher response time for the realistic avatar than for the other interfaces, participants still expressed a high level of satisfaction with the time needed to obtain responses. There was no significant difference in participant satisfaction with the time to obtain responses between the realistic avatar interface and the audio and cartoon avatar interfaces. In general, participants expressed a high level of satisfaction with the accuracy, speed, ease of use, consistency of display, and seriousness of all the interfaces. They also found all of the interfaces fun and indicated that they would likely use them again. When asked about their general preference, more participants preferred the audio interface over the text interface, the avatar interfaces over the audio interface, and the realistic avatar interface over the cartoon avatar interface. Therefore, returning to the question of the usefulness or necessity of providing such an interface for a naive user, the answer would seem to be "yes".

Acknowledgements The authors would like to thank NSERC, the NCRN and the CFREF VISTA project for their generous support of this work.

References

1. Altarawneh, E., Jenkin, M., MacKenzie, I.S.: Is putting a face on an interactive robot worthwhile? In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (submitted)
2. Altarawneh, E., Jenkin, M.: Leveraging cloud-based tools to talk with robots. In: Proceedings of the International Conference on Informatics in Control, Automation and Robotics (ICINCO) (submitted)
3. Bremner, P., Celiktutan, O., Gunes, H.: Personality perception of robot avatar tele-operators. In: Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 141–148. Christchurch, New Zealand (2016)
4. Wan, V., Anderson, R., Blokland, A., Braunschweiler, N., Chen, L., Kolluru, B., Latorre, J., Maia, R., Stenger, B., Yanagisawa, K., Stylianou, Y., Akamine, M., Gales, M., Cipolla, R.: Photo-realistic expressive text to talking head synthesis. In: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH). Lyon, France (2013)
5. Anderson, R., Stenger, B., Wan, V., Cipolla, R.: An expressive text-driven 3D talking head. In: Proceedings of ACM SIGGRAPH: Posters, vol. 80. ACM, New York, NY (2013)
6. Chen, Y.M., Huang, F.C., Guan, S.H., Chen, B.Y.: Animating lip-sync characters with dominated animeme models. IEEE Trans. Circ. Syst. Video Technol. 22(9), 1344–1353 (2012)
7. Zoric, G., Pandzic, I.S.: A real-time lip sync system using a genetic algorithm for automatic neural network configuration. In: Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, pp. 1366–1369. Amsterdam, The Netherlands (2005)
8. Create 3D talking heads with CrazyTalk (2018) [Online]. Available: https://www.reallusion.com/crazytalk/
9. Bouaziz, S., Pauly, M.: Online modeling for real-time facial animation. Patent 9 734 617 (2017) [Online]. Available: http://www.freepatentsonline.com/9734617.html


10. Skantze, G.: Real-time coordination in human-robot interaction using face and voice. AI Mag. 37, 19–31 (2016)
11. SpeechRecognition 3.8.1: Python package index—PyPI (2017). Accessed 30 Apr 2017 [Online]. Available: https://pypi.python.org/pypi/SpeechRecognition/
12. andre-luiz-dos-santos/speech-app—GitHub (2017) [Online]. Available: https://github.com/andre-luiz-dos-santos/speech-app
13. Google cloud computing, hosting services and APIs | Google Cloud (2018) [Online]. Available: https://cloud.google.com/
14. Bastioni, M., Re, S., Misra, S.: Ideas and methods for modeling 3D human figures: the principal algorithms used by MakeHuman and their implementation in a new approach to parametric modeling. In: Proceedings of the 1st Bangalore Annual Compute Conference, pp. 10:1–10:6. ACM, New York, USA (2008)
15. Mhx2 documentation (2017) [Online]. Available: https://thomasmakehuman.wordpress.com/mhx2-documentation
16. Hess, R.: Blender Foundations: The Essential Guide to Learning Blender 2.6. Focal Press (2010)
17. Quicktalk lip synch addon (2017) [Online]. Available: https://tentacles.org.uk/quicktalk
18. VirtualGL—the VirtualGL project (2018) [Online]. Available: https://www.virtualgl.org/
19. Liang, W.-Y., Huang, C.-C., Tseng, T.-L.B., Lin, Y.-C., Tseng, J.: The evaluation of intelligent agent performance—an example of B2C e-commerce negotiation. Comput. Stand. Interfaces 34(5), 439–446 (2012)
20. MacKenzie, I.S.: Human-Computer Interaction: An Empirical Research Perspective, 1st edn. Morgan Kaufmann Publishers Inc., San Francisco, CA (2013)
21. Radziwill, N.M., Benton, M.C.: Evaluating quality of chatbots and intelligent conversational agents. CoRR, abs/1704.04579 (2017)
22. Questionnaire for user interaction satisfaction (2017). https://en.wikipedia.org/wiki/
23. Survey design and implementation in HCI (2017) [Online]. Available: http://wiki.ggc.usg.edu/images/c/cf/
24. How to design questionnaires for usability evaluation (2017) [Online]. Available: http://www.shengdongzhao.com/researchtips/how-to-design-a-questionnaire-for-usability-evaluation/
25. Post-evaluation questionnaire (2017) [Online]. Available: http://ece.ubc.ca/~pooya/hestudy/pc1/
26. Chin, J.P., Diehl, V.A., Norman, K.L.: Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 213–218. ACM, New York, NY, USA (1988)
27. Lewis, J.R.: IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int. J. Hum. Comput. Interact. 7(1), 57–78 (1995)
28. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
29. Human-computer interaction: an empirical research perspective (2017) [Online]. Available: http://www.yorku.ca/mack/HCIbook/
30. Hong, J., Findlater, L.: Identifying speech input errors through audio-only interaction. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI), pp. 1–12. ACM, New York, NY, USA (2018)
31. Honig, S., Oron-Gilad, T.: Understanding and resolving failures in human-robot interaction: literature review and model development. Front. Psychol. 9 (2018)
32. Weinstock, A., Parmet, Y., Oron-Gilad, T.: The effect of system aesthetics on trust, cooperation, satisfaction and annoyance in an imperfect automated system. Work 41(Suppl. 1), 258–265 (2012)
33. Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K.: Would you trust a (faulty) robot?: effects of error, task type and personality on human-robot cooperation and trust. In: Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 141–148. ACM, New York, NY, USA (2015)

Chapter 28

Frontiers of Immersive Gaming Technology: A Survey of Novel Game Interaction Design and Serious Games for Cognition

Samantha N. Stahlke, Josh D. Bellyk, Owen R. Meier, Pejman Mirza-Babaei, and Bill Kapralos

Abstract This chapter presents an overview of novel game interaction design using brain-computer interfaces (BCI), electroencephalography (EEG), and eye tracking. Our main goal is to highlight particular applications of these novel interfaces in digital games and accessible computing technology. We also investigate commercial offerings within these areas, such as mass-market "brain-training" games. Given the growing popularity and the relative novelty of these interfaces, this chapter reviews the current state of the art to gain an understanding of how the field may look moving forward.

Keywords Interaction design · Digital games · Rehabilitation · Immersive technology · Brain computer interface (BCI)

28.1 Introduction

With new and unique forms of interaction on the rise in the gaming industry, gamers and developers alike have shown interest in exploring new design territory beyond the conventions of traditional game interaction. Developers are no longer confined by the limitations of traditional keyboard- or controller-driven input, but rather, they are free to craft mechanics dependent on novel interactions such as motion control, eye tracking, and brain-computer interfaces. The promise of these technologies is multifaceted; in addition to their obvious novelty, non-traditional input may be used to more effectively mimic real-world interaction, enhancing immersion. Furthermore, these

S. N. Stahlke · J. D. Bellyk · O. R. Meier · P. Mirza-Babaei (B)
Ontario Tech University, 2000 Simcoe St N, Oshawa, ON L1H 7K4, Canada
e-mail: [email protected]

B. Kapralos
maxSIMhealth, Ontario Tech University, Oshawa, ON, Canada
e-mail: [email protected]

© Springer Nature Switzerland AG 2021
A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8_28


additional modes of interaction could be leveraged to ensure games are more accessible to individuals with difficulty using traditional input hardware. The ability of these technologies to improve game immersion and accessibility makes them particularly suited to serious gaming, where games are developed for practical purposes (e.g., education and training) beyond pure entertainment. In educational applications, for instance, learners may benefit from the increased immersion afforded by novel means of interaction, and increased immersion has been linked to increased student achievement [1]. Games for rehabilitation may take advantage of the accessibility of novel input technologies, allowing for less restrictive mobility requirements.
In this chapter, we present an overview of immersive technologies focused on applications in games and accessible computing technology. Our primary goal in conducting this review is to gather knowledge regarding the history and state of the art of research in the domain of novel game technologies using brain-computer interfaces (BCI), electroencephalography (EEG), and eye tracking. We also aim to investigate commercial offerings within these areas, such as mass-market "brain-training" games and titles that take advantage of recent consumer-level hardware advancements such as the NeuroSky brain-computer interface (BCI) headset. Given the relative novelty of these interfaces and their growing popularity in both academic and commercial applications, our goal is to review the current state of the art to gain an understanding of how the field may look moving forward. After collecting and reviewing a number of commercial and research-oriented games, we provide a set of suggested design considerations for gaming applications that employ these technologies in Sect. 28.3.

28.1.1 Review Process

Searches were conducted online on a semi-regular basis between September 30 and October 26, 2017. For the investigation of academic and research work, searches were primarily completed through Google Scholar, the University of Ontario Institute of Technology library database, and the ACM Digital Library. The majority of articles selected were identified using these databases and sourced from online libraries including IEEE Xplore, Springer, the ACM Digital Library, and ScienceDirect. Articles were selected from search results for further analysis based on the content of their titles, abstracts, and publishing venue. Of these articles, we assessed their relevance for inclusion in the final review based on a few key factors: alignment with our research objectives, ascertained from the goals of the research and the nature of any games involved; the perceived magnitude of the paper's contribution, judged by the significance of its findings, discussion quality, and citation count according to database sources; and the inclusion of technologies pertinent to our development goals, including brain-computer interfaces, EEG-based interaction, and eye tracking. Some articles were also selected based on references contained within articles found through our initial search queries.


Table 28.1 Summary of search phases conducted in gathering research and commercial work for inclusion in this review

Search phase | Date(s) of search (2017) | Goals of search | Example search terms
Exploratory | September 30 | Establish goals of review, find broad research pertaining to core goals | "bci games", "eeg games", "eye tracking games", "games for focus"
BCI and immersive technologies | October 14–15 | Investigate history and applications of BCI, evaluate EEG and eye tracking hardware, investigate games developed with immersive technologies | "history of bci", "eye tracking technology", "neurosky mindwave games", "emotiv epoc games", "tobii eye tracking games"
Refinement and expansion | October 24–26 | Find additional supporting research and commercial examples relevant to serious and immersive games | Combination of above, "EEG performance in youth", "EEG performance in elderly", "brainwave games", "Tobii eye tracker performance"

Please note that search terms given are a representative subsample of our complete search efforts

We also conducted searches on Google to obtain examples of commercial applications. For Sect. 28.2, we prioritized results from Tobii and NeuroSky, as they were particularly relevant given our development goals and available hardware resources. A summary of our inquiry phases is presented in Table 28.1.

28.2 Novel Game Interaction Using EEG and Eye-Tracking

A user's ability to interact with any game system is contingent upon the presence of some form of input model capable of translating player intentions, conveyed through motions such as key presses, muscle movement, or physiological signals, into in-game actions. Game input devices have evolved significantly over the past several decades, spawning a collection of new interaction modalities. Today, designers and players alike are free to explore the potential of technologies beyond standard arcade cabinets, mouse, keyboard, and gamepad controllers. Relatively recent consumer-level technologies such as the Oculus Rift,1 HTC Vive,2 Tobii Eye Tracker,3

1 https://www.oculus.com/. 2 https://www.vive.com/eu/. 3 https://www.tobii.com/.


Microsoft Kinect,4 and PlayStation Move5 have created a new possibility space for the design of interactive gaming experiences. Thorpe et al. [2] provide a thorough review of the history and development of different game input devices, as well as the effects of novel input methods on player experience. The authors conclude that non-traditional input paradigms, such as motion control and haptic feedback devices, can contribute to improvements in perceived immersion and fun when used in appropriate game contexts. In this section, we discuss the design and development of games exploring the use of brain-mediated controls and eye tracking, two key interaction schemes pertaining to our research goals. Following a review of the history and application of brain-computer interface (BCI) technology, we examine previous work in the domain of electroencephalography (EEG) and eye tracking for games.

28.2.1 Brain-Computer Interfaces

A brain-computer interface (BCI) is any computer interaction paradigm whereby input is achieved directly through the modulation of neural signals, independent of physical movement. Generally speaking, this is leveraged to create a communication system between the user and the device which can be controlled through voluntary mental action [3]. Mason and Birch [4] present a generalized design model describing the functional elements of BCI systems. In this model, the BCI is considered as a system comprising the human user, the device, and the operating environment. Communication between the user and the device is accomplished through a control interface connecting electrical and sensor components which transform the user's neural activity into usable device input. These components typically consist of electrodes, which convert brain states into electrical signals; amplifiers, which increase signal strength and apply bandpass filters; a feature extractor, which outputs values specific to a given control mechanism (e.g., brainwave frequency range); and a feature translator, which converts output from the feature extractor into logical signals independent of the device context. The result of implementing such a system is the ability to translate a user's mental state into digital inputs that can be interpreted at a hardware or software level, depending on the nature of the control interface involved, to control a device or software application. The first BCI systems were developed in the 1970s, driven by research interests in human–computer interaction (HCI), physiological sensors including electroencephalograms (EEG) and electromyograms (EMG), and military applications [3].
In the decades that followed, BCI research centred primarily on the development of assistive technologies aiding communication, quality of life, and exercise for individuals with physical handicaps, such as muscular impairment, and those with neurodevelopmental disorders, including autism [3].

4 https://www.xbox.com/en-CA/xbox-one/accessories/kinect.
5 https://www.playstation.com/en-ca/explore/accessories/vr-accessories/playstation-move/.
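The electrode–amplifier–extractor–translator chain described above can be made concrete with a small sketch. The example below is our own toy illustration, not part of the cited model: the sampling rate, frequency bands, and the relaxed/focused mapping are all assumptions, and a synthetic signal stands in for amplified, bandpass-filtered electrode output.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed, not device-specific)

def band_power(samples, lo, hi, fs=FS):
    """Feature extractor: mean spectral power in the [lo, hi] Hz band via FFT."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def translate(alpha, beta):
    """Feature translator: map band powers to a logical control signal."""
    return "relaxed" if alpha > beta else "focused"

# Synthetic one-second 'recording': a dominant 10 Hz (alpha-band) oscillation
# plus noise, standing in for cleaned-up electrode output.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
samples = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(FS)

alpha = band_power(samples, 8, 12)    # alpha band
beta = band_power(samples, 13, 30)    # beta band
print(translate(alpha, beta))         # the synthetic alpha rhythm dominates
```

The logical signal emitted by the translator is what a game or application would consume, independent of the particular headset producing it.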


Wolpaw et al. [5] presented one of the first BCI-based cursor control systems, developed as an assistive interface for individuals with severe muscular handicaps. More specifically, BCIs create a hands-free interface from the brain to the external environment, circumventing the use of peripheral muscles and limbs [6]. The technology presented employed electrodes placed on the scalp to measure users' mu rhythms, mapping the modulation of these signals to achieve linear cursor control. However, early BCI systems such as this required that users train their responses over the course of several sessions to achieve acceptable results. Wolpaw et al. [7] present a review of the evolution of BCI devices for assistive and communication technologies into the early twenty-first century. Examples of these applications include the ability to control traditional computer applications (e.g., word processors) and the operation of speech synthesizers for individuals with partial or complete paralysis. More recently, researchers and developers have explored the use of BCI technology in non-medical contexts, particularly in the realm of multimedia applications and digital games. Blankertz et al. [8] review several novel BCI implementations, including text processing, web browsers, digital art software, arcade machines, and digital games. Marshall et al. [9] provide a comprehensive analysis of BCI technologies used in games, including a genre-based review of existing applications and design recommendations for games using BCI technology. Figure 28.1 showcases some examples of BCI games developed for research purposes.

28.2.2 EEG in Games

The use of player brainwaves and EEG as a viable form of non-traditional game input has increased in recent years with the advent of readily available consumer-grade hardware. Three of the most popular options include the NeuroSky MindWave,6 the Emotiv Epoc+,7 and Emotiv Insight headsets, wearable devices that use passive sensors to translate users' brainwave activity into readable digital signals. While the MindWave and Insight headsets are advertised as primarily consumer-level devices, the Epoc+ is targeted primarily at research applications, offering higher resolution and more complete datasets. Nijboer et al. [13] conducted a usability study comparing the effectiveness of the Biosemi (gelled electrodes), Emotiv EPOC (semi-dry electrodes), and g.Sahara (dry electrodes) headsets across measures of set-up time, accuracy, comfort, and aesthetics. Researchers concluded that the EPOC was generally easiest to set up, though trade-offs existed between ease of initial set-up and signal strength or the accuracy of data collected. Furthermore, the EPOC was perceived as uncomfortable, with the Biosemi headset rated as most effective overall. However, the set-up overhead of devices like the Biosemi (i.e., those incorporating a full scalp cap and gelled electrodes) may make them impractical. Moreover, if the number of electrodes is held


Fig. 28.1 Examples of BCI games, clockwise from top left: balancing game using steady-state visual evoked potential (SSVEP) to control avatar movement [10], tower defence game using SSVEP-controlled user interface (UI) elements to control in-game actions [11], Tetris implementation using event-related potentials (ERP) and sensorimotor rhythms to control block movement [12]. Figures reproduced with author permission

consistent, dry electrodes can often provide comparable results in terms of accuracy [13]. Ekandem et al. [14] examine the usability of the Emotiv EPOC and NeuroSky MindWave in research contexts. While the MindWave was found to have a shorter average set-up time and was less prone to early signal loss than the EPOC, it was also reported as less comfortable on average. Furthermore, after an initial connection was established, the MindWave resulted in more frequent signal fluctuations, which may pose a problem if consistently high input accuracy is required. The authors conclude that the MindWave's strengths with respect to signal acquisition and set-up make it a prime candidate for multi-user studies, while the EPOC is better suited to long-term use. Non-conventional forms of game input have also been found to promote increased player engagement, contributing to an overall more positive player experience. Alchalcabi et al. [15] examined engagement and focus metrics for players using the Emotiv EPOC+ versus traditional mouse and keyboard input for a serious game designed to improve focus in individuals with ADHD. In a pilot study evaluating the


application, it was found that players were approximately 10% more engaged and focused when using the Emotiv headset, as opposed to conventional game input. Several games and applications made for commercial and research purposes have employed EEG as a form of input for user control. The intention of these works ranges from pure entertainment, to exploratory research investigations, to behavioural training and serious games. In the following examples, we examine a range of applications in the domain of gaming and simulation for the use of EEG as a form of user input.
Gudmundsdottir [16] proposed a hint-based utility system for training users to familiarize themselves with the attention and meditation controls of devices like the NeuroSky headset. Techniques such as deep breathing, calm music, and mantra repetition were found to effectively help "train" participants to increase their meditation levels. While the conclusions presented in this work are fairly preliminary, the need for such utilities suggests that calibration and training can prove important factors in the ability of users to effectively take advantage of BCI technology in a gaming context.
An early attempt at creating an EEG-driven game was presented by Lalor et al. [10], in which players attempted to control an avatar's balance by looking to either side of the character (see Fig. 28.1). EEG was used to measure steady-state visual evoked potential (SSVEP), a phenomenon observed in response to a visual stimulus that flickers at a particular rate, to determine which direction the player wanted the character to lean. To generate the appropriate response, two phase-reversing checkerboard patterns were presented at either side of the avatar. Researchers achieved a relatively high control accuracy, 89% on average, by using a fast Fourier transform (i.e., frequency decomposition) to interpret EEG data.
While the NeuroSky API allows for the abstraction of such calculations, it is vital to recognize the importance of data transformation in successfully using EEG signals as a form of game input.
van Vliet et al. [11] developed a BCI-based tower defence game using the Emotiv EPOC headset and the Unity engine (see Fig. 28.1). Interaction with the game was based on SSVEP signals generated in response to UI elements. They found that most participants (87.5%) in a controlled setting were able to achieve complete control over the game, while many participants struggled to achieve control when testing the game in a public setting with many uncontrollable external stimuli. Moreover, participants indicated that the long-term use of SSVEP as a control method became tiring, calling into question its effectiveness as a solution for full game experiences. Since SSVEP is generally used to identify an entity in the user's visual focus, this functionality can effectively be replaced with eye-tracking technology (see Sect. 28.2.3), reserving less taxing methods of EEG control (e.g., the attention and meditation measures of the NeuroSky MindWave) for brain-based game interactions.
NeuroWander [17] was a fairytale-themed game developed using the NeuroSky MindSet (now discontinued). In NeuroWander, players progress through a predefined storyline based on the events of Hansel and Gretel, as shown in Fig. 28.2. To advance past certain story points, players must modulate their attention and meditation levels, two values precalculated by the NeuroSky API.
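To illustrate how such precalculated values might gate gameplay, the sketch below implements a simple story gate in the spirit of NeuroWander. NeuroSky's eSense attention and meditation meters are reported on a 0–100 scale; the event names and thresholds here are purely our own illustrations, not taken from the game.

```python
# eSense attention and meditation values arrive as integers on a 0-100 scale;
# the event names and gate thresholds below are illustrative only.
ATTENTION_GATE = 70
MEDITATION_GATE = 60

def story_gate(event, attention, meditation):
    """Decide whether a story beat unlocks given the current eSense readings."""
    if event == "focus_beat":      # e.g., concentrate to light the oven
        return attention >= ATTENTION_GATE
    if event == "relax_beat":      # e.g., stay calm to sneak past the witch
        return meditation >= MEDITATION_GATE
    return False

print(story_gate("focus_beat", attention=82, meditation=30))   # True
print(story_gate("relax_beat", attention=82, meditation=30))   # False
```

Because the headset API already reduces raw EEG to these two scalar meters, the game logic itself can remain trivially simple.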


Fig. 28.2 Examples of games supporting the NeuroSky MindWave, clockwise from top left: Dagaz (https://store.neurosky.com/products/dagaz), an EEG-driven mandala generator intended to promote player relaxation; BlinkShot (https://store.neurosky.com/products/blinkshot), an arcade-type game where players concentrate and blink to fire a plasma cannon; FlappyMind (https://store.neurosky.com/products/flappymind-android), an arcade-style experience where players move a flying brain with their minds; 28 Spoons Later (https://mindgames.is/ios-games/28-spoons/), a focus-training game where players bend virtual spoons with their minds; Invaders Reloaded (https://store.neurosky.com/products/invaders-reloaded), an arcade shooter that uses player concentration to modulate weapon power in a space-themed setting

NeuroSky [18] lists a number of popular EEG-based games developed to work with the MindWave headset. Enumerated in Fig. 28.2, these titles display a range of designs spanning the gamut of real-time action to simulations for increasing relaxation, demonstrating the versatility of BCI-based game control.

Fig. 28.3 Screenshots from Rabbit Run [19], where players use eye tracking and voice commands to navigate through a virtual maze. Reprinted with author permission


Though devices facilitating BCI in games have yet to become ubiquitous in the consumer space, they present interesting opportunities and challenges for designers to create uniquely engaging and immersive interactions.

28.2.3 Eye-Tracking Technology

Another relatively novel form of input available for use in the domains of HCI and gaming is eye tracking, facilitated through hardware capable of determining a user's current gaze location. This may be used, for example, to control an on-screen cursor, select UI elements, or assess view patterns. Currently, the most prominent consumer-level solutions for eye tracking in game applications are produced by Tobii,8 including the Eye Tracker 4C and the EyeX. These devices work by projecting near-infrared light onto a user's eyes and using sensors to interpret reflection patterns, determining the user's on-screen gaze point [20]. Device APIs allow developers to use this information for application input, including, for instance, UI interaction, avatar movement, or in-game aiming. Jacob and Karn [21] present a comprehensive review of eye tracking work in the domain of HCI, noting its application as an alternative form of control for multitasking applications or users with disabilities. Modern eye tracking research dates back to the 1970s, though mechanical attempts to track eye movement were recorded as early as the late nineteenth century. Many early applications of eye tracking in HCI research were centred on improving usability studies by monitoring gaze fixation and tracking participants' scanning of various user interfaces [21]. More recently, eye tracking has been investigated as a form of input for digital games. Cheng and Vertegaal [22] compared the accuracy, reliability, and robustness of an early Tobii eye tracker, the ET-17, with the LC Eyegaze. Study results indicated that Tobii's technology was more robust in terms of compensating for head movement and allowing a larger effective workspace, making it an effective option for the majority of users.
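Since tracker APIs ultimately report an on-screen gaze point, mapping gaze to interface elements reduces to a hit test. The sketch below illustrates this with a hypothetical two-button layout; the coordinates and element names are invented for illustration and not tied to any particular SDK.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned screen rectangle in pixels."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

# Illustrative UI layout (invented coordinates, not from any real game).
UI = {
    "play": Rect(100, 100, 200, 80),
    "quit": Rect(100, 220, 200, 80),
}

def element_under_gaze(gaze_x, gaze_y, ui=UI):
    """Map a tracker's reported gaze point to the UI element beneath it."""
    for name, rect in ui.items():
        if rect.contains(gaze_x, gaze_y):
            return name
    return None

print(element_under_gaze(150, 140))  # gaze on the play button
print(element_under_gaze(900, 900))  # gaze off all elements -> None
```

In a real application, the gaze point would typically be smoothed over several frames before the hit test, since raw gaze samples jitter around fixation points.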
Smith and Graham [23] worked with a later iteration of Tobii's eye tracker, the 1750, to develop demonstrations of gaze-based control for player orientation in a first-person shooter (FPS), avatar movement in a third-person role-playing game (RPG), and targeting in an arcade-style game. Researchers concluded that, on the whole, participants found eye tracking controls more immersive than their mouse-based counterparts. Kos'myna and Tarpin-Bernard [24] conducted a comparison of multiple input modalities for a puzzle game using combinations of mouse, eye tracking, and BCI-based control, including SSVEP. Player interaction in the game depended upon the selection of interface elements to rotate a series of picture tiles; actions occurred over timed phases of tile selection and rotation, with user selection ascertained through mouse movement, eye tracking, and/or brain activity. Participants generally agreed

8 https://tobiigaming.com/.


that an approach based purely on eye tracking was easiest to use and yielded the least fatigue, demonstrating the effectiveness of gaze tracking in interface selection tasks.
Sundstedt [25] presented a comprehensive review of the use of eye tracking for character control in games, noting the technology's promising applications in game accessibility and the importance of explicitly accounting for the nature of eye tracking input during the design process. Of particular interest is the so-called Midas Touch problem, where users are incapable of looking anywhere without issuing some form of command. To overcome this issue, it is often recommended that eye tracking be coupled with a secondary form of input, such as keyboard bindings or mouse clicking.
Eye tracking has been investigated both as a basis for the development of new experiences and as an augmentation to existing games. Nacke et al. [26] examined the use of eye tracking as a navigational aid for custom levels developed in the Half-Life 2 Source SDK.9 The authors were particularly interested in evaluating aspects of subjective player experience, rather than investigating the accuracy of the technology involved. Participants reported high scores for flow, immersion, and positive affect, attesting to the effectiveness of eye tracking as an input mode for games. O'Donovan et al. [19] created Rabbit Run, a gaze- and voice-controlled game where players escape from a maze while collecting coins to earn points, as shown in Fig. 28.3. While participants reported some difficulty in navigating with non-traditional input, they reported increased immersion and enjoyment when interacting via gaze and voice rather than through mouse and keyboard. These results suggest that even though technical issues may inhibit performance for some users, the medium overall presents an added dimension of fun to the game experience.
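One common way to soften the Midas Touch problem described above is to require either a sustained dwell or an explicit confirmation input before a gaze becomes a command. The sketch below is our own illustration of that idea; the dwell threshold and element names are invented values, not drawn from any of the cited systems.

```python
from typing import Optional

DWELL_SECONDS = 0.8   # illustrative dwell threshold; tune per application

class GazeSelector:
    """Fires a selection only after sustained gaze or an explicit confirm input."""

    def __init__(self):
        self.target: Optional[str] = None
        self.dwell = 0.0

    def update(self, gazed_target, dt, confirm_pressed=False):
        """Call once per frame with the gazed element and the frame time dt."""
        if gazed_target != self.target:      # gaze moved: restart the timer
            self.target, self.dwell = gazed_target, 0.0
            return None
        self.dwell += dt
        if self.target and (confirm_pressed or self.dwell >= DWELL_SECONDS):
            self.dwell = 0.0
            return self.target               # selection event fires
        return None

sel = GazeSelector()
sel.update("play_button", 0.0)                   # gaze lands on the button
assert sel.update("play_button", 0.4) is None    # glancing is not selecting
print(sel.update("play_button", 0.5))            # dwell reached: selection fires
```

Coupling the dwell timer with a confirm key gives both interaction styles from one code path: hands-free users rely on dwell, while others can confirm instantly.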

28.3 Limitations and Design Recommendations

The Tobii EyeX eye tracker and NeuroSky MindWave EEG headset are both impressive feats of modern gaming technology. However, their relative novelty means that many players may be unfamiliar with such interaction modalities, posing potential obstacles for player learning and for creating sufficiently intuitive control schemes. With respect to EEG-based focus control, players' initial and long-term capabilities may conceivably vary based on factors relating to age, experience, and emotion. As a result, developers may plan to implement an adaptive difficulty scheme, supported by a calibration phase that adjusts the game's mechanics to compensate for the abilities of different users. This may provide a more personalized experience that is accessible to a diverse player population. Limitations of input hardware may also pose difficulties in optimizing player interaction. Machkovech [27] of Ars Technica notes that factors such as the precise positioning and orientation of both eye-tracking devices and users can impose limitations on effective interaction in a natural working space. Comfort may also prove


Table 28.2 Summary of the suggested guidelines to improve the usability of games integrating BCI and eye tracking technologies

Guideline | Brief description
Feasibility of short playtimes | The core game structure should be based on a number of short, discrete levels
Availability of alternative inputs | Provide the option of using keyboard controls to imitate neurofeedback
Simplicity of input interpretation | Avoid overly complex analysis of consumer-level headset data
Tutorial and/or calibration | Provide a tutorial and/or calibration stage sufficient to "train" users in the use of the headset
Multimodal game interaction | Combine the use of eye tracking with an alternative input (e.g., keypresses) for selection confirmation, button presses, etc.
Accessibility options for hands-free operation | Provide options for individuals incapable of using traditional input devices

to be an issue for players using an eye tracker and headset for an extended period of time; flexibility in playtime or session length may help to mitigate this concern.
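The calibration phase suggested above for adaptive difficulty can be as simple as recording a short attention baseline and deriving a per-player target from it. The sketch below is an illustrative approach of our own; the quantile choice and sample values are invented.

```python
def calibrate_threshold(baseline_attention, quantile=0.75):
    """Derive a per-player attention target from a short calibration recording.

    baseline_attention: eSense attention samples (0-100 scale) gathered while
    the player simply wears the headset; the quantile is an illustrative choice.
    """
    ordered = sorted(baseline_attention)
    index = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[index]

# A relaxed player and a naturally focused player get different targets, so
# the same game demands a comparable relative effort from each of them.
relaxed = [30, 35, 40, 38, 42, 33, 36, 41]
focused = [60, 72, 68, 75, 70, 66, 71, 74]
print(calibrate_threshold(relaxed))   # lower bar for this player
print(calibrate_threshold(focused))   # higher bar for this player
```

Re-running this calibration at the start of each session would also absorb day-to-day variation in signal quality and player state.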

28.3.1 Design Considerations for BCI and Eye Tracking in Games

Based on our survey of existing games and usability studies outlined in Sect. 28.2, we propose the guidelines below to improve the usability of games integrating BCI and eye tracking technologies. A summary of these guidelines is provided in Table 28.2.

Feasibility of Short Playtimes. We suggest structuring the core game around a number of short, discrete levels, allowing users to easily pick up and put down the game for relatively short play sessions if desired. The viability of this design choice is enhanced by the relatively short set-up time of devices such as the MindWave, and it serves two primary purposes: (i) it helps users to establish a routine of short playtimes supporting long-term habitual use; and (ii) it affords users the ability to take a break easily if they experience discomfort or excessive signal loss while using the headset.

Availability of Alternative Inputs. To account for potential device malfunction or discomfort, it may be useful to offer keyboard controls that imitate neurofeedback, allowing players to enjoy the core game experience without requiring a constant connection. While this mode will not directly support attention training, it will


allow players who experience difficulties with the BCI to enjoy the game while still taking advantage of eye tracking support.

Simplicity of Input Interpretation. Attempting to perform overly complex analysis of data obtained using consumer-level BCIs will likely prove to be an overly demanding and potentially error-prone task. This may prove particularly troublesome in the event of intermittent signal loss. To help mitigate the risk of overestimating the device's data resolution capabilities, one may employ the pre-calculated values for attention and meditation provided by the NeuroSky API.

Tutorialization and/or Calibration. To ensure that users fully understand the interaction required and feel comfortable using the MindWave as a control device, we will need to provide a tutorialization and/or calibration stage sufficient to "train" users in the use of the headset. Alternatively, for our prototype user studies, we should make sure to give users time to get used to the headset with the visualization tools provided by NeuroSky (e.g., MindWave Mobile Tutorial, BrainWave Visualizer).

Based on our review of existing work with eye tracking and the strengths and weaknesses of the technique as described earlier in the section, we present the following design takeaways:

Multimodal Game Interaction. To help avert the Midas Touch problem discussed in Sect. 28.2.3, and to ensure that the navigation of menus and other game interfaces is intuitive, we suggest combining the use of eye tracking with an alternative input (e.g., keypresses) for selection confirmation, button presses, etc. This may minimize user frustration and reduce the amount of instruction required to train users unfamiliar with the device.

Accessibility Options for Hands-Free Operation.
While the combination of eye tracking with mouse and/or keyboard input can successfully streamline interaction for users without physical impairments, we suggest including options for individuals unable to use traditional input devices. For example, an alternate form of "button pressing" could be integrated through strong blink detection, providing a hands-free input setting for users with motor impairments.
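The alternative-input, input-interpretation, and multimodal-confirmation guidelines above can be sketched as a small input-abstraction layer. The sketch below is illustrative only: `read_attention` and `key_pressed` are hypothetical callbacks standing in for real NeuroSky/Tobii SDK calls, and the thresholds and key names are arbitrary assumptions, not part of any vendor API. It shows (i) falling back to keyboard keys that imitate neurofeedback when the headset signal drops, (ii) smoothing the device's pre-calculated 0-100 attention value rather than analysing raw EEG, and (iii) multimodal confirmation (keypress or strong blink) so that gaze alone never triggers a selection.

```python
class HybridInput:
    """Sketch of a hybrid BCI + eye-tracking input layer.

    Assumptions (hypothetical, not a real API): `read_attention()` returns
    the headset's pre-computed 0-100 attention value, or None on signal
    loss; `key_pressed(key)` polls the keyboard. A real integration would
    wrap the NeuroSky and eye-tracker SDKs behind these two callbacks.
    """

    def __init__(self, read_attention, key_pressed, alpha=0.2):
        self.read_attention = read_attention
        self.key_pressed = key_pressed
        self.alpha = alpha        # smoothing factor for the moving average
        self.smoothed = 50.0      # neutral starting attention level

    def attention(self):
        """Return a smoothed 0-100 attention level.

        Uses the device's pre-calculated value when available (Simplicity
        of Input Interpretation); falls back to keyboard keys that imitate
        neurofeedback when the signal drops (Availability of Alternative
        Inputs).
        """
        raw = self.read_attention()
        if raw is None:  # signal loss or no headset: imitate neurofeedback
            if self.key_pressed("up"):
                raw = 80.0
            elif self.key_pressed("down"):
                raw = 20.0
            else:
                raw = self.smoothed  # hold the last level
        # Exponential moving average damps single-sample noise.
        self.smoothed += self.alpha * (raw - self.smoothed)
        return self.smoothed

    def confirm(self, gaze_on_target, blink_strength=0.0):
        """Multimodal confirmation to avoid the Midas Touch problem:
        a gaze-hovered target activates only with an explicit signal,
        either a 'select' keypress or, hands-free, a strong blink."""
        if not gaze_on_target:
            return False
        return self.key_pressed("select") or blink_strength > 0.7
```

In use, the game loop would poll `attention()` each frame to drive the neurofeedback mechanic and call `confirm()` only for gaze-hovered UI elements, so menus never activate on dwell alone.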

28.4 Conclusion

The prevalence of serious gaming in modern applications as a delivery mechanism for education and cognitive training presents promising opportunities for game developers. The goals of serious game design encompass both domain objectives, such as the conveyance of particular knowledge, and universal game design considerations, including immersion and engagement. Such objectives are supported by the coincident rise of immersive technologies such as augmented and virtual reality, motion controls, consumer-level brain-computer interfaces, and eye tracking. A substantial interest in these innovations has already led to the development of several serious and entertainment games using novel input devices to explore the possibilities of using these technologies for interactive applications.

The application of immersive technologies in serious and entertainment gaming poses interesting challenges for researchers and designers with respect to usability and user experience. Developers wishing to work with novel input methods must deal with obstacles posed by technical device limitations, users' lack of familiarity, and basic concerns like user comfort. Despite these challenges, a continued exploration of these technologies is warranted by their potential to improve digital game accessibility and immersion. Perhaps this exploration will serve to revolutionize our understanding of interactive design as developers and researchers alike continue to investigate this new frontier.

References

1. Shute, V.J., Ventura, M., Bauer, M., Zapata-Rivera, D.: Melding the power of serious games and embedded assessment to monitor and foster learning. In: Ritterfeld, U., Cody, M., Vorderer, P. (eds.) Serious Games: Mechanisms and Effects, pp. 295–321. Routledge, New York (2009)
2. Thorpe, A., Ma, M., Oikonomou, A.: History and alternative game input methods. In: Proceedings of the 16th International Conference on Computer Games, pp. 76–93 (2011). https://doi.org/10.1109/CGAMES.2011.6000321
3. Vallabhaneni, A., Wang, T., He, B.: Brain-computer interface. In: He, B. (ed.) Neural Engineering, pp. 85–121. Springer US (2005)
4. Mason, S.G., Birch, G.E.: A general framework for brain-computer interface design. IEEE Trans. Neural Syst. Rehabil. Eng. 11(1), 70–85 (2003). https://doi.org/10.1109/TNSRE.2003.810426
5. Wolpaw, J.R., McFarland, D.J., Neat, G.W., Forneris, C.A.: An EEG-based brain-computer interface for cursor control. Electroencephalogr. Clin. Neurophysiol. 78(3), 252–259 (1991). https://doi.org/10.1016/0013-4694(91)90040-B
6. Ramaswamy, P.: Electroencephalogram-based brain-computer interface: an introduction. In: Miranda, E.R., Castet, J. (eds.) Guide to Brain-Computer Music Interfacing, pp. 29–41. Springer, London (2014)
7. Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M.: Brain-computer interfaces for communication and control. Clin. Neurophysiol. 113(6), 767–791 (2002). https://doi.org/10.1016/S1388-2457(02)00057-3
8. Blankertz, B., Tangermann, M., Vidaurre, C., Fazli, S., Sannelli, C., Haufe, S., Maeder, C., Ramsey, L., Sturm, I., Gabriel, C., Muller, K.: The Berlin brain-computer interface: non-medical uses of BCI technology. Front. Neurosci. 4, 198 (2010). https://doi.org/10.3389/fnins.2010.00198
9. Marshall, D., Coyle, D., Wilson, S., Callaghan, M.: Games, gameplay, and BCI: the state of the art. IEEE Trans. Comput. Intell. AI Games 5(2), 82–99 (2013). https://doi.org/10.1109/TCIAIG.2013.2263555
10. Lalor, E.C., Kelly, S.P., Finucane, C., Burke, R., Smith, R., Reilly, R.B., McDarby, G.: Steady-state VEP-based brain-computer interface control in an immersive 3D gaming environment. EURASIP J. Appl. Signal Process. 2005, 3156–3164 (2005). https://doi.org/10.1155/ASP.2005.3156


11. van Vliet, M., Robben, A., Chumerin, N., Manyakov, V., Combaz, A., Van Hulle, M.M.: Designing a brain-computer interface controlled video-game using consumer grade EEG hardware. In: Proceedings of the Biosignals and Biorobotics Conference (2012). https://doi.org/10.1109/BRC.2012.6222186
12. Pires, G., Torres, M., Casaleiro, N., Nunes, U., Castelo-Branco, M.: Playing Tetris with non-invasive BCI. In: Proceedings of IEEE SeGAH 2011 (2011). https://doi.org/10.1109/SeGAH.2011.6165454
13. Nijboer, F., van de Laar, B., Gerritsen, S., Nijholt, A., Poel, M.: Usability of three electroencephalogram headsets for brain-computer interfaces: a within subject comparison. Interact. Comput. 27(5), 500–511 (2015). https://doi.org/10.1093/iwc/iwv023
14. Ekandem, J.I., Davis, T.A., Alvarez, I., James, M.T., Gilbert, J.E.: Evaluating the ergonomics of BCI devices for research and experimentation. Ergonomics 55(5), 592–598 (2012). https://doi.org/10.1080/00140139.2012.662527
15. Alchalcabi, A.E., Eddin, A.N., Shirmohammadi, S.: More attention, less deficit: wearable EEG-based serious game for focus improvement. In: 2017 IEEE 5th International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–8, Perth, WA (2017). https://doi.org/10.1109/SeGAH.2017.7939288
16. Gudmundsdottir, K.: Improving players' control over the NeuroSky brain-computer interface. Undergraduate Research Thesis, Reykjavik University, Iceland (2011)
17. Yoh, M., Kwon, J., Kim, S.: NeuroWander: a BCI game in the form of interactive fairy tale. In: 12th ACM International Conference Adjunct Papers on Ubiquitous Computing, pp. 389–390 (2010). https://doi.org/10.1145/1864431.1864450
18. NeuroSky: EEG games top 5 list: playing with your brainwaves (2015). Retrieved from https://neurosky.com/2015/09/eeg-games-top-5-list-playing-with-your-brainwaves/
19. O'Donovan, J., Ward, J., Hodgins, S., Sundstedt, V.: Rabbit Run: gaze and voice based game interaction. In: Proceedings of the 9th Irish Eurographics Workshop (2009)
20. Tobii AB: Eye tracking in gaming, how does it work? (2017). Retrieved from https://help.tobii.com/hc/en-us/articles/115003295025-Eye-tracking-in-gaming-how-does-it-work
21. Jacob, R.J.K., Karn, K.S.: Eye tracking in human-computer interaction and usability research: ready to deliver the promises (Commentary on Section 4). In: Hyona, J., Radach, R., Deubel, H. (eds.) The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, pp. 573–605. Elsevier Science, Amsterdam (2003)
22. Cheng, D., Vertegaal, R.: An eye for an eye: a performance evaluation comparison of the LC Technologies and Tobii eye trackers. In: Proceedings of ETRA '04, p. 61 (2004). https://doi.org/10.1145/968363.968378
23. Smith, J.D., Graham, T.C.N.: Use of eye movements for video game control. In: Proceedings of ACE '06, Article No. 20 (2006). https://doi.org/10.1145/1178823.1178847
24. Kos'myna, N., Tarpin-Bernard, F.: Evaluation and comparison of a multimodal combination of BCI paradigms and eye tracking with affordable consumer-grade hardware in a gaming context. IEEE Trans. Comput. Intell. AI Games 5(2), 150–154 (2013). https://doi.org/10.1109/TCIAIG.2012.2230003
25. Sundstedt, V.: Gazing at games: using eye tracking to control virtual characters. In: ACM SIGGRAPH 2010 Courses, vol. 5(1–5), 160 (2010)
26. Nacke, L., Stellmach, S., Sasse, D., Lindley, C.A.: Gameplay experience in a gaze interaction game. Proc. COGAIN 2009, 49–54 (2009). arXiv:1004.0259
27. Machkovech, S.: Augmenting the FPS: how well does Tobii track your gaze in a video game? Ars Technica (2016). Retrieved from https://arstechnica.com/gaming/2016/08/augmenting-the-fps-how-well-does-tobii-track-your-gaze-in-a-video-game/

Glossary and Acronyms

© Springer Nature Switzerland AG 2021. A. L. Brooks et al. (eds.), Recent Advances in Technologies for Inclusive Well-Being, Intelligent Systems Reference Library 196, https://doi.org/10.1007/978-3-030-59608-8

Adaptive architecture: Buildings that can change their properties to adapt to different environments or users.
ADHD: Attention deficit hyperactivity disorder; a behavioural condition that makes focusing on everyday requests and routines challenging.
ADL: Activities of daily living.
Animated architecture: Buildings that can change their properties in real time according to input from users or the surrounding environment.
Artificial intelligence: The study and design of intelligent systems (agents) able to achieve goals through intelligent behaviour.
AS: Algorithmic Strategy; a combination of elementary functions needed to express behaviour.
Assistive music-technology: Where assistive technology refers to technology designed to enable a user to engage with activities that might ordinarily be challenging due to individual needs, assistive music-technology refers to those assistive technologies that are focused on music making.
Augmented reality: The addition of computer-generated objects to the real physical space to augment the elements comprising it.
Avatar: A graphical representation, typically three-dimensional, of a person capable of relatively complex actions, including facial expression and physical responses, while participating in a virtual SBE. The user controls the avatar through a mouse, keyboard, or a type of joystick to move through the virtual SBE.
Bubble-tube: A common sensory device; essentially a tall cylinder of water with a stream of air bubbles rising from the bottom. Bubble-tubes are often equipped with coloured lighting.
CP: Cerebral palsy.
DIY/hacker musician: Someone who adapts and reconfigures audio-technologies to create new and unusual sound-generators/instruments.
DJ: An abbreviation for Disc Jockey. Originally referring to the person who would select and play the music at a disco, the term has more recently broadened to


include elements of music performance where the DJ will mix, adapt and create music within a live environment.
DMI: Digital musical instrument.
Ecological validity: To establish the ecological validity of a neuropsychological measure, the neuropsychologist focuses upon demonstrations of either (or both) verisimilitude and veridicality. By verisimilitude, ecological validity researchers emphasize the need for the data collection method to be similar to real-life tasks in an open environment. For the neuropsychological measure to demonstrate veridicality, the test results should reflect and predict real-world phenomena.
EEG: Electroencephalography; recording of the electrical activity of neurons in the cortex through electrodes placed on the scalp.
Engagement: Being fully occupied in, and giving your full attention, curiosity, interest, optimism, and passion to, a task or activity.
Experience-based plasticity: The ability of the nervous system to respond to intrinsic or extrinsic stimuli through a reorganization of its internal structure.
Extended reality (XR): A term referring to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables, where the 'X' represents a variable for any current or future spatial computing technologies.
Fibromyalgia: A long-term condition characterized by chronic widespread pain throughout the body. The pain is allodynic (a heightened and painful response to pressure). The condition also often includes a range of other symptoms and as a result is often referred to as Fibromyalgia Syndrome.
fMRI: Functional Magnetic Resonance Imaging; a brain imaging technique that depicts brain activity by detecting increased blood flow in regions of the brain, an increase associated with increased neural activity and metabolism.
Frontostriatal system: The frontostriatal system is responsible for executive functions and supervisory attentional processing. In neurodevelopmental disorders that disrupt executive functioning, a heterogeneous pattern of deficits emerges, including impulsivity, inhibition, distractibility, perseveration, decreased initiative, and social deficits. These cognitive symptoms are characteristic of pervasive developmental disorders such as attention-deficit hyperactivity disorder and autism.
Gamification: The careful and considered integration of game characteristics, aesthetics and mechanics into a non-game context to promote change in behaviour. It is most often used to motivate and engage people.
GSR: Galvanic Skin Response; the analysis of skin conductivity, which provides an indication of psychological or physiological arousal. Changes in the skin's moisture level resulting from sweat glands are controlled by the sympathetic nervous system; as a result, GSR can provide a rapid indication of a subject's stress levels.
Haptic: Relating to active tactile interaction of the kind that might exist within a human-computer interface.
HMD: A Head Mounted Display is a display system worn on the head that presents views to one or both eyes. In immersive virtual reality this is both eyes, often using stereoscopic displays to create the illusion of depth through parallax. Other HMD systems present 'augmented reality', where views of the real and virtual worlds are combined.
Immersion: The sensation of being in a computer-generated world created by surrounding hardware providing sensory stimuli. Can be a purely mental state or can be accomplished through physical means.
Immersive technology: Devices that provide sensory stimuli to lend a sense of realism and immersion to interactions with the computer-generated world.
Infinity tunnel: A sensory device that uses a combination of LEDs and mirrors to create the illusion of a never-ending tunnel of lights.
Intelligent architecture: Like animated architecture, but with a set of short- and/or long-term goals on which it bases its actions.
Intensive interaction: As defined by the Intensive Interaction Institute, "Intensive interaction is an approach to teaching the pre-speech fundamentals of communication to children and adults who have severe learning difficulties and/or autism and who are still at an early stage of communication development".
Inverse kinematics: A mathematical system used to calculate the position of various joints and limbs relative to the position of a particular part of a 'body'. Such techniques allow animators to move the hand of a 3D human model to a desired position and orientation, after which an algorithm selects the appropriate angles and positions for the wrist, elbow and shoulder joints.
IVE: Immersive virtual environments. Environments that immerse their users in virtual simulations.
IVR: Immersive virtual reality. This can take a number of forms, including single-user and multi-user systems; however, in each instance, rather than an image being presented on a screen as a window upon another world, users occupy the virtual world either through the projection of that world onto surfaces surrounding the viewer (such as CAVEs) or by wearing technology such as a Head Mounted Display.
Life-like architecture: Buildings that can change their properties in real time according to input from users or the surrounding environment, similar to a living organism.
Light wheel: Originally made for early discotheques, a light wheel uses a rotating disc of coloured lighting gels to project constantly changing patterns onto a suitable surface, e.g. a white wall.
Mixed reality: The seamless integration of computer-generated graphics and real objects.
MSE: Multisensory environment.
Neglect: An attention deficit characterized by an inability to respond to or orient towards objects in the contralesional space which cannot be attributed to visual impairments.
Neuropsychological assessment: A neuropsychological assessment typically evaluates multiple areas of cognitive and affective functioning. In addition to


measures of intelligence and achievement, it examines a number of areas of functioning that also have an impact on performance in activities of daily living.
PAT: Prism Adaptation Therapy; a therapy for patients suffering from the impairment neglect. The patient is exposed to prism-induced distortion of visual input during pointing activity.
Perimetry: Tests designed to measure the function of the visual field of the eye excluding the central field of vision (fovea).
PMLD: Profound and multiple learning difficulties.
Presence: The feeling of being immersed in a computer-generated world.
PTSD: Post-Traumatic Stress Disorder; a psychological reaction that occurs after experiencing a highly stressful event outside the range of normal human experience, usually characterized by depression, anxiety, flashbacks, recurrent nightmares, and avoidance of reminders of the event.
PVC: Polyvinyl chloride; a commonly produced type of plastic that is available in both flexible and rigid forms.
Rebound room: An area designed to accommodate rebound therapy, which typically includes a sunken trampoline surrounded by soft furnishings.
REF: Reorganization of Elementary Functions; a model of the possible mechanisms behind recovery of function in rehabilitation.
Repurposed technology: Technology that is being used in a way it was not originally designed for, e.g. a gaming controller being used within an electronic musical instrument.
Resonance board: A flat wooden board that amplifies sounds as someone explores the surface with their hands, e.g. scratching, tapping.
Responsive space: Space that has similar qualities to animated architecture.
Sensory space: A generic term of reference for an area that is designated for sensory activities, also described as a multisensory environment.
Serious game: A video game whose primary purpose is education, training, advertising, or simulation, as opposed to entertainment.
Simulation: An educational strategy in which a particular set of conditions is created or replicated to resemble authentic situations that are possible in real life. Simulation can incorporate one or more modalities to promote, improve, or validate a participant's performance.
Simulation-Based Experience(s): A broad array of structured activities that represent actual or potential situations in education, practice, and research. These activities allow participants to develop or enhance knowledge, skills, and/or attitudes and provide an opportunity to analyze and respond to realistic situations in a simulated environment.
SNE: Special needs education.
Snoezelen: Commercial realisation of the sensory room as originally conceived by Hulsegge and Verheul.
Soundbeam: A non-contact approach to triggering and manipulating sound using one or more ultrasound beams. Originally created to enable dancers to produce sound based on their own movements, Soundbeam is an item of assistive music technology commonly found in special needs schools in the UK.
VBI: Vision-Based Interfaces.


Velcro: Registered trade name of the main manufacturer of hook-and-loop fastener, as used for rapid fastening.
Visuomotor: Eye-to-hand activity and coordination.
Virtual human: Virtual humans consist of artificially intelligent, graphically rendered characters that have realistic appearances, can think and act like humans, and can express themselves both verbally and non-verbally. Additionally, these virtual humans can listen, understand natural language, and see or track limited user interactions with speech or vision systems.
Virtual Reality (VR): An advanced form of human-computer interaction in which users are immersed in an interactive and ecologically valid virtual environment.
VJ: An abbreviation for Video Jockey; someone who creatively mixes, adapts and controls video projections in a live performance environment.
Wand: An interface device used in virtual environments that allows the tracking of the wand device in space. The wand has a series of buttons used to trigger interactions, akin to a mouse that can be tracked in the third dimension.

Disclaimer: The glossary offered in this section is intended to guide readers; however, the editors are aware, and thus point out, that use of these terms in differing contexts may carry different specific meanings. Various resources have been consulted to compile this list, including various online dictionaries and references, as well as the INACSL Standards Committee (2016): INACSL Standards of Best Practice: Simulation(SM) Simulation Glossary. Clinical Simulation in Nursing, 12(S), S39–S47. https://doi.org/10.1016/j.ecns.2016.09.012