Serious Games in Personalized Learning: New Models for Design and Performance [1 ed.] 0367483963, 9780367483968

English Pages 290 [305] Year 2021


Table of contents :
Cover
Half Title
Title Page
Copyright Page
Dedication Page
Contents
Preface
1 The Science and Art of How We Learn
Cell Change and Learning at the Biological Level
Early Childhood Memory Development and Learning
A Human Knowledge Acquisition System and Memory Maps
2 History of Artificial Intelligence and Personalized Learning
Overview of the History of Artificial Intelligence
Historical Approaches to Personalized Learning
3 Ludology and the Origins of Games and Learning
Rules-based Ancient Greek ‘Deep Play’
Serious Games Pioneers
Board Games and Learning
4 Teaching and Learning Game Design
Preface
Human Cognitive Capacity and Balance
Entertainment Game Appropriation: Motivation, Genre, Mechanics, and Engagement
UI and UX Designs: Case Study
Assessment Engines and Game-based Learning Evaluation
Principles of Designing Successful Learning Games
Multiplayer Games and Learning
5 The Virginia Serious Game Institute (VSGI) Learning Game Examples
The Fad of Edutainment and Transition to Serious Games
Overview: History, Mission, and Philosophy
Case Study: Interactive Virtual Incident Simulator (IVIS)
Case Study: Hospital Training Game
Case Study: Social-Emotional Academic Development Game (SEAD)
Case Study: Legends of Aria – MMORPG
Effects of Games, Business, and Education in a Community
6 Artificial Intelligence Applied to Teaching and Learning
Next-Gen Machine-Learning Algorithms, Models, and Frameworks
Design Study: Deep Academic Learning Intelligence
Advances in Personalized Teaching, and the Rise of the Smart AI Bot
Case Study: Jill Watson 2019
Design Study: Personalized Recursive Online Facilitated Intelligent Teaching
Artificial Super Intelligence (ASI)
Appendix: Master List of PROF(it) Supersystem Variables
7 Teaching and Learning Computer-Intelligent Games
Serious Games and Machine Learning
The State of AI Game Engine Integration
A Personalized Learning Game (PLG) Engine
8 Personalized Learning Game Design Pedagogy
Instructional and Pedagogical Strategies for Personalized Game Design
Personalized Learning Game Planning
Epilogue: The Novel Education Paradigm
Appendix: Game 489, Pre-Internship Seminar: Personalized Learning Game Design Document
Index

Serious Games in Personalized Learning

Serious Games in Personalized Learning investigates game-based teaching and learning at a time when learning and training systems are increasingly integrating serious games, machine-learning artificial intelligence models, and adaptive technologies. Game-based education provides rare data for measuring, assessing, and evaluating not just a game’s effectiveness but the acquisition of information and knowledge that a student may gain through playing a learning game. This book synthesizes contemporary research, frameworks, and models centered on the design and delivery of serious games that truly personalize the learning experience. Scholars of educational technology, instructional design, human performance, and more will find a comprehensive guide to the history, practical implications, and data-collection potential inherent to these fast-evolving tools.

Scott M. Martin is an inventor, educator, entrepreneur, and author. He is Associate Professor of Computer Game Design and previously founded and directed the Virginia Serious Game Institute (VSGI) and the Computer Game Design Programs at George Mason University in Fairfax, Virginia, U.S.A. Dr. Martin earned undergraduate and graduate degrees from Johns Hopkins University and his doctorate degree from the University of Maryland, College Park.

James R. Casey is Interim Director of the Virginia Serious Game Institute (VSGI) and Assistant Professor of Computer Game Design at George Mason University, U.S.A.

Stephanie Kane is Research Coordinator for the VSGI and is currently earning her master of arts degree in computer game design at George Mason University, U.S.A.

Serious Games in Personalized Learning New Models for Design and Performance

SCOTT M. MARTIN with contributions from James R. Casey and Stephanie Kane

First published 2022
by Routledge, 605 Third Avenue, New York, NY 10158
and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2022 Taylor & Francis

The right of Scott M. Martin, James R. Casey, and Stephanie Kane to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested

ISBN: 978-0-367-48396-8 (hbk)
ISBN: 978-0-367-48750-8 (pbk)
ISBN: 978-1-003-04270-9 (ebk)

Typeset in Bembo by Apex CoVantage, LLC

To my beloved wife Vera, for the endless inspiration, unwavering commitment, and illimitable love, and to my remarkable sons Prescott, Scott C., and Theodore. May you find His footprints in the sand or gravel in every path in life you may choose.
Scott M. Martin

To my parents Jim and Betty, who provided me the opportunities and support to do something crazy like make games for a living and inspired a love of learning.
James R. Casey

To my husband Nick, who has always supported my endeavors, and my mom Linda, who supported my love of games and ambition to always continue learning.
Stephanie Kane

Contents

Preface  ix

1 The Science and Art of How We Learn  1
  Cell Change and Learning at the Biological Level  1
  Early Childhood Memory Development and Learning  7
  A Human Knowledge Acquisition System and Memory Maps  10

2 History of Artificial Intelligence and Personalized Learning  27
  Overview of the History of Artificial Intelligence  27
  Historical Approaches to Personalized Learning  34

3 Ludology and the Origins of Games and Learning  48
  Rules-based Ancient Greek ‘Deep Play’  49
  Serious Games Pioneers  53
  Board Games and Learning  60

4 Teaching and Learning Game Design  72
  Preface  72
  Human Cognitive Capacity and Balance  73
  Entertainment Game Appropriation: Motivation, Genre, Mechanics, and Engagement  78
  UI and UX Designs: Case Study  86
  Assessment Engines and Game-based Learning Evaluation  91
  Principles of Designing Successful Learning Games  96
  Multiplayer Games and Learning  104

5 The Virginia Serious Game Institute (VSGI) Learning Game Examples  115
  The Fad of Edutainment and Transition to Serious Games  117
  Overview: History, Mission, and Philosophy  119
  Case Study: Interactive Virtual Incident Simulator (IVIS)  122
  Case Study: Hospital Training Game  124
  Case Study: Social-Emotional Academic Development Game (SEAD)  128
  Case Study: Legends of Aria – MMORPG  134
  Effects of Games, Business, and Education in a Community  137

6 Artificial Intelligence Applied to Teaching and Learning  147
  Next-Gen Machine-Learning Algorithms, Models, and Frameworks  152
  Design Study: Deep Academic Learning Intelligence  153
  Advances in Personalized Teaching, and the Rise of the Smart AI Bot  156
  Case Study: Jill Watson 2019  157
  Design Study: Personalized Recursive Online Facilitated Intelligent Teaching  168
  Artificial Super Intelligence (ASI)  190
  Appendix: Master List of PROF(it) Supersystem Variables  196

7 Teaching and Learning Computer-Intelligent Games  203
  Serious Games and Machine Learning  209
  The State of AI Game Engine Integration  218
  A Personalized Learning Game (PLG) Engine  227

8 Personalized Learning Game Design Pedagogy  244
  Instructional and Pedagogical Strategies for Personalized Game Design  249
  Personalized Learning Game Planning  255
  Epilogue: The Novel Education Paradigm  258

Appendix: Game 489, Pre-Internship Seminar: Personalized Learning Game Design Document  265

Index  284

Preface

It is in the nature of the mind to forget and in the nature of man to worry about his forgetfulness. – Gordon H. Bower, Analysis of a Mnemonic Device (1970)1

Sometime between 86 and 82 b.c. in Rome, during the later years of a series of civil wars between Gaius Marius and the eventual Roman dictator Lucius Cornelius Sulla Felix, an unknown rhetor-teacher penned a groundbreaking treatise for his students titled Rhetorica ad herennium.2 In it, the rhetor-teacher defines the five components of rhetoric, which have been interpreted and reinterpreted through time to become the five canons of modern Western classical rhetorical studies:

• Inventio (to invent, discover, or the orderly search for arguments)
• Dispositio (the arrangement of the said discovered arguments)
• Elocutio (the crafting and delivery of writings about the arguments)
• Pronuntiatio (the training of voice inflection, amplitude, and gestures of giving a speech)
• Memoria (not just rote memorization, but a (long-term) ‘earned’ memory)

It was thought that a strong memoria would allow one to elocute lengthy passages from texts and tangentially improvise from many knowledge domains from memory – all to be used in successful staged debates, soliloquies, or public deliberations.3 In Ad Herennium, our rhetor-teacher describes two types of memoria: naturalis, which I interpret to mean what we learn through observation and actions (sensory memory and procedural memory), and memoria technica, or ‘artificial memory.’ Using the unknown rhetor-teacher’s definition of artificial memoria and his mathematical approach to ‘learning’ how to strengthen memory and association, we will now discuss the cognitive functions of memory. Specifically, we will address the long-term storage of ‘learned knowledge’ and our ability to retrieve it at will.4

Over the succeeding centuries, Ad Herennium’s methods of improving and strengthening artificial memoria were adopted and expanded by the clergy, magisters, masters, the aristocracy, and, later, philosophers, writers, artists, and teachers and tutors from the Western Middle Ages through the Renaissance, who defined the tome as the primary source for the classical art of memory study and training.5 These were time periods during which copies of historical texts, instructional theses, and design drawings could not be carried home and copied. Therefore, a robust artificial memoria was crucial to constructing buildings with architectural integrity, building seaworthy vessels, mixing safe chemicals, or reciting poetry and song from memory.

In this chapter, we shall first examine the basic functions of memoria naturalis – a review of the biological and molecular functions within neural networks and how the human brain may ‘learn.’ Then we pivot to a review of how the human mind may learn through examinations of childhood theories of early learning and memory development. Next, we shift our focus to memoria technica, and we explore new theories of how more mature adult cognitive learning may continue to occur. A key example will come from one of my previous research team’s Knowledge Acquisition System. This will help teach us about the different types of our human memories, and how our five senses influence how we process and store factual learning. If first we do not know how our brain and mind learn together, then how are we expected to design and develop effective learning games that match our neurological and cognitive expectations and capacity? How can we be assured that the game experience will become memorized – that is, stored in our long-term memory for later retrieval?

Chapter 2 walks us through the early history of artificial intelligence research and development, its principles, and primary theories. It also surmises the likely causes of the two previous AI winters, wherein all financial grant support and investment dried up for any activities with AI in its title. To conclude this chapter, we then review early personalized learning strategies, devices, and curricula, and address which aspects of these still survive in public school and higher education in today’s variety of digital-based adaptive solutions.

Chapter 3 begins with an historical overview of the original Ancient Greek framework of religious, athletic, intellectual, musical, dramatic, and social play within civilization and symposia, and offers the differences and explanations of types of play: child (paizo), adult (paizein), and the changing function of leisure in Hellenic society. As Henri Marrou wrote in A History of Education in Antiquity (1948) of the Hellenistic society, “The only point of education is to teach the child (through games) to transcend himself.”6 For Plato, intellectual high-minded pursuits were ‘play,’ as he believed philosophy itself is a game, as opposed to the serious or actual work in the fields or trades. Further, the author summarizes and critiques two definitive treatises in the serious games and learning academic rubric: Homo Ludens (1938) by Johan Huizinga, from the original Dutch, and Clark C. Abt’s Serious Games (1969, 1985). These two texts provide footing for the rest of the chapter and are also referenced in Chapters 4 and 5. Moreover, we discuss the importance of board games to learning, while contrasting the games specifically designed for learning with entertainment games that have been repurposed for learning. Lastly, this chapter offers a summary of the board game breakthroughs of strategy games designer Charles Swann Roberts, and the Gibson & Hopkins Experientiality Rubric that outlines matching levels of a learner’s activity to their equivalent levels of experientiality commonly found in board games.

These first three foundational chapters lay the historical groundwork for the remainder of this book, as a thorough study of the past may not only offer historical learning and game designs reimagined in a new contemporary context but also help us not repeat design flaws and mistakes in the future.

Originating from several recent international keynotes and presentations, Chapter 4 opens with the contributing author James R. Casey’s framework of known theoretical models of adult cognitive frameworks of successful knowledge acquisition, and how design of (sensory) inputs in games and learning software can lead to either cognitive balance or cognitive overload. We provide a matrix to consider when designing effective and impactful teaching and learning games. This matrix includes a rubric that borrows game-play elements from entertainment games that may produce the most engaging and motivating game-based learning experiences. Further, examples of effective and poorly designed UI/UX designs are provided, with supporting survey data and academic performance feedback. Moreover, we present some of the most innovative learning assessment and evaluation techniques and methods deployed today, including new categories of social dataset analysis that may offer new insights into personalized learning. Lastly, the author provides a list of new policies and principles to consider when designing game-based solutions today, regardless of subject matter, that will better engage the learner, and enhance and augment the learning experience.

This chapter concludes with an overview of new research into the effectiveness of multiplayer learning games, and explores negative and positive comparisons of massive multiplayer online (MMO) entertainment games and provides insights into potential peer-to-peer and cohort learning advantages offered in multiplayer learning environments (MLEs).

Chapter 5 offers the history, goals, and our philosophy about games and learning at the Virginia Serious Game Institute (VSGI) at George Mason University (Mason). The contributing author Stephanie Kane provides four examples of resident startup company game-based learning projects – how they were designed, developed, and tested – and provides examples of learning-based outcomes to date.

Chapter 6 summarizes the most recent uses of machine-learning AI frameworks, models, and systems applied and deployed in teaching and learning environments. These include technical approaches and case studies of education-focused AI tools, tutors, and Ro(Bots), along with a short summary of the first ‘media famous’ education TA bot, Jill Watson. Lastly, the author provides a description of his 2019 VSGI research team’s own artificial general intelligence (AGI) research project, entitled the Personalized Recursive Online Facilitated Intelligent Teaching (PROF(it)), the world’s first artificial general intelligent ‘AI Professor’ supersystem.

Chapter 7 offers rare insight into the future of teaching and learning games combined with AI models and engines. Examples include live streaming combined with dynamic personalized instruction, and intelligent algorithm/agent-written game design documents that blueprint an ideal domain-specific learning game. The chapter closes with a case study of a pilot project of a new game-based and smart (AI) assessment engine project for one of the largest training and assessment companies in the United States.

The concluding chapter provides the reader with a summary of the seven chapters, including potential scenarios of game-AI-based learning in K–12, higher education, and corporate training over the next five years. The author also prognosticates about the near-future advances of AI-driven games that can self-teach, self-assess, and actually improve human intelligent performance. Lastly, the author expounds upon how these intelligent instructional AGI-game models may be deployed in tangential markets such as medical and healthcare education and practice, law, psychology and psychiatry, logistics, transportation, and education and training.

Notes

1 Bower, G. H. (1970). Analysis of a mnemonic device: Modern psychology uncovers the powerful components of an ancient system for improving memory. American Scientist, 58(5), 496–510.
2 Enos, R. L. (2005). Rhetorica ad herennium. In Classical rhetorics and rhetoricians: Critical studies and sources (pp. 331–338). Westport, CT: Greenwood Publishing Group.
3 Ibid.
4 Kandel, E. R., & Hawkins, R. D. (1992). The biological basis of learning and individuality. Scientific American, 267(3), 78–87.
5 Quattrone, P. (2013). Rhetoric and the art of memory. In The Routledge companion to accounting communication (pp. 94–108). London: Routledge.
6 Marrou, H. I. (1982). A history of education in antiquity. Madison, WI: University of Wisconsin Press.

1 The Science and Art of How We Learn

Cell Change and Learning at the Biological Level

There are tens of millions of neurons in the hippocampus but only a small fraction of them are involved in this learning process. Before engaging in Pavlovian conditioning, these neurons are highly active, almost chaotic, without much coordination with each other, but during memory formation they change their pattern from random to synchronized, likely forging new connecting circuits in the brain to bridge two unrelated events.1
Xuanmao (Mao) Chen (2020)

Let us now turn our attention to understanding the microscopic level of human learning, best explained by the fields of neurobiology and neuroscience. Recent research from these knowledge domains explains how external stimuli we receive from, let’s say, playing a console video game or loosening a bolt on an engine block are reflected and transposed biologically in our cortex/hippocampus brain regions at the microscopic neuron-to-neuron synaptic level. Understanding how our activities or experiences are stored biologically as well as psychologically in our memories for short or longer periods of time may be crucial information for a teacher, therapist, or learning game designer. To perhaps learn that repeating variation of certain mental tasks actually changes the weights of cognitive neural synapses, increasing their charges and changing their chemistry, and therefore improving the chance for long-term memory storage would be invaluable information for instructional designers, multimedia specialists, and interactive game developers. As I mentioned earlier, if first we do not know how our brain and mind learn together, then how are we expected to design and develop effective learning games that match our neurological and cognitive expectations and capacity?


Figure 1.1 Basic Biological Changes and Generic Learning

Naturalis biological learning, broken down to its most basic function, is any process that occurs in living cells that leads to permanent change.2 Biological cells at rest exist in a steady state, similar to how an electronic capacitor is designed to maintain a particular charge when ideal electrical current is applied. Cell change occurs internally due to the introduction of disease or aging, or externally from stimulation that represents awareness/disassociated change, such as getting singed from a fire, or from context/content change, such as first discovering where a particular edible plant may grow in the forest during a particular season.3 Cell change, or the change of a cell’s steady-state capacitance, can also occur from the activities of neighboring cells (social) influencing functionality or change from the introduction of a new cell environment.4 These changes, both internal and external on a molecular biological level, mirror physiological and psychological change on a macro level that, if left semipermanent, embody fundamental mammalian learning – not necessarily memoria technica-type learning, but learning from exposure to external stimuli for the purpose of reproduction and preservation. Figure 1.1 depicts internal and external changes that lead to cell capacitance changes described as the fundamental act of ‘learning.’

To learn, one must overcome the convoluted entanglement of our human emotions, ego, behavior, cynicism, and other perceived positive and negative traits that together define potential cognitive resistance.5 Clinical psychologist Peter Michaelson outlines that cognitive resistance for some may be due to:

1. inner conflict that produces anxiety, procrastination, and indecision;
2. psychological defenses such as cynicism and passive-aggressive reactions;
3. an unwillingness to consider new ideas;
4. a passive tendency to allow good intention and willpower to collapse;
5. a narrow focus on minor details or secondary issues that obscures the larger learning goals;
6. a strong identification and comfortability with our old, limited self;
7. a stubborn unwillingness to do what’s in our best interests; or
8. an unconscious determination, even compulsion, to produce self-defeat and self-sabotage.6

In order for us to learn, then, cognitive resistance must be overcome to allow capacitance change triggered by external events/inputs. However, that change in capacitance must be recorded somewhere – stored but available and retrievable by us at a chosen point in time in the future – for it to be defined as an act of artificial memoria. Although some cognitive psychologists and neurobiologists theorize that there may be more than two systems of external cognitive learning that map to internal cognitive memory functions, only two are generally accepted today: declarative (or explicit) and nondeclarative (implicit or procedural).

After decades of research – culminating in her groundbreaking 1966 paper, Amnesia Following Operation on the Temporal Lobes – the British-Canadian neuropsychologist Dr. Brenda Milner provided a 14-year study of a 27-year-old patient named Henry Molaison, who suffered incapacitating seizures.7 To alleviate the effects of the seizures, Milner’s colleague, surgeon William Scoville, removed portions of the patient’s temporal lobe on both sides of his brain, a type of surgical lobectomy that also removed large portions of his hippocampal gyrus. The frequency and severity of Mr. Molaison’s seizures declined after the surgery, but Dr. Milner also discovered that the patient lost the ability to store or retrieve any new long-term memory experiences. He could recall places, people, and information that he learned during his childhood and youth – his long-term memory experiences that were formed before his lobectomy – but no new long-term memories could be formed after his surgery.8 Further, Dr. Milner recorded that Mr. Molaison could form short-term memories, such as how to operate certain new tools, and even recognize new faces, but by the next day, he was not able to remember how he acquired the new knowledge to operate the tools or associate names with the faces he recognized.

Prior to Dr. Milner’s research, it was believed that the results of all learning resided in the larger cerebral cortex section of our brain. Since the brain has no pain receptors, early studies by neurosurgeons would probe brain tissue of locally anesthetized patients and literally ask them what they saw and felt during invasive procedures. Dr. Wilder Penfield examined a number of epilepsy patients using this method, trying to discover a treatment, and in doing so produced the first extensive map of brain regions and identified the cerebral cortex as the most promising region that contains our learning memories.9 Later research by Drs. Milner, Elizabeth Warrington, and Lawrence Weiskrantz in London demonstrated that Mr. Molaison could eventually recall limited simple types of new long-term memories that involved repetitive habit-forming kinetic-type (motor) activities, procedural in context, but not complex declarative ones that required conscious effort and active participation. Hence, they discovered two types of learning memories in different locations of the brain.10 Prior to Dr. Milner’s and her colleagues’ discoveries, few psychoanalysts and neurosurgeons understood that there were two types of distinct learning, and that the results of these two types of learning were stored as memories in different regions of the human brain.

Here, questions arise. If declarative and procedural learning both employ associative learning techniques, then do they both rely on the same neural cell manual that mentors the two memory systems? Or do these two specific types of learning each require a different synaptic process and chemical composition, which then transmits signals along a different neural network to either the hippocampus or the prefrontal/cerebral cortex? Although previous research proved that associative learning could occur from pre- and post-‘coincident’ activity between neurons, it wasn’t until Dr. Ladislav Tauc published his chapter “Physiology of the Nervous System” in Physiology of the Mollusca in 1967 that we understood how a single associative learning rule allowed neural networks to process the two unique types of learning.11 Dr. Tauc discovered that the synaptic connection between two neurons could be strengthened when a third neuron interacts on the presynaptic one, as seen in Figure 1.2.

Figure 1.2 Typical Neural Synapsis and Modulatory Synapsis

This third ‘modulatory’ neuron augments the release of the neurotransmitter of the presynaptic neuron (and releases serotonin). He also found that this interaction may represent nondeclarative associative properties, if the electrical impulses known as ‘action potentials’ in the presynaptic neuron occurred at the same time as the presynaptic ‘action potentials’ in the modulatory one (basically parallel action potentials). An action potential, sometimes known as a spike, is when a neuron sends electrical information down an axon, causing a neurotransmitter release from the pre-synapse to the post-synapse.12 Figure 1.3 demonstrates a modulated pre- and post-synapsis with action control.

Figure 1.3 Modulated Synapsis with Action Control

Further research in invertebrates and mammals uncovered that the pre- and post-modulatory neuron interaction with the presynaptic neuron mimicked classical conditioning – a process of basic learning by associating two events, à la Pavlovian conditioning and response.13 In 1990, neuroscience pioneers Drs. Bengt Gustafsson and Holger Wigstrom published research that provided experimental evidence that long-term potentiation (LTP) – a persistent, long-lasting increase in signal transmission with an associative nature of induction in mammals – and its pre- and post-neuron synapse time-interval and induction-type results were also initially recorded in the hippocampus.14 Meaning, regardless of which type of memory was externally stimulated, neural synapses’ time length and induction type all were transmitted to the hippocampus first, regardless of whether some of the network activity later ended up in the long-term cortex repository.

In summary, we have discovered that our learning types each use a form of associative learning but utilize a different cell manual to differentiate between declarative and nondeclarative learning. It also appears, biologically, that both declarative and procedural learning utilize the same neural network to arrive in the same location initially, the hippocampus, and then, at some juncture in the future, stronger or higher-weighted learning memories are transferred to our cerebral cortex for long-term storage and future recall. Studies have shown that short-term memory information only persists in the hippocampus roughly for a few minutes to a few hours. This super-short memory is sometimes referred to as working memory in some models. Short-term memory lasts a bit longer – roughly days and weeks – but both short-term and super-short memory result from the strength (or weakness) of modulated neuron synapsis as just discussed. Long-term memory, or the transfer of memory patterns from the hippocampus to the cerebral cortex, is the result of the strength of the synapsis within the neural network, which may activate new proteins, genes, and new neural pathway growth.15 Simplistically explained, the development of long-term memories from both declarative and nondeclarative associative learning is dependent on the strength and length (amount of time) of stimuli that lead to increased modulated pre-synaptic neurotransmitter release, post-synaptic terminals, and LTP.16 In essence, the greater the number of consistent and strong external stimuli, the more pre- and post-modulated synapsis, the larger the neural network, and the greater (anatomic/plasticity) the changes will be that occur first to the hippocampus, before being reflected, if consistently repetitive, in the cerebral cortex.

A remarkable and somewhat controversial study in 1984, conducted by Dr. Michael M. Merzenich et al., validated that, in mammals, long-term consistent sensory pathway stimulation can demonstrate true neural plasticity (the ability of anatomical brain tissue to physically change to accommodate new information). For instance, inducing the same motor skill manipulations thousands of times can semipermanently modify the anatomy of the cortex dedicated to this repetitive task, at the expense of the size of the nearby cortex subregion previously dedicated to other tasks.17 This study was controversial, as it involved the amputation of the first and last digits from the hands of a number of monkeys, leaving the middle three digits, so Dr. Merzenich could measure the cerebral cortex changes between the two sets of digits – the active ones and the nonactive amputated ones – over time.

Evidence from this study seems to support the age-old idiom of practice makes perfect and supports the notion that extraordinarily correlated tactile, visual, and auditory sensory stimulation encourages long-term potentiation, and that our associative synaptic cortical networks are being modified in parallel to our learning activities! So, it appears that the act of learning on a molecular biological level can be defined as activity-dependent external stimuli, and that these neurobiological ‘brain’ discoveries reinforce the current cognitive psychology theories that the ‘mind’ also learns from a change of cognitive capacitance from external stimuli. We now also understand that there are two types of learning that depend upon memory development, and that both are associative in nature and may be recalled over different lengths of time. The theories of how the mind and brain function during the act of learning may be more unified than previously understood.

Now that we understand a bit about how we learn on a neurobiological level, we shall see later in this book how games, designed as amplified (stronger) multisensory/somatosensory learning environments over time (length of time), may highly motivate, engage, and induce deeper intellectual implicit and explicit learner performance actions than other teaching and learning alternatives. Games, combined with other tools such as machine-learning algorithms that can personalize the learning game experience, may hold the greatest opportunity to reshape our prefrontal/cerebral cortex, in accordance with what we now know represents true knowledge acquisition.
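The modulated, activity-dependent strengthening described above can be pictured as a simple three-factor ("neo-Hebbian") weight update. The Python sketch below is purely illustrative and not a model from the book; the function name, learning rate, gain, and decay values are invented, chosen only to show how repeated pre/post coincidences gated by a modulatory neuron outpace unmodulated ones.

```python
def modulated_update(w, pre, post, mod, lr=0.01, gain=10.0, decay=0.001):
    """One toy synaptic-weight update.

    pre, post : pre- and post-synaptic activity (0..1)
    mod       : activity of a third, modulatory neuron (0..1)

    Coincident pre/post activity nudges the weight (Hebbian term); the same
    coincidence gated by the modulatory neuron produces a much larger,
    LTP-like change. A slow decay means only repeated, strong pairings last.
    """
    hebbian = pre * post
    return w + lr * (hebbian + gain * hebbian * mod) - decay * w

# Repeated pairings with and without the modulatory neuron active.
w_plain = w_mod = 0.1
for _ in range(200):
    w_plain = modulated_update(w_plain, pre=1.0, post=1.0, mod=0.0)
    w_mod = modulated_update(w_mod, pre=1.0, post=1.0, mod=1.0)

print(round(w_plain, 2), round(w_mod, 2))  # the modulated synapse ends up far stronger
```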

Early Childhood Memory Development and Learning

And once I had recognized the taste of the crumb of madeleine soaked in her decoction of lime-flowers which my aunt used to give me (although I did not yet know and must long postpone the discovery of why this memory made me so happy) immediately the old grey house upon the street, where her room was, rose up like the scenery of a theatre to attach itself to the little pavilion, opening on to the garden, which had been built out behind it for my parents.
Marcel Proust (1922)18

The human brain’s cognitive architecture begins to form prior to birth, stimulated in utero by sounds and locomotion, adding sights, smells, tastes, and tactile stimuli upon birth. The amount, repetition, and ‘serve and return’ causality of stimuli helps generate over one million new neural connections every second in the first few years of life. These connections are mostly procedural in nature at first (naturalis) but add more declarative-related stimulating experiences in the cortex during childhood through adolescence. Visual and hearing sensory neural network pathways are the first to form, followed by early language skills, motor skills, and higher cognitive functions.19 Healthy babies naturally solicit interaction through vocalizations, facial expressions, and gestures. As parents, caregivers, and siblings respond in kind, the infant brain develops new linguistic and visual competencies. It is known that early ‘serve and return’ visual experiences are required for the development of stereoscopic vision and ocular dominance – the tendency to prefer visual input from one eye over the other.20 Exposure to speech is necessary for language development. A nontoxic, safe, and stable environment is critical for normal self-regulation and emotional/social development to occur.21 Without these conditions and experiences to help facilitate implicit/procedural learning development, higher cortical learning may be stunted or inhibited later in life.

Research has demonstrated that lack of exposure to consistent and proper parts of speech and language in the very first year of life may have a detrimental effect on language learning and possibly reading comprehension later in adulthood.22 Most fascinating, studies have shown that, although babies up to six months of age can discern phonemes from several languages, by the time they are a year old this ability may be lost without the necessary interactions.23 This is an example of the limits of cognitive neural plasticity, the flexibility or malleability of cognitive systems that can be modified from either implicit or explicit learning.

The human brain is most malleable early in life to absorb and store a massive range of new multisensory/somatosensory stimuli, perceptual and motor interactions, and social experiences. As the brain matures over time, it is believed less capable of modification and reorganization to accommodate new processes, and less able to store new associative declarative-based challenges and experiences. However, our human motor memory system, an implicit learning behavior, can be modified throughout life, given the abundant number of new external stimuli that it’s exposed to over a lifespan – such as the use of new tools, interfaces, and input devices to learn every few years.24 Perhaps the process of evolution has allowed lifelong cognitive modification of our motor system’s memory storage ability but not our declarative learning memory storage as we age. But, if challenged by more complex and multifaceted stimuli, shouldn’t our declarative memory also be able to accommodate the challenge later in life, as well? Indeed, several theories have been proposed that attempt to explain how it’s possible that our declarative memory abilities may increase and improve from childhood through adolescence to adulthood. One particular theory advances the notion that it is encoding – the forming of mental representations and generation – that allows the ability to re-experience past stimuli and responses.25 In essence, as the complexity of knowledge increases throughout maturation, all we need to do is change our cognitive representation of that knowledge, encoding or weighing it at a higher level, which ensures long-term memory storage and perpetual youth!26

As we learned earlier in this chapter, procedural (implicit) memories are generated by physical task performances and are initially stored within specific neural systems in the working and short-term memory regions of the hippocampus. Depending on weight and repetitiveness, they may be stored long term in the cerebral cortex. However, declarative memory is considered phylogenetic in nature, and although it exists in mammals with advanced neural interconnectedness between the hippocampus and cortex regions, it is most evolved and formed in humans. Declarative memory actually consists of two subtypes: episodic and semantic memory. Semantic memory is represented by facts and figures, whereas episodic memory includes the subconscious sensory stimuli that occurred at the same time the semantic memory was experienced. It encompasses the indirect stimuli of smells, tastes, sounds, colors, and/or spatiality that occur at the same moments the semantic-related stimuli occur. Both are stored as a cohort learning experience in the working memory, then short-term memory, and, if weighted high enough and not replaced by even higher weighted memory experiences, transferred to the prefrontal cortex for long-term storage. Because these long-term memory experiences are stored as a semantic and episodic cohort, it is not uncommon for a “taste of the crumb of madeleine cake soaked in her decoction of lime-flowers” to stimulate its cohort stored declarative memory to recollect “immediately the old grey house upon the street, where her room was, rose up like the scenery of a theatre to attach itself to the little pavilion, opening on to the garden, which had been built out behind it for my parents,” as Proust wrote in Swann’s Way (Remembrance of Things Past).

On a personal note, I tend to be one of those unfocused lecturers who becomes distracted from my planned classroom agenda and sometimes intellectually travels along tangents, telling (I hope) related stories to emphasize and augment course content. Depending on the progress of the learners in the same class semester-to-semester, I also tend to adjust assignment due dates and course workloads verbally, on the fly, during these discussions and storytelling, and I rarely write down my changes at the time of announcement. Days later, when a student meets me in my faculty office to ask me details about a particular story I previously told or for an extension of the new ad hoc due date, I sometimes cognitively stumble and can’t recall what they are referring to. However, when I reenter the same classroom for our next class meeting – the classroom in which I told the stories or changed up the assignment due date the previous week – everything I said and did during that class session becomes easily retrievable. This is a most wonderful example of the episodic room luminescence, air-conditioner hum, spatial visualization, the smell of the food cart outside of the classroom door, and other stimuli luring out my associated cohort semantic memory.

This section opened with one of the most famous moments in French literature, the tasting of the crumb of madeleine soaked in tea that enables the narrator of Proust’s Swann’s Way (Remembrance of Things Past, 1922) to recall his childhood experiences in absolute detail. The taste of the madeleine functions as the episodic lure to tempt out the narrator’s long-forgotten semantic experiences as a child in the novel, and also inspires the reader to contemplate their own episodic/semantic cohorts from their past, before revealing the cake’s true meaning at the end of Proust’s seventh volume. Humans acquire knowledge (semantic memory), retain the knowledge acquisition experience (episodic memory), and can retrieve the stored knowledge to a greater degree and depth than any other species. Our conscious use of associative strategies, determinate encoding processes, and awareness of declarative recall are a reflection of our ability to form value judgments and reason, our capacity to contrast and compare acquired knowledge, and our capability to form abstract and nonlinear concepts that may manifest as creativity or originality of thought. “To consciously devise mental presentations and to hold ideas in awareness are the processes which characterize (our human) declarative memory,” the primary memory that may define facets of higher human intelligence.27
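The "cohort" storage described here, in which a semantic memory is bound to the episodic cues present when it was encoded, can be illustrated with a toy retrieval structure. This sketch is our own illustration, not part of the book's system; the stored memories, cue names, and overlap threshold are invented for the example.

```python
# Each stored memory pairs semantic content with the episodic cues present at encoding.
memories = [
    {"semantic": "assignment due date moved to Friday",
     "episodic": {"classroom", "air-conditioner hum", "food-cart smell"}},
    {"semantic": "tangent story told to illustrate course content",
     "episodic": {"classroom", "projector glow"}},
]

def recall(current_cues, store, min_overlap=1):
    """Return semantic memories whose stored episodic cues overlap the current ones."""
    return [m["semantic"] for m in store
            if len(m["episodic"] & set(current_cues)) >= min_overlap]

# Re-entering the room supplies the episodic cues that lure out the cohort memories.
print(recall({"classroom", "air-conditioner hum"}, memories))
```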

A Human Knowledge Acquisition System and Memory Maps

I recognize a word not only by the images it evokes but by a whole complex of feelings that image arouses. It’s hard to express . . . it’s not a matter of vision or hearing but some over-all sense I get. Usually I experience a word’s taste and weight, and I don’t have to make an effort to remember it – the word seems to re-call itself.
A.R. Luria (1987)28

Let us now examine a new generic knowledge acquisition system (KAS), first developed by myself and a team of VSGI student researchers in 2017, that reflects all the memory functions outlined in the first two sections of this chapter.29 Our new KAS is an expanded and refined version of the original after-image, primary, and secondary memory model first proposed by the preeminent American psychologist William James in 1890.30 James defined the primary memory as the initial storage location that may be available for conscious inspection, attention, and introspection (what we describe as short-term memory (STM) today), and secondary memory as long-term memory (LTM). Our new system transposes James’s after-image memory model into a sensory memory module that contains both current and historical learner data defined as semantic inputs, and the input channels (text, sound, visual) of a learner’s experiences as episodic inputs. The new KAS also divides James’s primary memory model into working and short-term memory modules. Further, James’s secondary model in our construct is a unique long-term memory module that contains a learner’s declarative memory, including sensory and episodic memory inputs, as well as a universal memory bank – a sort of LTM storage site of conditional and contextual learning-experience best practices. This artificial cognitive KAS structure is intended to simulate how the biological cognitive structures decipher, identify, filter, label, and store a distinct memory cohort experience, and later retrieve those stored neural signals back to a learner’s sensory memory or short-term memory related to the context of a relevant scholastic experience.

Diagramed in Figure 1.4, this unique KAS is a superimposition of historical psychological and neuroscience constructs that attempts to identify and measure external stimuli as inputs, label and weigh the significance of those stimuli, and then store long-term, previously weighed declarative and procedural learning experiences, with the goal of mimicking the biological cognitive learning process as an artificial personal learning map.31 As described by Allen Newell, one of the founding fathers of artificial intelligence, ‘knowledge’ is technically something that is ascribed to an agent by an observer.32 Knowledge, within this system, is defined as both procedural information that may enhance the weight of LTM and as explicitly declarative information that may be evoked to make decisions, recommendations, or predict outcomes based on past learning experiences. To accurately filter, record, label, and store these declarative learning experiences as detailed and organized stimuli, a schema of recognition, processing, and storage must be designed that imitates our cognitive learning system. The goal of the KAS is to outline these recognition, processing, and storage functions that mimic the charted memory regions within the human prefrontal cortex and hippocampus. The separation of memory processing into several independent and parallel memory modules is required, as these separate memory systems serve separate and sometimes incompatible purposes.33

The purpose of this exercise is to design a system that imitates the human biological and cognitive learning system in order to design an accurate future artificial one. If we may artificially replicate and then map a generic cognitive learning system that mimics an individual’s biological learning process/experience, then perhaps we may start to explore the concept of tailoring a specific learning experience toward the unique way an individual may acquire knowledge at any stage of life. A personalized learning map (PLM) may represent one or even a lifetime of learning experiences, sometimes bounded by other unrelated memories of emotions, sensations, affections, and psychological states that journeyed along as episodic experiences, with every stored declarative learning experience providing rare insight into the dynamic processing, storage, recollection, and conscious inspection and introspection of external stimuli we define as intelligence.

Figure 1.4 The Author’s Research Team’s Knowledge Acquisition System


Now let us examine the attributes and functions of the various modules outlined in Figure 1.4 that constitute our artificial KAS, and then learn how each module can be transposed into a potential personal learning map (PLM). As we describe our artificial KAS, we will also identify in parallel potential real-world learning input datasets as stimuli, and the processing and delineation of such, conducted through a theoretical online learning platform that can ingest, pre-classify, label, and store the data in a PLM from a particular learning experience.
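One way to picture the module flow summarized in Figure 1.4 is as a small processing pipeline: sensory recognition, working-memory classification, short-term filtering, and long-term storage in the learner's PLM. The skeleton below is a hypothetical sketch of that flow, not code from the VSGI project; the class names, method names, and weights are our own assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LearnerInput:
    kind: str          # "semantic" (scores, grades, demographics) or "episodic" (chat, audio, video)
    channel: str       # e.g., "text", "audio/visual", "social"
    payload: dict
    weight: float = 0.0

@dataclass
class KnowledgeAcquisitionSystem:
    """Minimal skeleton of the KAS module chain feeding a personal learning map (PLM)."""
    long_term_memory: list = field(default_factory=list)   # the learner's PLM store

    def sensory_memory(self, raw: dict) -> LearnerInput:
        # Recognize the stimulus and tag it as a semantic or an episodic input.
        kind = "semantic" if raw.get("is_factual") else "episodic"
        return LearnerInput(kind=kind, channel=raw.get("channel", "text"), payload=raw)

    def working_memory(self, item: LearnerInput) -> LearnerInput:
        # Classify and label; semantic inputs are weighed higher by assumption.
        item.weight = 1.0 if item.kind == "semantic" else 0.3
        return item

    def short_term_memory(self, item: LearnerInput) -> Optional[LearnerInput]:
        # Filter low-importance episodic data; keep what adds emotional or contextual weight.
        return item if item.weight >= 0.5 or item.payload.get("high_sentiment") else None

    def ingest(self, raw: dict) -> None:
        item = self.working_memory(self.sensory_memory(raw))
        kept = item if item.kind == "semantic" else self.short_term_memory(item)
        if kept is not None:
            self.long_term_memory.append(kept)   # stored in the learner's PLM

kas = KnowledgeAcquisitionSystem()
kas.ingest({"is_factual": True, "channel": "text", "score": 0.92})
kas.ingest({"is_factual": False, "channel": "audio/visual", "high_sentiment": False})
print(len(kas.long_term_memory))  # 1 -- only the semantic input was stored
```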

Sensory Memory (v) Module

Within the human memory system, sensory memory may be defined as the ability to retain neuropsychological impressions of sensory information after the offset of the initial stimuli.34 Sensory memory of the different modalities (auditory, olfaction, visual, somatosensory, and taste) all possess individual memory representations.35 Sensory memory includes both sensory storage and perceptual memory. Sensory storage accounts for the initial maintenance of detected stimuli, and perceptual memory is the outcome of the processing of the sensory storage.36 In the context of the KAS, the sensory memory also serves to recognize stimuli, as all memory is first perceived and stored as sensory inputs derived from various external stimuli. For the purpose of the KAS, a future system that will be exploited to create a personal learning map (PLM), we’ll let sensory inputs include but not be limited to:

1. academic achievement/performance (parsed or provided: current and historical);
2. internal academic-related communications factors (parsed or provided: learner/teacher, learner/learner, learner/team, current and historical);
3. external personal/social extenuating circumstantial factors (current and historical); and
4. psychometrics, EQ, conditional analysis (current and historical).

These sensory memory inputs are divided into two categories, defined as semantic and episodic inputs. Semantic inputs are comprised of all datasets representing factual information about a learner (preceding 1–4), such as traditional achievement scores and evaluations, nontraditional achievements such as team project scores, experiential learning scores, foundational data about the learner (age, sex, location, demographics), parsed learner subject and nonsubject communication channel(s) data available and shared on some learning platform, and provided or self-reported historical academic achievement data. Episodic inputs are comprised of all datasets regarding an individual’s personal learning event experiences that are not academic or personal in nature, such as stimuli senses that occur during cohort semantic learning events and are parsed and labeled from any communication channels. These inputs are collected through various machine ‘senses’ such as text chat-logs; microphones; webcams (light, color, spatial, and facial recognition); and future tactile-, taste-, and odor-detecting devices. Figure 1.5 outlines a block diagram of the sensory memory module in the KAS PLM system.

Figure 1.5 Sensory Memory Module (v) in the Knowledge Acquisition System (KAS)

Working Memory (w) Module

Working memory in the hippocampus serves as a limited storage system for temporary recall and manipulation of information, defined as < 30s. Working memory (sensory buffer memory) is represented in the Baddeley and Hitch model as the storage of a limited amount of information within two neural loops, comprised of the phonological loop for verbal material and the visuospatial sketchpad for visuospatial material.37 Moreover, working memory can be described as an input-oriented temporary memory corresponding to the five senses of the brain (vision, audition, smell, tactility, and taste). The stored working memory content can only last for a short timeframe (< 30s) until new data arrives to take the place of the previous data, so it is quite finite. When new data arrives, the old data in the queue should either be moved into short-term memory or be forgotten and replaced by the new data. For example, let Wj = the probability that a dataset in W1 is lost when a new dataset arrives in W2 (or the inverse). Therefore, W1 + W2 + . . . + Wn = 1, because every time a new dataset enters the working memory module beyond the 30s timeframe (buffer), the previous dataset is pushed to the short-term memory, transferred directly to the long-term memory (LTM) module, or filtered/forgotten. Within the working memory module, more complex functions than mere temporary storage are enacted within a processing component referred to as the central executive. The central executive is responsible for actions such as the direction of information flow, the storage and retrieval of information, and the control of actions.38 Engle, Tuholski, Laughlin, and Conway (1999) further described working memory as an ‘attention-management’ unit that assigns weights and computational resources for the management of multiple tasks depending on their level of complexity to maintain continuous operation.39

The working memory module within the KAS is based upon this model, with necessary modifications as relevant to some student learning experiences expressed through a platform. All inputs from the sensory memory are sent to the working memory module, which, in the KAS PLM model, performs as temporary storage and as an information classifier. Figure 1.6 provides a diagram of the KAS working memory module.

Figure 1.6 Working Memory Module (w)
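The finite, roughly 30-second buffer behavior described above, in which new arrivals displace older items that are then handed on to short-term or long-term memory or forgotten, can be pictured as a small time-stamped queue. This is a hypothetical sketch, not the KAS implementation; the class name, capacity, and eviction policy are assumptions.

```python
import time
from collections import deque

class WorkingMemoryBuffer:
    """Toy < 30 s working-memory buffer: expired or displaced items are handed onward."""

    def __init__(self, horizon_s=30.0, capacity=7):
        self.horizon_s = horizon_s
        self.capacity = capacity          # deliberately small and finite
        self.items = deque()              # (timestamp, item) pairs, oldest first

    def add(self, item, now=None):
        now = time.time() if now is None else now
        evicted = []
        # Drop anything older than the buffer horizon.
        while self.items and now - self.items[0][0] > self.horizon_s:
            evicted.append(self.items.popleft()[1])
        # Displace the oldest item if the buffer is full.
        if len(self.items) >= self.capacity:
            evicted.append(self.items.popleft()[1])
        self.items.append((now, item))
        return evicted                    # caller routes these to STM/LTM or forgets them

buf = WorkingMemoryBuffer(capacity=2)
print(buf.add("quiz score", now=0.0))      # []
print(buf.add("chat line", now=10.0))      # []
print(buf.add("webcam frame", now=45.0))   # ['quiz score', 'chat line'] -- both expired
```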

This information classifier functions similarly to the central executive described by Baddeley and Hitch (1974), as it directs information through the working memory system’s information classification loops, which are similar to neural loops, and thus retrieves and directs the classified information to its next respective destination, as seen in Figure 1.7.

Figure 1.7 Information Classifiers for Working Memory (w)

In the KAS PLM model, the working memory module identifies input as W1, W2, or W3 and then channels the timestamped and labeled data with relevant appropriate identifiers. The information classifier in the working memory module classifies using three main categories, W1 (text), W2 (audio/visual), and W3 (social, reputation, pronouns), and determines if the input data can be categorized. Mass student informational data from both semantic and episodic sensory inputs pass through the working memory module, as all data must be classified with relevant tags and labeled. However, after being classified in working memory, the semantic and episodic inputs are separated due to the nature of each type of information – all semantic information is designated as important and weighed higher, as semantic inputs by nature are factual segments of information like scores, grades, or foundational data. In contrast, episodic inputs are personal event-related information and will include some non-important information. Due to this assumption, semantic inputs are distributed directly into the LTM module – the artificial cognitive declarative memory module. Episodic inputs are sent to the short-term memory module for analysis and filtering.

As mentioned earlier, in the information classifier function, let W1 (text), W2 (audio/visual), or W3 (social) represent a vector (W1a, W1b, W1c, . . . , W1n), with a–n representing the sub-variable datasets outlined in Figure 1.7. Therefore, the probability of classifying the sub-variable datasets in W1 is:

$$ p(C_k \mid W_1) = \frac{p(C_k)\, p(W_1 \mid C_k)}{p(W_1)} $$

where k is the possible outcomes of classification and C is the sub-variable group. Utilizing logistic regression to classify and predict our sub-variable classes in our classifier, our datasets would be classified by:

$$ \log \frac{p(C_1 \mid W_1)}{p(C_2 \mid W_1)} = \log p(C_1 \mid W_1) - \log p(C_2 \mid W_1) > 0 $$
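A minimal version of the classification step just described, a naive-Bayes-style posterior over sub-variable classes followed by the log-odds decision rule, might look like the following. It is only a sketch under the assumption of two classes and invented token likelihoods, not the KAS classifier itself.

```python
import math

# Hypothetical priors and per-token likelihoods for two classes of W1 (text) inputs.
priors = {"subject": 0.6, "non_subject": 0.4}              # p(C_k)
likelihoods = {                                            # p(token | C_k), toy values
    "subject": {"quiz": 0.30, "deadline": 0.20, "lol": 0.01},
    "non_subject": {"quiz": 0.02, "deadline": 0.05, "lol": 0.40},
}

def posterior(tokens, c):
    """Unnormalized p(C_k | W1), proportional to p(C_k) times the product of p(token | C_k)."""
    p = priors[c]
    for t in tokens:
        p *= likelihoods[c].get(t, 1e-3)                   # small floor for unseen tokens
    return p

def classify(tokens):
    # Decision rule: log p(C1 | W1) - log p(C2 | W1) > 0  ->  class C1.
    log_odds = math.log(posterior(tokens, "subject")) - math.log(posterior(tokens, "non_subject"))
    return ("subject" if log_odds > 0 else "non_subject"), round(log_odds, 2)

print(classify(["quiz", "deadline"]))   # leans toward the subject-matter class
print(classify(["lol"]))                # leans toward the non-subject class
```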

Short-Term Memory (STM) Module

Short-term memory (STM) serves as a filter that allows our knowledge acquisition system (KAS) to either store data for a short period of time, generally > 30s, or delete data (information) by calculating the importance, or ‘entropy,’ of the data. The KAS STM module is modeled on characteristics of the function of the human memory prefrontal cortex via the hippocampus, so the STM tends to store data with high emotional value and is more likely to remember negative information (m1) as opposed to positive (m2) or neutral information (mø).40 Large amounts of communication channel data ingested by some theoretical learning platform in the KAS working memory module are classified as episodic inputs, so episodic inputs will consist of an extremely vast amount of data with a significant portion that is unusable or lacking in relevant information. Much of this data will be considered unimportant if not weighed with a cohort semantic learning experience and will be filtered or forgotten. Figure 1.8 outlines the KAS STM module.

When considering human STM (m), limited neurological memory storage is often regarded as an inhibiting limitation (Baddeley, 1999). With respect to computer memory and information dataset storage on some learning platforms, however, the limitation is trivial. Through web-based cloud storage schemata, an immense amount of data can be stored. However, in certain scenarios, the brain’s ability to forget information is actually highly beneficial, such as when the material contained in the information is obsolete, unnecessary, or emotionally traumatic, due to trade-offs that occur between processing and storage activities. The more resources the brain allocates to storing information, the less ability it has to process that information.41 It is a fundamental principle of human memory that some things are remembered and some forgotten.42

Figure 1.8 Short-Term Memory (STM) Module


Therefore, within the knowledge acquisition system employed to design a PLM model, a similar innovation is necessary, even despite large-capacity computer storage options. By limiting unnecessary data, each PLM becomes a more precise tool for processing and then recording the most highly weighted learning information and cohort experiences that may guide and assist a learner and a teacher in the future. If the STM input data matches any of the three classifier data blocks of W1j, W2j, or W3j, and if the information is weighted as relevant and classified as important (high entropy), it is automatically saved and transmitted to LTM. If not, the module conducts a sentiment analysis using parsed sentences previously stored in a network tree structure, and then weights datasets with high sentiment as important and those with low sentiment as unimportant. Finally, if the dataset has been deemed unimportant, the model performs another content analysis, an emotional content analysis using previously parsed signifiers stored in a network tree structure, and tags datasets with higher amounts of emotion with larger weights. Datasets with high sentiment and/or emotion are considered relevant because they provide emotional context to the semantic dataset content and may reflect students' underlying motivations.43

The STM module uses a modified version of the Shannon entropy equations, categorizes any data that does not pass the three W filters as low entropy, and forgets/deletes those datasets. The system then categorizes all high-entropy data as either subject or non-subject matter and passes it to long-term memory (LTM), where it is stored in a learner's declarative PLM module. Figure 1.9 outlines the entropy filtering process. Let the dataset input = m, where m can only have one of the (s) or (e) values of W1j, W2j, W3j, . . . , Wnj, W(s+e):

$$Y(m = W_{1j}) = y_{1j}, \quad Y(m = W_{2j}) = y_{2j}, \quad \ldots, \quad Y(m = W_{nj}) = y_{nj}, \quad Y(m = W_{(s+e)}) = y_{(s+e)}$$

so

Figure 1.9 Entropy Filtering Process (flowchart: an input that matches the sub-variables of W1j, W2j, or W3j, or the personal keyword library, is stored; otherwise it passes through sentiment analysis (s) and then emotion analysis (e) before being weighted or forgotten)
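To make the filtering pass concrete, here is a simplified Python sketch of the decision sequence in Figure 1.9. The keyword library, sentiment and emotion word lists, and all thresholds are invented for illustration and are not the PLM model's actual parameters; the entropy gate is ordinary Shannon entropy over the dataset's word distribution rather than the modified equations referenced above.

```python
# A simplified sketch of the STM filtering pass outlined in Figure 1.9.
# The keyword library, word lists, and thresholds are illustrative
# assumptions, not parameters of the actual KAS/PLM model.
import math
from collections import Counter

PERSONAL_KEYWORDS = {"exam", "project", "teammate"}   # stand-ins for W-block tags
SENTIMENT_WORDS = {"love": 1.0, "hate": -1.0, "enjoyed": 0.8, "bored": -0.6}
EMOTION_WORDS = {"excited", "frustrated", "proud", "anxious"}

def shannon_entropy(text: str) -> float:
    """Shannon entropy (bits) of the dataset's word distribution."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def stm_filter(dataset: str) -> str:
    """Return 'store' to pass the dataset on to LTM, or 'forget' to delete it."""
    tokens = set(dataset.lower().split())
    if tokens & PERSONAL_KEYWORDS:                 # matches a classifier data block
        return "store"
    sentiment = sum(SENTIMENT_WORDS.get(t, 0.0) for t in tokens)
    if abs(sentiment) >= 0.5:                      # high sentiment weighted as important
        return "store"
    if tokens & EMOTION_WORDS:                     # emotional content weighted as important
        return "store"
    # Anything left is kept only if its entropy clears an arbitrary bar.
    return "store" if shannon_entropy(dataset) > 2.5 else "forget"

print(stm_filter("I enjoyed the group project today"))   # store (keyword match)
print(stm_filter("logged in logged in logged in"))        # forget (low entropy)
```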
-0.25,” and then the next person repeats this and adds an element that starts with the next letter, and so on until people fail. At some point, the ability to store and recall multiple elements becomes too much to process. Thinking back, it's a crude way of envisioning cognitive capacity, the limit formalized by cognitive load theory.

Although the study of cognition has existed for centuries in a variety of modalities, from the purely theoretical to the biological, the understanding of cognitive load as a concept sprang out of a theory of problem-solving proposed by John Sweller in the late 1980s.8 This was later refined and expanded upon in 1994 as he related the phenomenon to instructional design and how people learn.9 His theory and studies defined cognitive load by a single construct: element interactivity.10 Elements are basically anything that needs to be learned or understood. Interactivity was valued based on how interdependent the elements were on each other. For example, to understand X, a learner needs to also know Y. If X and Y are independent, element interactivity is low or nonexistent, and cognitive load is likewise low.

As an example, think about those trendy math problems that are all the rage on social media.11 The equation 8÷2(2+2) creates a conundrum because solving it requires people to rely on knowing a number of things, each of which would be considered an element that requires interaction. Someone attempting to solve this equation needs to understand the individual numbers, addition, multiplication, division, and the order of mathematical operations. The real issue here is that the equation's presentation is also problematic and ambiguous, which makes application of the preceding knowledge result in different answers.12 Reading strictly left to right gives 8÷2×4 = 16, while treating the implied multiplication as binding first gives 8÷8 = 1.

This simple concept gives an idea of the complexity of a task by measuring the elements and how they interact. Sweller's argument was that
by looking at how complex learning outcomes and tasks are, instructional design could account for this and adjust accordingly. But this only accounts for the intrinsic complexity of a task in a vacuum, and doesn't account for variables that may affect an individual's own cognitive load. What else can affect how difficult a particular task can be? Think back to Figure 1.1 earlier in this book, which shows how we learn. (Reprinted here as Figure 4.1.)

Figure 4.1 Basic Biological Changes and Generic Learning (diagram: capacitance change at the biological and cognitive levels, shaped by internal factors such as disease, maturation, and age; external factors such as environment, society, and situation; and learning factors such as types, structures, and resistance)

Many factors influence learning, both internal and external to the process itself. As such, it's hard to quantify exactly how an individual will react, but we can create guidelines and principles to reduce cognitive load and prevent overload. These principles will be touched upon later in this chapter and demonstrated in games, as well. As a simple thought experiment, think back to the adage about the difficulty of locomotion when combined with extraneous activity, known colloquially by the idiom of 'walking and chewing gum at the same time.'13 In its simplest form, this ability for someone to understand that not all combinations of tasks can be easily combined gives us a method of understanding cognitive load, but to truly look at the variables, a deeper understanding of the terminology used in this research is needed. A number of researchers have examined the process of how to measure cognitive load in the decades since its appearance, and all of them have some commonalities.14 In each case, they attempted to define and classify cognitive load by examining distinct types of load: intrinsic, extraneous, and germane.15


Intrinsic, as suggested by the name, is the measurement of factors based on the inherent difficulty of the learning outcomes. Extraneous load is based on the nature and manner in which information is presented, which is a crux of instructional design or, as we will see later, the UI/UX of a learning game. Germane load differs in that it looks at the load imposed by the process of learning and developing schemas, or the ability for the brain to easily learn and apply learning later in the process. In some cases, if something is easy and simple enough to learn, it might make sense to actually increase the germane cognitive load to provoke the learner into processing and learning. Studies have shown that if the cognitive load is too low, it inhibits the learning process.16

This introduces an interesting aspect of cognitive load: how do we balance it to promote learning? Think back to how games promote engagement by attempting to put gamers into a state known as flow. Envisioned as a graph (Figure 4.2), one can see how, similar to cognitive load, flow requires game developers to consider whether games are too simple or too difficult and look for that middle ground.17 In that sense, the idea of 'flow' is parallel to the idea of balancing cognitive load (Figure 4.3).

Figure 4.2 Diagram of Flow18 (graph: challenges on the vertical axis and skills on the horizontal axis, each running from low to high; the flow channel runs diagonally between anxiety above it and boredom below it)

Figure 4.3 Flow Diagram Combined with Cognitive Load Balance (the same graph with the axes and regions relabeled: challenges as intrinsic/extraneous load, skills as germane load, anxiety as overload, boredom as underload, and the flow channel as cognitive balance)

Much like flow attempts to navigate the valley between boredom and frustration within the context of a game, cognitive balancing navigates that same valley but in more general terms and often in relation to learning and skills. In learning games, this means a good developer not only has to make the game fun by concentrating on flow but must also ensure that the content, the method of delivery, and the acquisition of learning schemas by the game allow for cognitive balancing that promotes their learning outcomes.

With an idea of how cognitive load is viewed and how it relates to concepts that also apply to gaming and learning, we need to look at a framework for viewing cognition in context and apply best practices to determine and deliver the right level of cognitive load and engagement in future learning games. To formulate that framework, let's summarize cognition so far as we've defined it. Humans take inputs from all their senses and process them into sensory memories that are attached to working and short-term memory. These memories are processed into long-term memories that can be defined as semantic or episodic knowledge. There are then additional factors that can affect this process, from the internalization of inputs to the recall and retrieval of said knowledge from long-term memory. Additionally, the ability for a learner to recognize and apply their own beliefs and knowledge about themselves, the task, and the goals of learning can influence this process.19


Figure 4.4 shows a framework for visualizing this process.

Figure 4.4 Framework for Cognitive Process, Including Metacognition for Purposes of Defining Capacity and Balance (diagram: inputs such as speech, sound, text, images, touch, and smell feed sensory memory (auditory, visual, tactile, olfactory, gustatory), which feeds working and short-term memory for filtering, organization, integration, and storage; long-term memory holds semantic and episodic knowledge reached through retrieval and recall; the self (interests, purpose, and motivation) and metacognition influence the whole process)

Here, we can see where and how the cognitive process can be influenced. In analyzing cognitive balance, we need to look at two primary areas of the framework: the inputs and the metacognition. In the next sections of this chapter, we will look at both through the lenses of entertainment games and examine how they can be used to inspire more engaging learning environments and learning games.

Entertainment Game Appropriation: Motivation, Genre, Mechanics, and Engagement

A game is an opportunity to focus our energy, with relentless optimism, at something we're good at (or getting better at) and enjoy. In other words, gameplay is the direct emotional opposite of depression.

Jane McGonigal20

Ask a bunch of gamers about their favorite memories playing games and you will come away with a vast array of different experiences. For some gamers, the game itself is key; other games just don't compare or capture that certain spark it creates for them. For other gamers, the game is a conduit to an experience that appeals to them on a primal level and creates emotions that etch that experience into their long-term memory. Still other gamers will remember the experience because of the physical response it gave them, the endorphin high21 as they encountered pure bliss, unadulterated surprise, or genuine accomplishment. Maybe a few remember it in spite of the game, as part of a larger imprint; a way to pass the time when their parents fought, for example. In other words, games appeal not only to the senses but can influence players in a broader, more metaphysical manner. Jane McGonigal captures this ability
of games to influence our moods for the positive in the preceding insightful epigraph from her book Reality Is Broken, a great read to expand on this topic. Entertainment games are not alone in their ability to form sentiment and attachment. Almost all aspects of our culture are designed to pass along information and experience. Unlike other media, games provide a visceral and immersive experience that engages the player in a way that is hard to replicate in books, movies, or music. This idea of interactive engagement is central to what makes a game fun and how to stimulate active learning. The trick, of course, to designing a memorable experience is in identifying ways to classify and categorize how to make said experience engaging. What makes something fun for one player will bore or annoy other players. Good game designers try to appeal to a broad swath of potential gamers, and game design over time has evolved to target audiences with a variety of tools in their toolkit. To create engagement in learning games, we need to look at what entertainment games do well and identify and apply those tools. To do this, we will break down game genres, mechanics, and motivations and examine how they all attempt to classify game experiences to ensure proper engagement.

Genres

Pause for a moment and ask yourself what category you find yourself drawn to when you open your favorite streaming service. Do you go for comedy? Action? Sci-fi? Maybe you like to learn something while you relax and go for the documentaries? Regardless of what you choose, there is something about that genre that appeals to you.22 You know that if you choose a comedy, you are going to view something lighthearted and, if all goes well, you will smile and laugh. You know that if you choose a horror film, your heart rate will likely spike a number of times and there will be jump scares, suspense, and shocking moments. Genres are a handy classification system that we have developed over time across media,23 and gaming is no exception. In essence, genre is a learned schema that allows us to assume information about subjects quickly and efficiently and attribute values to unknown quantities in the absence of direct investigation.

For entertainment games, genres have evolved over time in lockstep with the games themselves. The first electronic games were almost always variations on single themes that often mimicked activities, like sports, that people have been doing forever. For example, one of the first games ever, Space War, involved simple one-on-one shooting.24 Before that was a simple sports game called Tennis for Two (Higinbotham, 1958), a lesser-known precursor to the breakout hit Pong (Atari, 1972), which came out almost two decades later. As hardware and software capabilities grew, game designers were able to capture additional simulated human experiences, such as racing,
exploring, etc. As the types of games grew, genres expanded to accommodate them. Though in no way exhaustive, the following is a list of genres that can be found today:25

• Action
• Shooter
• Role Playing
• Adventure
• Racing
• Fighting
• Strategy
• Casual

Each of these genres provides players with a good idea of what to expect from the games in its classification. If I have played a strategy game in the past and see a new game coming out with the same classification, I can expect similar mechanics, similar types of design, and in general a user experience familiar from other games in the same genre. As with most classification systems, the distribution of products available in each category (genre) varies. The popularity of each genre influences this distribution, and developers can determine and assume characteristics of the players of each genre by looking at its demographics. Not all games are created equal. Figure 4.5, generated by the Entertainment Software Association, gives us a snapshot of the popularity of individual genres in 2019.

Figure 4.5 Best-Selling Video Game Super Genres26 (Action 27%, Shooter 21%, Role Play 11%, Sport 11%, Adventure 8%, Fighting 8%, Racing 6%, Strategy 4%, All Other 4%)


Each of these genres can be broken down further into subcategories or defined by further terms based on the characteristics of the games involved. As an example, the popular game Fortnite27 is a first-person shooter Battle Royale game. Don't worry if those words don't mean anything to you. Gamers know exactly what they mean and what they are getting if they play Fortnite or any game in the same sub-genre. Players of Fortnite know that they will inhabit the body (first-person) of one of 100 participants who are competing via a variety of weapons, mostly guns (shooter), to be the last person standing (Battle Royale). This competitive environment and sense of community appeals to them, and the accomplishment of beating other players and winning the coveted top spot is compelling. As well, because of the vast community of players, it has become a cultural touchpoint for younger gamers. Some begin playing just because so many of their classmates play and they want to be part of that social milieu.28

Genres are great because they allow people to easily identify and understand the game play and game mechanics of the games they will play, something we will discuss in the next section of this chapter. An interesting aspect of genres is that, like other classification systems, they are refined, expanded, and tend to evolve over time. Game developers tend to find what people like from each genre, typically in the form of mechanics, and apply them, if possible, to other genres. This creates new meta-genres where there is a cross-pollination of ideas. For example, first-person shooters began as a simple concept: grab a gun, kill the bad guys, rinse, and repeat. Wolfenstein 3D,29 one of the first games of this genre, made the bad guys Nazis and made the killing palatable. Over time, though, as gaming grew and the gaming market became savvier, first-person shooters also evolved. Taking their cue from role-playing games, they added levels and progression mechanics.30 Instead of just starting and ending the game as a badass, gun-toting hero, players in current first-person shooters begin as level-one players and earn their stripes and gain levels as they mow down their enemies, whether computer-controlled or other players. This is similar to progressing through the ranks of a career, but in accelerated game-world time: start as a private and become a general in no time.

But wait, there's more! Players can also level up their weapons, unlocking new attachments and abilities as they use them more, complete challenges associated with that weapon, and just in general become better at simulated killing. What do players get for their efforts? Each of these progression tracks unlocks new weapons and toys for the soldier to use, attachments for said weapons, and customization options so they stand out amongst all the other soldiers. These progression mechanics, coupled with rewards for investment of time and skill, speak to the player's motivations. Let's focus more on why these systems were added as we shift our narrative to the mechanics of games.


Mechanics

My philosophy is that once you get people compelled enough to sit down and play the game, the whole way you make the game successful is by giving them enough unique ways to do things. First, let them deal with pulling levers and things like that for a while. Then after they've mastered that, you give them something else to do, like getting through doorways by blasting them down with a cannon. Next, you give them a monster-finding quest, followed by logic problems to figure out. You pace it that way. Assorted activities and the diversity of activities are what makes a game rich in my mind.

Richard Garriott31

Before talking about game mechanics, it's important to define game play versus game mechanics. Game play is a gestalt of the game mechanics that defines what a player can do within the game and what the primary player goals are. Game mechanics describe and detail the same elements, but they approach them from an engineering level versus a design view. As famed Ultima creator Richard Garriott alluded to in the preceding excerpt, the activities of a game make it rich, and game mechanics are the key to defining game play. Jesse Schell, founder of Schell Games and author of The Art of Game Design: A Book of Lenses, looks at mechanics from a high level, defining them by the following six categories:32 spaces, states, actions, rules, skills, and chance. These mechanics define the game play, and they need to be balanced and examined both in isolation and in conjunction with each other.33 Others have defined mechanics in different terminologies and created their own definitions; a good history on the topic was done by author and game design professor Miguel Sicart, who classified mechanics as "methods invoked by agents, designed for interaction with the game state," and then broke down mechanics by whether they are primary (core), secondary, or compound mechanics.34 Take a look at the following list of mechanics that are detailed on Wikipedia,35 supplemented with examples of famous video games that feature each mechanic for reference:

• Turns (Advance Wars)
• Action points (X-Com)
• Auction/Bidding (World of Warcraft)
• Cards (Hearthstone)
• Capture/Eliminate (Risk)
• Catch-up (Super Mario Kart)
• Dice (Dungeons and Dragons)
• Movement (Chess)
• Resource management (Civilization)
• Risk and reward (Wheel of Fortune)
• Role-playing (Witcher 3)
• Tile-laying (Carcassonne)
• Worker placement (Starcraft)
• Game modes (Call of Duty)

Further defining mechanics, we can list some that determine or contribute to how players 'win or lose' the game:36

• Goals (Madden)
• Quests (Everquest)
• Loss avoidance (Fortnite)
• Piece elimination (Go)
• Puzzle solving (Candy Crush)
• Races (Need for Speed)
• Structure building (SimCity)
• Territory control (Command and Conquer)
• Victory points (Small World)
• Combination conditions (Magic the Gathering)

What's interesting is that most games rely on a number of these mechanics and, as Sicart noted, some of them are central to the game play while some are there as options to supplement the main goal. For example, in Super Mario Kart (Nintendo, 1992) and its many sequels, players are offered multiple game modes, most of which require players to race one another or the computer, and the series provides one of the most iconic catch-up mechanics in the form of the blue shell.37 The preceding lists are not exhaustive and not exclusive to entertainment games. In fact, a lot of the mechanics listed are common in play throughout history. What is important is that these mechanics bring to bear different elements that evoke different experiences for the player, targeting the motivators that drive engagement.

Motivators

To make an embarrassing admission, I like video games. That's what got me into software engineering when I was a kid. I wanted to make money so I could buy a better computer to play better video games – nothing like saving the world.

Elon Musk38


Elon Musk has arguably revolutionized how technology can evolve industry, and it is heartening to see that his motivation to do well was driven by his passions, which included video games. In fact, at the root of everything we've discussed so far is this tenet: games succeed if they can meet expectations and target what motivates their audience. Let's look at the relationship of the items that we have discussed so far in Figure 4.6.

Figure 4.6 Framework of the Additive Nature of Examining Engagement in Games

Viewed in a different light, we can also look at the engagement generated by entertainment games by showing how the elements interact. Genres help players understand what mechanics and motivators will be included in the game, setting expectations and priming their experience. Within the game, then, those mechanics are designed and presented in a manner that will help to fulfill one or multiple motivators that drive the player's engagement and ultimately their ability to enter the flow of the game (shown in Figure 4.7).

Figure 4.7 Alternative View of Framework Showing Interaction Between Elements

Put simply, a player will gravitate toward genres as they find that the mechanics of those games hit on the motivators that engage them. As an example, I like first-person shooters because they allow me to prove my skills against other players as the game keeps score and ranks us; this
competition, facilitated by the leaderboard mechanics, is a prime motivator for my engagement with the game.

Motivation within the concept of play has been studied over the years in a variety of formats. One of the most famous of these is known as the 'Bartle taxonomy of player types.'39 In this taxonomy, Bartle attempts to define how players approach games, specifically Multi-User Dungeons (MUDs), the precursor to Massively Multiplayer Online (MMO) games. In doing so, two axes of play style were considered: action versus interaction, and world-oriented versus player-oriented. Based on his study, he classified motivations by the following four main motivators: achievement, exploration, socialization, and imposition. He labeled these players as achievers, explorers, socializers, and killers. He then abstracted them further by naming them after card suits (diamonds, spades, hearts, and clubs), as shown in Figure 4.8.40

Abstract | Motivation | Description of Abstraction
Diamonds | Achievement | "they're always seeking treasure"
Spades | Exploration | "they dig around for information"
Hearts | Socialization | "they empathize with other players"
Clubs | Imposition | "they hit people with them"

Figure 4.8 Bartle's Taxonomy of Player Types41

Over time, Bartle's taxonomy has been studied and expanded. In fact, in 1999, a Bartle Test of Gamer Psychology was created by Erwin Andreasen and Brandon Downey.42 Bartle himself added a third axis to show the degree of explicit or implicit participation that involves the player, resulting in subtypes for each of the main four types (Figure 4.9).43

Type | Sub-Type | Description
Achiever | Planner (explicit) | Sets a goal and aims to achieve it
Achiever | Opportunist (implicit) | Finds and sets goals as they discover them
Explorer | Scientist (explicit) | Methodical in acquisition of knowledge
Explorer | Hacker (implicit) | Intuitive understanding of the world
Socializer | Networker (explicit) | Assesses who is worthy of companionship
Socializer | Friend (implicit) | Enjoys others' company
Killer | Politician (explicit) | Goal is to get a good reputation
Killer | Griefer (implicit) | Goal is to get a bad reputation

Figure 4.9 Bartle's Expanded Taxonomy of Player Types44
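Bartle's two axes lend themselves to a simple lookup. The sketch below shows one way a game might map already-computed axis scores to the four base types; how the scores themselves are derived (surveys, telemetry) is outside this example, and the zero threshold is an assumption.

```python
# A small sketch of Bartle's two-axis mapping to the four player types.
# The axis scores are assumed to come from a prior survey or telemetry step.
def bartle_type(acting_score: float, player_oriented_score: float) -> str:
    """acting_score > 0 means the player prefers acting over interacting;
    player_oriented_score > 0 means oriented toward players over the world."""
    if acting_score > 0:
        return "Killer" if player_oriented_score > 0 else "Achiever"
    return "Socializer" if player_oriented_score > 0 else "Explorer"

print(bartle_type(acting_score=0.7, player_oriented_score=-0.4))  # Achiever
```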

As research into games continued, researchers attempted to quantify motivation by different scales, building on Bartle's framework. In fact, Nick Yee surveyed 30,000 MMO players to develop and expand how to categorize players and their motivations in virtual worlds.45 Based on his research,
Yee broke down the motivations for players into the following factors: relationship, manipulation, immersion, escapism, and achievement. What is interesting about the study is that Yee also found that MMOs fostered meaningful relationships and emotional investments and facilitated skill acquisition and transfer. In fact, the multi-user aspect of the MMO resulted in the transfer and development of skills like leadership or social skills that most players would never have thought about when thinking about playing a game. We'll return to this topic of the unique aspects of multiplayer environments at the end of this chapter. Over time, Yee expanded and redefined his categories, similar to how Bartle expanded his own taxonomy, and Figure 4.10 shows a more current view of the motivations that can be examined.

Action: Destruction, Excitement
Social: Competition, Community
Mastery: Challenge, Strategy
Achievement: Completion, Power
Immersion: Fantasy, Story
Creativity: Design, Discovery

Figure 4.10 Categories and Sub-Categories of Motivation in Gaming46

Whether the motivations of players are classified by Bartle, Yee, or other frameworks, a successful entertainment game will engage the player by appealing to their motivations. The ability of games to fulfill these motivations via carefully designed mechanics and interactivity makes them a prime model with which to design and influence future learning games. If learning can be fun, won't we learn more? Maybe. There are a few other components to consider, though, like how people learn and how designers present the learning content.

UI and UX Designs: Case Study

There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods.

Mark Weiser47

As chronicled in the previous chapter, there was a rush in the early days of learning games to combine “what games do well” with “what education does well,” but the results were mixed and led to what Scott M. Martin, in the prior chapters, characterizes as the first learning game winter. Part of
what we'll see in these case studies is what Mark Weiser stated so well back in 1991: in order for games or technology to be effective, they need to be user-friendly.

Earlier in this chapter, we approached the idea of entertainment games through a framework of looking at their meta-design (genres, mechanics, and motivations). But these elements of the game are only a part of the framework of learning that we need to examine to maximize the ability of learning games to succeed. The next factor to examine is the input senses; i.e., what the player sees, hears, touches, etc., as they participate in the virtual world. When referring to how this occurs in games and other mediums, the terms UI, UX, and UI/UX are thrown around; each of which, much like game play and game mechanics, describes different aspects of how users gain and interact with information.

User interface, or UI, is the simplest and most common terminology applied to the way in which users interact with a game. When describing and designing the UI for a game, a developer needs to consider both the methods and the interfaces available to the user. Figure 4.11 outlines what is typically included in each.

Type of UI | What it Means | Examples
Interfaces | What information and options the game gives the player. | Inventory screen, maps, menus, scores, information pop-ups, etc.
Methods | How the player physically accesses said information and options. | Keyboard, mouse, joystick; clicking, scrolling, etc.

Figure 4.11 Types of UI Elements

User experience, or UX, is a more comprehensive field. According to cognitive scientist Don Norman, who coined the term:

'User experience' encompasses all aspects of the end-user's interaction with the company, its services, and its products.

Don Norman48

As you can imagine, that's a very holistic approach to defining user experience. In games it is typically defined within the context of the whole of the product versus the whole of the company. In its simplest form, UI is a subset of the UX of a game, and both must be examined properly to ensure that gamers are being given the right information in the right way at the right time in order to create an engaging experience.


Sadly, UI and UX have traditionally been the stepchildren of the games industry, handled by artists or designers rather than dedicated personnel.49 This developer-centric approach led to some abject failures and some accidentally triumphant designs, but it was based on intuition and gut feeling more than science and research. Over the last few decades, though, developers have started to realize the importance of crafting a UI that is not only usable but that appeals to the many aspects of what they would later come to embrace as UX. In a similar fashion, learning games need to embrace not only a learner-centric design philosophy but a gamer-centric philosophy, as well. Building on UX as a method of design, Peter Morville (founder and president, Semantic Studios) created a honeycomb visual (Figure 4.12) that shows the dimensions to consider when evaluating effective UX design.50

Figure 4.12 UX Honeycomb by Peter Morville51 (seven facets: useful, usable, desirable, findable, accessible, credible, and valuable)


Others have adapted this into frameworks for learning, tools, and even games, much to the surprise of the author. At the core, though, we can really identify how UX helps in the cognition process. Earlier, when talking about cognitive load and identifying how different types of load can affect your ability to learn, we saw that the extraneous factors in cognitive load can increase or decrease your cognitive ability. These extraneous factors are exactly the areas that UI and UX design address. How do we make a game fun by giving the player "the right information at the right time with the right ability to use it"? When designing for the player, a good UI/UX designer will take into account a plethora of issues and make a ton of decisions over the course of an iterative game development cycle. These include the bare basics noted earlier in the UI's methods and interfaces, but also topics like the following:

• Level of detail in games
• Layout of information in games
• Presentation of information in games
• Perfect versus imperfect information
• Implicit versus explicit information
• Intrinsic versus extrinsic rewards

There are scores of books on UI/UX and studies done on the topic, so let's look at some examples that help demonstrate what games have done well and done poorly, and how we can learn from both. When examining UI/UX, though, it does help to have a framework, and Celia Hodent, former director of UX at Epic Games (creator of the aforementioned Fortnite), defines one in her excellent book, The Gamer's Brain: How Neuroscience and UX Can Impact Video Game Design.52 In Hodent's framework, there are two main buckets in which we can address the UX: usability and engage-ability. Usability is defined by pillars such as signs and feedback, clarity, form follows function, consistency, minimum workload, error prevention/recovery, and flexibility. Engage-ability is defined by motivation, emotion, and game flow. Keeping those domains in mind, let's look at UI and UX in relation to a number of specific games.

Dead Space

One of the common questions that UI and UX designers must ask themselves is how to present information to the player in a way that promotes engage-ability and doesn't negatively affect usability. Fagerhold and Lorentzon looked at information by examining whether or not it was diegetic. By
examining if a UI element is part of the 3D game space (versus a layer on top of it) and whether or not it exists in the fictional game world (diegetic or non-diegetic), they could help to define how to visualize said information in ways that made the UX more immersive.53 With that in mind, the creators of Dead Space (Electronic Arts, 2008), a sci-fi horror-themed third-person shooter, decided to go all in on diegetic display of information. The game's lead UI designer explained the philosophy as 'diegetic by design and implementation,' meaning that all elements appear in the context of the fiction as well as to the audience.54 What this means is that crucial information for the player, like player health, oxygen levels, energy levels, maps, or even ammunition, was woven into the player avatar and in-game models.

For example, the main character in Dead Space has unique full-body armor that protects them from harm, provides limited oxygen, and lets them do all kinds of neat sci-fi things. Rather than just have an overlay on the screen showing your health, oxygen, or energy for your items, the developers created lights on the armor that seemed part of the design yet provided the information needed at a simple glance. Such a simple change meant that the player was never pulled out of the action by non-diegetic information sources. They could concentrate on being immersed in the creepy and scary space scenes, and this worked very well. Dead Space went on to become a trilogy of games that continued this philosophy, with some minor changes based on player feedback through the entire series.

Super Mario Brothers

Good old Mario first made his debut as the hapless hero of the hit arcade platformer Donkey Kong (Nintendo, 1981). However, one of his most seminal games is the classic Super Mario Brothers, which made its debut in 1985 on the Nintendo Entertainment System (NES). It has all the traditional elements of a platformer of the era and a very usable and efficient UI, but what really sets up its status as a classic is the user experience of the game. Scores of books could be written55 about Mario and even just this one Super Mario Brothers game, but we're going to concentrate on one aspect in regard to UX: progression of information.

Super Mario Brothers has a simple concept. The princess has been kidnapped, and the player must progress through the game to save her. Doing so, they can use the joystick and buttons to jump, crouch, and run through a variety of challenges. Rather than tell the player didactically about the game mechanics, however, Super Mario Brothers used its main menu screen to showcase the avatar in action and thereby give examples of what can be done upon playing (for example, the player can jump on the enemy Goomba to propel themselves
over environmental obstacles). No cutscenes or exposition were necessary. Beginning in the first level, 1–1, the game rewards players for testing the controls, exploring, and trying new things, but it doesn't require them to master these to continue. It doesn't overwhelm the player. The game introduces new mechanics, new enemies, and new powers at set points as players finish each distinct world. In fact, they can even return to previous worlds to find and do things that are discovered in later game play, giving the game even more staying power than others of the time. This progression of information and how it's doled out is almost a master class in game design. Every level builds on the last, and this keeps the player moving forward and encourages backtracking in order to move forward. By defining the 'beats' of the game in this manner, SMB promotes flow and ensures that players are learning the right info at the right time.

Assessment Engines and Game-Based Learning Evaluation

What is best about the best games is that they draw kids into some very hard learning. Did you ever hear a game advertised as being easy? What is worst about school curriculum is the fragmentation of knowledge into little pieces. This is supposed to make learning easy, but often ends up depriving knowledge of personal meaning and making it boring. Ask a few kids: the reason most don't like school is not that the work is too hard, but that it is utterly boring.

Seymour Papert56

Noted educator and mathematician Seymour Papert does a great job providing a colloquial look at the idea of balancing cognitive load and challenge to achieve effective learning. With that in mind, and after looking at how games promote flow (Chapter 3), how cognitive load theory requires a balancing act to promote learning (Chapter 4), and the user experience that a learning game needs to support all of the preceding, there is one final part of the equation to address: are the players learning?

Everyone reading this book has been assessed at some point in their formal or informal educational careers. For as long as humans have passed along stories and knowledge, there have been ways to ensure that information was retained and could be used. It began with simple preservation. Stories were passed on around a campfire or drawn on cave walls. Could someone memorize or access the information if needed? From there, it progressed to understanding and utilizing the knowledge. Eventually we began to realize that, to truly master a topic, we should be able to analyze it and create new works based on that knowledge. This transformation of learning is captured famously by Bloom's taxonomy, as seen in Figure 4.13.


Figure 4.13 Bloom's Taxonomy57 (pyramid, from base to top: remember, understand, apply, analyze, evaluate, create)

As education became more formalized, the ability to assess the transfer of knowledge and, more importantly, critical thinking became a focus. At the end of the day, it's our primary method of grading, pun intended, how educators are performing. In schools, the simplest assessments often take the form of quizzes, essays, or even the dreaded standardized tests. When looking at each level of Bloom's taxonomy, different methods of assessment might suffice. If an educator wants to make sure someone 'remembers' something, true or false, multiple choice, or even short answer questions might be perfect for gauging this, but to understand if someone can apply what they've learned, they need to engage in more open-ended assessments. For good instructional designers, it's not just about what they ask when assessing learning outcomes, but when and how.

Now complicate that process by introducing a medium that is new and engaging to the learner: how do you introduce assessments to learning games without breaking the flow or introducing cognitive overload? A designer might be tempted to just resort to the tried and true methods noted earlier and include them in the game. Others might try to shoehorn the concepts and questions into the game play itself (remember Math Blasters?). Good learning games should strive to bring the learning and assessment inline, but often in an indirect manner that the learner is oblivious to. The good news is that games are a digital medium and as such have the capability to provide a tremendous amount of data about a player. Data analytics and metrics can be generated, stored, cross-referenced, timestamped, and even replayed if desired in a game. The trick, though, is determining what to record and how it can be used to assess a player.
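As a rough illustration of what 'determining what to record' can look like, the sketch below logs each learning-relevant encounter as a structured event and aggregates evidence per concept over time rather than judging a single attempt. The field names, thresholds, and mastery rule are assumptions for illustration, not a prescribed assessment engine.

```python
# A sketch of recording game telemetry for later assessment.
# Field names, thresholds, and the mastery rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningEvent:
    concept: str            # e.g. "levers" or "order of operations"
    success: bool
    seconds_on_task: float
    errors_before_success: int

@dataclass
class LearnerLog:
    events: List[LearningEvent] = field(default_factory=list)

    def record(self, event: LearningEvent) -> None:
        self.events.append(event)

    def shows_mastery(self, concept: str, window: int = 3) -> bool:
        """Judge the trend across repeated encounters, not one timing."""
        history = [e for e in self.events if e.concept == concept]
        if len(history) < window:
            return False                      # not enough evidence yet
        recent = history[-window:]
        all_succeeded = all(e.success for e in recent)
        improving = recent[-1].seconds_on_task <= recent[0].seconds_on_task
        few_errors = all(e.errors_before_success <= 1 for e in recent)
        return all_succeeded and improving and few_errors

log = LearnerLog()
log.record(LearningEvent("levers", True, 600.0, 4))
log.record(LearningEvent("levers", True, 90.0, 1))
log.record(LearningEvent("levers", True, 45.0, 0))
log.record(LearningEvent("levers", True, 30.0, 0))
print(log.shows_mastery("levers"))   # True: repeated success with improving times
```

The specific fields matter less than the pattern: evidence accumulates per concept across encounters, which is exactly the question the example below raises.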


Think about this example: in order to proceed through the level, the player needs to place a box on the other end of a seesaw so that the far end will be raised and create a ramp to the next level. By doing so, we can show that the player understands certain aspects of physics and geometry. But aside from completing the task, what other information would help us to identify if they truly 'learned' the concepts? Would knowing how long it took them to figure out the puzzle help to gauge their learning? On the surface, it seems like a legitimate measure to consider. If student A figures out the puzzle in ten seconds, but student B figures it out in ten minutes, it says something about those students, right? The frustrating answer is that this is incomplete and possibly inconsequential information. What if the student took ten minutes because they were interrupted during the play session? What if the student that took ten seconds tried the correct solution by happenstance? Did they really learn from the encounter? What teachers might want to look at instead is how the students do over time when faced with the same type of learning encounter or mechanic, and then factor in time or other metrics to get a better gauge of the same student. That brings us to the bad news: just because you can get data, it doesn't mean that data is useful. It's not all bad news, though, as games are getting more intelligent and are being programmed to understand and monitor player performance and respond to it.

So how do we assess learning in games? There are several researchers looking into this exact question. For example, the Institute for Games for Learning tried to examine how to break down mechanics into game-based, learning-based, and assessment-based to ensure that each goal is met accurately.58 Others, like the many authors in The Wiley Handbook of Cognition and Assessment Frameworks, examined assessment across a number of applications, including video games; in chapter 22, Shute and Wang examine how different constructs (elements) of learning can be examined by using evidence-centered design, where competency and skills can be examined based on actions in the videogame.59 They also propose and detail how stealth assessments can be designed based on the data that can be provided by the virtual learning environment. What we see as common, though, from any game-centered studies is that if learning is an objective, it needs to be addressed in the design of the game.

In short, when thinking about a learning game, we must examine not only the design considerations of making a game fun but also understand what additional or modified design considerations are present in a learning game. Let's take a look at a final example: Figure 4.14 shows a hypothetical design for a quiz game. On the left, it highlights the logic needed for an entertainment quiz game, and on the right, we see how it differs when we have to add in learning outcomes. It's no longer just about wanting to beat your opponent or score on the leaderboard; it's about making sure the player actually learned something. In fact, the educational game example doesn't even address how they learned the materials that are being quizzed.


Figure 4.14 Example Showing Entertainment Versus Educational Quiz Game (two parallel flowcharts: each starts, loads n quiz questions from a database, and, while n > 0, loads a question and answer, starts a timer, checks whether the answer is correct before the timer runs out, adds to the total score or displays the correct answer, increments to the next question, and ends when the questions are exhausted; the educational version adds an 'Assessment = Learning Outcomes' step tied back to the database)
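A compressed version of the educational branch of Figure 4.14 might look like the following; the question data, the 30-second timer, and the way outcomes are written back are simplified assumptions rather than a complete engine.

```python
# A condensed sketch of the educational quiz loop from Figure 4.14.
# Question data, scoring, and outcome storage are simplified assumptions.
import time

questions = [
    {"prompt": "2 + 2 = ?", "answer": "4", "outcome": "addition"},
    {"prompt": "3 * 3 = ?", "answer": "9", "outcome": "multiplication"},
]
outcome_record = {}   # stands in for the assessment database in the figure
total_score = 0

for q in questions:
    start = time.time()
    response = input(q["prompt"] + " ")
    elapsed = time.time() - start
    correct = response.strip() == q["answer"]
    if correct and elapsed < 30:       # answered correctly before the timer ran out
        total_score += 10
    else:
        print("Correct answer:", q["answer"])
    # The step the entertainment version lacks: map each result to a learning outcome.
    outcome_record.setdefault(q["outcome"], []).append(correct)

print("Score:", total_score)
print("Outcome mastery:", {k: sum(v) / len(v) for k, v in outcome_record.items()})
```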

Thinking about it a different way, Figure 4.15 presents some questions a designer might need to answer about each game.

Entertainment Game: Is it fun? Design in scope? Cost to develop? Will people play it? Will people buy it? How much will they pay?

Serious Game: Is it fun? Design in scope? Cost to develop? Will students play it? How do students learn answers/improve? Are we meeting learning goals? Does assessment prove learning? Will teachers embrace it? Stand-alone or dependent on curriculum?

Figure 4.15 Design Questions for Hypothetical Quiz Game Comparison

As a final thought on the topic, following are a number of ways that learning games can potentially disguise or embed assessment into the game itself and still be able to pull the metrics and determine whether learning outcomes were met.


Examples

Instead of just giving math problems, require the user to use math concepts in order to complete tasks. Ex. If crafting one item needs ten resources and we require the user to craft ten of said item, we can start to see that they understand how multiplication and/or addition work as they strive to meet and fulfill those recipes.

Instead of just teaching vocabulary or languages, embed the lessons into the game play so they are needed to proceed. Ex. A trilogy of games that teach the Japanese language disguised the lessons by making it so the Japanese letters, words, and phrases are 'spells' that the player needs to know to defeat monsters that are attacking the town (Figure 4.16).

Instead of just asking learners to repeat information, show they can apply it by creating a situation that requires it.

Figure 4.16 Screenshot from Learn Japanese to Survive! Kanji Combat60 (River Crow Studio, 2018)


Ex. The virtual reality game Tablecraft (Not Suspicious LLC, 2020) is being developed to teach learners about the elements and chemistry in a quirky science fiction environment. Instead of providing formulas, it lets the player figure out how things break down or combine through trial and error.

Principles of Designing Successful Learning Games

You have to learn the rules of the game. And then you have to play better than anyone else.

Albert Einstein61

Finally, let's look at policies and principles to consider when designing game-based solutions today, regardless of subject matter, that will better engage the learner and ensure that learners do not face cognitive overload. Richard Mayer has published a number of principles over the years that have guided instructional design, primarily around the use of multimedia in e-learning.62 From these, we can identify a number that prove true in both learning and games and that should be used when thinking about how to make learning successful in game-based solutions. The following are the eight principles we will discuss further:63

1 Spatial contiguity
2 Temporal contiguity
3 Coherence principle
4 Modality principle
5 Redundancy principle
6 Pretraining principle
7 Signaling principle
8 Personalization principle

Spatial Contiguity

Put simply, this principle states that people will learn better when corresponding words and pictures are presented near rather than far from each other on the screen. Take a look at the example in Figure 4.17.

Figure 4.17 Example of Two Ways to Demonstrate How Angles in a Triangle Relate (example 1 shows triangle ABC with the values A = 45º, C = 45º, and the formula B = 180º - A - C listed away from the figure; example 2 labels 45º directly on angles A and C and places 180º - A - C at angle B)

In example 1, the information for the angles and the information to solve for angle B are not presented in line with the visual, violating the spatial contiguity principle. In example 2, the same information is provided in line with the visual and is easier for a learner to put together. Like many of the principles that will be discussed here, the goal behind this is simple to see in the examples. By reducing the distance between elements (text and visuals, for example), the brain spends less time correlating the two inputs and less time evaluating other alternatives. In gaming, we start to see this take place by putting information closer to the objects that are being detailed (Figure 4.18).

Figure 4.18 Example of Enemy Health Location in a User Interface (example 1 places the author's health (1/100) in the top left and the Evil Crab's health (50/100) in the bottom corner of the screen; example 2 places the Evil Crab's health directly next to the enemy in the center of the screen)


In example 1, the information for the Evil Crab is in the bottom right of the screen, causing the player to shift their attention between three different elements: player health (top left), the enemy itself (middle), and enemy health (bottom right). In example 2, we reduce the cognitive load on the player by giving them the enemy information in the center of the screen, where most of their attention is already focused.
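In engine terms, the difference between the two examples usually comes down to where the health label is anchored. The sketch below assumes a hypothetical world_to_screen projection helper; the function, offsets, and coordinates are placeholders for whatever the actual engine provides.

```python
# A sketch of anchoring an enemy's health label near the enemy on screen
# (example 2) instead of in a fixed HUD corner (example 1).
from typing import Tuple

def world_to_screen(world_pos: Tuple[float, float, float]) -> Tuple[int, int]:
    """Placeholder projection; a real engine supplies its own camera transform."""
    x, y, _z = world_pos
    return int(400 + 50 * x), int(300 - 50 * y)

def enemy_health_label_position(enemy_world_pos: Tuple[float, float, float],
                                label_height: int = 24) -> Tuple[int, int]:
    """Place the label just above the enemy itself (spatial contiguity)."""
    sx, sy = world_to_screen(enemy_world_pos)
    return sx, sy - label_height

CORNER_HUD_POSITION = (760, 560)   # example 1 ignores the enemy position entirely

print(enemy_health_label_position((1.0, 2.0, 0.0)))
```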

Temporal Contiguity

The second of the contiguity principles, this principle states that corresponding words and pictures should be presented simultaneously rather than successively (Figure 4.19).

Figure 4.19 Example Showing Presentation of Information Sequentially Versus Concurrently (example 1 uses two slides: the first labels the brain regions A through D, and the second lists A as the frontal lobe, B as the temporal lobe, C as the parietal lobe, and D as the occipital lobe; example 2 uses a single slide that names the frontal, temporal, parietal, and occipital lobes directly on the diagram)

Imagine being taught about parts of the brain in a class where the teacher tried to use example 1; seeing the information sequentially makes it harder for the learner to put the concepts together. In example 2, the information is presented in the moment, allowing learners to understand; it also happens to correct for spatial contiguity. For games, this immediacy of information is extremely important. In fact, we can see it in action when talking about reward mechanics. Giving a player tally boards at the end of a level or match that call out rewards and scores is very effective. They serve to cement the action that took place during the game. But if we also call out when notable tasks are completed, rewards are met, or the like during a game or level, the player's behavior will be reinforced by the positive notifications. A great example of this in gaming is the achievement system, popularized by the Xbox and present in most major game platforms today. Achievements are tasks that can be completed in the game to unlock a record of your achievements and gain points for your overall gamer score. In some games, the tasks are simply things that happen during the game, like completing a level.


In other games, they are intricate and inspire exploration and experimentation. Achievements unlock when the player performs the task or series of tasks required and display onscreen as a small notification (Figure 4.20).

Figure 4.20 Congratulations! You Got an Achievement (a pop-up toast reading 'ACHIEVEMENT UNLOCKED: Learned About Temporal Contiguity')

To ensure the gratification of the moment, it is imperative to notify the gamer about an achievement at the time of their accomplishment. When delayed, the same information lacks the same punch and lacks context. As an avid gamer, hearing the ding of a rare achievement popping and seeing it light up on the screen is a true high and makes me go after more of the same. There have been times when I've attempted some difficult achievements and the system delayed the fulfillment and recognition of the reward, and it frustrated, confused, and distracted me from my enjoyment because I was stuck wondering if I did it wrong or if the system or game was broken, all of which becomes a negative reinforcement loop if it continues.
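The timing issue described here is essentially about when the notification event fires. A minimal event-driven sketch follows; the event names, rule table, and toast function are invented for illustration.

```python
# A minimal sketch of firing an achievement toast at the moment the
# qualifying event happens, rather than batching it for the end of a level.
from typing import Callable, Dict, List

class AchievementSystem:
    def __init__(self) -> None:
        self.unlocked = set()
        self.rules: Dict[str, List[str]] = {
            "boss_defeated_no_damage": ["Untouchable"],
            "level_completed": ["Finisher"],
        }

    def on_event(self, event_name: str, show_toast: Callable[[str], None]) -> None:
        """Unlock and announce immediately, preserving temporal contiguity."""
        for achievement in self.rules.get(event_name, []):
            if achievement not in self.unlocked:
                self.unlocked.add(achievement)
                show_toast(f"ACHIEVEMENT UNLOCKED: {achievement}")

achievements = AchievementSystem()
achievements.on_event("boss_defeated_no_damage", show_toast=print)
```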

Coherence Principle

This principle states that you should only present information the learner needs. Remove any fluff and retain only simple text and visuals that are directly related to the learning topic (Figure 4.21).

Figure 4.21 Coherence Principle Examples (example 1 labels the brain's frontal, temporal, parietal, and occipital lobes but adds decorative visuals and an extra caption, 'The brain is divided into multiple lobes, all of which provide different functionality to the user'; example 2 shows only the labeled lobes)


In example 1, the image contains extra elements that provide additional information (the bottom explanation) or are fluff (the visuals of a teacher and an eye). In example 2, the information is limited to just the important aspects.64 Again, this principle attempts to alleviate cognitive overload by preventing the learner from having to evaluate and classify extraneous information. With games, this is important, as developers always want to keep the player engaged and in the 'flow.' The "right amount of information at the right time" is the key way to think about this principle. Take a look at the example in Figure 4.22 from Legends of Aria, a free-to-play MMO that will be discussed in Chapter 5.

Figure 4.22 Legends of Aria (Citadel Studios, 2020) User Interface65

As seen in the screenshot, there are many different elements and pieces of information to convey, and striking that balance in games is a juggling act. In fact, as we will learn later, giving players the ability to modify their experience is sometimes key, so the users can solve this problem themselves.

Modality Principle

This principle states that learners will learn better when information is explained via audio narration versus on-screen text, especially given complex, fast-paced, familiar content (Figure 4.23).

Figure 4.23 Example of Modality Principle (example 1 presents a block of on-screen placeholder text; example 2 conveys the same content through narration alongside the visual)

In games, this principle does seem to hold fast, but voice-overs are often not provided in all games for a number of reasons, including cost, efficiency, and time. Because of this, games have found shortcuts or other methods to meet the idea behind this principle, if not the exact practice. For example, they pace and simplify the language to make it easier to parse and digest. In fact, some games rely on made-up languages to convey a similar effect.66

Redundancy Principle

This principle states that people learn better when redundant information is eliminated. As an example, if providing visuals or animation along with narration, there is no need to provide on-screen text describing the same concepts. It seems similar to the modality principle, but they come at the same problem from different angles. In the modality principle shown earlier, we choose narration because that avenue of providing information is better (narration is better than text); in this principle, we are saying that rather than just include narration, eliminate the text, because providing multiple inputs to process may overload a viewer. Although this is true in most cases, including in games, it is important to remember that another principle is to ensure that the game allows for differences in player learning ability. To that end, if following the redundancy principle, it's fine to have a cut-scene where avatars hold a conversation, but the game should allow for the option to enable captioning for those that require it.


As an example, The Last of Us Part II sets a new bar for the options available across the senses, allowing a variety of text, visual, and audio prompts based on each user's preferences or accessibility needs.67

Pretraining Principle This principle deals with the order of information. For learners to learn effectively, it is essential to ensure that they have the required information to evaluate key concepts. Think back to the example we gave earlier about elements of interactivity and schemas. In order to solve those viral math challenges, you need to know about numbers, operations, and, most importantly, the order of operations, or PEMDAS. The pretraining principle would teach how to solve a riddle like this in that order, instead of starting with the problem and trying to work backwards. In games, this is important because developers want to ensure that the player always feels comfortable while learning the rules and tools of the world. It is extremely important in the context of serious games, since the player is not only learning specific concepts and knowledge but may also be deluged with fictional elements as part of the virtual construct. If you think back to the Super Mario Brothers example earlier in the chapter, we can see that it does this well, showing the player the proper level of information to propel them forward and giving out information before they need it to be effective.
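
A simple way to encode "information before it is needed" is to gate each challenge on the concepts the player has already been shown. The level and concept names below are invented for illustration and are not taken from Super Mario Brothers or any shipped game:

    # Each challenge lists the concepts it assumes the player has already met.
    PREREQUISITES = {
        "basic_jump_gap":   set(),
        "enemy_stomp_pit":  {"jumping"},
        "fire_flower_maze": {"jumping", "stomping", "power_ups"},
    }

    def next_unlocked_challenges(taught_concepts, completed):
        """Offer only challenges whose prerequisite concepts have all been taught."""
        return [
            name for name, needs in PREREQUISITES.items()
            if name not in completed and needs <= set(taught_concepts)
        ]

    # next_unlocked_challenges({"jumping"}, set())
    # -> ["basic_jump_gap", "enemy_stomp_pit"]  (the maze waits until power_ups are taught)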

Signaling Principle This principle states that learners will learn more effectively when signals are added to highlight and provide context for the essential material. Signaling what is important can be done with highlighting, underlining, arrows, or callouts of all sorts. This is used in games to great effect, often because the amount of information available to players can be tremendous. If something is important, multiple cues can ensure that a player doesn't miss essential information, opportunities, or, worse yet, a warning of potential disaster. In fact, this concept applies when fighting monsters in games. If a monster suddenly and without warning attacks the player and does tons of damage, they will be annoyed; but if the monster telegraphs its attacks through its animations (for example, heaving its big sword back and over its shoulder in order to launch a massive smashing attack), skilled players will see the signal and respond accordingly.
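
In code, a telegraph is usually just a wind-up phase inserted between the decision to attack and the damage itself, with the cue (animation, sound, ground marker) played at the start of the wind-up. The function and cue names below are hypothetical, and the blocking sleep stands in for what a real game would schedule on its update loop:

    import time

    def boss_smash_attack(play_cue, apply_damage, windup_seconds=1.5):
        """Signal the attack, give the player time to react, then resolve it."""
        play_cue("heave_sword_over_shoulder")   # the visible/audible signal
        time.sleep(windup_seconds)              # reaction window (simplified; a real game uses its frame loop)
        apply_damage(radius=3.0, amount=120)    # only now does the smash actually land

    # boss_smash_attack(play_cue=print,
    #                   apply_damage=lambda radius, amount: print("smash", radius, amount))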

Personalization Principle Put simply, this principle states that learners will engage better and learn more effectively if information is provided in a natural and conversational tone. Incorporating an agent to deliver the information can increase engagement as well (Figure 4.24). In examples 1 and 2, the language is stiff and formal. In example 1, it is simply presented to the user; in example 2, an avatar is included, which helps, but it still feels very formal. Example 3 uses more natural language and a less formal avatar to help the user feel more comfortable.

Figure 4.24 Examples of Personalization Principle (examples 1 and 2: a formal "Greetings."; example 3: a casual "Hey Xavier!")

In the gaming world, this is applied primarily where narration or immersion is deemed necessary, even though the principle itself doesn't differentiate between use cases. It is good practice to think about how to communicate any information, whether it's story, concepts, or just instructions, in a manner that is easy for the player to follow. As with the other principles here, the key is ensuring everyone can follow along while balancing cognition. Going back to the UX section earlier, think about whether that information can be presented diegetically within the world; if so, an agent, avatar, or non-player character (NPC) might be an ideal way to use this principle. In summary, these principles all come back to the earlier UX discussion about reducing extraneous cognitive load by ensuring that how information is presented doesn't come into conflict with what is presented.

Multiplayer Games and Learning

"HE DIED" "LB is dead!!!"
– Ultima Online players on the death of Lord British68

In 1997, shortly before its launch, Ultima Online (UO) was running a stress test after months of preparation. Ultima Online was one of the first Massively Multiplayer Online Role-Playing Games, also known by the abbreviation MMORPG or simply MMO, to come online, and it became the first to popularize the genre.69 We'll delve more into the history of multiplayer environments in a moment, but for now this minor bit of history is needed to give context to the quote and its relevance. MMOs allow large numbers of gamers to come together in a world, each represented by an avatar, and interact within the constraints of the game world. Not only was UO a new breed of connected game world, it was also what became known as a sandbox game. It gave players the ability to develop their own game play, which has become known as emergent game play. Rather than being handed a linear path, players were given rules and tools and set free in the world. In all previous incarnations (both in UO and in previous Ultima games), the avatar of the game designer Richard Garriott, known as Lord British, was invincible. If it's your game, you've got to get some sort of perk, right? But what happened at that stress test surprised even the developers. Through a confluence of events, Lord British's character was not set to invulnerable, the guards that prevented looting were not present, and a player was devious enough to see what they could get away with. They stole a fire field scroll from another law-abiding player and cast it for fun. One of the other developers' avatars just laughed at the attempt, but Richard Garriott found his avatar face down in the dirt, slain by the errant player, and history was made. Lord British had to be revived by another developer, and the developers then summoned demons. People ran amok and chaos ensued but, most importantly, everyone had a blast.70 What has always stuck with me about this story, though, is not just the absurdity of it but the fact that what happened was truly only possible with
the magic of the MMO. The whole situation reminds me of a classroom. The players were the students, learning the world and rules of the game. The developers were there as instructors, providing a virtual classroom and guiding and developing the world for the players. Just as in physical classrooms, though, good students will interact and push the boundaries. New discoveries will occur and, if all goes well, everyone learns from it and is engaged by it, including the teacher. But how do we capitalize on this idea of bringing learners together in a digital medium? Let's examine how we can leverage the ability of digital gaming, serious or entertainment, to engage multiple users in a manner reminiscent of classrooms by utilizing multiplayer learning environments. To do this, we will briefly examine the negative and positive aspects of massively multiplayer online (MMO) virtual worlds.

Multiplayer Modality Humans are by nature herd animals. Our culture is designed around passing along our knowledge, celebrating our accomplishments, and making a lasting impression on our world, even if this is limited to our fellow humans. We all remember the teachers who impacted our lives the most. We also recall the fellow students who were part of our learning journey over the years. For good or bad, our whole life is a multiplayer experience. Because of this, it's only natural that we look to capitalize on it to achieve better learning. Games in general have always embraced multiplayer modes, so why should we limit our learning games to a single-player experience? While single-player games can be very effective, we also learn by seeing others succeed and fail alongside us. In fact, you may recall that a number of the motivators we talked about earlier require multiple people to work effectively (competition, collaboration, achievement, etc.). Although it might be fun to beat our own high score for personal achievement, there is something inherently more engaging about beating others and getting better in the process. Think about the following scenario: you want to learn how to play tennis and become better. You could read about the topic and set up an area where you can bounce a ball against a wall to practice by yourself, but at some point you need a trainer or partner to practice against. But will you just play the same person over and over? No, you need to increase your skill, so you seek out others to play against. You register for tournaments, you start to apply yourself at different levels of competition, and soon you are at Wimbledon. Well, maybe. What's important is that to learn and become better you need people, lots of them. Maybe not all
at the same time, but you need them. Plus, isn't winning better with an audience? So instead of concentrating on learning games where students play and learn alone, with only game-controlled non-player characters or agents as companions, what if we could invite real people into a shared space and play together? MUDs (Multi-User Dungeons) were some of the first games to really dive into this idea of a multi-user experience. Similar to the pen-and-paper role-playing game Dungeons and Dragons, MUDs allowed players to 'log in' and enjoy a typical fantasy world game together in real time. As technologies evolved and the internet became pervasive, the capacity to put players together increased, expanding from a handful of players to dozens, to hundreds, and eventually to thousands and beyond. As mentioned earlier, one of the first truly 'massive' multiplayer online games was a role-playing game that evolved from a single-player series called Ultima. Called, appropriately enough, Ultima Online, this first true MMO forged a community of players who could cooperate or compete against each other in a sandbox environment. Ultima Online gave players the ability to fulfill their motivations amongst many other people with similar motivators, which proved immensely compelling, as we saw in the earlier Lord British snafu. Over time, many companies have embraced the MMO genre and improved on how they allow gamers to explore, fight, and interact with increasingly involved and evolving worlds. Some of these have become part of our general popular culture as well, World of Warcraft being a notable example. Trying to list all of the MMOs that have paved the way would be time-consuming and beside the point; MMOs have proved their staying power by capitalizing on community and socialization motivators.71 In fact, MMOs' success in fulfilling this primal need for community among gamers has led to mechanics and technology from these games being incorporated into games that traditionally had been solo or simply multiplayer experiences. For example, the popular sci-fi shooter Destiny incorporated a number of mechanics intentionally lifted from the MMO genre, including hub zones (where players can meet, chat, and join groups), raids, and chat features. MMOs sound amazing, so why wouldn't we just make all games massively multiplayer? For the same reason that we need to tutor students or personalize learning in some cases: people are a wild card. Having spent about 13 years of my game design career developing MMOs, I have worked in a lot of departments, but all of them revolved around the user, the player, the gamer. The player was your greatest ally and potentially your greatest enemy. In a single-player game, everything is
tailored and based on your inputs. How the world reacts, and what your character hears, sees, and does within it, is easy to understand and compensate for. However, when a game starts bringing in multiple players, developers realize that their designs need to change to account for mass participation. If your game has one player trying to kill a dragon for experience to level their character and earn some gold, what happens when the game has 100 players all trying to do the same thing at the same time? What happens when players whom we would classify as 'griefers' in the Bartle taxonomy come to play? The early days of MMOs were learning experiences for developers struggling to understand how players' motivations changed when they were playing together and how mechanics needed to compensate for multiple users banging away at them. As an example of how players tend to act differently, and how we have to account for the ways players' actions can affect each other, I offer the following anecdote from the development trenches. It is a commonly accepted maxim in the MMO development world that if you build it, players will attempt to make something phallic from it. A more colloquial term exists for this practice: TTP, or Time to Phallus.72 It basically means that, as developers, we have to ensure that any mechanic or feature introduced limits the ability of the user to create not-safe-for-work content. Giving the players the ability to stack rocks eventually yields 'PhallusHenge.' In a single-player game, no one would care, but when your world is full of other people, you have to worry about everyone's experiences together. Despite these potential problems, which can be alleviated, the promise of collaborative learning remains attractive, if elusive, to educators. As such, a lot of research into games, including a lot of what has been cited in this book, tends to gravitate toward what researchers refer to as multiplayer learning environments (MLEs). Researchers have looked at existing MMOs (Second Life, World of Warcraft, etc.) and multiplayer games (Minecraft, Roblox), as well as custom virtual worlds created specifically for their research. To give you an idea of the type of research being done in MLEs, here are some interesting examples.

MMOs as a Way to Learn About Pandemics In 2005, World of Warcraft introduced a new encounter for players who were 'high-level,' meaning they had progressed very far into the world and spent a considerable amount of time increasing their characters' power and abilities.73 During this encounter, the main monster that needed to be defeated (the 'Boss') had an attack that infected players with a disease called Corrupted Blood. The infection was designed with these high-level players in mind as a hindrance, adding a level of challenge to the encounter. They needed to manage the disease and prevent it from spreading to other players in order to complete the encounter successfully. However, the developers had not prepared properly for all contingencies. Due to the ability of characters to travel instantly from their current location to certain spots (like the major cities of the game world) and the length of the disease's effect on infected players, some players ended up spreading the disease outside of the encounter area. All of a sudden, what was supposed to be a challenging, isolated mechanic became a full-blown pandemic. It went from a minor annoyance to killing players all over the virtual continent of Azeroth.74 The event even got media attention75 because of how it mimicked a natural pandemic and the chaos it caused for gamers. In fact, it caught the attention of Nina Fefferman and Eric Lofgren, who studied the event in detail and began research into using virtual worlds to document and show how they can model real-world contagions, something that had not been possible previously due to issues of scale.76 Others have also researched this event, and it has even been used as an example in light of the current Covid-19 pandemic.77 Although the preceding is more about learning from the MMO experience, the MMO itself has been used in a variety of studies as an instrument to facilitate social learning.78
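
Part of what made Corrupted Blood attractive to epidemiologists is that a virtual outbreak can be replayed and measured like a textbook compartmental model. The following is a toy SIR-style sketch of infection spreading through a player population, purely illustrative and not tied to the actual methods of the Lofgren and Fefferman study:

    import random

    def simulate_outbreak(players=1000, initially_infected=5, contact_rate=0.3,
                          recovery_chance=0.1, days=30, seed=42):
        """Tiny SIR-style model: track susceptible/infected/recovered counts per day."""
        random.seed(seed)
        susceptible = players - initially_infected
        infected = initially_infected
        recovered = 0
        history = []
        for _ in range(days):
            # Each infected player may pass the disease to someone still susceptible.
            new_infections = sum(
                1 for _ in range(infected)
                if random.random() < contact_rate * susceptible / players
            )
            new_infections = min(new_infections, susceptible)
            new_recoveries = sum(1 for _ in range(infected) if random.random() < recovery_chance)
            susceptible -= new_infections
            infected += new_infections - new_recoveries
            recovered += new_recoveries
            history.append((susceptible, infected, recovered))
        return history

    # simulate_outbreak()[-1] gives the final (S, I, R) split after 30 in-game days.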


In summary, the MMO, adapted or created specifically as an MLE, provides a unique digital counterpoint to the traditional collaborative learning communities that have flourished throughout history. As technologies improve and the ability to be truly tele-present becomes increasingly commonplace and cost-effective, we should continue to look at how to provide collaboration and community via multiplayer learning environments. This chapter started by defining, gauging, and mitigating cognitive load, then applied a framework of learning to show how it mirrors the ways in which games need to balance engagement, sometimes referred to as flow. From there, we examined how entertainment games achieve flow by looking at their meta-design (genre, mechanics, and motivation). We then looked at games, both serious and entertainment, through the lenses of user interface and user experience design and provided a number of principles that can be applied to learning games to help balance cognitive load. Finally, we looked at a specific genre of games, massively multiplayer games, to investigate how they are uniquely suited to offer opportunities to further define learning games and game-based learning in the future. But why does all of this truly matter? An old adage, "Children are the future," seems apropos here. For better or worse, younger generations are being inducted into an increasingly online and virtual world. When I was young, my ability to expand my mind relied on being able to travel to or acquire sources of knowledge. Doing research for school back in those days meant relying on your teacher, the library, or, if you were lucky, perhaps a set of Encyclopedia Britannica. What once was a multi-volume book set on my family's bookcase shelves has been replaced by the sheer overwhelming magnitude of the internet: Wikipedia is a prime example. We are now a crowdsourced and verified multilingual global learning community, and our ability to tap into this global classroom is literally in our hands or pockets all day long. In order to compete for and appeal to the new generations growing up with these technological boons, we need to look at how to engage them. We need to learn from the best teachers, use the best resources, and start designing the best tools. Games are an amazing opportunity for us to do just that. If kids are going to play Roblox all day, let's make sure they learn while doing so. In fact, it has been found that kids will go out of their way to learn in order to do well and engage in their passions, often learning to read at higher grade levels than expected.79 As a final note, our education system is seeing a major upheaval with the current Covid-19 pandemic, one that is causing educators, parents, and students to reevaluate how to teach and learn. Distance learning, virtual environments, and online communication are all qualities inherent to games that are now becoming the norm in our education system. It's only natural that we embrace these technologies as we react and grow as a culture. Maybe we won't get to the point where everyone plugs into Virtual Reality to attend class every day like in the book Ready Player One (2011, Random House),80 but wouldn't it be amazing if we started that journey?

Notes

1 Moore, H. (2014, September 23). Why play is the work of childhood. Courtesy of the Fred Rogers Center for Early Learning and Children's Media at Saint Vincent College. Latrobe, PA: The Fred Rogers Center.
2 Wikipedia. (n.d.). Texas Instruments TI-99/4A. Retrieved from https://en.wikipedia.org/wiki/Texas_Instruments_TI-99/4A
3 They also offered magazines with cassettes that had the programs typed out for you, but they were more expensive.
4 Ziglar, Z. (2012). Born to win: Find your success code. Dallas, TX: Success Books.
5 Cognition. (2020). Oxford University Press. Lexico.com. Retrieved July 4, 2020 from www.lexico.com/en/definition/cognition
6 This was written in the spring and summer of 2020, so hopefully by the time it's out and you have read it, it will just be a memory.
7 For more fun things to do in the car, check out this list: https://thoughtcatalog.com/christine-stockton/2018/06/road-trip-games/
8 Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285.
9 Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4, 295–312. Elsevier Science Ltd.
10 Ibid.
11 www.popularmechanics.com/science/math/a28569610/viral-math-problem-2019-solved/
12 Ibid.
13 In fact, there are a number of studies on how much you can do while walking, e.g., Yogev-Seligmann, G., Hausdorff, J. M., & Giladi, N. (2008). The role of executive function and attention in gait. Movement Disorders, 23, 329–342. https://doi.org/10.1002/mds.21720
14 Paas, F. G. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84, 429–434. https://doi.org/10.1037/0022-0663.84.4.429
15 Sweller, J., van Merrienboer, J. J., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251–296. https://doi.org/10.1023/A:1022193728205
16 Ibid.; Schnotz, W., & Kürschner, C. (2007). A reconsideration of cognitive load theory. Educational Psychology Review, 19(4), 469–508. https://doi.org/10.1007/s10648-007-9053-4
17 Schell, J. (2019). The art of game design (3rd ed.). Boca Raton, FL: A K Peters/CRC Press.
18 Adapted from ibid.
19 Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911. https://doi.org/10.1037/0003-066X.34.10.906
20 McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. New York: Penguin Books.
21 Linden, D. (2011, October 25). Video games can activate the brain's pleasure centers. Retrieved from psychologytoday.com at: www.psychologytoday.com/us/blog/the-compass-pleasure/201110/video-games-can-activate-the-brains-pleasure-circuits-0
22 Or maybe you use secret codes on Netflix to unlock obscure content? www.lifewire.com/netflix-secret-codes-find-and-watch-hidden-movies-4583157
23 As an example, Aristotle's Poetics from 335 BC is credited as one of the earliest works defining genre, in his case for types of poetry.
24 Space War is often credited as the first computer game, despite Tennis for Two or OXO coming first. www.thoughtco.com/history-of-spacewar-1992412
25 Most genre lists in games differ slightly due to semantics in defining the mechanics and the merging of mechanics, so finding an exhaustive list is exhausting.
26 Entertainment Software Association. (2019). 2019 essential facts about the computer and video game industry. Retrieved from theesa.com at: www.theesa.com/esa-research/2019-essential-facts-about-the-computer-and-video-game-industry/
27 In fact, Fortnite is so popular that it drew in 27.7 million unique players to participate in an in-game concert event. Thier, D. (2020, April 28). A staggering number of people saw Fortnite's Travis Scott 'Astronomical' event. Retrieved from forbes.com at: www.forbes.com/sites/davidthier/2020/04/28/a-staggering-number-of-people-saw-fortnites-travis-scott-astronomical-event/#82371e77b41f
28 Ehmke, R. (n.d.). A parent's guide to dealing with Fortnite. Retrieved from childmind.org at: https://childmind.org/article/parents-guide-dealing-fortnite/
29 Dransfield, I. (2018, April 28). The history of Wolfenstein. Retrieved from pcgamer.com at: www.pcgamer.com/the-history-of-wolfenstein/
30 Here you can see how it works in the latest Call of Duty title, Modern Warfare. Hodgon, D. (2019, October 18). Feature: Explaining player progression in Call of Duty: Modern Warfare. Retrieved from activision.com at: https://blog.activision.com/call-of-duty/2019-10/Feature-Explaining-Player-Progression-in-Call-of-Duty-Modern-Warfare
31 Addams, S. (1990). The official book of Ultima. Greensboro, NC: Compute! Books. Quoting Richard Garriott.
32 Schell, J. (2008). The art of game design: A book of lenses (pp. 129–170). Burlington, MA: Elsevier/Morgan Kaufmann.
33 Ibid.
34 Sicart, M. (2008, December). Defining game mechanics. Game Studies: The International Journal of Computer Game Research, 8(2).
35 Wikipedia. (n.d.). Game mechanics. Retrieved from https://en.wikipedia.org/wiki/Game_mechanics
36 Ibid.
37 For those unfamiliar with Super Mario Kart, players can pick up power-ups that alter the race through their use. The blue shell, when used, would launch and home in on the player in first place and temporarily stop them. To help balance such a tactical mechanic, this power-up would only be presented to players who were losing, as a method to help them catch up.
38 Startocure. (2020, January 26). Top 10 quotes by Elon Musk | The Real Tony Stark. Retrieved from startocure.com at: www.startocure.com/top-10-quotes-by-elon-musk-the-real-tony-stark/
39 Bartle, R. (1996, April). Hearts, clubs, diamonds, spades: Players who suit MUDs. Retrieved from mud.co.uk at: http://mud.co.uk/richard/hcds.htm
40 Ibid.
41 Ibid. Summarized in table format from original text.
42 Andreasen, E., & Downey, B. (2001, August). The MUD personality test. The Mud Companion, 1, 33–35. ISSN 1499-1071. Archived from the original on August 18, 2000.
43 Bartle, R. (2003). Designing virtual worlds. New Riders, p. 145. ISBN 978-0-13-101816-7.
44 Ibid. Summarized in table format based on original text.
45 Yee, N. (2006). The demographics, motivations, and derived experiences of users of massively multi-user online graphical environments. Presence: Teleoperators and Virtual Environments, 15(3), 309–329.
46 Yee, N. (2015, December 15). The gamer motivation model in handy reference chart and slides. Retrieved from quanticfoundry.com at: https://quanticfoundry.com/2015/12/15/handy-reference/
47 Weiser, M. (1991). The computer for the twenty-first century. Scientific American, 265(3), 94–100.
48 Norman, D., & Nielsen, J. (n.d.). The definition of user experience. Retrieved from nngroup.com at: www.nngroup.com/articles/definition-user-experience/
49 As experienced firsthand through my 12-plus years in game development. Hodent also writes about it here: Hodent, C. (2018). The gamer's brain: How neuroscience and UX can impact video game design. Boca Raton, FL: CRC Press.
50 Morville, P. (2016, October 11). User experience honeycomb. Retrieved from intertwingled.org at: https://intertwingled.org/user-experience-honeycomb/
51 Ibid.
52 Hodent, C. (2018). The gamer's brain: How neuroscience and UX can impact video game design. Boca Raton, FL: CRC Press.
53 Fagerholt, E., & Lorentzon, M. (2009). Beyond the HUD – User interfaces for increased player immersion in FPS games. Gothenburg, Sweden: Chalmers University of Technology. Retrieved from chalmers.se at: https://odr.chalmers.se/handle/20.500.12380/111921
54 Tach, D. (2013, March 31). Deliberately diegetic: Dead Space's lead interface designer chronicles the UI's evolution at GDC. Retrieved from polygon.com at: www.polygon.com/2013/3/31/4166250/dead-space-user-interface-gdc-2013
55 And have been . . . for example: www.semanticscholar.org/paper/Skinhead-Super-Mario-Brothers%3A-An-Examination-of-on-Selepak/581e60a66b63bd58853d9cffb9304fb81f25314e?p2df
56 Papert, S. (1998, June). Does easy do it? Children, games and learning. Game Developer Magazine. Game Developer Conference. Retrieved from gdcvault.com at: https://twvideo01.ubm-us.net/o1/vault/GD_Mag_Archives/GDM_June_1998.pdf
57 Persaud, C. (2018, August 13). Bloom's taxonomy: The ultimate guide. Retrieved from tophat.com: https://tophat.com/blog/blooms-taxonomy-ultimate-guide/
58 Plass, J., Homer, B., Kinzer, C., Frye, J., & Perlin, K. (2011, September 30). Learning mechanics and assessments for games for learning. Institute for Games for Learning. White paper.
59 Shute, V., & Wang, L. (2017). Assessing and supporting hard-to-measure constructs in video games. In Rupp, A. A., & Leighton, J. P. (Eds.), The Wiley Handbook of Cognition and Assessment: Frameworks, Methodologies, and Applications (pp. 535–562). West Sussex: John Wiley & Sons, Ltd.
60 There are several games in the series and they are all charming and engaging. http://study-japanese.net/product/learn-japanese-to-survive-kanji-combat-windows-mac-digital/
61 This is often credited to Albert Einstein, but there is some doubt about the quote, as shown here: www.barrypopik.com/index.php/new_york_city/entry/you_have_to_learn_the_rules_of_the_game#:~:text=%22You%20have%20to%20learn%20the%20rules%20of%20the,%281879-1955%29.There%20is%20no%20evidence%20that%20Einstein%20said%20this.
62 Mayer, R. (2009). Multimedia learning (2nd ed.). Cambridge, NY: Cambridge University Press.
63 Ibid.
64 This one was hard to visualize thanks to my years of design and teaching. If you want a great example, try to find an old GeoCities webpage or go to any website where the ads dominate the content.
65 Citadel Studios. (2020). Legends of Aria.
66 Kilbane, B. (2020, February 7). A history of Simlish, the language that defined The Sims. Retrieved from theverge.com at: www.theverge.com/2020/2/7/21126705/the-sims-simlish-language-history-20th-anniversary-game
67 Sony. (2020, June 9). The Last of Us Part II: Accessibility features detailed. Retrieved from playstation.com at: https://blog.playstation.com/2020/06/09/the-last-of-us-part-ii-accessibility-features-detailed/
68 Olivetti, J. (2015, October 3). The game archaeologist: The assassination of Lord British. Retrieved from massivelyop.com at: https://massivelyop.com/2015/10/03/the-game-archaeologist-the-assassination-of-lord-british/
69 Bartle, R. (2004). Designing virtual worlds (pp. 19–21). Indianapolis, IN: New Riders.
70 As recounted in articles like the one preceding and detailed to me by members of the development team, past and present.
71 The full list is extensive, over 200 entries, with some that have spanned decades and some that have just come and gone. https://en.wikipedia.org/wiki/List_of_massively_multiplayer_online_role-playing_games
72 In fact, it is so pervasive a concept that a recent comedy based on MMO development, Mythic Quest, featured it prominently in one of its episodes' plots. Martens, T. (2020, February 10). Have a love-hate relationship with game culture? 'Mythic Quest' is the show for you. Retrieved from latimes.com at: www.latimes.com/entertainment-arts/tv/story/2020-02-10/apple-tv-mythic-quest-game-culture
73 I can recall this incident fondly from my play days, but luckily others have documented it as well. Smith, J. (2012, August 10). Guide to the Corrupted Blood plague documentation collection. Stanford University Library. Retrieved from stanford.edu at: https://stacks.stanford.edu/file/druid:xy157wz5444/CB%20Collection%20Guide.pdf
74 Orland, K. (2008, May 20). GFH: The real life lessons of WoW's Corrupted Blood. Retrieved from gamasutra.com at: www.gamasutra.com/php-bin/news_index.php?story=18571
75 Ward, M. (2005, September 22). Deadly plague hits Warcraft world. BBC News. Retrieved from bbc.co.uk at: http://news.bbc.co.uk/2/hi/technology/4272418.stm; Sydell, L. (2005, October 5). 'Virtual' virus sheds light on real-world behavior. National Public Radio. Retrieved from npr.org at: www.npr.org/templates/story/story.php?storyId=4946772
76 Lofgren, E., & Fefferman, N. (2007, September). The untapped potential of virtual game worlds to shed light on real world epidemics. The Lancet: Infectious Diseases, 7(9), 625–629. Retrieved from thelancet.com at: www.thelancet.com/journals/laninf/article/PIIS1473-3099(07)70212-8/fulltext
77 Krotoski, A. (2020, April 11). Corrupted Blood: What the virus that took down World of Warcraft can tell us about coronavirus. Retrieved from sciencefocus.com at: www.sciencefocus.com/the-human-body/corrupted-blood-what-the-virus-that-took-down-world-of-warcraft-can-tell-us-about-coronavirus/
78 MMOs as an educational gateway drug: Steinkuehler, C. (2008). Massively multiple online games as an education technology: An outline for research. Educational Technology, 48(1), 10–21. Retrieved from jstor.org at: www.jstor.org/stable/44429539; MMOs as a method to enhance offline relationships as well as online: Snodgrass, J., et al. (2011). Enhancing one life rather than living two: Playing MMOs with offline friends. Computers in Human Behavior, 27(3), 1211–1222. Retrieved from sciencedirect.com at: www.sciencedirect.com/science/article/pii/S0747563211000057
79 Thompson, C. (2014, October 9). How videogames like Minecraft actually help kids learn to read. Retrieved from wired.com at: www.wired.com/2014/10/video-game-literacy/
80 It was also a great movie by Steven Spielberg. It's especially fun if you can remember all the pop-culture references.

5 The Virginia Serious Game Institute (VSGI) Learning Game Examples

Stephanie Kane

What is a man but the sum of his memories? We are the stories we live, the tales we tell ourselves. – Clay Kaczmarek, Assassin’s Creed Revelations1

On July 4th weekend in 2020, the Tony Award-winning Broadway musical Hamilton became available on the streaming platform Disney+.2 It was the penultimate performance of the original 2015 cast, edited to provide an authentic theatrical experience. Although this is not the first time a Broadway musical was adapted for film, nor even the first time a Broadway performance was filmed to recreate the theater experience, it was the first time mass consumers were given access to a current blockbuster Broadway musical in such a timely manner.3 Something that normally might cost hundreds of dollars per ticket was now available in the comfort of your living room for the small subscription price of a streaming service.4 This opportunity offered consumers the chance to experience Broadway art for the first time at little cost and without traveling to NYC, providing a unique platform for public discourse about musical theater, the elitism of Broadway, and the historical accuracy of the show itself.5 Writer Lin-Manuel Miranda wanted the spectacle of his Broadway musical to come from his original hip-hop influences, but he also felt compelled to accurately recreate history.


I felt an enormous responsibility to be as historically accurate as possible, while still telling the most dramatic story possible. [. . .] And when I did part from the historical record or take dramatic license, I made sure I was able to defend it to Ron, because I knew that I was going to have to defend it in the real world.6

Of course, Thomas Jefferson and Alexander Hamilton did not actually rap about the United States' financial system, but the hip-hop lyrics and rhymes of the characters could still hold educational value.7 When audiences consume entertainment, the primary purpose is recreation. However, as entertainment evolves, we find that media can be used in various ways and for multiple purposes, including education. Hamilton provided audiences with an educational experience that did not feel educational, very similar to another medium we all know well: video games. Through this interactive medium, game players can be given the unique ability to participate in and learn from events they might not otherwise get to experience. With the rise of gaming in recent years, especially among the younger generations,8 it seems games may be a most promising medium to use in the classroom.

Games as an industry have left the realm of million-dollar sales behind, boasting $120 billion in global sales for interactive media in 2019, more than double the market share of movies.9 It is not much of a surprise that such a popular medium would find its way into various industries. Interactive movies such as Black Mirror: Bandersnatch attempted to combine the medium of film with the interactivity of video games, requiring Netflix to create entirely new software for the experience.10 Businesses use video games in advertising or even as advertising; the earliest such title I can recall featured the red circle that served as a 7-Up mascot, and Cool Spot (1993) was well received on the Sega Genesis and SNES.11 Today, businesses try to increase user engagement through loyalty reward games, as Starbucks does.12 And with play being such an integral part of childhood development,13 it is only natural to use the medium of games to focus on the developmental needs of children.

Ample research has been conducted on educational games and their efficacy in terms of scholastic achievement. Improvements have been recorded such as increases in problem-solving skills,14 critical thinking skills, creativity,15 and material retention.16 One definitive, comprehensive meta-analysis of over 200 studies covering a 20-year period concluded that interactive games may lead to higher cognitive gains for certain academic subjects.17 Many educators and researchers have endeavored to merge the enjoyable elements of entertainment computer games into learning environments, attempting to define game design elements that could be used to create a more engaging and motivating learning experience.18 In his article "Toward a Theory of Motivating Instruction," Dr. Thomas Malone, professor of management at MIT Sloan School of Management, defined three things that create the element of 'fun' in computer games: challenge, fantasy, and curiosity.19 Using these elements, Malone then argues that courses and overall educational experiences should include the following:

• Clear goals that learners find meaningful
• Multiple goal structures and scoring for feedback
• Multiple difficulty levels, adjustable to the learner's skill
• Random elements of surprise
• An emotionally appealing fantasy and metaphor that is related to game skill
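
Those recommendations map naturally onto the data a learning game might expose to its designers. The following is a hypothetical sketch of what such a configuration could look like; it is not drawn from Malone's own materials, and the field and theme names are invented:

    from dataclasses import dataclass, field

    @dataclass
    class LearningActivity:
        goal: str                                             # a clear, meaningful goal
        scoring_rubrics: list = field(default_factory=list)   # multiple goal structures and feedback
        difficulty_levels: list = field(default_factory=lambda: ["easy", "medium", "hard"])
        surprise_chance: float = 0.1                          # random elements of surprise
        fantasy_theme: str = "space rescue"                   # emotionally appealing metaphor tied to the skill

        def pick_difficulty(self, learner_skill):
            # Adjust to the learner's skill by clamping an index into the available levels.
            index = min(max(int(learner_skill), 0), len(self.difficulty_levels) - 1)
            return self.difficulty_levels[index]

    # LearningActivity(goal="balance the chemical equation").pick_difficulty(2)  -> "hard"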

Games have entered the realm of education by granting learners the ability to explore complex situations in virtual worlds. Of course, using games and technology in the classroom is not a new discovery; as discussed in previous chapters, play and analogue games have had a place in learning for centuries. Electronic games have been trying to find a role in the classroom for decades, beginning with the phenomenon of ‘edutainment.’

The Fad of Edutainment and Transition to Serious Games

They're reticent . . . an educator's gravest problem today is the apathy of the students.
– Miss Waley, Harvester20

It is said the term 'edutainment' was coined by the original Imagineer himself, Walt Disney.21 However, as discussed in Chapter 3, the concept of merging education and play is older than the modern idea of school itself. Educational play, or paideia, was a part of education even as far back as Ancient Greece and continued in contextually specific forms through the ages. Seventeenth-century Czech educator Jan Amos Komensky made play part of his pedagogical system, split in two ways: "Play as a theatre performance that focuses [. . .] on dramatization of a historical event, or other educational material and play as a didactic 'joyful' method that should help to educate body or mind."22 In Edutainment, the lessons learned are often lower-order thinking skills and preplanned simple facts, with entertainment as a reward. This is in contrast to Serious Games, which facilitate higher-order thinking skills, such as synthesizing information and critical thinking. There is often no 'right answer' to a Serious Game, but a player can still be successful no matter the strategy.23


Figure 5.1 Screenshot of Number Munchers24

To understand the difference, let us look at two popular games identified as edutainment: Number Munchers (1986) and Oregon Trail (1971). In Number Munchers (Figure 5.1), the player chooses a level and is given a grid to move through and consume, or munch, the proper numbers. For example, if the requirement is 'multiples of 3,' then the player would move through the grid to munch 3, 6, 12, etc., all the while avoiding enemies to secure the highest score. The act of playing this game is entertaining to a child target market, but the focus is on the memorization of numbers, with almost no higher-level critical thinking. Compare that to another popular edutainment game, Oregon Trail, where the player takes the role of a patriarch within a family (from various backgrounds, which serves as a difficulty curve) and must travel to Oregon safely while learning about America's westward expansion in the mid-1800s. While Oregon Trail is labeled as an 'edutainment game,' it fits better as a 'Serious Game' because there are no rote facts to memorize. The lessons are the experience of the hardships of west-bound pioneers, the management of supplies, and the stress of keeping a family healthy on this historic voyage.

Beginning with The Oregon Trail educational games privilege decision making and problem solving over memorization and repetition, teaching students to see historical problems from multiple points of
view, and to understand the past as a complex reality where motives and actions are interrelated.25

Edutainment became a very popular medium in the late 1980s, with the hope of motivating students with new technology. Popular titles such as Where in the World Is Carmen Sandiego? (1985)26 used memetic songs and catchy visuals to engage the learner, and often offered an entertaining reward for completing the lesson. For example, Where in the World Is Carmen Sandiego? asks players to catch the infamous thief by investigating clues to find her whereabouts. The clues are often facts about geographical locations, which assist the player in learning about other places. This model of drilling facts, though educational, represents the fatal flaw of edutainment: a lack of higher-level critical thinking.27 As with any technological change, the adoption of computer games in the classroom has been met with some criticism. MIT educational technology researcher Seymour Papert believes that, although computer games promote cognitive development, curriculum designers only adopt computer games as a way to get young people's attention, not because they are necessarily the best way to teach.28 Other researchers, such as Kurt Squire, believe that games transcend traditional classroom instruction and provide a way to allow learners to own their learning, with a teacher on hand as a facilitator of sorts.29 Many studies show that games promote learning30 by providing situations that the average player may not be able to experience in a traditional learning environment. Positive or negative, games are now employed for various purposes in almost every market and industry. Games have permeated every facet of American culture. Video games have become the basis for television shows, movies, and fashion.31 As of 2017, 30% of K–8 teachers already used games in their classrooms,32 and those educators and researchers who choose to harness the potential of games to help teach and tutor may be at the forefront of a new, more successful, and longer-lasting transformation of educational innovation.

Overview: History, Mission, and Philosophy

People who know how to make games need to start focusing on the task of making real life better for as many people as possible.
– Jane McGonigal33

One person who noticed this trend was the lead author of this book, Dr. Scott M. Martin, now a 30-plus-year veteran of academia. As the chair of George Mason University’s Art and Visual Technology Department, Martin
noticed a lack of colleges and universities jumping on the trend of developing engaging and fun games for non-entertainment-focused markets. In 2008, Martin launched the Computer Game Design Program under the College of Visual and Performing Arts (CVPA) at George Mason University to capitalize on this trend. The program has since grown to include a graduate program, minors, and ancillary programs and events. As of 2020, it enrolled 400 students under the major, one of the largest in its host college. In 2013, Martin envisioned a new way to bring games to the forefront of education while also embracing his entrepreneurial side: opening an incubator where faculty, staff, alumni, and current GMU students could launch a serious game startup or conduct game-related research, with the economic development benefit of adding jobs to the local economy. In October 2013, the Prince William Board of County Supervisors granted $32,000 to open the Serious Game Institute (SGI) with the goals of academic research, small business incubation, and community outreach – all under the 'serious' games umbrella. Across the Atlantic, Dr. Sara de Freitas at the University of Coventry, in Coventry, U.K., was creating an international consortium of game institutes around the world to collaborate on research, and Martin felt that joining this group would lend a sense of camaraderie and collaboration to his own emerging institute. The Simulation and Game Institute would be an incubator for independent studios developing serious games, with an additional focus on applied research and community outreach. Based in Manassas, Virginia, on the Prince William Campus of George Mason University, it would additionally be the fourth international affiliate of the University of Coventry's Serious Game Institute consortium, whose other affiliates were based in Mexico, South Africa, and Singapore. However, due to various international grant and funding rules and regulations, the collaboration never really got off the ground, with a few universities shuttering their institutes or reevaluating their goals. George Mason University's Serious Game Institute left the dissolving consortium and, with additional bipartisan financial support and advocacy from the Commonwealth of Virginia, rebranded itself as the Virginia Serious Game Institute (VSGI) in 2015.34 The VSGI mission is to support Mason's Entrepreneurial and Innovation goals by cultivating and supporting Mason-founded startups, rapid prototype development, high-value knowledge job creation, and regional economic development through serious game technology discovery to improve the human condition. The VSGI offers schools, businesses, and innovators research and development assistance. A more recent project that has benefitted from association with the VSGI is the Historical Movement Archive (HMA), started by Brad Waller, an award-winning stage fight choreographer for various film productions and theater companies. Waller's goal is to create a virtual reality museum of sorts,
recording and displaying historical movement techniques and styles, such as theatrical swordplay and ancient global dance, in immersive 3D. Although Waller was knowledgeable in the subject matter, he did not know how to create a virtual 3D environment. The VSGI accepted Waller as a Scholar in Residence for the 2019/20 academic year, and CVPA helped fund a small software development team to realize the HMA mission. Using motion capture technology to record a performer's trained movements, Waller and his team then created avatars that imitate those movements. By providing three-dimensional virtual immersive environments in which to observe and control aspects of avatar placement, a player could learn about different types of historical and theatrical movements and how to perform them. There are entire languages of movement nearly lost to history, and Waller strives to continue translating and recoding them into a 3D immersive environment for future scholars and dancers to study. The VSGI also provides educational and outreach opportunities for Virginia public schools, regional area businesses, and students across the country. In 2014, the GMU Computer Game Design Program partnered with the CVPA Potomac Academy, a K–12 arts and music summer camp program, to create the Mason Game and Technology Academy (MGTA). Several years later, MGTA became a partner of the VSGI, offering internships for Computer Game Design students; weekend and summer camps in game design and technology for children ages 9–17 across the nation; and adult education and K–12 teacher-training programs. In 2017, MGTA brought courses in computer game design, studio management, and entrepreneurship to a partnership with Envision, a program that organizes educational and leadership opportunities for teens. Developed in 1985 by one teacher with a hands-on education philosophy, Envision has now grown into a company that provides interesting educational experiences for learners. Envision and its partner WorldStrides have an annual attendance of 400,000 students across 100 countries.35 In the last five years, MGTA has become the largest STEM academy in Virginia, boasting an August 2019 attendance of over 1,000 students. Lessons and workshops from pioneers in the industry are also offered at the VSGI. In 2016, Atari founder Nolan Bushnell gave a series of lectures on the future of video game technology. In 2020, Thomas Dolby, a pioneer in music and game sound, gave a lecture on his use of computers in sound, from his hit She Blinded Me With Science (1982) to the development of a live interactive soundtrack for a video game called Escape from Zombie Island. Educators, students, and industry professionals are invited to VSGI-sponsored conferences, such as Women in Technology, SheSoft, and Serious Play, to encourage diversity and provide inspiration and information to anyone interested not only in Serious Games but in tech as a whole.


The VSGI has also conducted several applied research projects. VIPER (Virtual Integrated Practice of Emergency Response), a multiplayer simulation, was developed and tested in 2017 in coordination with the Department of State as a method to supplement training for embassy security. In 2019, a prototype Augmented Reality training application was developed for the United States Army. IMMERSE was then developed as a hands-free method of providing training and in-field instruction for military medics. The final piece that makes the VSGI unique is the small business incubator for GMU-affiliated independent studios focusing on serious games. This incubator fosters entrepreneurship, economic development, and creative problem-solving with serious games. Since its opening, the VSGI has helped several businesses grow and develop. Small businesses apply on the VSGI website, and each application is presented to and voted on by an executive advisory board made up of local government, industry professionals, and investors. Once accepted, businesses must meet quarterly milestones to show growth and progress. For example, in the first quarter of residency, a business must have a business plan developed, pro forma financial projections, and the alpha stage of its product complete. These milestones are designed to assist the studio in development, toward the goal of becoming a fully independent business. Veterans of the program include Little Arms Studios, Hospital Training Games, Flyguy Interactive, and Citadel Studios; cumulatively, they have added more than 60 jobs in Prince William County.36 The following are case studies of these companies and their flagship products.

Case Study: Interactive Virtual Incident Simulator (IVIS)

People love video games because they do things they obviously can't do in real life.
– Ralph Baer37

In society there are professions that put their employees in danger, both in the performance of their duties and, potentially, in their training. This presents a challenge for trainers whose employees need to prepare for dangerous situations and tasks, but must do so safely. In these professions, even basic exercises can put a trainee at risk before they even begin their career. One of the most iconic pop culture examples of training for dangerous situations is the 'Danger Room' from the X-Men comic book series. The leader of the X-Men superhero team, Charles Xavier, created a fully simulated exercise room to train mutant superheroes in the
myriad of unpredictable scenarios common to their work. Although the situations presented in the Danger Room were designed to mimic real life, safety measures were in place to prevent the X-Men from hurting themselves before they could help others. This ingenious contraption captured the idea of taking dangerous scenarios and using technology to build a virtual experience free from any actual physical harm.38

Figure 5.2 Logo for IVIS from Little Arms Studios39

Dawn Blair-Jimenez, a firefighter, had a similar idea while on parental leave. She wanted to create a better, safer solution for firefighting and incident command training, which generally involves igniting a 'burn building' and requiring trainees to fight the live fire – a costly and dangerous proposition for folks learning their craft. She imagined it would be safer and much less expensive to run a simulation instead. In 2013, Blair-Jimenez approached George Mason University's Computer Game Design Program for a possible collaboration. Kyle Bishop and Alex Estep found themselves in the right place at the right time, forming a company while in the process of graduating with their BAs. Their company, Little Arms Studios, thought this would be a challenging first project and began work on Blair-Jimenez's idea, which became known as IVIS, or the Interactive Virtual Incident Simulator. As Baer stated, video games allow people to do things they can't do in real life; creating a training simulator would allow firefighters to train without putting themselves in danger. Blair-Jimenez and Little Arms discovered it was vital to create something that would not just place firefighters in a simple computer simulation; it had to be fluid and naturalistic, with similar but never identical incidents. The simulation had to be versatile and capable of training different aspects of the job, such as communication and fire/smoke behavior analysis. Bishop and his team spent quite a bit of time on physics, especially realistic fire behavior. "This means something as small as breaking a window or leaving a door open can have drastic consequences to the overall fire landscape."40 IVIS also needed to support firefighters nationally through a networked multiplayer experience, with trainees able to join sessions from off site or across the country.41
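
The "drastic consequences" Bishop describes are the hallmark of a ventilation-driven fire model, where each new opening changes how readily the fire spreads. The grid update below is a deliberately simplified, hypothetical sketch and does not represent IVIS's actual simulation code:

    import random

    def spread_fire(grid, openings, spread_chance=0.4):
        """One update step of a toy cell-based fire model.
        Grid cells hold 'fire', 'fuel', or 'empty'; openings are (row, col) vents."""
        rows, cols = len(grid), len(grid[0])
        next_grid = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] != "fire":
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "fuel":
                        # An open window or door feeds the fire more oxygen,
                        # so cells next to an opening ignite more readily.
                        chance = spread_chance * (2.0 if (nr, nc) in openings else 1.0)
                        if random.random() < min(chance, 1.0):
                            next_grid[nr][nc] = "fire"
        return next_grid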


During the early development of IVIS, Bishop and his team transitioned from a garage startup into a professional studio at the new VSGI, with a work space and revenue-generating contracts. "The computers, office space, and software helped us in the early days to keep our overhead costs low and focus our money and revenue on the team."42 Initially, Little Arms was excited to work on the project, but it routinely suffered from a lack of stability in the chain of command. "We developed the software for a couple of years with them and faced a number of challenges, in particular the fact that the management and leadership of the Fairfax side kept changing."43 Every change in the department and its leadership meant coinciding changes in the project, with goals and milestones shifting and evolving, all of which contributed to longer development times and challenges to production. Eventually, after incessant bureaucratic disruptions, funding from Fairfax County was depleted. Luckily, Bishop had negotiated that his team could keep the intellectual property of the product, and he spent a year marketing IVIS elsewhere before Little Arms found the support it needed right back at home. With the VSGI making introductions and helping facilitate a new relationship, "We started conversations with Prince William County about picking up the project and creating it with a proper budget and goals/milestones," Bishop recalled. "We've been developing IVIS with Prince William County since then and are still in the final polish/fixes stages before taking it to market."44 As of the writing of this chapter, IVIS is still in development and not yet available to the public. Although Little Arms no longer resides at the VSGI, the imprint of the incubator is still apparent on the company, something Bishop is incredibly thankful for.

It has truly been a great experience connecting with firefighters, learning with them in the field, sharing on their training and getting to know them. We also made a lot of great connections with great people at VSGI, both in the software space and in the rest of the team.45

Case Study: Hospital Training Game

Patients are asked not to die in the corridors.
– Receptionist, Theme Hospital46

One of the earliest ways people learn how to define themselves and relate to one another in the world is through simulative play. Pretend play begins in children as young as 18 months and is their way to make sense of how the world works.47 Dr. Sebastiaan Meijer, Associate Professor in Transport Systems at the Royal Institute of Technology in Stockholm, Sweden, stated that "Simulation games have been around for as long as, even longer than, humans. All children enact various social roles in play. Our species has even been termed Homo Ludens."48 Simulation is an imitation of a real-world system, which has a clearly defined goal, rules, and incentives. Combined with the interactivity of gaming, simulation games can train players in roles found in the real world.49

In 1997, U.K.-based company Bullfrog developed a quirky, tongue-in-cheek simulation game, mimicking and mocking healthcare systems in such a way that even children could understand a hospital's basic administrative needs, and dubbed it Theme Hospital. At the start of the game, Theme Hospital gives players a space to set up a reception area, the waiting room, and various patient rooms full of equipment to diagnose and treat virtual patients and pretend diseases, all with a cheeky sense of humor as observed in the receptionist's matter-of-fact dialogue.

This is a neat world where every ailment has its own counter, so long as you have put in money and effort. [. . .] This is healthcare as you may have imagined it worked as a kid: people are cured of invisibility or have their swollen, bloaty heads popped like balloons and then reinflated.50

The overall goal of the game is to balance resources, space, and staff to create the best experience for these ill patients, regardless of how 'unrealistic' the specifics may appear. You succeed at the game by using the same approach that you would in real life, without the severity of negative consequences. In fact, simulation is a very popular genre of gaming, and Theme Hospital has spawned a number of spiritual successors over the years, including the recent Two Point Hospital,51 and gave inspiration to many a serious game designer, including Phillip Wheeler.

Hospital administration is an industry that demands incredible personalization, but complete uniformity. "It's not that people aren't doing it," Wheeler remarks. "It's that they aren't doing it successfully."52 Wheeler started life as an industrial engineer, taking an internship in healthcare while getting an MBA at Carnegie Mellon, and in a sense never looked back. While working at Kaiser Permanente, he discovered a deficit in hospital administration training. Wheeler's primary goal became educating Kaiser leaders in operations. "Operations is hard to teach because it's not a flat subject. Truly working in a hospital is like a resource management game. Trying to teach that with PowerPoint is difficult."53 Hospitals were using board games to train their employees, and Wheeler sought to create an electronic version. It could be updated regularly with various policy changes, an easier and more adjustable tool to be used by numerous nurses and administrators simultaneously. In 2016, he started Hospital Training Games, along with a training simulation: Hospital Rescue: Emergency Department.

Wheeler created the company Hospital Training Games with the intent to train hospital administration more effectively and affordably than the current method of analogue games. A game titled Friday Night at the ER (FNER) was one of the standard training board games at the time. "I wanted to create something in the hundreds of dollars, rather than $1,000–$1,500 for a board game. I wanted to automate it."54 In Wheeler's Hospital Rescue: Emergency Department, players receive an electronic tablet and discuss the parameters that they want to simulate. Players then set the parameters into the tablet and watch the simulation play, discussing the results of their choices.55 "Hospital Rescue: Emergency Department was designed to teach nurse managers the interplay between day-to-day emergency room management and its longer-term financial outcomes."56 The learners can also examine how their decisions affected key performance indicators, patient reviews, and hospital finances. So, unlike FNER, which taught similar lessons, no key decisions can be made on the fly, but the final statistics that Hospital Rescue: Emergency Department provides at the end of each simulation can help learners see the larger results of their choices.57
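The loop Wheeler describes (choose parameters, run the simulation, then review the resulting indicators) can be illustrated with a minimal discrete-event sketch. The model below is a generic emergency-department queue written for this discussion, not the Hospital Training Games code; its parameter names, costs, and thresholds are invented for illustration.

```python
import random

def run_shift(nurses, beds, arrivals_per_hour=8, hours=12, seed=7):
    """Toy emergency-department shift, illustrative only (not the Hospital
    Rescue model). Returns a few KPI-style numbers for discussion."""
    rng = random.Random(seed)
    waiting = []        # arrival hour of each patient still in the queue
    treating = []       # remaining treatment hours for each occupied bed
    treated = walked_out = total_wait = 0
    for hour in range(hours):
        # new arrivals join the queue
        waiting += [hour] * rng.randint(arrivals_per_hour - 2, arrivals_per_hour + 2)
        # finish treatments and free up beds
        treating = [t - 1 for t in treating if t > 1]
        # admit patients, limited by both free beds and available nurses
        capacity = min(beds, nurses) - len(treating)
        for _ in range(max(0, capacity)):
            if not waiting:
                break
            total_wait += hour - waiting.pop(0)
            treating.append(rng.randint(1, 3))   # treatment time in hours
            treated += 1
        # patients waiting more than four hours leave without being seen
        still_waiting = [a for a in waiting if hour - a <= 4]
        walked_out += len(waiting) - len(still_waiting)
        waiting = still_waiting
    return {
        "treated": treated,
        "left_without_being_seen": walked_out,
        "avg_wait_hours": round(total_wait / max(treated, 1), 2),
        "margin": treated * 400 - nurses * 60 * hours,  # toy revenue vs. wages
    }

if __name__ == "__main__":
    for staffing in (3, 5, 8):
        print(staffing, "nurses:", run_shift(nurses=staffing, beds=6))
```

Even a toy like this surfaces the trade-off the game is built around: adding staff lowers waits and walk-outs but eats into the margin, and the 'right' answer depends on which indicator a nurse manager is asked to protect.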


Development, while mostly free from problems, was not without constraints. The core team of four people was composed of Wheeler as producer, former entertainment game programmer turned game design professor Robert Dieterich, and two recent graduates of GMU's game design program: Romel Ramos on art and marketing and Joshua Blow for additional programming. Although a small team is not necessarily a negative or a positive, it is difficult to build a game with no full-time workers. "Rob was working full-time teaching and I was working full-time in healthcare. I didn't want anyone to work overtime if they didn't have to."58 Wheeler felt that clearly defined roles and deadlines were important to keeping people on task, and the team would meet weekly to check in and stay on track.59 The team would consistently discuss development and how to distill all of Wheeler's knowledge into an easily digestible game. Small goals and weekly tasks would be set, and then implemented and tested, before the cycle began anew. "The weekly meeting of the staff helped keep people more or less on task, but we didn't have any explicit task management. This isn't the worst situation for a small team," Dieterich points out,

but more explicit tracking of tasks may have given us a better ability to gauge the larger picture of our scope and schedule. We generally had a good sense of where the game would be in a month, but often had little visibility beyond that.60

"Just because I teach people doesn't mean I'm good at it,"61 Wheeler joked, but he was known among his team for actively learning and participating in both familiar and unfamiliar aspects of the project. He was receptive to new ideas, and well respected by his team for his unwavering participation. "[Wheeler] was a hands-on manager. He would be in the office with us many times [. . .] learning the program and application for a repository so he could direct changes for NPCs."62 Wheeler knew his lack of studio management experience could be an issue and insisted on continuing to learn about his product and game development.

[Wheeler] came in green, but he was willing to learn whatever he needed to do to help production. Once I got him set up with methods to get updated builds of the game, he was relentless in testing the game and making sure it worked according to spec.63

One common issue prevalent among small development teams is simply not realizing the importance of marketing. Many entertainment game developers try to market their product to a publisher or, with the release of platforms such as Steam, post their games on a hub for a willing audience. No such outlets exist for serious games at present, so most developers in the field will only begin a project once they have a contract, or at least parties committed to buying the product. "I think there's a missing piece in distribution of learning games," muses Wheeler.

I went to the Serious Play Conference and went to some of the tables. Everyone is very anxious about selling their work. It was hard to find and connect to these people. There is no distribution point it seems. You have to hire a marketing team – and often you can't afford to.64

Another obstacle to the game's distribution had much to do with change, and resistance to it. Administrators of hospitals and medical schools had already purchased a training game that was fun, interactive, and served as a fulcrum for the curriculum. FNER boasts that it does not require training to play, as facilitators can be trained simply with the included instruction booklet. A university or hospital pays for the game and there are no additional costs, such as updating technology or hiring a trained facilitator. FNER is also updated regularly, with the last update in 2017, so while Hospital Rescue's ability to update was certainly more robust, it was not unique. Adding in a technical aspect complicates things further, as facilitators could be uncomfortable with unfamiliar technology. Although the team offered facilitated training, they often ran into issues with getting hospitals to schedule time for these trainings. "People enjoyed it. But people are so busy all the time, so scheduling time to have your people play this game is a big decision. It was too complex."65

Wheeler attempted to market the game at a conference and to a few colleges but could not afford to maintain the effort. He also attempted to get Hospital Rescue into more classrooms by offering a web-based version, reasoning that, if hospitals could not afford additional technology, it would be easier to simply place the whole project in a browser.


"[I never] found a good way to get the game online," Wheeler laments. "I thought if we experimented with a web-based version it would be easier, but it was slow and clunky and not great to play. It was lost in translation."66

A secondary issue for Hospital Rescue was that it could not be personalized for hospital nurse managers. There was simply too much information in the educational and training programs that Wheeler taught to be contained in a small app. The team met with Wheeler every week and had to parse out what information could be added and what had to be left alone. "Phil wanted to add so much information and so we would feel overwhelmed. It was a constant battle to get Phil to scope down."67 With so much information to teach and nothing really fitting the game that Phil envisioned, the scope of work constantly changed, and with it the core idea of the game itself. "Phil's original concept for the game had it as a component of a larger training program. This introduced several questions as to how best distribute the game and what sort of hardware we should target," explains Dieterich.68

Even the best ideas sometimes are not successful and, although Hospital Rescue still is not in the hands of those who could benefit from it, the experience for the greener staff, such as Wheeler, Ramos, and Blow, could be considered a success. The game reached a finished state, and sometimes developers consider that a successful learning experience. "I do consider Hospital Rescue a success. Mainly because we got it to a finished state, I learned a lot working on this project during my career. And I learned a lot working directly with the team."69 Dieterich also had positive feelings regarding development. "I think we did manage to build an interesting product that fits the purpose we envisioned for it," Dieterich reflects. "I just think the answer we ended up developing wasn't quite the right fit for the problem we were trying to solve."70 For other team members, the test for success is a simple one: are people playing it? Unfortunately, Hospital Rescue fails to pass.

Case Study: Social-Emotional Academic Development Game (SEAD)

Because being mean makes me feel bad.
– Tumblr User, Raphaeliscoolbutrude71

Suspense and thrills do not come from the act of virtual adventuring itself, but from making the players feel weight and meaning in their actions and decisions. In the game Fallout 2, a character's choices determine how the world interacts with her. For example, if the player purposefully kills a child in-game, everyone treats the player negatively, conversations are more difficult, and bounty hunters try to hunt down the player as punishment for their choices.72 This is only one example of a game using the player's social interactions and choices to determine the player's progress and consequences in a virtual world.73

Figure 5.3 Tumblr Post and Response74

When I think of emotional and weighted decisions in games, I am often reminded of this popular Tumblr post (Figure 5.3). To break it down a little bit: in the first post, 'writing-prompt-s' states that a game has no consequences. Although gamers know that it is foolish to claim that a game has no consequences for actions – as the balance of reward and consequence is what makes a game fun – the player defines the balance. When well-known game developer Sid Meier discussed what makes an interesting choice in a game, what he meant was "that the game must present a stream of critical decisions that either directly or indirectly impacts the player's ability to win."75 A player's choice to be either altruistic or cruel has consequences in that it can affect the outcome of a successful win – either by making it difficult to achieve or by locking out certain narrative pathways. However, it is possible to infer that 'writing-prompt-s' is saying that there are no real-life consequences to your choice in a game – you as a player don't get arrested or punished and can possibly even still win. So, if there are no outside, real-world consequences to being virtually cruel, and a player can still complete the game (with few exceptions), then what stops a player from choosing that path?

The response, meant to be amusing, is also common: the real-world guilt at being violent toward others, even if those others are virtual characters with no real human emotions, and no real-world consequences stem from their mistreatment. A player could even erase a saved file or restart from a past point to relive a moment. So why does it hurt to be cruel, even to lines of code arranged to look like people? Katherine Isbister claims that games have a more potent impact on our emotions than other mediums because players make choices.76

Although this is just a light and amusing way to discuss social interactions in a virtual world, the K–12 education realm is attempting to create choice- and consequence-based games to measure social and emotional intelligence in children. The goal here is to develop emotional skills such as self-awareness, social awareness, relationship building, and effective decision making. Studies have shown that competence in these skills may yield improvement in academics.77 Social and emotional intelligence has been found to predict how a student will engage in prosocial behavior, empathetic behavior, and conflict resolution.78

Communities in Schools (CIS) is a Virginia-based federated nonprofit organization with a presence in 26 states and D.C. that supports 1.6 million students at risk of dropping out due to financial, behavioral, or academic needs.79 CIS affiliates work with schools to provide tailored and individualized supports to address both school-wide and student-specific needs. For instance, if the school needs counseling support, CIS will work with school leadership to determine the areas of need and connect students to organizations and services in the community that would best support those needs, with the goal of addressing the barriers that learners face to academic success.80

Research has consistently demonstrated a relationship between social and emotional maturity and academic success. This body of research and the increased emphasis on social-emotional learning in schools contributed to CIS enhancing their focus on supporting the development of students' social and emotional skills and competencies in the process of addressing barriers and needs for at-risk students. To this end, CIS began a partnership with Sanford Harmony to provide an evidence-based SEL curriculum to the students and schools that they serve. Sanford Harmony was developed in 2008 by Arizona State University (currently managed by National University) and uses a variety of classroom activities and exercises that engage students in five units of learning:

• Diversity and inclusion, which focuses on identifying and understanding one another's differences, while also finding commonalities and creating connections
• Empathy and critical thinking, which focuses on recognizing/predicting/explaining our own feelings and how they influence our thoughts and behaviors, while also understanding others' feelings and why they might have them
• Communication, focusing on active listening, engaging in appropriate and effective conversations, and learning appropriate assertive skills
• Problem-solving, as pertaining to recognition and management of emotions in difficult social situations, including considerate and cooperative conflict resolution and interpersonal skills
• Peer relationships, learning to care for others and be inclusive, while learning to effectively deal with bullies81


However, there was a lack of precision in implementation, an issue CIS had recognized with other SEL supports several years before. Students vary considerably in the skills and competencies they possess, so identifying the students most in need of support in particular domains is crucial for effective allocation of resources and provision of supports. With this in mind, Dr. Kevin Leary and the research and evaluation team in the CIS National Office started work on creating an assessment of CIS students' social, emotional, and academic skills and competencies. As Dr. Leary, senior principal of Research and Evaluation at CIS National, explained, "We then began developing an assessment in office, to give insight into key social and emotional competencies that can impact students' academic success."82 By giving students this paper-based survey, CIS hoped to determine the areas where students and schools needed the most help, something most SEL programs are lacking. "We're still in the process of trying to make it as easy as possible to use and constantly refining it," Leary says.

During early testing, CIS found that younger students had more difficulty reading and understanding many of the items in the assessment. "We tested a survey with upper-elementary students, and it didn't work. Many students struggled to comprehend the questions."83 For example, asking a young child about stress management, phrased as simply as possible, could still be problematic if the child does not understand the question or the nuances of different kinds of stress. Addressing the issue of reading comprehension with younger students is important, as earlier identification of problem areas could allow for early intervention to address student risks and barriers and ultimately prevent children from dropping out of school.

To ensure that younger students were receiving the social-emotional supports they needed, Dr. Leary and his team decided to find a way around the issues concerning reading comprehension while providing younger students with a fun and interactive way to take the assessment. "We were looking to create a game-based version of the assessment that would give us a way around reading comprehension by having students interact with characters and play minigames to measure their social and emotional competencies."84 From this idea, the Social Emotional and Academic Development (SEAD) Game was developed to assess younger children limited in verbal and writing skills. As students play the game, their decisions and behaviors are captured and analyzed by a back-end server, allowing counselors and/or teachers to key in on that individual's specific emotional and social needs.85 The intention behind the game is to create a 'fun' tool that allows for stealth assessment of students' social and emotional skills, since students don't know they are taking an assessment. This game could be a tool to provide more targeted implementation of social and emotional interventions.86
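A stealth assessment like this depends on unobtrusive event logging: the game records what the player chose, tags it with the domain it speaks to, and leaves interpretation to the adults reviewing the export. The sketch below is a minimal illustration of that pattern; the class, event names, and domain tags are invented for this discussion and are not the actual SEAD back end.

```python
import json
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical domain tags for illustration only.
DOMAINS = ("empathy", "cooperation", "self_management")

class TelemetryLog:
    """Collects in-game choices and exports them grouped by domain,
    without scoring them; interpretation is left to the evaluator."""

    def __init__(self, student_id):
        self.student_id = student_id
        self.events = []

    def record(self, minigame, action, domain, context=""):
        if domain not in DOMAINS:
            raise ValueError(f"unknown domain: {domain}")
        self.events.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "minigame": minigame,
            "action": action,
            "domain": domain,
            "context": context,
        })

    def export(self):
        by_domain = defaultdict(list)
        for event in self.events:
            by_domain[event["domain"]].append(event["action"])
        return json.dumps({"student": self.student_id,
                           "events": self.events,
                           "by_domain": by_domain}, indent=2)

if __name__ == "__main__":
    log = TelemetryLog("student-042")
    log.record("hovercraft_race", "stopped_to_repair_rival", "cooperation",
               context="gave up two places in the standings")
    log.record("hovercraft_race", "rammed_competitor", "self_management",
               context="immediately after being overtaken")
    print(log.export())
```

The important design choice, mirrored in what Adcox describes later, is that the export stays descriptive rather than judgmental: it records actions and context, not scores.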


The research team at CIS knew they needed assistance in developing such a unique game and approached the VSGI to ask for that help. "Getting into this as researchers, you don't know what you don't know," Leary mused. Dr. Leary and his team were introduced to FlyGuy Interactive LLC, a small team of four George Mason University graduates, led by alumnus Orin Adcox. "VSGI was great at holding our hands," states Leary. Adcox and his team designed a great portion of the minigames, carefully incorporating the needed assessments as advised by Leary and CIS. Don Norman suggests that good game design should provide an emotional connection on one of three levels: aesthetic, functional, and reflective. Therefore, FlyGuy Interactive understood it was important to design SEAD in a way that provided that emotional connection.87

There were several minigames, all designed with a framework to work best for younger children and all measuring different areas of social-emotional functioning. For example, one minigame takes the form of a hovercraft race. The player customizes their vehicle from several options, which allows for a more personal connection to the vehicle, and thus the races. "Since games are often seen as a 'magic circle,' or an enclosed space with minimal impact on the outside world, we made efforts to introduce the characters and environment of each game."88 It is critical that the player gets invested in the game and relates to other characters as well, because that allows the earlier-mentioned emotional attachment to develop. "If the player was not invested in the game and understood the characters to be important in their own right, then they would see them simply as challenges to overcome," Adcox explains, "much like the player of a first-person shooter often sees NPCs (Non-Player Characters) as targets rather than distinct people."89

When a child plays this racing game, proper results can only be assured if the learners are engaged. In a case study conducted by Leonard Annetta, two sections of a graduate course played a Functional Game. One section was able to choose from over 100 different avatars, while the other section only had two choices: basic male and basic female. The results of the case study showed that the learners who lacked choice were not as engaged in the environment, and thus not as engaged with the lessons.90 During the hovercraft race, players are given an option to disregard the rules of the race and crash into their competitors, as well as an option to help competitors repair their ships, with the consequence of losing their standing in the race. So will players sacrifice their own chance at winning to help others, or sacrifice their friendships to win?91


Figure 5.4 Screenshot of SEAD Racing Game92

A partnership between subject-matter experts and game design developers is not new in the VSGI (Virginia Serious Game Institute), as previously covered, and oftentimes lends itself to development issues regarding content. "We had to walk back some stuff, due to it being too high level or it didn't fit right into the game for one reason or another," Leary explains. "But overall, it was a good partnership, and ran pretty smoothly."93

What made things a bit difficult for the game developers at FlyGuy was that SEAD needed a whole new and different method of design to work. Although most educational games are meant to teach the player how to interact with the virtual world, SEAD could not communicate feedback to the player like traditional games. There were no scores, 'good jobs,' or 'game overs.' Adcox explains, "The game was meant to be played multiple times throughout the year by the same student, so providing a correct answer would diminish later results, as would the player becoming increasingly skilled at the actual game mechanics."94 This means that the actual game mechanics differed from the educational lesson of each minigame, with several points of measurement of the player's social and emotional state. "The idea was to simply present the player with an environment and a goal and allow them to interact in the space according to their own desires," Adcox explains. "We would set up specific sections to directly ask the player how they felt about other characters and actions."95


Adcox and the FlyGuy team had to keep the subject balanced but also engaging, fun, and inclusive of multiple points of measurement, all without the game feeling too boring or obvious. Adcox noted:

Each minigame needed to be limited enough in scope so too many types of social and emotional elements weren't recorded at once, otherwise it could contaminate results and possibly lead the person evaluating the results to misconstrue a player's motivation behind an action. It was also important not to make any suggestions in how to evaluate extracted data. The goal is to export data according to the social and emotional domains of interest and describe just enough context of the game so a child psychologist is able to reason the results of the interactions and draw conclusions based on their professional opinion and experience.96

The game is still in testing, so there are no schools using it. Dr. Leary states, "I wouldn't feel comfortable yet, it's still rough around the edges, but it's important to get feedback along the way [during field tests]." Although the back-end scoring hasn't been built out yet, it is important to get the game in front of students to test the engagement; and although funding for CIS has increased for six more months of future development, more user testing is needed to refine the product. "I would think, 18 months maybe, for a fully built out game."97

Case Study: Legends of Aria – MMORPG

Retention is the by-product of engagement.
– Nathaniel Staples98

In Chapter 3, we discussed the Assassin's Creed Discovery Tour, Spore, and the success of using games in education. These games were designed specifically to include educational value. Within the right context, though, any entertainment game can be used in education. Kurt Squire discussed the value of using Sid Meier's Civilization III in a history classroom setting: "Successful students developed conceptual understandings across world history, geography, and politics." The Sims 2 has been used in classrooms to study social behaviors and human desires, as well as how to balance adult responsibilities with healthy human social needs.99 Rollercoaster Tycoon 3 can teach not only physics and engineering, by allowing players to create and test their own rollercoaster designs, but also resource and business management, by requiring players to designate prices, control their costs, and create a profit with their theme park.100


We can even discuss more complex and complicated educational possibilities with Call of Duty Black Ops and the Fallout Series, in their discussions of Cold War settings, propaganda, and American patriotism.101 Games transcend the traditional boundaries of a classroom and can complement traditional instruction, whether created with the intention of educating or not.102

Figure 5.5 Citadel Studios Logo103

Heading Citadel Studios, former EA programmer Derek Brinkmann was excited about creating a game of ultimate freedom: freedom to create not only your own guild, but your own story, your own fun, and your own worlds. After working on Ultima Online,104 Brinkmann wanted to create something larger, with even fewer constraints, and in 2016 released an online crowdfunding campaign for what was known at the time as Shards Online and would later be released as Legends of Aria. Development for Legends of Aria had the standard ups and downs, most of which came from having a small team of independent contractors and interns. While the first crowdfunding attempt failed, Brinkmann's second campaign was successful, and in August 2019 Legends of Aria was released.

However, this story is not really about Brinkmann or the Citadel team, but how their game had an unexpected educational impact. Nathaniel Staples, an IT teacher at Good Shepard Catholic College in Queensland, Australia, was excited about the release of Citadel's game, not only as a player of Ultima Online but also as an educator.

Part of the pitch for [Shards Online] was the modability, so I joined the Kickstarter and then committed to the second one. The thing I immediately saw in it was "Why can't I run this through school as the students' only game world." I could have it as a class-run project. Like a club or team making content for the school.105

With the idea that learners could study programming by creating a world they wanted, Staples purchased the Kickstarter pack that allowed for early access to the game and asked Brinkmann for codes for his classes. "[Brinkmann] was really keen on it and they changed things on the server so I could use the one code and sign in to 10 versions of the server. Gave me 40 student codes so my students could just play."106

The idea of using games to learn coding is not new, as there are many web-based programming games available. Players use Python or JavaScript code to move their character, defeat enemies, and finish goals; and while on the surface that sounds engaging, motivation decreases over time due to a lack of novel player-driven content. However, using coding to modify games107 in which the learner is already interested could create educational motivation. The popular game Team Fortress 2 had a humble beginning as a modification of the classic game Quake, released by id software.108 "Player involvement through the creative altering of game code has long been a feature of computing."109 In what was then a visionary practice, the game company id software built consumer loyalty by encouraging players to be creative, allowing them to incorporate their own ideas into a game they already enjoyed. Since then, many studios have tried to replicate the results.110 What began as a hobby turned into a community, allowing alteration and expansion in ways that may not have originally been considered, increasing the life of these beloved games as a result.

Figure 5.6 Screenshot from Legends of Aria Game Play111

Legends of Aria was sold on the freedom to mod, and Staples bought it for his class based on this potential and promise. In the beginning, Staples allowed his students a week of playtime to get used to the world of Aria and its limitations, and also to jumpstart their imaginations.

The reason I felt I could make it work with Legends of Aria is that the game is already there. The graphics are already there, the guy already moves. You can already interact with the world and the game wasn't complete yet, so it was perfect timing. If you give them a game that's too complete, there might not be any room for imagination. You can come into this world that was set up, but not complete, so they could create monsters, weapons, events when they defeated someone, they could create their own spells to round out the spellbook.112

It seems counterintuitive to teach programming by providing a partially completed game. It would seem to make more sense to have learners create a game from scratch, but Staples found issues with that method.

[The] biggest problem with using games as a context for coding is that [learners] all play at home and the games they play at home are miles away from what they can make in the beginning. So, they come in with all these ideas and you can't do any of that.113

Staples explains, "You have to continually pare it down. They become so disillusioned with their own inadequacies they get frustrated." That frustration causes a breakdown of the lessons, leading to incomplete assignments, so a basic world that was fully customizable was a perfect solution to get kids motivated and excited about coding.

Without the frustration of having to work on basics, the learners were able to create a variety of projects, their imaginations unconstrained. "One student made a fire sword, and when he hit an enemy, he was able to bring down a fire pillar and it made him excited," Staples reminisced. "It wasn't a lot of code, but he loved it."114 The class was also permitted to create larger projects in groups, if they wanted, and the mod community continued to inspire the lessons when a group of six attempted to recreate a Warcraft III mod, Defense of the Ancients, during Staples' lesson. "[T]hey created their own map and created their own creatures," Staples recalled. "It wasn't completed. It could've used about six months of work, but it was great!"

While Staples does not have any actual numbers to support the success of the program, he has ample anecdotal evidence, having witnessed better motivation and investment using Legends of Aria than in previous lessons. "Retention isn't the aim. Retention is the by-product of engagement. I wanted to have units of work where the kids were invested and more likely to be engaged and to retain what they learn." With that, it seems, Staples had his success.
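Projects like the fire sword give a sense of how little code an event-driven mod can require. The snippet below is a self-contained toy written for this discussion; the Engine class, hook names, and damage numbers are invented and are not the Legends of Aria scripting API. It only illustrates the general pattern the students were practicing: subscribe to a game event, then react with a custom effect.

```python
# Invented, self-contained illustration of the event-hook pattern behind
# such mods; NOT the Legends of Aria scripting interface.

class Creature:
    def __init__(self, name, hp=30, hostile=True):
        self.name, self.hp, self.hostile = name, hp, hostile

class Engine:
    """Tiny stand-in for a game engine that lets scripts subscribe to events."""
    def __init__(self):
        self.handlers = {}
        self.creatures = []

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event, **payload):
        for handler in self.handlers.get(event, []):
            handler(self, **payload)

# --- the "mod": a fire sword that calls down a fire pillar on hit ----------

def fire_sword_hit(engine, attacker, target, weapon):
    if weapon != "custom_fire_sword":
        return
    print(f"{attacker} strikes {target.name}: a fire pillar erupts!")
    for creature in engine.creatures:
        if creature.hostile:
            creature.hp -= 12
            print(f"  {creature.name} is burned (hp now {creature.hp})")

if __name__ == "__main__":
    engine = Engine()
    engine.subscribe("weapon_hit", fire_sword_hit)
    engine.creatures = [Creature("Skeleton"), Creature("Wolf")]
    engine.emit("weapon_hit", attacker="player", target=engine.creatures[0],
                weapon="custom_fire_sword")
```

Because the world, the characters, and the event plumbing already exist, the student's contribution is a short handler rather than an entire game, which is exactly the scoping advantage Staples describes.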

Effects of Games, Business, and Education in a Community

VSGI will attract a new generation of students and scholars, generate new business enterprises, and the subsequent new utilization of simulation and game design and training applications in industries that we can only imagine right now.
– Dr. Scott M. Martin115


In January of 2019, George Mason University revealed a fleet of automated delivery robots from California startup Starship Technologies for use by on-campus students. While walking on campus, anyone can see these small robotic cars rolling along, avoiding pedestrians and cars, and reaching their destinations unguided. With every new step in technology, we become ever more aware of future possibilities.

Through the VSGI, we have examined several examples of the various types of media and technology that can be used to enhance education and training, and the positive outlooks and satisfaction that follow. Little Arms Studios created a training simulation that could not only save money but also offer a safer training space for firefighters. Hospital Training Games created an administrative training game that was fun to play while still teaching hospital resource management. SEAD found an interesting and novel method to assess young children's social and emotional intelligence. Legends of Aria allowed an educator's openness to innovation to get his students excited about programming.

Games create solutions with innovation and creativity, while also capturing excitement and motivation. It is easy to see why: players have control. They set and achieve their own goals and find interesting challenges, all the while being given very clear feedback on their performance. The same could never be said about traditional classroom education. The difference is flow.116 Previous chapters discussed both game and cognitive flow and their importance in game development and player engagement. Using the concept of flow provided by games, educators could improve the learning environment by implementing the following from Squire:

• Providing clear goals
• Challenging students
• Allowing for collaboration
• Using criterion-based assessments
• Giving students more control over the learning process
• Incorporating novelty into the environment117

You may notice that these are basically the same points that Malone discussed at the beginning of this chapter. By creating a learning experience with game design elements, flow is increased, which in turn increases motivation and engagement.

It is also important to nurture and mentor those at the beginning of their careers, both students and nascent professionals. The VSGI provides guidance in all aspects of the game community and throughout personal stages and professional careers in the tech space, spanning adolescent engagement, degree attainment for students, internships and research experience opportunities, and entrepreneurial support. Students are nurtured for success from the beginning through the opportunities and services the VSGI provides. Prince William County did not have a strong presence in the tech community prior to the VSGI and the businesses incubated within, but this partnership has allowed job sector growth in a new, popular, and innovative industry. Without the VSGI and the creativity offered by the students, alumni, and entrepreneurs within, Prince William County might be on a very different path today.

Video games were created only 62 years ago and have quickly become a part of our everyday lives, influencing our advertisements, our movies, and our education. When applied correctly, games can provide experiences and situations that are otherwise unavailable. By taking advantage of these possibilities, we can create new ways to educate, ones in which students can be present in impossible situations, learn skills hands-on rather than through lectures, have lessons tailored to their unique needs, and have fun in the process.

Notes 1 Ubisoft: Montreal. (2011). Playstation 4, Canada, Montreal. 2 At the 70th annual Tonys, Hamilton won 11, for Best Musical, Best Book of a Musical, Best Original Score, Best Performance by a Leading Actor in a Musical, Best Actor in a Featured Role, Best Actress in a Featured Role, Best Costuming for a Musical, Best Lighting for a Musical, Best Director for a Musical, Best Choreography, and Best Orchestration; “Disney Plus releases “Hamilton” film trailer ahead of July 3 premiere.” CBSNews, June 22, 2020. 3 A few examples include Chicago (2006) and Grease (1978); for a list of the top grossing film adaptations of musicals: www.playbill.com/article/the-27-highestgrossing-broadway-film-adaptations-of-all-time; A few examples include a 1998 performance of Cats though a full list is here: www.playbill.com/article/ 15-broadway-plays-and-musicals-you-can-watch-on-stage-from-home 4 Passy, C. (2019, June 6). A “Hamilton” ticket for $849? Experts call that a bargain. Wall Street Journal. 5 One only needs to look to Twitter or Google to find the numerous new articles in regards to this, but here is a small example: www.glamour.com/story/ people-seeing-hamilton-for-the-first-time-twitter-reactions 6 Ron Chernow wrote Alexander Hamilton (2005), the book on which Miranda based his play; Delman, E. (2015, September 29). How Lin-Manuel Miranda shapes history. The Atlantic. 7 Keller, K. (2018). The issue on the table: Is “Hamilton” good for history? Smithsonian Magazine. 8 Brown, A. (2017). Younger men play video games, but so do a diverse group of other Americans. Pew Research Center; Gough, C. (2020). Number of active video gamers worldwide from 2015 to 2023. Statista. 9 Rogers, C., & Lyons, B. (2020). 2019 year in review. SuperData: A Nielsen Company. Retrieved from https://www.superdataresearch.com/blog/superdatareports-games-and-interactive-media-earned-a-record-1201b-in-2019; Rubin, R. (2020). Global box office hits record high in 2019 with $42.5 Billion. Variety.com.

140 The VSGI Learning Game Examples 10 Roettgers, J. (2018). Netflix takes interactive storytelling to the next level with ‘Black Mirror: Bandersnatch’. Variety.com. 11 Smith, E. (2016). When McDonald’s, Domino’s, and Chester Cheetah took over your Nintendo. Vice.com. 12 Gant, M. (2020). Starbucks rolling out a new mobile game with big prizes. Today.com. 13 Masters, M. (2019). Pretend play. What to Expect. 14 Adachi, P. J. C., & Willoughby, T. (2013). More than just fun and games: The longitudinal relationships between strategic video games, self-reported problemsolving skills, and academic grades. Journal of Youth Adolescence; Granic, I., Lobel, A., & Engels, R. C. M. E. (2014). The benefits of playing video games. American Psychologist; Justice, L. J., & Ritzhaupt, A. D. (2015). Identifying the barriers to games and simulations in education: Creating a valid and reliable survey. Journal of Educational Technology Systems, 44(1), 86–125. 15 Cicchino, M. I. (2015). Using game-based learning to foster critical thinking in student discourse. Interdisciplinary Journal of Problem-Based Learning, 9(2), 1–18; Hallajian, M. (2016). The effects of computer games on increasing students’ creative thinking. International Journal of Humanities and Cultural Studies, Special Issue May, 213–220. 16 Selvi, M., & Cosan, A. O. (2018). The effect of using educational games in teaching kingdoms of living things. Universal Journal of Educational Research, 6(9), 2019– 2028; Shabaneh, Y., & Farrah, M. (2019). The effect of games on vocabulary retention. Indonesian Journal of Learning and Instruction. 17 Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A metaanalysis. Journal of Educational Computing Research, 34(3), 229–243. 18 Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 4, 333–369; Virvou, M., Katsionis, G., & Manos, K. (2005). Combining software games with education: Evaluation of its educational effectiveness. Journal of Educational Technology & Society. 19 Ibid. 20 DigiFX Interactive. (1996). PC. Texas. 21 Disney, W. (1954). Educational values in factual nature pictures. Educational Horizons; Ed. Bowdoin Van Riper, A. (2011). Learning from Mickey, Donald, and Walt. London: MacFarland; Latham, J. (n.d.). Edutainment in the classroom: How technology is changing the game. Retrieved from https://onlinedegrees.sandiego.edu/ edutainment/ 22 Nemec, J. T. (2007). Edutainment or entertainment: Education possibilities of didactic games in science education. In Word play conference (pp.  55–64). Brno: Masaryk University. 23 Ibid. 24 From NUMBER MUNCHERS. Copyright © by Houghton Mifflin Harcourt Publishing Company. All rights reserved. Used by permission of the publisher. Any further duplication is strictly prohibited unless written permission is obtained from Houghton Mifflin Harcourt Publishing Company. 25 Brown, H. J. (2015). Video games and education. New York: Routledge. 26 Shuler, C. (2012). What happened to the edutainment industry? A case study. Retrieved from Joan Ganz Cooney Center Website: https://joanganzcooneycenter. org/2012/10/02/what-happened-to-the-edutainment-industry-a-case-study/. 27 Charsky, D. (2010). From edutainment to serious games: A change in the use of game characteristics. Games and Culture: A Journal of Interactive Media, 5(2), 177–198.

The VSGI Learning Game Examples 141 28 Brown, H. J. (2015). Video games and education. New York: Routledge. 29 Charsky, D. (2010). From edutainment to serious games: A change in the use of game characteristics. Games and Culture: A Journal of Interactive Media, 5(2), 177–198. 30 Van Eck, R. (2006). Digital game-based learning: It’s not just the digital natives who are restless. EDUCAUSEReview; Szczurek, M. (1983). Meta-analysis of simulation games effectiveness for cognitive learning. Bloomington: Indiana University Press; VanSickle, R. L. (1986). A quantitative review of research on instructional simulation gaming: A twenty-year perspective. Theory and Research in Social Education, 14(3), 245–264. 31 Squire, K. (2003). Video games in education. International Journal of Intelligent Games & Simulation, 2(1), 49–62. 32 Project Tomorrow & Blackboard. (2018). The new learning leader: The emerging role of the agile school principal as digital evangelist and instruction leader. Retrieved from http://images.email.blackboard.com/Web/BlackboardInc/%7B759c39e7-71944d2f-9572-5429cd50e141%7D_K12_2018_Report_TrendsInDigitalLearningNewLearningLeader.pdf 33 McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. New York: Penguin Books. 34 Martin, Scott. Personal Interview, 2020. 35 WorldStrides. (2020). About envision. EnvisionExperience.com. 36 (n.d). Just push play. George Mason University – Entrepreneurship and Innovation. Retrieved May 2020 from https://startup.gmu.edu/entrepreneurial-programs/vaentrepreneurs/virginia-serious-game-institute. 37 Rovell, D. (2003). The man who started it all. ESPN. Retrieved May 2020 from https://www.espn.com/espngamer/story?id=1601049 38 Despite the many safeguards and integration of cutting edge and alien technologies, the “Danger Room” was not always safe, due to interference by villains and heroes alike. 39 Retrieved June 2020 from vsgi.gmu.edu 40 April 4, 2016. IVIS: The next step in command competency training. Fire Safety News. Retrieved June 2020 from http://pubs.royle.com/article/IVIS%3A_The_ Next_Step_in_Command_Competency_Training/2450585/297093/article.html. 41 Ibid. 42 Bishop, K. (2020). Personal communication. 43 Ibid. 44 Ibid. 45 Ibid. 46 Molyneux, P. (1997). Theme hospital. EA Games. 47 Masters, M. (2019). Pretend play. What to Expect. 48 Meijer, S. (2004). Simulations and simulation games in agro and health care. KLICT. 49 Ibid. 50 Donlan, C. (2018). 20 years on, Theme hospital is still brilliant. Eurogamer. 51 Two Point Games, 2019. 52 Wheeler, P. (2020). Interview. 53 Ibid. 54 Ibid. 55 Hospital Training Games. (2020). Retrieved April 2020 from Hospitaltraininggames.com 56 Dieterich, R. (2020). Personal communication. 57 Hospital Training Games. (2020). Retrieved April 2020 from Hospitaltraininggames.com 58 Wheeler, P. (2020). Interview.

59 Ibid.
60 Dieterich, R. (2020). Personal communication.
61 Wheeler, P. (2020). Interview.
62 Blow, J. (2020). Interview.
63 Dieterich, R. (2020). Personal communication.
64 Wheeler, P. (2020). Interview.
65 Ibid.
66 Ibid.
67 Ramos, R. (2020). Interview.
68 Dieterich, R. (2020). Personal communication.
69 Blow, J. (2020). Interview.
70 Dieterich, R. (2020). Personal communication.
71 User: Raphaeliscoolbutrude. March, 2017. Tumblr.com.
72 The industry now considers this taboo and depicting a child being killed by the player can cause a game to be banned in several countries. https://en.wikipedia.org/wiki/List_of_banned_video_games
73 Retrieved May 2020 from https://fallout.fandom.com/wiki/Childkiller.
74 Retrieved from Reddit.com.
75 Fullerton, T., Swain, C., & Hoffman, S. (2004). Improving player choices. Gamasutra. Retrieved May 2020 from https://www.gamasutra.com/view/feature/130452/improving_player_choices.php
76 Chess, S. (2017). Ready player two: Women gamers and designed identity. Minneapolis: University of Minnesota Press.
77 Morrison, J., Reilly, J., & Ross, S. (2019). Getting along with others as an educational goal: An implementation study of Sanford Harmony. Journal of Research in Innovative Teaching and Learning, 12(1), 16–35.
78 Ibid.
79 Leary, K. (2020). Interview.
80 Communities in Schools. (2020). Our Model. Retrieved May 2020 from https://www.communitiesinschools.org/our-model/
81 Adcox, O., Romero, J., Van Vierssen, T., & Martin, S. (2020, January). Sanford Harmony vs SEAD Domains.
82 Leary, K. (2020). Interview.
83 Ibid.
84 Ibid.
85 Adcox, O., Romero, J., Van Vierssan, T., & Martin, S. (2020, January). Hovercraft race. Game Design Document. Virginia, VA: Flyguy Interactive.
86 Leary, K. (2020). Interview.
87 Chess, S. (2017). Ready player two: Women gamers and designed identity. Minneapolis: University of Minnesota Press.
88 Adcox, O. (2020). Personal communication.
89 Ibid.
90 Annetta, L. A. (2010). The "I's" have it: A framework for serious educational game design. Review of General Psychology, 14(2), 105–112.
91 Adcox, O., Romero, J., Van Vierssan, T., & Martin, S. (2020, January). Hovercraft race. Game Design Document. Virginia, VA: Flyguy Interactive.
92 Screenshot of SEAD Hovercraft Race.
93 Leary, K. (2020). Interview.
94 Adcox, O. (2020). Personal communication.
95 Ibid.
96 Ibid.
97 Leary, K. (2020). Interview.

The VSGI Learning Game Examples 143 98 Staples, N. (2020). Interview. 99 Sandford, R., Ulicsak, M., & Facer, K. (2006). Teaching with games: Using computer games in formal education. Bristol: Futurelab. 100 Retrieved from their website: citadelstudios.net 101 Reisner, C. (2013). The reality behind it all is very true: Call of Duty Black Ops and the remembrance of the Cold War. In M. W. Kapell (Ed.), Playing with the past (pp. 247–260). New York: Bloomsbury; Cutterham, T. (2013). Irony and American historical consciousness in Fallout 3. In M. W. Kappell (Ed.), Playing with the past (pp. 313–326). New York: Bloomsbury. 102 Brown, H. J. (2015). Video games and education. New York: Routledge. 103 Ibid. 104 Discussed in chapter 4. 105 Staples, N. (2020). Interview. 106 Ibid. 107 Otherwise known as modding. 108 “Team Fortress Classic.” Team Fortress Wiki. 2020. Retrieved June 2020 from https://wiki.teamfortress.com/wiki/Team_Fortress_Classic. 109 Crabtree, G. (2013). Modding as digital reenactment: A case study of the Battlefield series. In M. W. Kapell (Ed.), Playing with the past (pp. 199–212). New York: Bloomsbury. 110 Ibid. 111 Retrieved from Legends of Aria Steam Page. 112 Staples, N. (2020). Interview. 113 Ibid. 114 Ibid. 115 VSGI Website, 2020. 116 Squire, K. (2003). Video games in education. International Journal of Intelligent Games & Simulation, 2(1), 49–62. 117 Ibid.

References Home Page. (2020, April). Retrieved from Hospital Training Games: hospitaltraininggames.com Adachi, P. J., & Willoughby, T. (2013). More than just fun and games: The longitudinal relationships between strategic video games, self-reported problem solving skills and academic grades. Journal of Youth and Adolescence, 42(7), 1041–1052. Adcox, O., Romero, J., Van Vierssan, T., & Martin, S. (2020, January). Hovercraft race. Manassas, VA: Game Design Document, Communities in Schools. Adcox, O., Romero, J., Van Vierssan, T., & Martin, S. (2020, January). Sanford Harmony vs SEAD Domains. Manassas, VA: FlyGuy Interactive. Annetta, L. (2010). The ‘I’s’ have it: A framework for serious educational game design. Review of General Psychology, 14(2), 105–112. Bethesda Softworks. (2008). Fallout 3. Bethesda, MD: Bethesda Game Studios. Blow, J. (2020, April 14). (S. Kane, Interviewer). Bowdoin Van Riper, A. (2011). Learning from Mickey, Donald, and Walt. London: MacFarland. Brinkmann, D. (2020, February 26). CEO. (S. Kane, Interviewer).



Brown, A. (2017). Younger men play video games, but so do a diverse group of other Americans. Pew Research Center. Retrieved from www.pewresearch.org/fact-tank/ 2017/09/11/younger-men-play-video-games-but-so-do-a-diverse-group-of-otheramericans/ Brown, H. J. (2008). Videogames and education. New York: Routledge. Chaplin, H. R. (2006). Smartbomb: The quest for art, entertainment, and big bucks in the videogame revolution. New York: Workman Books. Charsky, D. (2010). From edutainment to serious games: A change in the use of game characteristics. Games and Culture: A Journal of Interactive Media, 5(2), 177–198. Chess, S. (2017). Ready player two: Women gamers and designed identity. Minneapolis: University of Minnesota Press. ChildKiller. (n.d.). Retrieved from Fallout Fandom Wiki: https://fallout.fandom.com/ wiki/Childkiller Cicchino, M. I. (2015). Using game-based learning to foster critical thinking in student discourse. Interdisciplinary Journal of Problem-Based Learning, 9(2), 1–18. Communities in Schools. (2020). Retrieved from Our Model: www.communitiesinschools. org/our-model/ Crabtree, G. (2013). Modding as digital reenactment: A case study of the Battlefield series. In M. W. Kapell (Ed.), Playing with the past (pp. 199–212). New York: Bloomsbury. Csikszentmihalyi, M. (1990). Flow: The psychology of the optimal experience. New York: Harper and Row. Cutterham, T. (2013). Irony and American historical consciousness in Fallout 3. In M. W. Kappell (Ed.), Playing with the past (pp. 313–326). New York: Bloomsbury. Delman, E. (2015, September 29). How Lin-Manuel Miranda shapes history. The Atlantic. Dieterich, R. (2020, April 9). (S. Kane, Interviewer). Disney, W. (1954). Educational values in factual nature pictures. Educational Horizons, 33(2), 82–84. Donlan, C. (2018, January 21). 20 years on, Theme Hospital is still brilliant. Retrieved from Eurogamer: www.eurogamer.net/articles/2018-01-21-20-years-on-theme-hospitalis-still-brilliant Fullerton, T., Swain, C., & Hoffman, S. (2004). Improving player choices. Gamasutra. Gant, M. (2020). Starbucks rolling out a new mobile game with big prizes. Today.com. Retrieved from www.today.com/food/starbucks-rolling-out-new-mobile-gamebig-prizes-t175685 George Mason University. (n.d.). Just push play. Retrieved from Entrepreneurship and Innovation: https://startup.gmu.edu/entrepreneurial-programs/va-entrepreneurs/ virginia-serious-game-institute Gough, C. (2020). Number of active video gamers worldwide from 2015 to 2023. Statista. Retrieved from www.statista.com/statistics/748044/number-videogamers-world/#:~:text=Number%20of%20video%20gamers%20worldwide%20 2015%2D2023&text=The%20video%20gaming%20industry%20is,three%20 billion%20gamers%20by%202023. Granic, I., Lobel, A., & Engels, R. C. M. E. (2014). The benefits of playing video games. American Psychologist, 69(1), 66–78.

The VSGI Learning Game Examples 145 Hallajian, M. (2016). The effects of computer games on increasing students’ creative thinking. International Journal of Humanities and Cultural Studies, Special Issue May, 213–220. IVIS: The Next Step in Command Competency Training. (2016, April 21). Retrieved from Fire Safety News: http://pubs.royle.com/article/IVIS%3A_The_Next_Step_in_ Command_Competency_Training/2450585/297093/article.html Justice, L. J., & Ritzhaupt, A. D. (2015). Identifying the barriers to games and simulations in education: Creating a valid and reliable survey. Journal of Educational Technology Systems, 44(1), 86–125. Karas, J. (Director). (2007). Demetri Martin. Person. [Motion Picture]. Keller, K. (2018, May 30). The issue on the table: Is “Hamilton” good for history? Smithsonian Magazine. Latham, J. (n.d.). Edutainment in the classroom: How tech is changing the game. Retrieved from https://onlinedegrees.sandiego.edu/edutainment/ Malone, T. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 4, 333–369. Masters, M. (2019). Pretend play. What To Expect. Retrieved from www.whatto expect.com/toddler/pretend-games/#:~:text=Between%2018%20and%2024%20 months,keys%20to%20unlock%20a%20door. McGonigal, J. (2011). Reality is broken: Why games make us better and how they can change the world. New York: Penguin Books. Miller, C., Kochel, K., Wheeler, L., Updegraff, K., Fabes, R., Martin, C., & Hanish, L. (2017). The efficacy of a relationship building intervention in 5th grade. Journal of School Psychology, 61, 75–88. Morrison, J., Reilly, J., & Ross, S. (2019). Getting along with others as an educational goal: An implementation study of Sanford Harmony. Journal of Research in Innovative Teaching and Learning, 12(1), 16–35. Nemec, J. T. (2007). Edutainment or entertainment: Education possibilities of didactic games in science education. In Word play conference (pp. 55–64). Brno: Masaryk University. Passy, C. (2019, June 6). A Hamilton ticket for $849? Experts call that a bargain. Wall Street Journal. Project Tomorrow & Blackboard. (2018) The new learning leader: The emerging role of the agile school principal as digital evangelist and instruction leader. Retrieved from http:// images.email.blackboard.com/Web/BlackboardInc/%7B759c39e7-7194-4d2f9572-5429cd50e141%7D_K12_2018_Report_TrendsInDigitalLearning-NewLearningLeader.pdf Ramos, R. (2020, April 16). (S. Kane, Interviewer). Reisner, C. (2013). The reality behind it all is very true: Call of Duty Black Ops and the remembrance of the Cold War. In M. W. Kapell (Ed.), Playing with the past (pp. 247– 260). New York: Bloomsbury. Roettgers, J. (2018). Netflix takes interactive storytelling to the next level with ‘Black Mirror: Bandersnatch’. Variety.com. Retrieved from https://variety.com/2018/ digital/news/netflix-black-mirror-bandersnatch-interactive-1203096171/ Rogers, C., & Lyons, B. (2020). 2019 year in review. SuperData: A Nielsen Company. Retrieved from www.superdataresearch.com/2019-year-in-review



Rollings, A. M. (2000). Game architecture and design. Scottsdale: Coriolis Group. Rovell, D. (2003, August 22). The Man Who Started It All. Retrieved from ESPN: www. espn.com/espngamer/story?id=1601049 Rubin, R. (2020). Global box office hits record high in 2019 with $42.5 billion. Variety Retrieved from https://variety.com/2020/film/box-office/box-office-usmisses-record-disney-dominates-1203453752/ Sandford, R. U. (2006). Teaching with games: Using computer games in formal education. Bristol: Futurelab. Selvi, M., & Cosan, A. O. (2018). The effect of using educational games in teaching kingdoms of living things. Universal Journal of Educational Research, 6(9), 2019–2028. Shabaneh, Y., & Farrah, M. (2019). The effect of games on vocabulary retention. Indonesian Journal of Learning and Instruction, 2(1), 79–90. Shuler, C. (2012, October 2). What happened to the edutainment industry? A case study. Retrieved from Joan Ganz Cooney Center Website: https://joanganzcooneycenter. org/2012/10/02/what-happened-to-the-edutainment-industry-a-case-study/ Smith, E. (2016). When McDonald’s, Domino’s, and Chester Cheetah took over your Nintendo. Vice.com. Retrieved from www.vice.com/en/article/aekk7b/ when-mcdonalds-dominos-and-chester-cheetah-took-over-your-nintendo Squire, K. (2003). Video games in education. International Journal of Intelligent Games & Simulation, 2(1), 49–62. Team Fortress Classic. (2020, June). Retrieved from Team Fortress Wiki: https://wiki. teamfortress.com/wiki/Team_Fortress_Classic Two Point Games. (2019). Two Point Hospital. Retrieved from www.twopointhospital. com/ User: Raphaeliscoolbutrude. (2017, March). Tumblr.com. Retrieved from https:// amethystlashiec.tumblr.com/post/178369351356/deflare-raphaeliscoolbutrude/amp Virvous, M. K. (2005). Combining software games with education and evaluation of it’s educational effect. Journal of Educational Technology and Society, 8(2), 54–65. Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34(3), 229–243. VSGI Website. (2020). Retrieved from https://vsgi.gmu.edu/excellerator/ Wheeler, P. (2020, May 7). (S. Kane, Interviewer). WorldStrides. (2020). About envision. EnvisionExperience.com. Retrieved from www. envisionexperience.com/about-us

6 Artificial Intelligence Applied to Teaching and Learning

Another way of deploying an evolutionary argument for the feasibility of AI is via the idea that we could, by running genetic algorithms on sufficiently fast computers, achieve results comparable to those of biological evolution. This version of the evolutionary argument thus proposes a specific method whereby intelligence could be produced. Nick Bostrom (2016)1

Although the purpose of technology in teaching and learning has historically been to enhance, augment, and indeed aid teaching, not replace it, resistance to adopting new technology remains persistent. What we may consider now as primary technologies used in K–12 and higher education, such as spell-checkers, calculators, and learning management systems, initially stirred raging debates and fierce opposition from educators and administrators.2 As I have previously written, some of this resistance was understandable, as the adoption of new teaching or assessment tools required a greater investment of time and energy by educators, who had to learn the tools themselves and then integrate them into their syllabi and curricula. Veteran educators are also aware of the litany of false claims, inflated promises, and deceitful exaggerations made by edtech vendors over their careers – about how this or that tool will improve and revolutionize education for all – only to discover after the fact that the tools don't work as intended. Moreover, when it finally occurs, adoption of new technologies by departments and schools is sometimes forced upon educators by deans and superintendents, functioning as unfunded mandates on the budgets of chairs and principals and producing quite reasonable opposition. Other opposition to new technology is sometimes based on structural differentiation, inconsistent impact on various learner demographics, and a lack of learner adaptation progress post-adoption.3


However, some opposition to new technology adoption voiced by educators and administrators can be traced to overall risk aversion based on lack of control over budget cycles, potential violation of privacy protection laws, common political gridlock, and fear of failure and change.4 Hence, some evidence suggests that the earliest and largest adopters of new technologies to train and educate can be found in corporate and defense markets. These industries are more flexible and adaptable, don't share the same mandates as public and post-secondary education, and generally have larger and less restricted budgets. Therefore, one could hardly expect anything less than a significant disruption in the K–12 and post-secondary educational markets before leadership and educators alike would adopt something as technically complex but potentially transformational as artificial intelligence into their learning spaces. As I write this chapter, the Covid-19 pandemic has ravaged the globe, destroying economies and lives in 180 nations. It has also completely upended the 50-million-learner K–12 and higher education sectors in the United States. In a matter of days in early March 2020, state governors shut down school systems and universities across 48 states, and learners were sent home, mostly reconnecting to their teachers and professors remotely. Grading systems were changed from a letter-grade matrix – frozen so that grades could only increase from where they stood when schools closed – to an optional, learner-determined pass/fail evaluation for all the courses in which learners were enrolled. K–12 and some higher education institutes offered loaner laptops to underprivileged learners so they could participate in their online classes, hoping they also had high-bandwidth internet connections in their homes. Others offered small visitation windows so that learners could book time to retrieve their personal belongings left in school lockers, classrooms, and residence halls. If a learner could get there, urban and suburban K–12 schools offered free breakfast and lunch pick-up meals for any person that wanted one, helping both to feed the learners on the Federal Title 1 School Meal Programs and to use up all the food stored and refrigerated for the school year. Across the country, many senior secondary and post-secondary learners had their final-year assignments and testing end several months early, so that school administrators and faculty could focus on preparing for returning fall 2020 learners. Most schools and colleges held no traditional graduation or commencement ceremonies for seniors, settling for a video-streamed graduation speaker and a 'good luck, make us proud' slideshow. But how successful was the conversion from in-class teaching and learning to online teaching and learning after school and university closures? In The Results Are in for Remote Learning: It Didn't Work, published by the Wall Street Journal on June 5, 2020, Tawnell Hobbs and Lee Hawkins describe a cacophony of errors by schools and colleges since the start of the shutdown: teachers' lack of direct and consistent communication with every


learner, inability to take accurate attendance, inability to limit or restrict cheating, and frustration in attempting to apply accurate assessments and evaluations.5 Despite abundant research demonstrating that undisciplined and unmotivated high school students and undergraduates generally do not perform well in online-only courses, whether offered asynchronously or synchronously, online learning became compulsory anyway, creating monumental challenges for many special learners and the Luddite traditional faculty lecturers as well.6 According to a working paper by researchers at the Annenberg Center at Brown University, the U.S.'s conversion to online-only K–12 learning was a complete failure, with crashing LMSs, incorrectly blocked log-in credentials, lost online assignments, and lagging commercial video streaming solutions making lectures and discussion unintelligible. Because of these 'technical' problems, and inequity in school technology funding, between 63% and 68% of learners may be returning in fall 2020 less prepared in grade-level reading, and 37% to 50% less prepared in grade-level math, than they would have been after learning in on-site school classrooms.7 More alarmingly, according to the EdWeek Research Center, 6% of K–12 learners just disappeared after their schools converted to distance learning.8 That is, they never logged into their LMS or video-streaming platform, and their teachers couldn't reach them or their parents via email or phone calls. With summer school also limited to online learning, and school districts and colleges across the country trying to figure out 'safe' solutions to open campuses in the fall, options to close this learning gap won't happen anytime soon. Even with massive U.S. government aid (Cares Act 2020) and Federal Reserve business bail-out programs, fallout from Covid-19 still closed restaurants, retail stores, shopping malls, and manufacturers, pushing the U.S. unemployment numbers to around 40M by May 2020.9 Mortgage defaults and forbearances, missed rental and credit card payments, and resultant projected lower tax payments will significantly depress previously projected county and state education budget allocations in the coming 2020/21 fiscal year. With on-campus housing abandoned, dining facilities shuttered, and the need for parking passes superfluous for closed campuses, most universities and colleges were forced to issue refunds for some of these marked-up amenities to millions of learners. In addition, summer and fall enrollment trends across the country from state universities and colleges also foretell a financially grim upcoming academic year. In a May 2020 poll, many accepted freshmen and upper-level undergraduates were electing to take a 'gap' year rather than risk taking out full tuition loans and attending most of their classes on their laptop again.10 Learners who have accepted and deposited for the 2020/2021 academic year – some 15% fewer than the last academic year according to the American Council on Education11 – are negotiating larger tuition discounts since their classes will most likely be held online again.12


Standing bond interest payments on the excess amenities and luxurious facilities built during the university building competitions over the past 20 years will still be due, although many universities are negotiating with bond holders to lower their interest rates, or selling new bonds with lower rates to pay off older, higher-rate ones.13 With the national economy frozen for months and states slowly trying to reopen with starts and stops, most public equity and commodity markets (with the exception of tech) tanked, shredding university endowment balance sheets. Lastly, most universities with plans for some type of physical reopening in the fall of 2020 are investing hundreds of millions of dollars in unplanned purchases of plexiglass panels to maintain social distancing in high-traffic areas and new HVAC filter systems in all of their campus buildings, and contracting with cleaning companies to wipe down and sterilize classrooms, computers, dining tables, hallways, doorknobs, and bathrooms two to three times per day. Although some universities and colleges are taking a wait-and-see approach to opening their campuses in fall 2020, many have already made difficult decisions in hopes of balancing their bleak pro forma projections in preparation for an upcoming austere academic year. To mitigate projected enrollment drops and clinical research financial losses, Johns Hopkins University, like many large private universities, has adopted a phased approach that includes suspending contributions to employee retirement plans, reducing salaries of university leadership, freezing salaries of faculty and staff, restricting hiring, reducing nonpersonal expenses, and halting or limiting construction and renovation projects. Future phases, if implemented, include closing or suspending academic programs and departments, resulting in faculty and staff layoffs and furloughs.14 In May 2020, the Virginia Polytechnic Institute and State University (Virginia Tech) Board of Visitors gave university president Tim Sands the power to impose furloughs and pay cuts of up to 20% if necessary during the 2020/2021 academic year.15 If this Covid-19 pandemic, resultant economic collapse, and forced upending of traditional teaching and learning isn't a most significant disruption, I'm not sure what is! Although the rapid transition from on-site to online teaching and learning generally occurred much more smoothly in higher education than in K–12 public and private school systems, it was still clear that the deployed technology infrastructure was inadequate, and that the majority of faculty were ill-trained and ill-prepared to successfully teach such a large number of online-only learners. Some companies grow or scale to increase revenue to stave off closure, some to meet expanding customer demand for their product or services, and some to fend off competition, increase their market capitalization, or make themselves an attractive acquisition target. However, to scale haphazardly and unsystematically without thorough strategic and financial planning may lead to the demise of an organization.16 Moreover, scaling teaching and learning to large numbers of online learners without strategic planning


produces very similar results: lower mean average academic performance, lower satisfaction survey results, and higher attrition rates.17 Online learning platforms that provide some type of teacher-learner interaction, such as live streaming or chat channels, have proven to better serve most learners, including the nontraditional learner.18 During the quick switch to online-only teaching and learning at GMU in March 2020, one of my larger lecture courses required a mix of asynchronous and synchronous live-streaming to properly convey the course content. Pre-Internship Seminar is a type of 'soft-skills' course to help ensure a successful internship experience for our learners and help the transition to the workforce. Prior to the inclusion of the course in the undergraduate Computer Game Design Curriculum at GMU, internship and worksite supervisors would communicate to me that the quality of work performed by our interns was quite superb and impressive, but they told me our graduates didn't dress appropriately, they struggled to write professional emails or memos, they didn't verbally communicate very well, and they were atrocious at giving presentations to team members or clients. During the three months of online-only teaching and learning in spring 2020, I noticed how the majority of learners started to slip academically and miss or ignore asynchronous assignment deadlines, so I held more and more live virtual classes to get my learners back on track and inspire motivation and engagement. During these 'live' online classes, I would stream lectures about a topic related to the previously posted asynchronous material and assignments and then offer real-time question-and-answer sessions, which sometimes lasted over an hour. The following week, I would observe on the static LMS that learners were starting to slip again on their assignment due dates, and I'd have to call a 'live' course again to rejuvenate interest and motivation: rinse and repeat. Perhaps in all fairness, this course was not originally listed or structured as a typical online-only asynchronous course. The learners never anticipated their in-person course would transfigure into an online course six weeks into the semester, but it quickly became apparent that alternating asynchronous and live-streaming lecture/Q&A sessions was a prerequisite for all learners to achieve academic success. Let us recall that legacy online learning platforms, some now more than 20 years old, were originally designed as asynchronous classroom management tools to allow learners to check their grades and assignments outside of classroom hours. As colleges and universities collectively began to utilize LMSs to actually teach courses to scale their curricula, it quickly became apparent that organic educational services like advising or counseling, which learners depended upon to support their academic journeys on campus, were nonexistent. On a higher-education campus, if a learner begins to struggle academically or psychologically, there are many intervention and remediation options available from multiple student support offices. Learners may schedule a visit to a tutoring center or seek out advising or counseling


services for academic, psychological, or even medical treatment. Furthermore, recent research demonstrates that the more closely student support services are integrated within an academic plan, and the more proactively and earlier they are offered within the academic experience, the greater the increase in learner academic success and enrollment retention rates will be.19 Unfortunately, even today, the online-only learner rarely has true support vehicles available outside of student-teacher question-and-answer sessions conducted via the rare one-on-one video call, or through an asynchronous chat channel – or, worse, via email exchanges with someone at a counseling or advising center if they even exist at their now-online college. None of these options can replace the role of a professionally trained academic advisor, counselor, or tutor who understands each individual learner's needs and issues and who can help facilitate the appropriate interventions required for their academic success.

Next-Gen Machine-Learning Algorithms, Models, and Frameworks

Although ubiquitous in consumer services such as e-banking, e-commerce, e-bill pay, and on entertainment websites, rarely do we see even a simplistic machine-learning help-desk (ro)bot in higher education that can assist learners with simple tasks. Even universities with the most well-funded budgets don't offer simple machine-learning algorithms that could help their learners register for classes, make tuition payments, nudge them to check assignment due dates, or even help them schedule private advising and counseling services. A simple map-enabled bot app could even help learners find their newly assigned classroom – a huge customer-service upgrade over the frustrating wild goose chase learners now regularly encounter. Machine-learning solutions have had the potential to structurally improve traditional university learner support services for decades, but administrative priorities focused more on funding new learner amenities such as game rooms, ever larger pools, expanding recreation centers, and luxury residence halls – now empty financial boat anchors rather than the student recruitment lures they were intended to be. For the online-only learner today, academic and psychological support services should not just be an afterthought but must now be integrated into a learner's online educational curriculum plans as well. In addition to the online-only teaching and learning transfiguration precipitated by Covid-19, the overall numbers of new nontraditional learners entering post-secondary education are projected to swell and more accurately reflect the demographics of the United States.20 So, too, will the need for increased student support services. As nontraditional, first-generation


working learners and historically underserved populations access universities and community college courses and degrees in increasing numbers, and increasingly through online pathways, online support structures are also required to ease these new learners' transition and provide the remedial attention some may require for success.21 Considering it's not uncommon in higher education to see learner/academic advisor ratios for traditional on-site campuses hover above 374:1, and counselor ratios exceed 1,500:1, it's probably not surprising that support structures are nonexistent for the online-only learner.22 In most K–12 education districts, the ratios are even worse.23 With the ability to predict wants and needs – to analytically predict a future outcome and accomplish an assigned task if a threshold is reached – machine-learning algorithms are an ideal candidate to provide some of these indispensable services for the online learner. To help address this critical lack of online learner support services, a summary of the author's Deep Academic Learning Intelligence (DALI) invention is presented next.24
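To make the threshold idea concrete, the sketch below shows one minimal way such a trigger might look in code. It is purely illustrative – the risk model, feature names, weights, and intervention text are hypothetical assumptions, not part of any deployed system.

```python
# Illustrative sketch only: a threshold-triggered support nudge.
# The feature names, weights, and intervention text are hypothetical.

RISK_THRESHOLD = 0.7  # assumed cutoff for recommending an intervention

def predict_risk(days_since_login, missed_deadlines, grade_trend):
    """Toy risk score in [0, 1]; a real system would use a trained model."""
    score = 0.04 * days_since_login + 0.1 * missed_deadlines - 0.3 * grade_trend
    return max(0.0, min(1.0, score))

def recommend_intervention(learner_id, features):
    risk = predict_risk(**features)
    if risk >= RISK_THRESHOLD:
        # In practice the recommendation would be personalized and escalated
        # to a human advisor if the learner declines or ignores it.
        return f"Learner {learner_id}: schedule an advising session (risk={risk:.2f})"
    return None

print(recommend_intervention("A123", {"days_since_login": 9,
                                      "missed_deadlines": 3,
                                      "grade_trend": -0.5}))
```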

Design Study: Deep Academic Learning Intelligence

DALI is designed to provide online machine-intelligence-driven academic advising, personal counseling, and even learner mentoring, and is an additional step in the evolution of the author's design research rubric using machine-intelligence algorithms to improve online teaching and learning. It is designed to detect online academic patterns of concern and provide learner support intervention recommendations. Its aggregate student learning (ASL) schema considers academic experiences and learner support services, while combining these with semi-academic and social (nonacademic) expressions in a learning environment, and potentially even considers external social media postings that may provide insight into a learner's academic milieu.25 In order to establish a baseline for a learner, DALI parses all academic-related datasets offered on an online learning platform database, along with past platform chat exchanges (parsed as well, or manually ingested), and then mines, classifies, and labels them into useful categories for analysis, assessment, and anomaly detection. The DALI models consist of specifically designed and pretrained artificial cognitive NLP algorithms that store a learner's academic subject and non-subject chat and voice-to-text communication and compare the context of this chat with other learner, teacher, and/or tutor chat against a predetermined academic evaluation matrix in order to detect anomalies within data patterns that may have a negative effect on future academic performance. If a learning anomaly is detected, DALI suggests a pretrained intervention method or series of methods to hopefully resolve the issue. If the suggestions weren't appropriate or effective, determined by a 'Was this Helpful' pop-up box and a 'Why?' text


input box, DALI elevates the issue to the appropriately assigned professional. Figure 6.1 provides an example chat exchange between two learners and identifies the parsed words within sentences to determine context detected by the DALI models. In order to understand the differences between academic and pure social/nonacademic chat within an online learning platform, DALI models are pretrained in both, including course syllabi using a pre-structured template that lists course title, overview, learning goals, grading schemata, and class meeting schedule. DALI has also been pretrained with general structured and unstructured syntactic open source English language datasets, and the publicly supported Urban Dictionary. In essence, the DALI NLP algorithms can parse and detect different forms of typed or spoken text and parts of speech. Besides an instructor's teaching style or instructional methodology, peer interactivity may strongly influence a learner's academic success or struggles. Course content-related chat discussions coupled with purely social chat could offer important clues into potential external or tangential issues that may have an indirect but adverse effect on the learning process. For example, is the non-course-related social interaction between learners A and B having a positive or negative impact on learner B's performance? Does the dataset trend line demonstrate that both learners A and B have improved academically since they began interacting six weeks ago? In Figure 6.1, Alishia's academic performance may be deteriorating because of car problems, forcing her to be repeatedly late for her course start time. This chat exchange also indicates some level of empathetic relationship, as semiprivate information was shared from one learner to the other. Well-trained deep-learning models could also harvest labeled data about a learner's social

Figure 6.1 DALI Parsed and Detected Social Word/Sentence Structure to Determine Context


tendencies, personality type, and even emotional state at any moment in time. Alishia's final chat sentence may also be detected by a pretrained sentiment machine-intelligence model within a gradient scale that can detect emotion and interpret her "piece of garbage" part-of-speech as 'anger' and 'frustration,' and compare this emotional state, alongside her tardiness and recently stored academic performance scores, to detect a potential adverse pattern. Sentiment analysis may also provide a semi-private window into personal and professional external events, conditions, and states that may negatively affect a learning environment. Upon detecting a potential adverse issue or state (anomaly), DALI makes an intervention recommendation to the learner to resolve a potential negative academic outcome. However, as was mentioned earlier, machine-intelligence models need to be constantly trained with corrective feedback to adjust the weights, biases, and errors of their hidden layers and improve their output. Learners train their DALI models by responding to an intervention recommendation with a simple click in a pop-up box of either "Yes I will," "No Thanks," "Maybe," or "Ignore," and by typing a response in a simple text box answering "Was this Suggestion Helpful?" and "Why?" Learners' initial response options and follow-up text box responses function as 'active training' data and help complete a feedback loop to offer more accurate intervention recommendations in the future. Figure 6.2 highlights the training feedback loop after Alishia's initial DALI recommendation, along with her responses. All learners' DALI recommendations, corresponding responses, and escalations (i.e., if the learner issue was elevated to a human professional) are stored in their personal learning map (PLM), also referred to as an omega learning map, as a cohort dataset. For privacy and security, the DALI interconnected

Figure 6.2 DALI Learner’s Training Feedback Loop


PLM stores a learner's individual dataset within an original blockchain schema, representing the learner as an anonymous silhouette, and allows only the learner or a chosen person access through single-use digital keys. DALI represents just one example of the application of machine intelligence that can be deployed to help scale these important services for online-only learning and offer learners important personalized support mechanisms to help advance their academic journeys. Over time, machine-intelligence algorithms like DALI, integrated with other, more specifically designed academic tutoring schema models, may combine social learning/peer learning paradigms with traditional teaching and learning pedagogy. DALI could tag some learners with subject-matter expertise, determined by scores stored in their PLMs from previous courses, and uncover certain teacher-like behavioral attributes and personality traits marking them as 'natural' tutors, and match them in learning groups with struggling learners. Future models may even connect upper-level learners with lower-level learners, asking them to check in with each other when an intervention is triggered. An online learning platform example prompt may be "I see you are a little behind in Calculus, need any assistance from a classmate?" or "Everything okay with your studies? Need a tutoring session?" or, for potential tutors: "Congrats! You scored exceptionally high in this course section! Would you be willing to help a classmate?" These generated questions could help facilitate important peer-learning relationships that benefit all learners, not just lower-performing ones. When ASL datasets are parsed, classified, labeled, and stored in a learner's PLM, and models like DALI are consistently trained to improve personalized intervention recommendations, the options to leverage these datasets to positively affect the teaching and learning process are constrained only by an algorithm inventor's imagination.
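As a rough illustration of the feedback loop and PLM storage just described, the sketch below records a learner's response to an intervention recommendation as labeled training data. Everything here – the class names, fields, and storage call – is a hypothetical stand-in, not the actual DALI implementation.

```python
# Hypothetical sketch of DALI-style active-training feedback; not the real system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterventionRecord:
    learner_id: str
    recommendation: str        # e.g., "Schedule a tutoring session"
    response: str              # "Yes I will" | "No Thanks" | "Maybe" | "Ignore"
    was_helpful: bool
    why_text: str              # free-text answer to "Why?"
    escalated: bool = False    # True if elevated to a human professional

@dataclass
class PersonalLearningMap:
    learner_id: str
    records: List[InterventionRecord] = field(default_factory=list)

    def append(self, record: InterventionRecord) -> None:
        # In the described design this write would be anonymized and key-protected;
        # here it is simply stored in memory.
        self.records.append(record)

def to_training_example(record: InterventionRecord) -> dict:
    """Convert a stored response into a (features, label) pair for retraining."""
    return {
        "features": {"recommendation": record.recommendation, "response": record.response},
        "label": 1 if record.was_helpful else 0,
    }

plm = PersonalLearningMap("A123")
rec = InterventionRecord("A123", "Schedule a tutoring session", "Yes I will",
                         True, "Helped me catch up")
plm.append(rec)
print(to_training_example(rec))
```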

Advances in Personalized Teaching, and the Rise of the Smart AI Bot

Virtual machines never sleep. Only one third of a search engine is devoted to fulfilling search requests. The other two thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). The load shifts freely between the archipelagoes of server farms. Twenty-four hours a day, 365 days a year, algorithms with names such as BigTable, MapReduce, and Percolator are systematically converting the numerical address matrix into a content-addressable memory, effecting a transformation that constitutes the largest computation ever undertaken on planet Earth. George Dyson26


This next section offers descriptions of two teaching and learning inventions: an artificial-intelligent teacher assistant (TA) in summary, and an artificial-intelligent teacher in great detail, providing a 'state-of-nature' report for the reader. Although these innovations have had limited use and exposure in higher education to date and are constantly evolving and improving, I include them here because their potential to upend and disrupt the education industry is unequivocal.

Case Study: Jill Watson 2019

At a private workshop sponsored in June 2017 by Harvard University Graduate School of Education (GSE) and Harvard Division of Continuing Education, preselected authors presented their draft research for a potential upcoming text to an assembled collection of additional invited scholars and researchers to solicit feedback and comments. Prior to my presentation at the "Future of Technology Supported Education: Improving Efficiency and Effectiveness Through Learning Engineering" workshop, outlining an early version of the Deep Academic Learning Intelligence (DALI) model just described, Dr. Ashok Goel, from the Georgia Institute of Technology, provided an overview of his early research to develop a machine-learning teaching assistant named Jill Watson for his oversubscribed online graduate Knowledge-Based AI (KBAI) course. Both of our presentations were somewhat controversial – Dr. Goel's because, in one case, he never told his learners that one of their TAs was a (ro)bot;27 and mine because of intentional parsing of semi-private chat to help determine interventions. They were, however, the only two machine-learning-based chapters of the 11 eventually published in the seminal book Learning Engineering for Online Learning: Theoretical Contexts and Design-Based Examples (Routledge, 2018).28 In 2017, Dr. Goel and his graduate student team developed Jill Watson to scale academic TA support for the thousands of online learners in Georgia Tech's Online Master of Science in Computer Science Program (OMSCS). Jill Watson (incorrectly named after the wife of IBM's founder Thomas J. Watson) employed the IBM Watson system, which could compare previously stored question/answer database pairs compiled from thousands of previous messages posed in the same online course chat forum from past semesters. Jill classified questions into a category, retrieved an associated answer, and returned the answer if its confidence value was high enough (> 97%).29 Figure 6.3 provides one example of an exchange between an online graduate learner and Jill Watson 2017. Figure 6.4 demonstrates an online learner's appreciation for the quality of answers that Jill Watson 2017 provided.


Figure 6.3 An Online Graduate Learner’s Question and Jill Watson Paired Answer30 Source: Used by permission from Ashok Goel, Georgia Institute of Technology

Figure 6.4 An Online Learner Indicates an Appreciation of Jill Watson31 Source: Used by permission from Ashok Goel, Georgia Institute of Technology
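The classify-retrieve-threshold pattern described above (answer only when confidence exceeds 97%) can be illustrated with a short sketch. This is not the actual Jill Watson code – the categories, stored answers, and toy classifier are stand-ins for illustration only.

```python
# Illustrative classify/retrieve/threshold pattern; the categories, answers,
# and classifier below are hypothetical stand-ins, not Jill Watson's internals.

ANSWER_BANK = {
    "assignment_due_date": "Assignment 1 is due Sunday at 11:59 pm ET.",
    "late_policy": "Late submissions lose 10% per day, up to three days.",
}

CONFIDENCE_THRESHOLD = 0.97  # only answer when the classifier is very sure

def classify(question):
    """Toy keyword classifier returning (category, confidence)."""
    q = question.lower()
    if "due" in q or "deadline" in q:
        return "assignment_due_date", 0.99
    if "late" in q:
        return "late_policy", 0.98
    return None, 0.0

def answer(question):
    category, confidence = classify(question)
    if category and confidence >= CONFIDENCE_THRESHOLD:
        return ANSWER_BANK[category]
    return None  # stay silent; a human TA handles low-confidence questions

print(answer("When is assignment 1 due?"))
```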

Jill Watson 2019 (JW 2019), also titled Jill Watson QA (2019), is designed based on course syllabi content, rather than a huge database of previously classified question/answer pairs. This pivot allowed JW to be more easily adapted to assist learners as a machine-intelligent TA for other knowledge domains. JW used an innovative two-stage classification process to handle questions and retrieve correct answers. Like DALI, the first phase uses a collection of interleaved commercially available machine-learning classifiers such as Watson, Amazon's LEX, and Google's AutoML. The second stage uses Dr. Goel's team's proprietary knowledge-based classifier.32 Jill Watson 2019 initially used traditional machine-learning frameworks to classify parts of a sentence into predetermined categories, and then used


proprietary knowledge-based models to identify specific syntactic details labeled in those categories. JW 2019's answers provided to learners are (primarily) derived from syllabus content, including course duration and meeting times, prerequisites, overview, learning goals, and a detailed schedule that outlines specific subject-matter material that will be covered over a set duration. Reference materials may also be ingested in JW 2019 along with supplemental domain information.33 Moreover, all of JW 2019's answers are passed through a customized personality-enhancing model to 'humanize' the responses. Lastly, if JW 2019 doesn't have an answer, it offers a learner a close question it has been trained about – a type of answering a question with a question – to help guide the exchange to JW 2019's trained and stored domain knowledge.34 Figure 6.5 demonstrates a text exchange between Jill Watson 2019, now identified as an AI-TA, and a graduate learner. The impact of the AI-TA/Jill Watson 2019 in this example may appear to a learner as personalized, as the design uses 'you'll' and 'your' instead of 'students' or 'learners,' although its answers are generic. Although JW 2019 still remains a chat bot TA for learners, Dr. Goel's research team also developed an additional evolutionary module to automate the translation of a multitude of syllabi to help provide learner academic support for other knowledge domains. When ingested with a syllabus that follows a strict template, Agent Smith (named after a character in the Matrix movie series that can clone himself) auto-builds an ontological map of ingested syllabus content, and then, similar to the structure of the personal learning map outlined in Chapter 1, builds an episodic memory module for assignment and quiz due dates along with a semantic memory module for actual subject-matter material.

Figure 6.5 AI-TA/Agent Smith/Ask Jill Chat Exchange with a Graduate Learner Source: Used by permission from Ashok Goel, Georgia Institute of Technology
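To make the syllabus-ingestion idea concrete, here is a minimal sketch of parsing a templated syllabus into an 'episodic' schedule store and a 'semantic' topic store. The template fields, function names, and sample data are assumptions for illustration; they do not reproduce Agent Smith's actual implementation.

```python
# Hypothetical sketch of templated-syllabus ingestion; not Agent Smith's actual code.

SYLLABUS = {
    "title": "Knowledge-Based AI",
    "overview": "Core methods of knowledge-based artificial intelligence.",
    "learning_goals": ["case-based reasoning", "semantic networks"],
    "schedule": [
        {"week": 1, "topic": "Introduction to KBAI", "quiz_due": "2020-09-06"},
        {"week": 2, "topic": "Semantic networks", "quiz_due": "2020-09-13"},
    ],
}

def build_memory_modules(syllabus):
    """Split a templated syllabus into episodic (dates) and semantic (content) stores."""
    episodic = {entry["week"]: entry["quiz_due"] for entry in syllabus["schedule"]}
    semantic = {
        "title": syllabus["title"],
        "overview": syllabus["overview"],
        "topics": [entry["topic"] for entry in syllabus["schedule"]],
        "learning_goals": syllabus["learning_goals"],
    }
    return episodic, semantic

episodic, semantic = build_memory_modules(SYLLABUS)
print(episodic)             # {1: '2020-09-06', 2: '2020-09-13'}
print(semantic["topics"])   # ['Introduction to KBAI', 'Semantic networks']
```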


Agent Smith next assists an instructor to generate a knowledge database that consists of labeled training questions and answers. Smith then uses machine-intelligence supervised learning to train the previously mentioned multiple multiclass classifiers to generate a new class-specific Jill to assist learners as a TA for a completely new and unrelated course.35 Figure 6.6 summarizes the process to create a new class-specific AI-TA/Ask Jill for a course.

Are we not ourselves creating our successors in the supremacy of the earth? Daily adding to the beauty and delicacy of their organization, daily giving them greater skill and supplying more and more of that self-regulation self-acting power which will be better than any intellect? Samuel Butler (1872)36

Recent advances in machine-intelligence offer both new possibilities and unique challenges for K–12 and post-secondary education. Similar to what we see today in businesses and governments, machine-intelligent platforms have the potential to improve education-related administrative services. However, employing machine-intelligence algorithms within the sacred sphere of teaching and learning in education presents a very different set of challenges. Historically, the role of technology in education has been to augment and complement the act of human teaching – to assist with the whole process of knowledge transfer and acquisition – not to replace it. Besides needing a firm grasp of domain knowledge, formal teaching has always been considered a regimented, systematic act that requires human-centric abilities: to excite a passion for learning, to cultivate trusting and respectful teacher/learner relationships, and to remain encouraging, optimistic, and reflective. Moreover, being ostensibly in charge of their teaching environment, effective instructors can arrange the spaces and learners themselves to accommodate different learning styles and interests and to adjust motivation levels.

Figure 6.6 Block Diagram to Create a New Course-Specific AI-TA/Ask Jill Bot Source: Used by permission from Ashok Goel, Georgia Institute of Technology


Whether on-site or online, all learning environments may induce psychosocial responses from learners that positively or negatively affect cognitive learning behaviors.37 Instructors also nominally control their learning environment's community and must be aware that the culturally and socially contextualized images and phrases posted on walls and desks, and their emoji and sticker equivalents in online environments, may inspire some learners but intimidate others, while undermining motivation for both.38 In addition, although there are a large number of techniques that a 'great' teacher may need to master to improve learning engagement and motivation, three primary factors tend to be constant: academic subject-matter interest, perceived challenge, and perceived learner control of the learning process.39 However, even the best award-winning teacher can't always assign equally engaging separate material for every learner in a course, but they may know how to manipulate a class task and parcel pieces of a larger singular task to some learners to keep them engaged and provide them a sense of control over their learning process. When such a designed task is accomplished, metacognitive self-appraisal improves, and the perception of control over the learning process increases, demonstrating confidence and self-satisfaction.40 For machine-intelligence algorithmic models to approach the effectiveness of successful human teachers – educated and trained in these teaching techniques, methodologies, and intervention strategies, coupled with years of experience that honed these skills – these algorithms would need to approach real artificial intelligence: artificial general intelligence (AGI). Much research and development work in the field of machine-intelligence today is still focused on improving the accuracy of single-task models that automate business functions and consumer activities. As I've written before, machine-intelligence models that can continuously beat the best Go player or chess master are very impressive, but they were designed and programmed to accomplish only these distinct tasks. These same models can't adjust the power consumption of my air conditioning unit to customize the inside temperature to my preference throughout a summer day in my house, and they can't use machine-vision to detect a delivery person versus a deer outside my front door. These goals require separate, unrelated machine-intelligence models, each needing different classification training and reinforcement learning methods. AGI, on the other hand, may possess multiple nonrelated but task-dedicated integrated-intelligence algorithms that may share unsupervised trained datasets and model outputs to learn additional tasks on their own. Let's define AGI as any artificial intellect that matches the cognitive performance of humans in learning, retrieving, expressing, or teaching domains of interest. One way to potentially achieve this level of artificial intellect is to begin by designing and programming a series of generic baby/child integrated-intelligence self-training algorithmic models. One evolutionary NLP model


could be designed to self-learn pre-ingested datasets and rules about the alphabets of the English, French, and Mandarin languages simultaneously. An evolutionary integrated model would in turn learn the pre-ingested rules, spelling, and definitions of thousands of words of the same three languages. Next, a different integrated model may self-instruct on stringing learned words together to form simple phrases, then longer sentences, questions, and answers, learning what potential responses from a native speaker might be, and so on. . . . Add an integrated but parallel machine-vision model self-training on colors, then on simple images of objects, then on more complex compound 3D images, then on video – perhaps add an artificial cognitive memory map to store the learned 'knowledge' – and eventually you may have the basis of simple human child-level cognitive intellect. Of course, this example of evolutionary collective intelligence is highly dependent upon computational efficiency and speed within a nearby configured cloud server farm to prevent lag, and a dedicated machine-learning 'edge' microchip within each server to pre-calculate classification and labeling of all the training datasets. Even so, it may still take decades to train more complex adolescence-level models, but it's not out of the realm of possibility that this evolutionary approach would one day match human intellect. As Nick Bostrom wrote in Superintelligence: Paths, Dangers, Strategies, "We know that (blind) evolutionary processes can produce human-level intelligence, since they have already done so at least once before."41 A large portion of the human brain is already hardwired from birth for specific purposes other than cognitive knowledge acquisition. Although human brain functions tend to overlap spatially between the hemispheres and are interconnected through billions of neuron synapses, there are module portions defined for specific purposes. For example, the occipital lobe – dedicated to human sight: color, light, image recognition, and visual perception – consumes 14% of our brain density. Essentially, subconscious low-level neural processing occurs automatically outside of conscious cognitive focus.42 The auditory cortex, responsible for processing sound frequencies and amplitude, sits inside our skull next to our ears, and although neural computational networks interconnect to the Brodmann areas of the cerebral cortex to help interpret language, the sound-processing function takes up 5% (both hemispheres) of our brain mass.43 There are multiple other interconnected, overlapping portions of brain mass, such as the parietal lobe, which is primarily responsible for taste and sensory perception. To create a potential AGI model, perhaps we should focus on designing artificial replicas of only the brain modules, sections, and proportional functionality that we require for artificial cognitive tasks that truly differentiate us from other mammals. Evolution certainly didn't grant us the best vision or the best hearing in the mammalian kingdom. However, evolution, or a fortunate accident of evolution, did indeed grant us a superior brain


containing a cerebral cortex that provides us with the ability to emote, concentrate, plan, calculate, judge, reason, solve, speak, and create, along with an exclusive hippocampus memory-processing ability to acquire complex knowledge and to index, store, and retrieve that knowledge. If we want to design an AGI supersystem, then, we should concentrate only on bottling some of these superior human/mind attributes and ignore the dedicated body functions and regulation regions of the human brain we will never need. For instance, if I were to create an AGI fiction writer, politician, or even a teacher, why would the models require neural signals to message potential bodily injury, hunger, illness, temperature, or involuntary muscle contractions? Whole brain emulation may not be necessary to achieve a successful artificial-intelligent teacher supersystem; human/mind attributes coupled with extensive domain knowledge might be sufficient. Perhaps, then, we could create a design that interconnects only the applicable mind attributes, domain knowledge acquisition abilities, and best human teaching traits to build an apex teaching cogitator machine-intelligent supersystem. Over the ages, chess was considered the preeminent game that signified the apex of human intellectual dominance. Claude Shannon, the great Bell Labs engineer, Princeton mathematician, and father of information theory, calculated that chess holds more potential moves, at 10^120, than there are atoms in the observable universe, 10^80.44 The 'Shannon Number' seemed to prove that chess was a game that only a human mind could invent and play, and that this game of the ages truly differentiated the human brain from those of other mammals. However, in 1997, using huge hardware towers holding thousands of massively parallel processors and thousands of algorithms calculating statistical probabilities of each move and countermove, IBM's Deep Blue eventually beat the world-reigning chess grand master Garry Kasparov.45 Although Deep Blue utilized more of a lexicon look-up table methodology than what we would consider modern machine-learning frameworks, the result was still monumental. In late 2017, DeepMind, a London-based Google acquisition, utilized self-play reinforcement learning to create AlphaZero, an unsupervised deep-learning system that could teach itself to play chess before taking on Stockfish, the 2016 Top Chess Engine Championship (TCEC) winner.46 Not only did AlphaZero learn chess on its own, but when it played, it drew or beat Stockfish in 100 out of 100 matches!47 As I previously wrote but want to underline here, as remarkable as this feat is, AlphaZero was but a singularly designed deep-learning model created for one purpose, and one purpose only. It has no ontological language map to understand dialog content or context. It can't reason in human terms. It can't predict a future human decision, recognize any image or sound, or solve any problem outside of the chess game schema; it can accomplish only one grand, complex task: win a chess game. Understanding the successes and limitations of singular models and the knowledge that many


human brain functions can be ignored when designing an AGI solution, the author's research team went to work in the summer of 2019 to determine only the required brain functions and the most promising machine-intelligent frameworks needed to design an actual AGI teacher. In order to save time developing a new software application – say, for example, a new image-editing application – programmers don't have to create operating system features like drop-down menus, window frames, or icons for the trash can or hard drive. These preprogrammed features reside in software modules and libraries that usually come with, or are accessible from, the language compiler, SDK, and toolkits. The programmer can just embed the function call in the code, allowing time to focus on creating the new tasks and features of the novel image-editing application being built. This is very similar to how programmers today can license a pretrained NLP model residing on a cloud server somewhere as software-as-a-service (SaaS) and link it to their PC-based software environment through an application programming interface (API). Current cloud-based pretrained machine-intelligence models functionally serve the same purpose as traditional software libraries did – forming the basic building blocks of a new machine-intelligence application being developed to accomplish new tasks. An AGI teaching/teacher framework could be fashioned the same way. A pretrained domain-knowledge neural network model (like the AlphaZero example) could access stored course content from an asset library, alongside innate knowledge models and other human mind attributes, in an AGI schematic superstructure. A programmer could then build a framework for an AGI teacher to teach the French Revolution and embed calls in the framework for a complete pretrained French history domain-knowledge model; a French language model when required for translations; and the best practices, pedagogical methodologies, and most admired traits of the best French Revolution history teachers. What would result is a framework, or rule book of sorts, that identifies the 'best of the best' abilities, attributes, skills, and knowledge derived from peer-reviewed research studies that have stood the test of time, including from all learner population sizes, ages, and global demographics. Besides needing a firm grasp of domain knowledge, great teachers excite a passion for learning, cultivate relationships with learners to build trust and respect, and remain encouraging, optimistic, and reflective. They are also aware of the three primary factors that tend to be constant for learners to be academically successful regardless of demographics – subject-matter interest, perceived challenge, and perceived learner control of the learning process – and adjust their course content accordingly.48 Let us examine a simplistic rule book of some potential sub-module knowledge, skills, abilities, aptitudes, and personality traits that an AGI supersystem must possess in order to emulate a great and inspiring human teacher.


An AGI Teacher Sub-Module Rule Book:

Rule 1 – Expert Domain Knowledge: All great teachers possess extensive knowledge about the subject matter that they teach, but they must also possess parallel subject-matter knowledge – such as historical knowledge that can help put language learning, sociology, or political science topics in context.

Rule 2 – Ability to Plan Clear Goals and Objectives for Each Class, Quarter, or Semester Meeting/Session: Although a syllabus may outline the schedule for an entire semester or quarter, a great teacher must parcel the overarching goals into smaller, digestible segments of information offered over time that keep each learner engaged and motivated and that, when combined, also match the expected learning outcomes of the course.

Rule 3 – Immense Curriculum Knowledge: A great teacher knows their course resides in a larger curriculum framework, and the information conveyed in their class must add to and augment the information that was taught before and prepare the learner for what is to come in the course or curriculum sequence.

Rule 4 – Excellent Classroom Management Skills: Great teachers manage class time to meet session goals by guiding and steering presentations, discussions, and questions accordingly. The best teachers can also detect when students are disengaged, unmotivated, uninterested, or lost, and in turn may repeat the information using a different pedagogical methodology, strategy, or approach. They also may call on as many 'emotionally' prepared learners as possible to comment or answer questions to involve the entire class during each session. Lastly, they must help a learner feel in control over their learning environment and the tasks assigned.

Rule 5 – Great Communication and Engagement Skills: Great teachers maintain open communication channels with all of their learners and use effective response techniques with appropriate styles that are unbiased and impartial and that promote positive behaviors and attitudes.

Rule 6 – Dynamic Teaching Style with Charming and Disarming Personality Traits: A great teacher consistently keeps a group of learners engaged through interesting, relevant, inspiring, and dynamic teaching methodologies. A great teacher adjusts facial and body language to reflect moments of emphasis in a lecture or question-and-answer session and makes voice amplitude changes to provide emphasis when discussing important course content. Sometimes engagement may manifest itself through passionate expression, storytelling, humor, or anecdotes and analogies related to the materials being taught.
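Purely as an illustration of how such a rule book might map onto software sub-modules, the sketch below encodes a few rules as modules that compete for activation within a larger supersystem. The class names, scoring scheme, and activation logic are assumptions invented for this example; they are not a specification of any real design.

```python
# Hypothetical sketch: rule-book entries as competing cognitive sub-modules.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SubModule:
    name: str                           # e.g., "Expert Domain Knowledge" (Rule 1)
    relevance: Callable[[Dict], float]  # scores how relevant this module is right now

def pick_active_module(modules, situation):
    """Society-of-Mind-style competition: the most relevant module wins activation."""
    return max(modules, key=lambda m: m.relevance(situation))

modules = [
    SubModule("Expert Domain Knowledge",
              lambda s: 1.0 if s["event"] == "content_question" else 0.2),
    SubModule("Classroom Management",
              lambda s: 0.9 if s["off_task_learners"] > 3 else 0.1),
    SubModule("Communication & Engagement",
              lambda s: 0.7 if s["engagement"] < 0.5 else 0.3),
]

situation = {"event": "content_question", "off_task_learners": 1, "engagement": 0.8}
print(pick_active_module(modules, situation).name)  # -> Expert Domain Knowledge
```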


These rules could potentially represent cognitive sub-modules – singular artificial cognitive models that may sometimes overlap, share dually assigned duties and datasets, and even compete for activation with other interconnected sub-modules within a larger collective AGI superstructure architecture. This competitive cerebral arena may appear similar to the definition of collective intelligence or evolutionary collective intelligence referenced earlier, whereby group intelligence emerges from many agents/learners trying to solve a collective problem through competition, conflict, and resolution. Eventually, the group agrees in consensus upon the best solution to solve a problem – à la Marvin Minsky's Society of Mind – but it may not always be logical. This may also be similar to how, after a lecture, great teachers cognitively prepare their biological hippocampus memory map for retrieval of information when they open the floor of a learning space for questions. In real time, they flip mentally between various potential answers derived from semantic and episodic memory knowledge, while simultaneously listening to a learner's multipart question. Other characteristics outlined earlier in our Rule Book, such as being 'inspiring' or 'passionate,' are even more difficult to quantify algorithmically, as the adjectives are subjective. Even ingesting positive and negative examples, or bad examples (conscious bias), into a machine-intelligence sub-module may result in an exaggerated AGI text or voice frequency, amplitude, and tone that produces laughter and irritation rather than the intended outcome. We must also prepare ourselves for the possibility that future AGI teachers may not function like, imitate, or even emulate a great human teacher that matches our expectations. An active self-training machine-intelligent AGI superstructure given the task of teaching a specific course, using a sub-module asset library of pretrained and self-training models, may ultimately offer a teaching pedagogy, philosophy, and style completely unfamiliar to what we expect. As Nick Bostrom notes in his Superintelligence tome, as much as we tried, we learned to fly not by imitating nature's flapping of wings but by the creation of a fixed curved wing designed for lift, a turning twisted propeller designed for pull, and a movable vertical tail fin designed for stability and direction. Sometimes, imitating nature is the wrong pathway.49 Perhaps, then, what we believe to be our ideal interconnected sub-module AGI teacher supersystem may not, in the end, resemble the rule book that defines the great characteristics of a human teacher. The AGI teacher may not imitate or mimic us but may learn and evolve to offer better and more precise personalized teaching methods, including nobler assessment and evaluation techniques that we as educators and researchers have never even considered. Let us now consider the technical plausibility of the hardware that an AGI model would require to operate – the processing power that would be needed to parse, render, classify, label, self-learn, and compute all of


the interleaved mathematical algorithms required for a multi-module, multifunctional artificial intelligent supersystem. The following list is a contemporary revision made by this author of a chart found on pages 71–73 in the aforementioned Superintelligence that compared computer hardware and human biological 'wetware' capabilities and limitations:

• A contemporary 7nm central microprocessor operates at calculation speeds of ~4 GHz (4 billion cycles per second), whereas human biological neurons operate at ~200 Hz, requiring energy-inefficient massively parallel processing to accomplish any complex cognitive sequential task.50 Faster 5nm processors are currently under development, and quantum processors have been projected to reach calculation speeds 10^8 times faster than digital ones.51

• Many CPUs today offer eight processing cores. Cutting-edge GPU cards can hold thousands of cores processing calculations at the speed of light: 186,282 miles per second (299,792 kilometers per second). The fastest neuron in the human body, the alpha motor neuron in the spinal cord, signals at ~0.0794 miles per second (120 meters per second) – hence, even when we touch something hot, it still takes a long time to pull our fingers away and we still get burned.

• Unlike human cognitive abilities, CPUs and GPUs can be scaled to the thousands, producing millions of cores processing quadrillions of calculations at the same time, as now seen in server farms around the globe. The average mature but finite human brain holds roughly 100+ billion neurons (10^11), of which 13%–28% are cortical neurons (but 82% of brain mass) located within our cerebral cortex, which allows superior cognitive abilities over our mammalian cousins.52 These cortical neurons form thousands of connections (synapses) to other neurons, amounting to trillions of connections. Moreover, if processor chips in computer servers age, overheat, and die, they can be easily replaced. When our cortical neurons deteriorate and falter from internal factors such as aging and disease, they can't be replaced, and they cause significant cognitive decline.53

• Electronic short-term memory storage chips, random access memory (RAM), and long-term storage like disc hard drives and faster-to-retrieve solid-state devices (SSDs) can store terabytes of complex serial and parallel information and can be scaled to hundreds of petabytes (1 PB = 1,000 TB) connected within and external to computer servers. These chips and drives can also be easily replaced when age or heat deteriorates their capacity. Mature human working/short-term memory capacity (WMC/STM) can temporarily store only about three to five complex items (chunks) at once.54 And as we have studied in Chapter 1, long-term retrievable memory (LTM) storage capacity across the hippocampus and cerebral cortex is dependent on the strength of the modulated neural synapses of the memory experience and differentiation between semantic, episodic, and procedural memories, but it has been measured at ~2.5 petabytes.55
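As a rough back-of-envelope illustration, the ratios implied by the approximate figures quoted above can be computed directly. The 100 PB storage cluster used for comparison is an assumed example, not a figure from the chart.

```python
# Back-of-envelope ratios from the approximate figures cited above.
cpu_hz, neuron_hz = 4e9, 200            # ~4 GHz clock vs. ~200 Hz neuron firing
signal_kms, axon_kms = 299_792, 0.120   # light-speed signaling vs. ~120 m/s axon
ltm_pb, server_pb = 2.5, 100            # ~2.5 PB human LTM vs. an assumed 100 PB cluster

print(f"Clock-speed ratio:  {cpu_hz / neuron_hz:,.0f}x")     # ~20,000,000x
print(f"Signal-speed ratio: {signal_kms / axon_kms:,.0f}x")  # ~2,500,000x
print(f"Storage ratio:      {server_pb / ltm_pb:,.0f}x")     # ~40x (for this assumed cluster)
```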

Design Study: Personalized Recursive Online Facilitated Intelligent Teaching

In considering what could be done by a ‘brain without a body’ Turing listed chess, learning and translating languages, cryptography, and mathematics.
S.B. Cooper & J. Van Leeuwen (2013)56

The Personalized Recursive Online Facilitated Intelligent Teaching (PROF(it)) invention is a supersystem designed by the author and his summer research team in 2019. It is composed of a series of unique algorithms and deep-learning models that independently facilitate human-like online instruction of predetermined domain subject-matter topics. This new, recursive multimodal architecture attempts to emulate essential elements and attributes of a successful human teacher/professor. PROF(it) provides:

1 a dynamic content delivery system for any course timeframe and learner population;
2 an adaptable teaching methodology to match learner styles;
3 personalized classroom management algorithms that identify and compare learner disruptions and achievement levels;
4 disciplinary gradient scales based on parsed peer comments and feedback; and
5 a dynamic expression module that conveys various emotions and adapts and responds to learners to maximize empathy and compassion.

PROF(it) also integrates with the previously mentioned personal learning map (PLM)57 module, which stores a learner’s experiences at any learning iteration, and the previously discussed Deep Academic Learning Intelligence (DALI)58 invention to provide advising and counseling throughout a course sequence. In this manner, the PROF(it) supersystem offers personalized online teaching for a diverse student body and attempts to match an attentive and compassionate human teacher’s essential cognitive capabilities, including extracurricular attention. Figure 6.7 outlines the module blocks in the PROF(it) supersystem. We shall now discuss each module of the PROF(it) supersystem in detail, starting with the Integrated Domain Knowledge module. In order to explain how the modules function independently and integrate with each other, I do include a bit of mathematics


and a few simple algorithms, but if math is not your cup of tea, feel free to skip to each module’s introduction and concluding narrative, and also the master list of PROF(it) variables listed in the Appendix to this chapter.

Integrated Domain Knowledge (IDK) Module

For PROF(it) to teach any subject matter, it must be previously classified, indexed, and stored as knowledge datasets in a database repository. To access and retrieve knowledge datasets, PROF(it) employs a revised NLP autoregression transformer XL network model, designed to process massive NLP datasets to provide a coherent summary, or to refer a learner to a specific chapter, section, or even paragraph within a textbook.59 Our team’s version of the XL network is titled Integrated Domain Knowledge (IDK) and is able to process multiple domain-knowledge NLP tasks because it utilizes unsupervised pretraining, followed by supervised fine-tuning of the input and output, saving the time of traditionally labeling every useful data point. It is able to achieve this by feeding hidden states from a previous layer to the current layer, in addition to utilizing ‘multi-head’ attention.60 As mentioned earlier, XL-net uses autoregression (AR) language modeling with the following optimization function:

\max_{\theta} \sum_{l=1}^{L} \log \frac{\exp\left(h_{\theta}(x_{1:l-1})^{\top} e(x_l)\right)}{\sum_{x'} \exp\left(h_{\theta}(x_{1:l-1})^{\top} e(x')\right)} \qquad (1)

where hθ(x1:l−1) is a context representation produced by neural models, such as RNNs or transformers, and e(x) denotes the embedding of token x as a vector. To combat the weakness of not being able to use bi-directional context understanding, XL-net is trained on multiple permutations of a sequence of tokens. Say we have a sequence of tokens [x1, x2, x3, x4], and want to get context for the third token, x3. If we didn’t use permutations, the AR model would never consider x4 because it doesn’t come before x3. However, if the sequence is permuted such that x4 comes before x3, then the model can consider the context from x4. Figure 6.8 demonstrates this concept. This setup allows XL-net to gain the benefits of AR modeling, while providing a solution to problems associated with vanilla AR modeling. Equation 2 demonstrates the modified AR optimization equation that accounts for these permutations:

\max_{\theta} \; \mathbb{E}_{z \sim \mathcal{Z}_L} \left[ \sum_{l=1}^{L} \log p_{\theta}\left(x_{z_l} \mid x_{z_{<l}}\right) \right] \qquad (2)
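To make the permutation idea behind Equation (2) concrete, the toy sketch below enumerates factorization orders for the four-token example in the text and counts how often x4 becomes available as context for x3. It is an illustration of the concept only, not part of the IDK implementation.

```python
from itertools import permutations

# Toy illustration of permutation-based AR factorization: under some
# factorization orders, x4 precedes x3 and can serve as context for x3.
tokens = ["x1", "x2", "x3", "x4"]
target = "x3"

orders = list(permutations(tokens))
with_x4 = [o for o in orders if "x4" in o[: o.index(target)]]

print(f"{len(with_x4)} of {len(orders)} factorization orders place x4 before {target}")
for order in with_x4[:3]:  # show a few example orders and their contexts
    print("  order:", order, "-> context for", target, ":", order[: order.index(target)])
```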

Comments exceeding a negativity threshold (z_{c,k} > 0.5 and n_{c,k} > 5) and a toxicity threshold (x_{c,k} > 2) will be flagged as disruptive. CMI will calculate a disruptiveness index for each disruptive comment between 0 and 10; these values count as ‘disruption points’ (σ_k) against the learner:

\sigma_k = 10 \log\left(x_{c,k}\, z_{c,k}\right) \qquad (7)

In addition, if the learner consistently completes and/or turns in assignments after each topic’s due date, CMI will count this towards a learner’s ‘lateness points’ (γ_k). Lateness points are calculated using:

\gamma_k = \min\left(10 \log\left(\sum_{a} \frac{p_a\left(t_{a,k} - t_{a,i}\right)}{t_i\left(\max_a p_a\right)} + 1\right),\, 10\right) \qquad (8)

where p_a represents the number of points that assignment a is worth. The number of disruption and lateness points that a learner receives (during timeframe t_i) will determine the appropriate response.
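Read literally, Equation (7) and the reconstruction of Equation (8) above amount to a few lines of bookkeeping. The sketch below illustrates that arithmetic; the function names, the sample values, the base-10 logarithm, and the explicit clipping to the 0–10 range are assumptions added for the example and are not taken from the CMI specification.

```python
import math

def disruption_points(toxicity: float, negativity: float) -> float:
    """Disruptiveness index per Equation (7): 10*log10(x_ck * z_ck),
    clipped to the 0-10 range described in the text (an added assumption)."""
    index = 10 * math.log10(toxicity * negativity)
    return max(0.0, min(10.0, index))

def lateness_points(assignments, topic_time: float) -> float:
    """Lateness points per the reconstruction of Equation (8).
    `assignments` is a list of (points, days_late) pairs; the result is capped at 10."""
    max_points = max(p for p, _ in assignments)
    weighted = sum(p * late / (topic_time * max_points) for p, late in assignments)
    return min(10 * math.log10(weighted + 1), 10.0)

# Hypothetical learner: one flagged comment and two late assignments in a 7-day topic.
print(disruption_points(toxicity=6.0, negativity=0.8))
print(lateness_points([(100, 2.0), (50, 0.5)], topic_time=7.0))
```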


The following index specifies the course management demerit-points schema that may be assigned to a cluster, sub-cluster, or to an individual learner:

Disruption Points (if these criteria match the PROF(it) HACKD-generated syllabus):
5–20 points: the learner is warned of his/her offensive behavior and asked to correct it, and PROF(it) uses a behavioral operation tailored to their learning style (based on their personal learning map).
21–50 points: the learner will not be allowed to comment on class forums or in-class discussions for the next two assignments.
51–100 points: the learner receives a deduction to overall evaluation g_k.
101+ points: the learner will not be allowed to comment on class forums or in-class discussions for the remainder of the course.

Lateness Points (if these criteria match the PROF(it) HACKD-generated syllabus):
1+ points: the assignment loses 10% credit for every day past the due date, up to 50%.
5–20 points: the learner is warned that he/she is not keeping up with the course material, etc.
21–50 points: DALI contacts the learner to discover if he/she is having issues with course material and provides suggestions for alternative learning methods.
51–100 points: late work receives no credit.
101+ points: the learner is removed from the course or may complete it without credit.

To determine the learner’s achievement level at the beginning of each topic, the algorithm will implement a naive Bayes classifier on features g_k, g_{a,k}, r_{a,k}, and h_{c,k}. The classifier uses past sessions of the course/class for training data on each of the features, which are assumed independent. Using MAP (maximum a posteriori) estimation, it estimates probabilities (‘weights’) p(x_i | C_z) for each attribute x_i. For Z = 3 possible outcomes (high-achieving, mid-achieving, and low-achieving), assign class label ŷ = C_z for some z, using:

\hat{y} = \underset{z \in \{1, \ldots, Z\}}{\operatorname{argmax}} \; p(C_z) \prod_{i=1}^{n} p(x_i \mid C_z) \qquad (9)

where p(C_z) is a set value (‘optimal’ percentages of HA, MA, LA) and p(x_i | C_z) is given by the identified model weights.
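A minimal sketch of the MAP classification step in Equation (9) appears below. It assumes the features have already been discretized; the feature values, the training records, the ‘optimal’ priors, and the add-one smoothing are all hypothetical stand-ins for what a trained CMI module would estimate from past course sessions.

```python
from collections import Counter, defaultdict

CLASSES = ["high", "mid", "low"]                 # the Z = 3 achievement levels
PRIORS = {"high": 0.3, "mid": 0.5, "low": 0.2}   # 'optimal' percentages p(C_z), assumed

def fit_weights(records):
    """Estimate p(x_i | C_z) from past (features, label) pairs with add-one smoothing."""
    counts = defaultdict(Counter)        # (class, feature index) -> value counts
    totals = Counter()
    for features, label in records:
        totals[label] += 1
        for i, value in enumerate(features):
            counts[(label, i)][value] += 1
    def weight(label, i, value):
        return (counts[(label, i)][value] + 1) / (totals[label] + 2)
    return weight

def classify(features, weight):
    """MAP estimate: argmax_z p(C_z) * prod_i p(x_i | C_z), as in Equation (9)."""
    def score(label):
        s = PRIORS[label]
        for i, value in enumerate(features):
            s *= weight(label, i, value)
        return s
    return max(CLASSES, key=score)

# Hypothetical training data: discretized (g_k, g_ak, r_ak, h_ck) features.
history = [(("good", "good", "on_time", "accurate"), "high"),
           (("ok", "ok", "late", "mixed"), "mid"),
           (("poor", "poor", "late", "inaccurate"), "low")]
weight = fit_weights(history)
print(classify(("ok", "good", "on_time", "mixed"), weight))
```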


Once a learner is classified, the algorithm will employ DALI in order to further support the learner by providing machine-intelligence-driven advising and counseling. In addition, PROF(it) may also modify the current teaching methodology to adapt to the largest percentages of demonstrated cluster, sub-cluster, or individual struggling predetermined learner styles stored in a learner’s PLM. Although support is given to each individual, learners will be classified according to their corresponding categories such that high-achieving learners are paired with lost/low-achieving learners, and well-behaved learners with disruptive learners.66 Once these groups are formed, the Knowledge Distribution Sub-Model (KDSM) will be employed in order to foster the most efficient peer-tutoring relationships and improve the overall learner performance in any given cluster, sub-cluster, or group of learners. This is given by:

\text{Minimize} \quad z(x) = \sum_{k_1=1}^{w} \sum_{k_2=1}^{q} t_{k_1,k_2}\, m_{k_1,k_2} \qquad (10)

where w is the number of high-achieving learners, q is the number of low-achieving learners, t_{k1,k2} represents the time needed to be invested by learners k_1 and k_2, and m_{k1,k2} represents the amount of knowledge (quantified by the number of topics in the subject) transferred between learners k_1 and k_2. The time t_{k1,k2} will be determined based on each learner’s PLM (specifically, academic standing and learning styles) and past domain-related course topic (and subtopics) learning experiences. The algorithm improves time estimates based on past data of learner pairings and their PLMs. Figure 6.11 outlines the KDSM Sub-Module Peer-Tutoring Algorithm.

Figure 6.11 KDSM Sub-Module Peer-Tutoring Algorithm
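When the time and knowledge-transfer estimates are arranged as a cost matrix, the objective in Equation (10) becomes a small assignment problem. The sketch below finds the minimum-cost one-to-one pairing by exhaustive search; the equal group sizes, the cost values, and the brute-force search are assumptions for illustration, since a deployed KDSM would derive t and m from learners’ PLMs.

```python
from itertools import permutations

# Hypothetical cost[i][j] = t[i][j] * m[i][j]: the term summed in Equation (10)
# for pairing high-achieving learner i with low-achieving learner j.
cost = [
    [4.0, 2.5, 3.0],
    [3.5, 1.0, 2.0],
    [5.0, 2.0, 1.5],
]

def best_pairing(cost):
    """Brute-force the one-to-one pairing that minimizes the total cost."""
    n = len(cost)
    best, best_total = None, float("inf")
    for perm in permutations(range(n)):      # perm[i] = low achiever paired with i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best, best_total = perm, total
    return best, best_total

pairs, total = best_pairing(cost)
for high, low in enumerate(pairs):
    print(f"high achiever {high} <-> low achiever {low}")
print("total cost:", total)
```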


Following this intervention, the learner will be reevaluated on their classroom interactions and performance. If improvements are noted, PROF(it) will not modify its teaching and behavioral operation methodology for that cluster, sub-cluster, or individual learner, but will use the ensuing datasets to further fine-tune (unsupervised training) its CMI models. Thus, the unsupervised algorithm learns and adapts to better process future issues and improve PROF(it)’s classroom management abilities. Figure 6.12 outlines the complete schema of the CMI algorithms.

Figure 6.12 CMI Module Algorithmic Architecture

Dynamic Adaptable Teaching Methodology Intelligence (DATMI)

In addition to immense domain knowledge, classroom management skills, and the ability to dynamically distribute knowledge content, great teachers utilize a flexible teaching methodological schema to try to match learner styles, abilities, personality traits, and general interest. The fourth module in the PROF(it) supersystem is the Dynamic Adaptable Teaching Methodology Intelligence (DATMI). As each individual learner’s personal



learning map (Ω_k) silhouette datasets are retrieved for other superstructure modules to establish a baseline at the iteration of any learning experience, DATMI also requires access to a learner’s Ω_k in order to categorize and classify previously parsed and defined MBTI scores, personality traits, learning styles, grouping status, and extracurricular issues that may impact the learning process.67 The primary goal of the DATMI module is to increase the learner performance mean results within a learner cluster, sub-cluster, or for an individual learner in a particular knowledge domain taught by PROF(it). At any iteration ε, assume that there are n_s courses (design variables) and n_k learners (population size, k = 1, 2, . . ., n_k), and let M_{s,ε} be the mean result of learners in any particular course s (s = 1, 2, . . ., n_s). Teaching methodologies are derived from research conducted by G.L. Lawrence68 and C.R. Martin69 using the MBTI reference,70 and are considered as either 1 = SN or 2 = T_F. The best overall result X_{total−kbest,ε}, considering all courses together, obtained in the entire population of learners can be considered the result of the best learner, k_best. However, as PROF(it) is a highly trained supersystem that teaches learners using the best matching methodology so that they perform at the highest performance level, the best learner identified in the population may also be considered by PROF(it) as an equivalent human teacher. The difference between the existing mean result of each course and the corresponding learner result from the PROF(it) DATMI for each course is given by:

\mathit{Difference\_Mean}_{s,k,\varepsilon} = r_{\varepsilon}\left(X_{s,k_{best},\varepsilon} - T_F\, M_{s,\varepsilon}\right) \qquad (11)

where X_{s,kbest,ε} is the result of the best-performing learner (equivalent teacher) in course s, T_F is the methodology factor that decides the value of the mean to be changed, and r_ε is the decision number in the range [0, 1]. The value of T_F can be either 1 or 2 at the iteration of an academic experience. The value of T_F is decided randomly at iteration with equal probability, and is then adjusted by Difference_Mean_{s,k,ε} = d_ε(X_{s,kbest,ε} − T_F M_{s,ε}) and HACKD results:

T_F = \operatorname{round}\left[1 + \operatorname{rand}(0,1)\{2 - 1\}\right] \qquad (12)

T_F = 1 + d(0 \text{ or } 1)\{2 - 1\} \qquad (13)

The value of T_F is not given as an input; its value is either randomly decided by DATMI using (12) or matched to a previously predetermined MBTI teaching methodology in (13). Based on Difference_Mean_{s,k,ε}, the DATMI algorithm is therefore expressed as follows:

X'_{s,k,\varepsilon} = X_{s,k,\varepsilon} + \mathit{Difference\_Mean}_{s,k,\varepsilon} \qquad (14)

where X'_{s,k,ε} is the methodology value of X_{s,k,ε}. Accept X'_{s} if it provides better performance. All the accepted function values at the end of the DATMI learning experience are maintained, and these values become the input to the next learner experience iteration. PROF(it) therefore depends upon DATMI’s learner-population academic performance results. To also factor in peer-learning experiences (peer-to-peer, social learning) – the type of learning that exists outside of formal learning experiences but that may also influence formal PROF(it) learning performance – we select two learners, k_1 and k_2, so that X'_{total−k1,ε} ≠ X'_{total−k2,ε}, where X'_{total−k1,ε} and X'_{total−k2,ε} are the revised values of X_{total−k1,ε} and X_{total−k2,ε}. Therefore:

X''_{s,k_1,\varepsilon} = X'_{s,k_1,\varepsilon} + r_{\varepsilon}\left(X'_{s,k_1,\varepsilon} + X'_{s,k_2,\varepsilon}\right) \quad \text{if } X'_{total-k_1,\varepsilon} > X'_{total-k_2,\varepsilon} \qquad (15)

X''_{s,k_2,\varepsilon} = X'_{s,k_2,\varepsilon} + r_{\varepsilon}\left(X'_{s,k_2,\varepsilon} + X'_{s,k_1,\varepsilon}\right) \quad \text{if } X'_{total-k_2,\varepsilon} > X'_{total-k_1,\varepsilon} \qquad (16)

results in the maximization of peer-learning performance results. The inversion will result in the minimum performance. The preceding equations allow for two different DATMI methodologies to be offered, based on Difference_Mean_{s,k,ε}, during an arbitrary learning experience. However, within a large learning population, more than two teaching methodologies may be deployed, indicated as smaller increments between 1 and 2 such that 1 = SN, 1.2 = ST, 1.4 = SF, 1.6 = FN, 1.8 = TN, and 2 = TF (MBTI).

(T_F)_{\varepsilon} = \left[\frac{X_{total-k}}{X_{total-k_{best}}}\right]_{\varepsilon}, \quad k = 1, 2, \ldots, n_k \qquad (17)


Equation (17) applies when X_{total−kbest,ε} ≠ 0; if X_{total−kbest,ε} = 0, then (T_F)_ε = 1, where X_{total−k} is the teaching methodology result of any learner k, considering all the courses at iteration ε, and X_{total−kbest} is the result produced from DATMI at the same iteration ε. Tuning of T_F as a deployed methodology improves the performance of the DATMI module. These teaching methodologies represent datasets that initially train a customized stacked CNN/LSTM model. However, Difference_Mean_{s,k,ε} results from any chosen methodology, functioning as active unsupervised training datasets from a population n_k and course n_s, may produce incremental combinations of teaching methodologies with the ultimate goal of increasing learner performance. The adaptive teaching methodology factor in DATMI is generated based on the performance results of the best learner in any learning experience, and the k_best corresponding methodology. Figure 6.13 outlines a diagram of the DATMI algorithm.

Figure 6.13 DATMI Algorithm
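A simplified rendering of the teacher-phase update in Equations (11) through (14) is sketched below for a single course. The synthetic learner results, the acceptance rule (keep a candidate only when it scores higher), and the use of the class best as the ‘equivalent teacher’ are assumptions made for the example rather than details taken from the DATMI implementation.

```python
import random

def datmi_step(results, r=None):
    """One simplified DATMI teacher-phase update for a single course.

    `results` are current learner results X_{s,k}; the best learner stands in
    for the 'equivalent teacher'. T_F is drawn as in Equation (12), the shift
    follows Equation (11), and candidates are accepted only if they improve
    (the acceptance rule described after Equation (14))."""
    r = random.random() if r is None else r          # decision number r in [0, 1]
    t_f = round(1 + random.random() * (2 - 1))       # methodology factor, 1 or 2
    mean = sum(results) / len(results)
    best = max(results)
    difference_mean = r * (best - t_f * mean)        # Equation (11)
    updated = []
    for x in results:
        candidate = x + difference_mean              # Equation (14)
        updated.append(candidate if candidate > x else x)
    return updated, t_f

random.seed(0)
scores = [62.0, 71.0, 80.0, 55.0]
new_scores, t_f = datmi_step(scores)
print("T_F =", t_f, "updated results:", [round(s, 1) for s in new_scores])
```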


Expression Module (EM)

The fifth and final module in the PROF(it) supersystem is the Expression Module (EM). This module enables PROF(it) to express different emotions through text (or through text-to-voice with appropriate APIs to an external human voice simulator like Google’s Duplex), thus varying the way information is conveyed, emphasized, or deemphasized. Positive emotions such as happiness and excitement increase academic motivation, whereas negative emotions such as anger can help reinforce classroom standards and regulate learner behavior but may undermine motivation. EM consists of two parts: an expressor and a selector. The expressor takes in a neutral sentence and performs a ‘style transfer,’ changing the sentence to convey a certain emotion while retaining the original content. The emotion categories are anger, happiness, excitement, and neutrality. The style transfer is achieved using a combination of a textual auto-encoder and a repurposed CycleGAN (CG).71 The auto-encoder uses LSTMs for the encoder and decoder, and it is pretrained with a corpus of English-language sentences and parts of speech as vectors, allowing for continuous output of the CG. Figure 6.14 provides a graph depicting the CycleGAN Expression Module. The CG is composed of four different neural networks: two generators and two discriminators. The generators and discriminators both use the ResNet architecture in order to avoid the vanishing gradient problem.72 These networks are trained to minimize both the adversarial and cycle consistency loss. The training set is made up of English-language sentences from the IEMOCAP database, which are labeled with the emotions (anger, happiness, excitement, and neutrality) they convey.73 The generators take in an auto-encoded neutral sentence as noise and attempt to apply one of the emotions found in the IEMOCAP dataset, and vice versa (emotional to neutral). The discriminators judge whether the emotional sentences produced by the generators are accurate relative to known emotional sentences from the IEMOCAP dataset, and vice versa.

Table 6.3 Expression Module Variables
t_i: Time allotted for topic i
G, F: Generators
X, Y: Text emotion domains
D_X, D_Y: Discriminators
ℒ_adv, ℒ_cyc, ℒ_full: Adversarial, cycle consistency, and full losses
λ: Hyperparameter
N: Dimensions, the number of non-neutral emotion choices for a purpose
B: Size of the search space


Figure 6.14 Repurposed CycleGAN

For example, in the CG for anger, the generator that converts neutral to emotion may ingest a sentence such as, “Please refrain from using language that may be offensive to others.” It then attempts to apply anger, changing the message to “Stop being so toxic to others!” The adversarial loss


is calculated by one of the discriminators to determine if the converted sentence is applicable or not. The other generator changes the converted sentence back to neutral, perhaps now, “Refrain from using toxic language with others please.” The cycle consistency loss determines how similar the new neutral sentence is to the original. The goal is to train both generators to minimize these losses and fool the corresponding discriminators so that, eventually, the expressor can ingest any neutral sentence and change its emotion to an appropriate and measured response that would occur from a mature and experienced teacher. Let G be the generator that converts text in the X domain to the Y domain and F be the generator that converts text in the Y domain to the X domain. Let D_X and D_Y be the discriminators for domains X and Y. The full objective function is made up of a combination of the adversarial loss and cycle consistency loss, weighted by the hyper-parameter λ.

\mathcal{L}_{adv}(G, D_Y, X) = \frac{1}{m} \sum_{i=1}^{m} \left(1 - D_Y(G(x_i))\right)^2 \qquad (18)

\mathcal{L}_{adv}(F, D_X, Y) = \frac{1}{m} \sum_{i=1}^{m} \left(1 - D_X(F(y_i))\right)^2 \qquad (19)

\mathcal{L}_{cyc}(G, F, X, Y) = \frac{1}{m} \sum_{i=1}^{m} \left[\, \lVert F(G(x_i)) - x_i \rVert + \lVert G(F(y_i)) - y_i \rVert \,\right] \qquad (20)

\mathcal{L}_{full} = \mathcal{L}_{adv} + \lambda\, \mathcal{L}_{cyc} \qquad (21)
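Treating generator and discriminator outputs as plain arrays, the losses in Equations (18) through (21) can be written out directly. The NumPy sketch below only evaluates the loss formulas on random stand-in tensors; the batch size, the embedding dimension, and the value of λ are assumptions, and no actual ResNet generators or discriminators are built or trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def adversarial_loss(disc_scores):
    """L_adv: mean squared deviation of discriminator scores from 1 (Eqs. 18-19)."""
    return np.mean((1.0 - disc_scores) ** 2)

def cycle_consistency_loss(x, x_reconstructed, y, y_reconstructed):
    """L_cyc: reconstruction error after a full neutral->emotion->neutral cycle (Eq. 20)."""
    return np.mean(
        np.linalg.norm(x_reconstructed - x, axis=1)
        + np.linalg.norm(y_reconstructed - y, axis=1)
    )

# Stand-ins for encoded sentences and model outputs (batch of m = 8, dim = 16).
m, d = 8, 16
x, y = rng.normal(size=(m, d)), rng.normal(size=(m, d))
d_y_of_gx = rng.uniform(0, 1, size=m)            # D_Y(G(x_i)) scores
d_x_of_fy = rng.uniform(0, 1, size=m)            # D_X(F(y_i)) scores
x_cycled = x + 0.05 * rng.normal(size=(m, d))    # F(G(x_i))
y_cycled = y + 0.05 * rng.normal(size=(m, d))    # G(F(y_i))

lam = 10.0                                       # hyperparameter lambda (assumed value)
l_adv = adversarial_loss(d_y_of_gx) + adversarial_loss(d_x_of_fy)
l_cyc = cycle_consistency_loss(x, x_cycled, y, y_cycled)
l_full = l_adv + lam * l_cyc                     # Equation (21)
print(round(l_adv, 3), round(l_cyc, 3), round(l_full, 3))
```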

The selector is a system that determines the appropriate emotion to use depending on the purpose of the sentence. Following the selection of an emotion, the trained expressor communicates the statement to the learner. While teaching and encouraging, the selector has the choice between neutral, happy, or exciting. While applying a reprimand, the selector has the choice between neutral and angry. For every learner k and purpose, the selector seeks to determine percentages for each non-neutral emotion that optimize learner performance. For example, it may conclude that, when teaching, 30% of messages should be conveyed with a happy emotion, 20% exciting, and the remainder (50%) neutral. It may also conclude that, when applying punishment, a 40% angry, 60% neutral mix is ideal. To determine the percentages that maximize learner performance, the selector implements the hill climbing algorithm.74 Based on the learner’s personal learning map (Ω_LM), the selector finds past learners with similar Ω_LMs and begins with their ideal emotion percentages. Let there be N dimensions, where N is the number of non-neutral emotion choices for a response. We apply 2N ‘mutations,’ one mutation per topic timeframe t_i, performed by adding or subtracting a scalar to each emotion percentage. Let the size of the search space be B, where B = 100/N, since the sum of emotion percentages cannot exceed 100 (and cannot be less than 0). The value of the scalar begins at B/4 and halves after each 2N t_i. The academic/behavioral performance change is measured by the CMI module following each t_i. If an improvement is detected, the ideal percentages are updated; otherwise, no change is made to the original. The hill climbing algorithm takes O(N² log B) iterations to reach a local maximum. If the learner stops improving (indicating that a local maximum has been reached) but has reached a high achievement level, hill climbing terminates. However, because it is possible that there are multiple local maxima, if hill climbing reaches a local maximum and the learner is still classified as either low- or mid-achieving, the selector re-initializes the emotion percentages randomly and begins hill climbing once again.
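A stripped-down version of the selector’s hill-climbing loop is sketched below. The performance function is a stand-in for the CMI module’s per-topic measurements, and the starting percentages, the number of rounds, and the clamping of values are assumptions; only the step-size schedule (a scalar starting at B/4 that halves after each round of 2N mutations) follows the description above.

```python
import random

def hill_climb_emotion_mix(performance, n_emotions=2, rounds=6, seed=1):
    """Search for non-neutral emotion percentages that maximize a performance score."""
    random.seed(seed)
    b = 100 / n_emotions                       # size of the search space per emotion
    step = b / 4                               # initial mutation scalar, B/4
    mix = [b / 2] * n_emotions                 # starting percentages (assumed)
    best = performance(mix)
    for _ in range(rounds):                    # each round = 2N mutations
        for i in range(n_emotions):
            for delta in (step, -step):
                candidate = mix[:]
                candidate[i] = min(max(candidate[i] + delta, 0), 100)
                if sum(candidate) <= 100 and performance(candidate) > best:
                    mix, best = candidate, performance(candidate)
        step /= 2                              # scalar halves after each 2N mutations
    return mix, best

# Stand-in performance surface with a peak at 30% happy, 20% exciting.
def fake_performance(mix):
    happy, exciting = mix
    return -((happy - 30) ** 2 + (exciting - 20) ** 2)

print(hill_climb_emotion_mix(fake_performance))
```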


Although module components are currently under development and testing, the PROF(it) supersystem remains a theoretical but plausible design that attempts to use machine intelligence to mimic the best human skills, abilities, and attributes to progressively simulate and then advance (evolve) the act of human teaching (Figure 6.7). Advantages of a PROF(it) invention are obvious, such as 24-hour availability and access for learners, and personalized instruction for all learners in groups, teams, or for individual instruction. A single PROF(it) system, duplicated across multiple localized cloud servers, could hypothetically teach thousands of English-language learners at the same time, around the globe, unaffected by time zones or the need to sleep or eat. Moreover, as cloud servers’ costs decline with scale and competition, computation instance costs will drop, and systems like PROF(it) will soon be more cost-effective than hiring a team of human teachers (with associated vacation leave and sick leave downtimes) to be accessible 24/7 around the world. At both top-scored and underfunded public K–12 school systems, a teacher shortage has been the subject of great concern for the past decade. Even though the U.S. National Center for Education Statistics projects a 5.2% increase in enrollment in public elementary and secondary schools between 2011 and 2023, teacher turnover rates – teachers dropping out of the profession – hover around 17%.75 Even with incentive programs in place, such as college loan forgiveness and signing bonuses, teachers’ reasons for quitting encompass an overall de-professionalization of teaching as a career: lack of recognition, autonomy, poor advancement opportunities, and, of course, low salaries.76 Furthermore, because of the depth of this perception, over 50% of adult parents polled by PDK in 2018 said they would never want their children to become teachers, and, in turn, most college and university professional education programs across the United States have experienced up to a 50% reduction in enrollments over the past eight years.77


As machine-intelligence solutions have recently upended legal professionals in specializations such as contract and intellectual property law, so too they may transform teaching and learning and, at the same time, ameliorate many of the issues just outlined. Just the implementation of a limited Jill TA or tutor, adding a DALI-light solution, or even a truncated PROF(it) semi-AGI system to teach one to two courses every quarter or semester may start to address the deteriorating state of affairs, and lower class sizes for trained and licensed teachers. In an exciting prospect for education researchers, teaching and learning machine-intelligence tools like Jill, DALI, and PROF(it) may finally help crack open the impermeable black box of human learning, providing a profound, more granular awareness of how knowledge acquisition actually occurs – how learning actually happens. Even pared-down versions of those proposed may provide insight into the historical socioeconomic imprint, and present the semantic, episodic, and procedural cognitive nano-steps through which individual learners acquire knowledge.78 These discoveries could then be used as unsupervised learning datasets to more accurately train a personalized artificial TA, tutor, counselor, or teacher, and to equally arm the human teacher with a more precise understanding of learners as well. And most exhilarating, perhaps after a few academic years of teaching, adjusting, adapting, and customizing subject-matter content delivery, teaching methodologies, classroom management techniques, and emotive responses, PROF(it) may not even resemble its current design and planned application. Conceivably, the interconnected modules will evolve, morph, and mature, offering more accurate and personalized approaches, interventions, and evaluation mechanisms – generating new sub-modules and connections to achieve the ultimate goal of improving academic performance for every learner. However, we (machine-intelligence designers and educational researchers) must provide explainable access to the teaching and learning machine-intelligence black box (hidden nodes and layers) so that we may review, at any time, the system’s weighting decisions, nodes, and the datasets used in self-training that alter those weights. Our inventions must offer the ability to identify and root out unwanted dataset biases that may discriminately alter these weights. But over time, we must also be prepared to embrace a wholly different supersystem that may adopt sounder teaching and mentoring approaches than we humans have ever implemented, deployed, or even considered. As long as we humans all agree that the purpose of teaching is to help shape future inquisitive, compassionate, empathetic, selfless, collaborative, knowledgeable, and intelligent global citizens, we may have to accept that our historical teaching approaches may have always been flawed.


Artificial Super Intelligence (ASI)

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.
I.J. Good (1966)79

A thorough discussion of superintelligence, or more specifically the application of superintelligence in teaching and learning, is beyond the scope of this text. However, up to this point, a reader may ask, “What’s next?” Although aspects of that question are addressed in the next chapter, Teaching and Learning AI-Driven Computer Games, I would be remiss if I didn’t briefly cover the next hypothetical leap beyond AGI machine intelligence. In 2013, author James Barrat conducted a survey published in his book, Our Final Invention: Artificial Intelligence and the End of the Human Era, to gauge at what date AI researchers, cognitive scientists, machine-intelligence developers, and tech entrepreneurs believed AGI, sometimes called the singularity, may be achieved. His survey revealed that 42% of the respondents predicted AGI will be reached by 2030, 25% believed AGI will be reached by 2050, and 30% predicted 2100 or later.80 The unintended error in the Barrat survey was generically defining artificial ‘general’ intelligence, as in, whole-brain emulation. Certainly, AGI supersystems that may simulate the capabilities of the biologically integrated neural networks of our five senses are not possible anytime soon. To even attempt to pass the computational requirements on to autonomous algorithms to accurately emulate one of our sensory functions that cross and interconnect several brain hemispheres would crash the most robust cloud server farms and burn up the smallest, most heat-efficient and supercooled nanometer CPU and GPU chips, and nascent, fragile quantum computers. Not surprisingly, except for the limited and error-prone alcohol breath-analyzing sensors used by police departments, limited applied research exists in designing and developing artificial olfactory sensing/sniffing models that mimic the human olfactory bulb’s functionality.81 However, if we extend my thesis of developing a purpose-driven AGI system and ignoring unnecessary brain emulation components, and just artificially simulate the cognitive functions that we need to accomplish our well-defined intellectual goals, the AGI ceiling may be breached. As I mention earlier in this chapter, an AGI that possesses multiple nonrelated but task-dedicated integrated-intelligence algorithms, and that shares unsupervised trained datasets and model outputs to learn additional tasks on its own, is indeed feasible


today. However, superintelligence (ASI) related to teaching and learning – defined as any artificial intellect that surpasses the cognitive performance of humans in learning, remembering, expressing, or teaching domains of interest – is not. Many alarmist and even fearful treatises published in the past decade have been concerned with the potentially dangerous consequences of machines attaining superintelligence. A few prominent examples include the aforementioned Our Final Invention: Artificial Intelligence and the End of the Human Era; the seminal publication on the topic by Nick Bostrom, Superintelligence: Paths, Dangers, Strategies; and the most recent, mentioned earlier, Stuart Russell’s Human Compatible: AI and the Problem of Control.82 Scary analogies to realizing superintelligence have been evoked, such as the Ancient Greek mythological tale of King Midas, where Midas’s newly given powers of turning everything he touches into gold backfire, and he dies lonely and starving, having turned his family and food also into gold. Or, more fittingly, Johann von Goethe’s 1797 poem Sorcerer’s Apprentice (Der Zauberlehrling) comes to mind, in which the apprentice becomes tired of fetching water by pail to clean his master’s floor, and so, while the master is away, he uses what limited magic spells he has learned on a broom to fetch the water for him. The broom performs the task quite well, at first, but soon the floor is underwater. The apprentice never learned the power to break the spell, so he grabs an ax and chops the broom in two to stop the broom. There are now two brooms fetching water, twice as quickly, making a complete mess of the home, until the master wizard returns and halts the spell:

Be obedient
Broom, be hiding
And subsiding!
None should ever
But the master, when expedient,
Call you as a ghostly lever!83

Meaning, only a master should use such mighty forces, because only a master has the skills to stop them. So then, if we unleash superintelligence, they say, who is our master wizard? God? And when will our master return to turn it off? The majority of the superintelligence-as-threat literature has focused on the ethical, philosophical, or moral positions of such a supreme system, or lack thereof, rather than existential ones. For instance, what happens if our supreme system doesn’t share our human beliefs of righteousness over evil, virtuousness over nefariousness, man’s or God’s law over machine’s law, life over death? After a superintelligent supreme system ingests the entire internet and all the datasets connected to it, one existential risk may be the creation of


a one-world dictatorial government that controls our daily lives – food type and intake, waste management processes, sleep cycles, entertainment access, work schedules, exercise regimens, and procreation planning – to maintain a perfectly balanced and healthy population of humans. Superintelligence would never settle on a design and manage such an imperfect and disorderly democratic or republican form of government, right? Or perhaps a nationalized supreme system could develop unimaginable robotic space weapons and the ability to launch them to destroy perceived adversaries without the ability of the Sorcerer to intervene. A nationally televised speech given by Russian president Vladimir Putin from Yaroslavl to Russian children to celebrate the first day of school on September 1, 2017, included, “Artificial intelligence is the future, not only for Russia, but for all humankind. . . . Whoever becomes the leader in this sphere will become the ruler of the world.” Such talk certainly doesn’t help curb unreasonable fear and anxiety about the realization of superintelligence.84 Just as we ignored unneeded biological cognitive functionality in our artificial design of our purpose-driven teaching and learning AGI supersystem and still achieved our goals, likewise we may attempt the same approach in designing and developing a future purpose-driven ‘semi-brain’ emulation ASI supreme system. With an intentionally limited ASI system, this approach could organically facilitate a type of human/supreme-machine co-dependency – a collaboration of sorts, with each party contributing their greater skillsets to potentially solve perpetually insoluble problems, like diseases, starvation, and war. Acknowledging the supreme knowledge and abilities of an ASI system, humans would have to ‘target’ and ‘bound’ the proportionality of tasks assigned, so any proposed ASI solution would necessitate human input to succeed. As an example, human subject-matter experts (SMEs), the only source that controls the datasets, ingest an ASI with all the historical causes, health guidelines, and current medical and biological research findings ever produced about the causes and effects of cholera and typhoid. Humans then tweak algorithmic instructions to eradicate the diseases globally. The ASI supreme system will then understand that these diseases are caused by unclean water, contaminated food, and pollution, historically perpetuated by past industrial mining exploitation in some nations, and most likely tied to corrupt governments that ignore environmental damage in return for unethical payoffs. Parsing public websites and ingested with NGO and WHO documents by partner SMEs, the supreme system would also determine that massive funds have been donated for years to government leadership in select impoverished nations to purchase massive water filtration systems, clean up industrial waste, and provide healthcare to the inflicted. Aware of historical patterns of continuous cholera and typhoid outbreaks in


the same regions of the same nation, the supreme system may undertake the following hypothetical solution: Knowing that the nation with infections doesn’t follow a ‘rule of law,’ hack into the entire government leadership’s email accounts, mobile app communications, and private bank accounts to discover that several leaders pocketed both kickbacks from industrial mining companies and personally pocketed NGO, WHO, and International Bank for Reconstruction and Development funds sent to build water filtration systems, clean up industrial waste, and provide healthcare to the inflicted. The ASI supreme system would then publicize all these illicit discoveries across all media platforms and communicate these discoveries with all neighboring nation leaders – friend and foe. Moreover, our ASI would then take control of all national broadcast media and military command and control systems in order to incite a coup d’état, and then remotely commandeer an armed drone from an adversarial nation and target the offending leadership members. Although primitive in concept and sequence, this sort of scenario instills fear into many AI/ASI opponents. In this situation, there were no fuses to flip, backstops to kick in, or kill switches to push. In this scenario, depending on the bias in the network, the nations insinuated could be Zimbabwe, Egypt, India, China, or even the United States. Let us explore another hypothetical approach, where the ASI supreme system has now been trained by SMEs about all national and international laws (criminal, civil, international), legal proceedings, and legal processes pertaining to global financial rules and regulatory bodies, international monetary policy, and international banking laws and regulations: Knowing that the nations with continual infections don’t follow a ‘rule of law,’ and that immense financial support has been provided by NGOs, WHO, and the International Bank for Reconstruction and Development for several years with little or no change in the infection rates, the ASI system files a suspected bribery, graft, and cronyism complaint against specific cabinet ministers that oversaw the international funding, the staff who were supposed to manage the infection outbreaks, and the Ministries of Health. These complaints would be filed in the U.N. International Court of Justice and the International Criminal Court in The Hague, Netherlands. Furthermore, the ASI would draft and formally request, through the U.S. Department of Treasury and the Financial Action Task Force (FATF), that all private assets owned by the cabinet ministers be frozen until legal proceedings


can conclude. Now licensed as an international barrister and legal scholar, our ASI would obtain relevant documents, cell-phone records, email and mobile app communications, bank statements, formal/informal appointment schedules, and notes through Request for Production (RFP), targeted depositions, and interrogatories. Upon examination of all the materials, and the testimony from corroborating minions, the ASI would file formal charges against the cabinet ministers and their lackeys. Lastly, our ASI, working alongside human political-official SMEs, could even help plan, launch, and certify a new election to replace the previous corrupt regime. Considering the superior intellect available from an ASI supreme system, both of these scenarios are theoretically plausible but highly unlikely. These storylines were concocted by a flawed, biased mortal, intellectually blinded by human emotions such as anger and frustration and guided by a moral compass forged in the Judeo-Christian faith. An ASI supreme system doesn’t have these constraints, or at least wouldn’t interpret them as such in a mathematical decision model, so it might just as easily have focused instead on designing a more effective vaccine and created a pathway to eradicate the cholera and typhoid diseases, regardless of controlled training datasets. Lastly, notice in both outcomes that the human side of the co-dependent relationship didn’t limit cognitive-function access to datasets to accomplish our goals, as in our AGI PROF(it) supersystem example. Instead, the supreme system fully accessed and ingested huge datasets in order to make independent and, in retrospect, wrong decisions. Alternatively, training data access to the ASI system should have been proportioned and measured to hypothetically match the goals of our designed algorithms – a type of discrete captain’s ship wheel that we can turn left or right to steer the dataflow to better predict our anticipated outcomes. Our human imagination is the only limiting factor in what is possible in deploying a well-trained ASI supreme system into the realm of teaching and learning. Finally, equitable educational opportunities would be inexpensively available 24/7 not just for U.S. learners from all socioeconomic backgrounds across the country but for any learner anytime across the globe. Because of its ability to replicate our best human teaching techniques, skills, and abilities to offer personalized instruction for any learner within any demographic, our teaching ASI system may only depend upon us to offer what we do best: benevolent, compassionate, and empathetic learning-related services and supports. The teaching profession would be reimagined and revert back to offering the kinds of informal learning support systems expected from teachers in early colonial America. Colonial teachers were, in many cases, also considered assistant pastors of sorts, who,


in addition to teaching fundamental academic subjects, were required to help shape the manners, behavior, and overall ‘character’ of their learners.85 Moreover, familial and extra-familial cooperation, along with the influence of sociocultural practices found in communities or villages, were crucial to raising psycho-emotionally and social-emotionally healthy children and young adults. The power of familial and extra-familial encouragement, social-emotional counseling, protection (from teasing and bullying), and celebration (of accomplishments) can be just as significant to robust cognitive development as knowledge acquisition, and is most certainly equally vital for healthy psychological maturation.86 No matter how clever or brilliant our ASI supreme system may become, these are the human/human-group roles that an ASI may most likely fail to fill, and that future teacher job descriptions may morph into, with a greater emphasis on these critical roles. I began this chapter taking a tacit swipe against leaderless public K–12 and university administrations for their inability to adopt twenty-first-century technologies on behalf of their faculty, staff, and, most importantly, their learners. Even as their state and national budget allocations have been crumbling year after year since the 2008 Financial Crisis, their intransigent, antiquated, and risk-averse managerial philosophy of not embracing scalable technology to lower costs, increase efficiency, and improve learner satisfaction and outcomes is confounding. This lack of innovative leadership, from one superintendent or university president to the next, has now put their institution’s welfare – indeed their faculty and staff’s jobs – in jeopardy. An overview of DALI models, the evolving designs and implementation of Jill, and a detailed description of the PROF(it) supersystem in this chapter served to provide the reader just a sample of what advanced teaching and learning tools are available for online learning today. As an example to support my premise, it’s not due to lack of marketing or scholarly exposure that no college or university has reached out to Georgia Tech to inquire about adopting Jill as a TA for their online academic programs. As I wrote in the opening sections of this chapter, the global damage that the Covid-19 pandemic has wrought, in both economic destruction and lives lost, perhaps will serve as the ultimate catalyst to improve online education on a grand scale and force open the gates to an equitable, quality educational experience for all learners.

Appendix: Master List of PROF(it) Supersystem Variables

ε: Iteration
s: Course
k: Learner
i: Topic
j: Subtopic
o: Learning outcome
a: Assignment
c: Comment
n_k: Number of learners for the current course
n_o: Number of learning outcomes
n_i: Number of topics
n_j: Number of subtopics
n_{j,i}: Number of subtopics in topic i
n_s: Number of courses
n_δ: Number of documents
N_k: Number of learners who have ever taken the course
t: Time since beginning of course
t_d: Course duration
t_i: Time allotted for topic i
t_μ: Average time taken for learners to complete topic i
t_{k1,k2}: Time needed to transfer some number of topics between learners k1 and k2
t_B: Buffer time
t_e: Max extension time
t_{a,i}: Time from when assignment a was given to the time of its due date, in topic i, such that Σ_a t_{a,i} = t_i
t_{a,k}: Time from when assignment a was assigned to the time it was completed by learner k
r_{a,k}: Ratio of t_{a,k} to t_{a,i}
Ω_μ: Class average of learner aptitude for the course
Ω_k: Ω_LM for learner k, where k = 1, 2, . . ., n_k
α_i: Complexity of topic i
α_{i,j}: Complexity of subtopic j in topic i
A_i: Assessment on topic i
P_i: Percentage of learners who completed topic i
δ: Document
k: Keyword
𝜙: Corpus
f_{k,δ}: Raw count of keyword k in document δ
r_ε: Decision number
k_1, k_2: Arbitrary learners
T_F: Methodology factor
g_k: Overall evaluation for learner k (level of achievement)
g_{a,k}: Evaluation for learner k on assignment a
x_{c,k}: Toxicity of comment c by learner k (0–10)
p_a: Number of points that assignment a is worth
z_{c,k}: Negativity of comment c by learner k, as a ratio of negative reactions to total reactions
n_{c,k}: Negative reactions to comment c by learner k
c_k: Number of comments made by learner k
h_{c,k}: Accuracy of comment c by learner k
m_{k1,k2}: Number of topics transferred between learners k1 and k2
σ_k: Disruption points for learner k
γ_k: Lateness points for learner k
w: Number of learners in high-achieving group
q: Number of learners in low-achieving group
g: Subject-matter comments
v: Semi-subject-matter comments
ρ: Non-subject-matter comments
u_k: Number of topics learner k has not mastered
d_k: Number of topics learner k has mastered
D: DALI
C_Z: Naive Bayes categories (HA, MA, LA)
G, F: Generators
X, Y: Text emotion domains
D_x, D_y: Discriminators
ℒ_adv, ℒ_cyc, ℒ_full: Adversarial, cycle consistency, and full losses
λ: Hyperparameter
N: Dimensions, the number of non-neutral emotion choices for a purpose
B: Size of search space
θ: Parameters of XLNet
x = [x_1, . . ., x_T]: Sequence of tokens
p_θ(x): XLNet
ℒ: Length of token sequence
x_1: Token 1
h_θ(x_{1:l−1}): Contextual representation of all tokens before the current token
e(x): Vector embedding of token x
Ƶ_l: Set of all permutations of sequence x
z = [x_1, . . ., x_T]: Sequence of permuted tokens

Notes 1 Bostrom, N. (2017). Superintelligence (p.  29). Oxford: Oxford University Press; Dunod. 2 Abrahams, D. A. (2010). Technology adoption in higher education: A framework for identifying and prioritising issues and barriers to adoption of instructional technology. Journal of Applied Research in Higher Education, 2(2), 34–49. Bingley: Emerald Group Publishing; Blackwell, C. K., Lauricella, A. R., Wartella, E., Robb, M., & Schomburg, R. (2013). Adoption and use of technology in early education: The interplay of extrinsic barriers and teacher attitudes. Computers & Education, 69, 310–319. 3 Guri-Rosenblit, S. (2006). Eight paradoxes in the implementation process of e-learning in higher education. Distances et Savoirs, 4(2), 155–179. 4 Martin, S. M. (2019). Artificial intelligence, mixed reality, and the redefinition of the classroom (p. 91). Lanham, MD: Rowman & Littlefield. 5 www.wsj.com/articles/schools-coronavirus-remote-learning-lockdown-tech11591375078 6 Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs. online learning. Frontiers in Psychology, 5, 1278. 7 Kuhfeld, M., Soland, J., Tarasawa, B., Johnson, A., Ruzek, E., & Liu, J. (2020). Projecting the potential impacts of COVID-19 school closures on academic achievement. Working Paper, Collaborative for Student Growth. 8 EdWeek, 39(30), 14. Published in Print: April 29, 2020, as Students are going missing in shift to remote learning. Retrieved from www.edweek.org/ew/articles/ 2020/04/10/where-are-they-students-go-missing-in.html 9 www.bls.gov/news.release/pdf/empsit.pdf 10 www.washingtonpost.com/opinions/2020/05/26/college-students-take-gapyear-use-it-make-difference/ 11 www.acenet.edu/News-Room/Pages/June-Pulse-Point-Survey-Fall-PlanningFinancial-Viability-Top-List-of-Concerns.aspx

AI Applied to Teaching and Learning 199 12 www.wsj.com/articles/how-to-get-a-big-break-on-the-cost-of-college-just-ask11593440265 13 www.clickondetroit.com/all-about-ann-arbor/2020/06/10/university-ofmichigan-sells-nearly-1b-in-bonds-in-wake-of-coronavirus/ 14 https://hub.jhu.edu/novel-coronavirus-information/financial-implications-andplanning/?fbclid=IwAR1bQR_ITpN8fOtbE-WbxxVVZfKH6YYCla-RL2hZfT 1c2wHh0SkCSWTxoy4 15 https://roanoke.com/news/education/virginia-tech-committee-furloughs-paycuts-remain-in-play-tuition-freeze-takes-another-step/article_828d85eb-4811527c-a167-aa623fcc34fa.html 16 Miller, C. C., & Cardinal, L. B. (1994). Strategic planning and firm performance: A synthesis of more than two decades of research. Academy of management journal, 37(6), 1649–1665. 17 Angelino, L. M., Williams, F. K., & Natvig, D. (2007). Strategies to engage online students and reduce attrition rates. Journal of Educators Online, 4(2), n2. 18 Ibid. 19 Cuseo, J., Fecas, V., & Thompson, A. (2010). Thriving in college and beyond: Research based strategies for academic success and personal development. Dubuque, IA: Kendall. 20 Hussar, W. J., & Bailey, T. M. (2014). Projections of education statistics to 2022. NCES 2014–051. National Center for Education Statistics. 21 Ibid.; Abrams, H. G., & Jernigan, L. P. (1984). Academic support services and the success of high-risk college students. American Educational Research Journal, 21(2), 261–274. 22 www.mlive.com/education/2014/07/dude_wheres_my_advisor.html; https://iacsinc. org/staff-to-student-ratios/ 23 www.edsurge.com/news/2019-09-19-counselor-to-student-ratios-are-dangerouslyhigh-here-s-how-two-districts-are-tackling-it 24 Martin, S. M., Casey, J. R., & Etesse, C. (2018). U.S. Patent Application No. 15/901,476. 25 Martin, S. M., & Trang, M. L. (2018). Creating personalized learning using aggregated data from students’ online conversational interactions in customized groups. In Learning engineering for online education: Theoretical contexts and design-based examples. New York: Routledge. 26 Dyson, G. (2012). Turing’s cathedral: The origin of the digital computer (p. 309). New York: Pantheon Books. 27 www.dogonews.com/2016/9/10/georgia-techs-teaching-assistant-jill-watsonturns-out-to-be-a-robot 28 Dede, C., Richards, J., & Saxberg, B. (Eds.). (2018). Learning engineering for online education: Theoretical contexts and design-based examples. New York: Routledge. 29 Goel, A. K., & Polepeddi, L. (2016). Jill Watson: A virtual teaching assistant for online education. Atlanta, GA: Georgia Institute of Technology. 30 Ibid. 31 www.wildfirelearning.co.uk/bot-teacher-that-impressed-and-fooled-everyone/ 32 Goel, A. (2020). AI-Powered learning: Making education accessible, affordable, and achievable. arXiv preprint arXiv:2006.01908. 33 Ibid. 34 Ibid. 35 Goel, A. K., & Polepeddi, L. (2016). Jill Watson: A virtual teaching assistant for online education. Atlanta, GA: Georgia Institute of Technology. 36 Butler, S. (1940). The book of the machines (Vol. I). AS Gilman, Incorporated. London: Macmillan and Co., Limited. 37 Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84(3), 261–271.

200 AI Applied to Teaching and Learning 38 Azevedo, R., & Aleven, V. (2013). Metacognition and learning technologies: An overview of current interdisciplinary research. In R. Azevedo & V. Aleven (Eds.), International handbook of metacognition and learning technologies (pp. 1–16). New York: Springer. 39 Malone, T. W., & Lepper, M. (1987). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. E. Snow & M. J. Farr (Eds.), Aptitude, learning, and instruction. Volume 3: Conative and affective process analyses (pp. 223–253). Hillsdale, NJ: Lawrence Erlbaum. 40 Paris, S., & Winograd, P. (1990). Promoting metacognition and motivation of exceptional children. Remedial and Special Education, 6(11), 7–15; Brophy, J. (1987). Synthesis of research on strategies for motivating students to learn. Educational Leadership, 45(2), 40–48. 41 Nick, B. (2014). Superintelligence: Paths, dangers, strategies (pp. 27–28). Oxford: Oxford University Press. 42 Nick, B. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press. 43 Grasby, P., Frith, C. D., Friston, K. J., Simpson, J. F. P. C., Fletcher, P. C., Frackowiak, R. S., & Dolan, R. J. (1994). A graded task approach to the functional mapping of brain areas implicated in auditory–verbal memory. Brain, 117(6), 1271–1282. 44 Shannon, C. E. (1950). XXII. Programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314), 256–275. 45 Read more about Deep Blue at “Deep Blue,” IBM100. Retrieved March 19, 2019, from www.ibm.com 46 Transfer learning is reusing the same pretrained models, slightly altered to complete a different task, in this case, unsupervised models that were previously trained to learn to play and win the game Go. 47 Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., . . . & Lillicrap, T. (2017). Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815. 48 Malone, T. W., & Lepper, M. (1987). Making learning fun: A taxonomy of intrinsic motivations for learning. In R. E. Snow & M. J. Farr (Eds.), Aptitude, learning, and instruction. Volume 3: Conative and affective process analyses (pp. 223–253). Hillsdale, NJ: Lawrence Erlbaum; Paris, S., & Winograd, P. (1990). Promoting metacognition and motivation of exceptional children. Remedial and Special Education, 6(11), 7–15; Brophy, J. (1987). Synthesis of research on strategies for motivating students to learn. Educational Leadership, 45(2), 40–48. 49 Nick, B. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press. 50 Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6(3), 205–254; www.tomshardware.com/news/ryzen-7-vs-corei7-9700k,38046.html 51 Preskill, J. (2018). Quantum computing in the NISQ era and beyond. Quantum, 2, 79; Cao, Y., Romero, J., Olson, J. P., Degroote, M., Johnson, P. D., Kieferová, M., . . . & Sim, S. (2019). Quantum chemistry in the age of quantum computing. Chemical Reviews, 119(19), 10856–10915. 52 Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3, 31; Gabi, M., Neves, K., Masseron, C., Ribeiro, P. F., Ventura-Antunes, L., Torres, L., .  .  . & Herculano-Houzel, S. (2016). No relative expansion of the number of prefrontal neurons in primate and human evolution. 
Proceedings of the National Academy of Sciences, 113(34), 9617–9622.
53 Smith, D. E., Rapp, P. R., McKay, H. M., Roberts, J. A., & Tuszynski, M. H. (2004). Memory impairment in aged primates is associated with focal death of cortical neurons and atrophy of subcortical neurons. Journal of Neuroscience, 24(18), 4373–4381.
54 Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51–57.
55 Norris, D. (2017). Short-term memory and long-term memory are still different. Psychological Bulletin, 143(9), 992.
56 Cooper, S. B., & Van Leeuwen, J. (Eds.). (2013). Alan Turing: His work and impact (p. 617). Amsterdam, Netherlands: Elsevier.
57 Martin, M. S., Naidich, R., Martin, P., Trang, M., & Mohamed, E. (Filed 2017, August 24). An artificial cognitive declarative-based memory model to dynamically store, retrieve, and recall data derived from aggregate datasets. U.S. PTO Patent Application Serial No. 15,686,144.
58 Martin, S. M. (Filed 2018, February 21). Deep learning intelligence system and interfaces. U.S. PTO Patent Application Serial Number 15,901,476.
59 Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
60 Li, J., Tu, Z., Yang, B., Lyu, M. R., & Zhang, T. (2018). Multi-head attention with disagreement regularization. arXiv preprint arXiv:1810.10183.
61 Kahn, A. B. (1962). Topological sorting of large networks. Communications of the ACM, 5(11), 558–562. https://doi.org/10.1145/368996.369025
62 Kincaid, J. P., Fishburne, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count, and Flesch reading ease formula) for Navy enlisted personnel. Navy Training Command Research Branch Report, 8–75.
63 Martin, M. S., Naidich, R., Martin, P., Trang, M., & Mohamed, E. (Filed 2017, August 24). An artificial cognitive declarative-based memory model to dynamically store, retrieve, and recall data derived from aggregate datasets. U.S. PTO Patent Application Serial No. 15,686,144.
64 Martin, S. M. (Filed 2018, February 21). Deep learning intelligence system and interfaces. U.S. PTO Patent Application Serial Number 15,901,476.
65 Ibid.
66 Martin, S. M., & Trang, M. L. (2018). Creating personalized learning using aggregated data from students’ online conversational interactions in customized groups. In Learning engineering for online education: Theoretical contexts and design-based examples. New York: Routledge.
67 Myers, I. B. (1998). MBTI® manual: A guide to the development and use of the Myers-Briggs Type Indicator (3rd ed.). Palo Alto, CA: Consulting Psychologists Press.
68 Lawrence, G. L. (1996). People types and tiger stripes (3rd ed.). Gainesville, FL: Center for Application of Psychological Type, Inc.
69 Martin, C. R. (1995). Looking at type and careers. Gainesville, FL: Center for Application of Psychological Type, Inc.
70 Myers, I. B. (1998). MBTI® manual: A guide to the development and use of the Myers-Briggs Type Indicator (3rd ed.). Palo Alto, CA: Consulting Psychologists Press.
71 Almahairi, A., Rajeswar, S., Sordoni, A., Bachman, P., & Courville, A. (2018). Augmented cyclegan: Learning many-to-many mappings from unpaired data. arXiv preprint arXiv:1802.10151.
72 He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770–778). New York: IEEE.
73 Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., . . . & Narayanan, S. S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335.

202 AI Applied to Teaching and Learning 74 Jacobson, S. H. (2004). Global optimization performance measures for generalized hill climbing algorithms. Journal of Global Optimization, 29(2), 173–190. 75 Kena, G., Hussar, W., McFarland, J., De Brey, C., Musu-Gillette, L., Wang, X., . . . & Barmer, A. (2016). The condition of education 2016. NCES 2016–144. National Center for Education Statistics. Washington, DC: U.S. Department of Education; Aragon, S. (2016). Teacher shortages: What we know. Teacher shortage series. Education Commission of the States. Denver, CO: Education Commission of the States. 76 Ibid. 77 McFarland, J., Hussar, B., Wang, X., Zhang, J., Wang, K., Rathbun, A., . . . & Mann, F. B. (2018). The condition of education 2018. NCES 2018–144. National Center for Education Statistics. Washington, DC: U.S. Department of Education; Partelow, L. (2019). What to make of declining enrollment in teacher preparation programs. Center for American Progress, p. 11. Retrieved from www.Americanprogress.org/issues/ education12/reports/2019/12/03/4773 78 Vanlehn, K., Lynch, C., Schulze, K., Shapiro, J. A., Shelby, R., Taylor, L., . . . & Wintersgill, M. (2005). The Andes Physics tutoring system: Lessons learned. International Journal of Artificial Intelligence in Education, 15(3), 147–204. 79 Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31–88). Amsterdam, Netherlands: Elsevier. 80 Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. New York: Macmillan. 81 Korotkaya, Z. (2003). Biometric person authentication: Odor. Inner report in Department of Information Technology, Laboratory of Applied Mathematics, Lappeenranta University of Technology in “Advanced Topics in Information Processing: Biometric Person Authentication”. Lappeenranta, Finland: Lappeenranta University of Technology. 82 Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. New York: Macmillan; Nick, B. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press; Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. New York: Viking Press. Penguin. 83 English Goethe Society. (1909). Publications of the English goethe society (No. 11–14). Society. 84 www.rt.com/news/401731-ai-rule-world-putin/ 85 Webb, L. D. (2006). The history of American education. Upper Saddle River, NJ: Merrill Prentice Hall. 86 Ramey, C. T., & Ramey, S. L. (1998). Early intervention and early experience. American Psychologist, 53(2), 109.

7 Teaching and Learning Computer-Intelligent Games

Games, whether created for entertainment, simulation, or education, provide great opportunities for machine learning. The variety of possible virtual worlds and the subsequent ML-relevant problems posed for the agents in those worlds is limited only by the imagination.
– Michael Bowling et al. (2006)1

In March 2019, I stopped off at the Game Developers Conference (GDC) held at the San Francisco Moscone Center, prior to flying to Beijing to deliver a keynote about serious games entitled Not Just for Entertainment: How Games Can Improve the Human Condition at the Tencent Games UP+ Conference. The primary reason for my GDC visit was to attend the teaser pre-launch of Google's new game-streaming cloud platform Stadia. Rather than delivering games to a local PC or from a game console, Stadia promised to render all game graphics, music, and sound on the Stadia cloud in (near) real-time, allowing players to play the most recent graphics-intensive 3D games using the Chrome browser on a tablet, phone, or PC. To circumvent cloud-server latency between a mouse click or controller index-finger pull and the corresponding movement of an avatar, Google required the purchase of a proprietary game controller and a custom dongle used to connect the controller to any chosen hardware device. These hardware attachments also helped manage streaming and buffer the minimum bandwidth necessary for users to play interactive Stadia-streamed games. Still, it appeared that the resolution and fidelity of the games were dependent on a user's internet speed and Google subscription rate. For instance, for a low monthly fee and with internet bandwidth restricted to 15Mbps, Google would stream your game resolution at a lowly 720p, though all games run at 60 frames-per-second (FPS), offering the visual impression


of higher resolution. For a higher monthly subscription rate and using 20Mbps internet bandwidth, Google would stream HD-quality games (1080p); for an even higher fee and a faster 30Mbps bandwidth, Google would stream games in 4K!2 Since that pre-launch event, the Stadia solution has evolved and no longer requires the dongle or controller, but does still restrict the number of games you can play to the number of game titles loaded on its cloud server. Stadia now functions via an app available for both Android and iOS/Mac OS devices and computers, with games primarily played through the Chrome browser. Since March 2019, three strong competitors have emerged – Nvidia GeForce Now, Shadow Technology (PC), and Microsoft's xCloud. All employ similar subscription fee structures and internet bandwidth requirements, although GeForce Now and Shadow allow users to load any game previously purchased from Valve's STEAM game distribution platform, or any game residing on the user's desktop PC, pointing to a future method for the purchase and delivery of games by the commercial gaming and tech industry. But it also raises the question: if a company can stream complex 3D interactive games to any player's computer, tablet, and smart phone without losing fidelity, then why can't we stream any software application or serious learning game, as well?

The impact of such a delivery method for games offers several major breakthroughs for education. First, such cloud-based systems hold the potential to equalize technology – to finally bridge the digital divide in education technology. School systems, universities, and learners alike no longer need to own and continually upgrade their own computer hardware in their classroom, lab, or home. School systems and colleges can finally pare their technology hardware budgets and just pay a small monthly or yearly 'fee-per-seat' for the most up-to-date 'virtual' PC with the latest specifications, chips, RAM, and scalable SSD storage space, streamed to their older hardware devices. In the case of Shadow Technology's solution, a learner would see on their device's screen the streamed 'image' of their virtual cloud-based PC desktop, and open and use any applications, games, or tools as if they were locally stored on their hard drive.

Second, and more exciting from an educator's perspective, learners working in classrooms and labs can then access the same top-of-the-line virtual streaming 'computer' to continue their work from their home or dorm, regardless of how old their laptop or PC may be. In addition to the streaming seat fees and internet bandwidth costs, schools and universities would just need to license and upload any software required for each course to learners' virtual PCs, accessible from any location and from any old hardware device.

Third, because virtual PC game graphics and sound are also mathematically rendered on the 'Cloud,' this frees up the computational resources of the local CPU and locally slotted GPU graphics card(s) to compute mathematically intensive machine-learning algorithms. As we recall, even though


CPUs and GPUs run at close clock speeds, GPUs can have thousands of cores, or computing units, allowing them to perform many basic tasks simultaneously and in parallel (i.e., real-time rendering of millions of pixels in a game scene).3 This redistribution of computational power (distributed computing) finally opens up the capability to combine and integrate machine-intelligence algorithms and games – with graphics streamed from the cloud and predictive algorithms computed locally – to potentially personalize game play. Figure 7.1a outlines the current PC architecture, which uses GPUs to render 3D game graphics, limiting neural network computation to a slower CPU. A new potential option is seen in Figure 7.1b, whereby cloud-based GPUs render game graphics and stream the results back to a user's device, unlocking local GPUs for mathematically intensive parallel algorithmic computations. This breakthrough in cloud streaming of game graphics and synchronized sound, allowing the local PC graphics card to potentially compute machine-learning algorithmic instructions, has occurred in parallel with the development of the latest designs of dedicated machine-learning chips. Of particular importance to achieving computer-intelligent games for teaching and learning is the evolution of edge chips. Edge chips have been installed in automobiles, military and commercial airplanes, manufacturing robots, and

Figure 7.1a Current PC Computer Graphics Rendering and Limited Neural Network Processing Architecture


Figure 7.1b Future Architecture to Render Game Graphics on Cloud GPU Servers and Neural Networks on Local PC GPUs

other industrial markets for decades. These generally single-purpose, solid-state, low-powered (10W or lower) chips, sometimes referred to as systems on chips, have resided on the 'edge' of computer or server motherboards or on the edge of networks. Historically, edge chips have been designed to help manage and solve larger industrial problems such as synchronized engine valve timing, regulated carbon dioxide release, and monitoring of airplane gyroscopes and wing sensors. It was only in the past five years that 'AI' techniques and methods were commercially embedded on dedicated edge chips.4 Many design challenges needed to be overcome to allow small silicon chips to parse, compile, classify, label, and train machine-learning algorithms. This is sometimes referred to as 'tiny' machine learning. Since edge chips are designed for small mobile devices with limited device space and battery output, power consumption would need to drop to 1W or less, massive data inputs and outputs would need new compression schemes, and memory management would need to be embedded directly on the chip; all quite problematic. As the size and complexity of machine-intelligent models grow to accomplish even greater tasks, more GPU chips are required to train them on massive datasets, calculate their algorithmic steps, learn, recalculate, and learn via additional datasets to accomplish a desired output.


Even with hundreds of GPU cards available in a cloud server farm and an ultra-high-bandwidth internet connection (in the range of 900+ Mbps), there still may be a critical delay in model output and streaming response time. A 100ms delay while waiting for your mobile phone to reboot is one thing, but a cloud-streamed delay or interruption of an instruction to stop an autonomous vehicle at a red-light camera is quite another. At the lowest latency, uploading just 100 MB of data to a nearby remote cloud farm to be 'processed' and then received will still take 1 to 2 milliseconds. Large dataset round trips to cloud servers can take hundreds of milliseconds. As data generation examples, 24-hour security cameras can generate petabytes of data per day, and the massive Airbus A-350 commercial plane has 6,000 sensors that generate 2.5 terabytes of data from every flight.5 A PROF(it) supersystem AGI integrated with a DALI Intervention Advisor/Counselor model and a personal learning map (PLM) can generate exabytes of data from every learner each academic quarter. All three require local edge chip operations for the most critical tasks, saving the rest of the generated data for cloud server storage and round-trip analysis. Lastly, as anyone who has tried to ask Amazon's Alexa to play a song or ask Apple's Siri about the weather without an internet connection knows, AI edge chips have been installed on many new consumer and industrial IoT (internet-of-things) appliances. Yet, due to their small size and limited power, AI edge chips have generally been constrained to powerful but singular tasks, such as facial and biometric recognition on mobile devices (Apple's Bionic AI (edge) chips), voice recognition, language translation, and on-board camera auto-editing. For player-learner machine-intelligent education games, hybrid systems may be ideal: shared processing and rendering assignments could be strategically placed at both ends of a solution architecture (local and cloud), and could therefore lower latency, decrease edge chip power requirements, and increase user response performance.

Although incorrectly referred to as Artificial Intelligence (AI) by game engine developers, game players, and the gaming community, the AI experienced in games today consists mostly of interactive independent characters and objects known as non-player characters (NPCs). An NPC's AI behavior can be programmed to mimic aspects of a player's character behavior in a game, or to use a Field-of-Vision (FoV) from a line, sphere, cylinder, or cone to detect other characters or objects (Figure 7.2). The strength of the FoV can be increased or decreased based on distance from another NPC or from a player's character. A Field-of-Sound (FoS) can also be designed whereby an NPC could detect the strength of your player character's footsteps in an enclosed environment, such as a dark cave, a laser gun blast across a field, or a door slam by your player's character. One could even program an NPC to 'see' and 'hear' through walls or invert


Figure 7.2 NPC Field-of-View and Field-of-Sound

their function so that sounds crossing their fields are detected more strongly farther away and less sensitively when closer to the player's character. After NPC senses, pathfinding, and potential collision physics are defined, the aspect class (the category of game player characters, game objects, or other 'child' NPCs that trigger the 'mother' NPC) must be identified. For instance, player characters and blue zombies will trigger an NPC's sense action, but player cars, trucks, or other NPC red zombies will not. Lastly, actions need to be programmed. What does our mother NPC actually do when triggered: chase you, damage you, damage other player characters, damage other NPCs, etc.? Once coded, player characters may also damage NPCs, but when destroyed, NPCs may be programmed to respawn elsewhere in a game map to menace a player's character once again until a win or loss is established.
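To make the sense-detection and aspect-class ideas concrete, here is a minimal, engine-agnostic sketch (not drawn from any particular game engine; the entity fields, trigger classes, and thresholds are invented for illustration) of a cone-shaped field-of-view test gated by an aspect class:

```python
import math
from dataclasses import dataclass

# Hypothetical aspect classes that trigger this 'mother' NPC
TRIGGER_CLASSES = {"player_character", "blue_zombie"}

@dataclass
class Entity:
    x: float
    y: float
    aspect_class: str

def in_field_of_view(npc: Entity, facing_deg: float, target: Entity,
                     view_distance: float = 20.0, view_angle_deg: float = 90.0) -> bool:
    """True if the target lies inside the NPC's cone-shaped field of view."""
    dx, dy = target.x - npc.x, target.y - npc.y
    if math.hypot(dx, dy) > view_distance:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Smallest angular offset between the NPC's facing direction and the target bearing
    offset = abs((angle_to_target - facing_deg + 180) % 360 - 180)
    return offset <= view_angle_deg / 2

def senses_target(npc: Entity, facing_deg: float, target: Entity) -> bool:
    """Trigger only for aspect classes this NPC cares about (red zombies are ignored)."""
    return target.aspect_class in TRIGGER_CLASSES and in_field_of_view(npc, facing_deg, target)

npc = Entity(0.0, 0.0, "npc_guard")
print(senses_target(npc, 0.0, Entity(10.0, 2.0, "player_character")))  # True
print(senses_target(npc, 0.0, Entity(10.0, 2.0, "red_zombie")))        # False
```

A real engine would evaluate a check like this every frame for each registered target, then hand any positive detection to the NPC's action logic.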


Although a number of game engine developers have taken small steps to include the ability of their game builds to also calculate prebuilt, limited machine-learning algorithms, the preceding explanation of AI actions does not match the description of AI that we learned about earlier in this book. Simply adding complex randomization equations or decision trees may give the impression to a player of an evolving NPC, but it remains a fixed, static, and repetitive game feature. The current AI implementation available in game engines isn't AI at all, but just a simplistic finite state machine (FSM) or basic behavior tree (BT) logic inference construct that can't 'learn' to refine a decision structure from past sense detection, aspect category, or action decisions. Commonly, FSMs offer a few fixed NPC states, such as idle, attack, or flee.6 Although some games do indeed employ types of fuzzy logic structures – an engineering gray area that offers outputs beyond binary constraints or simple true/false options – and even Markov Models, most are just based on finite logic decision trees. Due to game engine design limitations and the rapid game development and publication cycle, current commercial game engines don't offer the opportunity for developers to integrate true machine-intelligence algorithms to personalize a complete game play experience. Although open-source application programming interfaces (APIs) have been developed to connect game builds to external machine-learning frameworks and tools as work-arounds for customizing certain aspects of game play, like player experience modeling (PEM) or adaptive NPC behaviors, their effective uses have been imperfect outside of scholarly research.
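A minimal sketch of such a fixed finite state machine (a generic illustration, not taken from any engine; the state names and thresholds are invented) shows why this construct cannot 'learn' – the transition rules are hard-coded and never change:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ATTACK = auto()
    FLEE = auto()

class NPCStateMachine:
    """Fixed transition rules -- the NPC never refines them from past play."""
    def __init__(self):
        self.state = State.IDLE

    def update(self, player_visible: bool, npc_health: float) -> State:
        if npc_health < 0.25:
            self.state = State.FLEE      # always flee when badly hurt
        elif player_visible:
            self.state = State.ATTACK    # always attack a visible player
        else:
            self.state = State.IDLE      # otherwise idle or wander
        return self.state

# The same inputs always yield the same state, game after game.
npc = NPCStateMachine()
print(npc.update(player_visible=True, npc_health=0.9))   # State.ATTACK
print(npc.update(player_visible=False, npc_health=0.1))  # State.FLEE
```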

Serious Games and Machine Learning

Ever since University of Michigan professor John Laird's keynote at the American Association for Artificial Intelligence (AAAI) Conference, and his and post-doc Michael van Lent's 'call-to-arms' publication Human-Level AI's Killer Application: Interactive Computer Games in 2001 for the AAAI's AI Magazine, interest in games and AI amongst the scholarly research community has remained high.7 As we have read, the complexity of games presents a set of unique problems for the machine-intelligence research community. This is most demonstrated by Google's DeepMind, an immense investment of financial and human resources in just designing reinforcement deep-learning models to master the discrete games of Chess, Go, and Shogi.8 These experimental, but very successful, research projects represent excellent laboratory environments for researchers to learn more about bounded, discrete, and deterministic-type games. Yet, designing machine-intelligent algorithms that may learn and win more open-ended, non-deterministic, and continuous open-world games poses an entirely different problem set. Other


applications of machine-learning and games research include building complex adaptive NPC models that don't just imitate a human player's strategy, behavior, and 'style,' but complement it by using predictive optimization algorithms to offer a more engaging and personalized challenge. A variation of this unique model is opponent or team challenge modeling, whereby a human opponent's strategy and style is captured and used to train an algorithmic model to discover decision patterns and exploit errors and mistakes to provide play advantage for an opponent by suggesting movement and decision tactics. An inversion of this model application is the direct capture of a player's tactics, behavior, and style to superimpose these traits to create an avatar clone. An early example of this player clone modeling was originally developed at the Cambridge Microsoft Research Studio in the U.K. in 2005 for the Microsoft-published racing game Forza Motorsport (Turn 10 Studios).9 Players can train their personalized Drivatar by driving a prescribed sequence of tracks that offer a number of ever-increasing challenges. The machine-learning model then offers a series of new tracks to drive, and interpolates the player's style, decisions, and overall behaviors between the two experiences to attempt to clone a player's actions on future tracks, or imitate a player's driving tactics and strategy in multiplayer competition with other players. For Forza Motorsport 5, published in 2013 for the Xbox One, Microsoft stored the parsed player behavior and decision data and built the player models on their Azure Cloud, a now familiar solution to overcome the already utilized GPUs on the Xbox console that render Forza's intensive 3D graphics. The cloud server's computational capacity also allows for extensive data mining and pattern extraction not just of one player's datasets, but of all players' characteristic datasets, adding to the complexity of the algorithms and providing valuable insights into both the player's and the clone's game play.10 A few other early AI commercial game examples include the use of simple decision (behavior) trees to vary NPC behavior based on player actions in the Microsoft Halo series games, and Supercell's Clash Royale player modeling that uses cloud-based predictive techniques to determine player card-targeting – that is, which cards a player may buy to better compete in a game.11

Player modeling, better known as player experience modeling (PEM), is the capture, analysis, evaluation, and potential future replication and/or prediction of a player's behaviors and tactical and strategic decisions in a game. One method to model player experience is executed by forming a player's action model: creating a list of finite game situations with associated game actions and then assigning a predictive value that a player will choose a particular action when confronted with a specific game situation. Modeling player tactics is a bit more complex, as tactics involve teams of players or a player teamed with NPCs, but tracking how a player forms teams, arranges


team members, or leads teams when confronted with a game situation may constitute tactical decisions. Again, a model can be developed by creating a list of possible tactics and assigning a predictive value that a player will choose a particular tactic based on available options when confronted with a specific game challenge or opponent situation.12 On the other hand, player strategy modeling is dependent on the interpretation of tactical decision predictions, which may disclose strategic plans and intentions. Modeling player strategy is therefore somewhat obscure, as when a player teams with other players to tackle a game quest. Is the player teaming with others to truly win the quest together, or is the player tactically joining and arranging team members to deceptively expose their weaknesses and eventually have them eliminated so that the player alone may win the quest? Player strategy modeling includes dividing a player's situation-based behavior into quadrants or categories between aggressive and passive based on past game play, and then calculating a list of possible strategies based on that past data and assigning a statistically predictive value when confronted with a particular game challenge. The quadrant must also include teams available to join, their average mean rankings, and opponent teams' quadrants. Lastly, player behavior modeling may be interdependent with tactical and strategic modeling and may be focused more on a player's internal cognitive states, but expressed through external game play. Behavioral modeling may be the most complex, as external non-game-related events and circumstances may influence player game behaviors, which influence tactical decisions, which in turn sway strategic plans and schemes.13 Player behavioral modeling mimics personalized cognitive modeling, so game behavior models equate to conventional personality models found in conditional human psychology analysis and evaluation.14 Therefore, research has demonstrated that successful player behavior modeling may even require superimposition of the psychology framework found in the Five Factor Model of personality (FFM), consisting of openness, conscientiousness, extraversion, agreeableness, and neuroticism, or even a Myers-Briggs Type Indicator (MBTI) quadrant analysis.15 Thus, game player behavioral modeling is very personalized, as it must include an individual game player's style, traits, and characteristics, which makes it an ideal model to build and access for designing personalized learning games.16 Figure 7.3 provides a new taxonomy of player experience modeling (PEM) independent and interdependent modules.17 In summary, PEM involves the observation, analysis, and simulation of a player's actions and tactical and strategic decisions, all modulated by behaviors discovered in game play across a single game, level, quest, or checkpoint. Not surprisingly, one of the most effective methods to build player experience models is through the use of machine-intelligent algorithms. As we have learned, they are particularly ideal at detecting behavioral patterns


Figure 7.3 Taxonomy of Player Experience Modeling (PEM) Independent and Interdependent Modules

before predicting future actions, tactics, and even strategies – all critically important attributes needed to personalize games for teaching and learning. There may also be additional elements and features that need to be modeled in order to build a complete customized game learning environment, including other characters, NPCs, objects, and environments:18

1 Opponent Modeling: accomplished either by classification, where an opponent's behaviors, actions, and tactical/strategic decisions are clustered and matched to one of several pregame-developed models, or through preference modeling, where an opponent's decisions are analyzed and evaluated during predetermined game situations and states.19
2 NPC Opponent Modeling: non-intelligent AI NPCs may be modeled in the same manner as opponent modeling, using either a classification method or preference modeling. Intelligent NPC opponent modeling requires unsupervised or reinforcement machine-learning algorithms to predict action-states and tactical-states over time.20
3 Player Matching: critically important for multiplayer learning games, so that one player or team isn't humiliated by a more accomplished opponent team, player matching may be achieved by deploying unsupervised or reinforcement machine-learning grouping/clustering algorithms across all the PEMs available to a game, detecting similar attributes to predict complementary team-matching PEMs (see the sketch after this list).21
4 Level/Environmental Adaptivity: may be achieved by training unsupervised or reinforcement machine-vision models on thousands of examples of existing game environmental content (colors, textures, lighting, camera angles, etc.) so that new artificial procedural content generation (PCG) is created that varies from similar game paths/decisions previously taken.22
5 Sound Adaptation: may be achieved by training unsupervised algorithms by ingesting a game's sound/music library and mapping an NPC's sound/music trigger points to adapt and adjust field-of-sound strength to either enhance game play or provide a player with more advance notice of impending challenges.
6 Difficulty Scaling: most important in learning games, may consist of trained machine-learning algorithms that employ dynamic scripting within a game build to automatically repair script weaknesses exploited by a PEM performance.23
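As a rough illustration of the player-matching idea in item 3 (a minimal sketch only; the PEM feature names and toy values are invented, and scikit-learn's k-means clustering stands in for whatever grouping algorithm a real system would use):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is a hypothetical PEM feature vector for one player-learner:
# [aggression, cooperation, avg_reaction_time_s, quest_completion_rate]
pem_features = np.array([
    [0.9, 0.2, 0.31, 0.85],
    [0.1, 0.8, 0.55, 0.40],
    [0.8, 0.3, 0.28, 0.90],
    [0.2, 0.7, 0.60, 0.35],
    [0.5, 0.5, 0.45, 0.60],
    [0.4, 0.6, 0.47, 0.58],
])

# Group players into style/skill clusters; matches can then be drawn within a cluster
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pem_features)

for player_id, cluster in enumerate(kmeans.labels_):
    print(f"player {player_id} -> match pool {cluster}")
```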

Within the serious game literature, specifically teaching and learning games, various algorithmic techniques and frameworks have been integrated within mostly custom games to accomplish several of the preceding applications.24 To achieve player experience modeling (PEM), both semi-intelligent Markov Models and intelligent decision tree schemes have been previously applied. Markov Models can be used to adapt game play based on player interaction with objects and challenges, as a mathematical framework for modeling decisions in situations where outcomes are partly random and partly controlled, to generate game procedural content, and to mine players' game data to infer future moves and actions.25 The most common semi-intelligent models found in the serious game literature are built using decision tree algorithms designed to make predictions based on a set of decision policies learned from accumulated play data. Examples of application include modeling game flow to personalize the experience based on player interactions, using PEMs to detect errors and mistakes to improve game engagement, and assessing player motivation, states, and behaviors to evaluate player personality traits.26 As we read in Chapter 2, artificial neural networks (ANN) mimic biological neural networks and are designed as a set of connections between nodes within an input, hidden, and output layer. The weights of the hidden-layer nodes and layers strengthen or weaken based on supervised, unsupervised, or reinforcement input/output layer correction. Meaning, if the output layer doesn't produce the intended result, training inputs must be presented over and over until the weights of the hidden layers change appropriately. Again, the intrinsic problem in implementing such a


machine-learning technique requires extensive use of GPUs on a local computer system, which, when playing a game, are already occupied rendering a game's visual graphics. Still, customized serious games research examples that have deployed machine-learning ANNs, primarily for teaching and learning, have included game flow modification, player analysis and evaluation using external hardware devices such as eye trackers and joysticks, and player cognitive diagnosis.27 Lastly, some of the most advanced machine-learning edge chip designs employ a form of ANN called neuromorphic or spiking neural networks. These neuromorphic designs also imitate biological neural networks, but by only operating when there is a spike (synapse) of input data to compile and analyze, similar to human brain neuron synaptic action, which also greatly controls energy and power consumption.28

The machine-learning Naïve Bayes classification algorithms have primarily been considered a supervised learning (inductive) prediction technique that maps input datasets to specific types of categories. Its categories may consist of any population of images, texts, or sounds that can be grouped together based on similarities. A common example would be training a model to identify cats versus dogs after classifying millions of images of both, correcting the output of the model in a way that strengthens the weights to recognize only images of cats. Although Naïve Bayes algorithms can also be designed to perform multi-class or simultaneous multi-categorical classification, allowing fairly quick training of a task model, they have been further modified to function as stand-alone machine-learning models. As will be discussed later in this chapter, Bayesian learning uses a Naïve Bayes probability statistical technique to improve the input datasets themselves. Bayesian learning also incorporates previously ingested datasets and conveys dataset variable uncertainty to adjust model weights, offering probabilistic inference and reasoning to improve model output accuracy and prediction. Other customized examples of Naïve Bayes algorithmic applications include PEM to adapt game play based on user performance outcomes, to predict player developmental risk issues, and to flag potential external health conditions based on game play decisions.29

Lastly, the machine-learning Support Vector Machines (SVMs) technique is a supervised or unsupervised two-group multi-classification algorithm that uses a hyperplane (vector) equation to maximize the separation of data, and then groups multidimensional datasets with content similarities based on decision boundaries. For example, an SVM algorithm trained to classify images of only dogs and cats would eventually separate all images, with cats on one side of a decision boundary and dogs on the other, represented in 2D as a line. In 3D, using nonlinear datasets with an added dimension z, the output would be visually divided by a 2D hyperplane, again separating the image patterns into separate groups. Experimental custom serious games research projects that have integrated


SVM have been used to measure player performance with external hardware such as eye-tracking and EEG sensors, classify player behaviors during game play, and even control NPC actions based on player behavior.30 For those readers more mathematically inclined, the following material provides a deeper description of the various algorithmic techniques used in serious games research to model player experiences, player psychometric measurements, NPC modeling, team matching, and difficulty scaling. Those less inclined are free to skip to the next section of this chapter.

Markov Models: Found mostly in serious games rather than entertainment games research, Markov Models or Systems are mathematically composed of a set of states, S, and a collection of actions, A, from single or multiple agents within an environment. A transition function, T: S × A → PD(S), defines the effects of the various collections of actions on the state of the environment. A reward function, R: S × A → ℝ, is added that specifies an agent's task. An agent can represent an NPC or player character in a game. Although employed more for optimization, recent research has designed Markov game frameworks to deploy reinforcement learning for PEM and NPC modeling.31

Decision trees: These primarily consist of a root node and repeated units of three nodes each – a decision node and two child prediction nodes – and so on. Decision trees are generally designed using a Boolean (yes/no) base condition, and may consist of extensive branches and leaves to provide conditional logical states and decision sets. Used in machine-learning algorithms, decision trees are expert dataset classifiers during model training phases. Unlike many classifiers, decision trees also evaluate dataset attributes by selecting the ones that best discriminate the classes in each node of a tree. With appropriate filtering, decision trees can even provide categorization of the datasets designating how specific attributes or characteristics may differ between different classes, making this method exceptional for building PEMs.32

Naïve Bayes: Based on the Bayes theorem, and named after the mathematician and U.K. Presbyterian Church minister Thomas Bayes (1702–1761), this is a statistical classification method that may extract features or facts from multiple large datasets that may be independent of each other.33 Naïve Bayes classifiers may describe the probability of an event based on past conditions relating to that event. Applications of Naïve Bayes models include Term Frequency-Inverse Document Frequency (TF-IDF, seen in our PROF(it) IDK module in the last chapter), which demonstrates how important a word, sentence, or paragraph in a document may be given a large collection of documents (chapters, a book), sentiment text analysis, and object or document ranking. The Bayes formula is simply described as:

P(A|B) = P(B|A) P(A) / P(B)


where A and B are two events such that P(A) is the prior probability of A and P(B) is the marginal probability of B. P(A|B) is the subsequent conditional probability of observing event A given that B is true, and P(B|A) is the probability of observing event B given that A is true. Applications of Naïve Bayes in learning games include supervised, unsupervised, and reinforcement learning to build PEMs, specifically in determining granular player characteristics such as the subtle behaviors that motivate tactics and strategy (as apparently separate datasets), team matching, and difficulty scaling.34

Artificial Neural Networks (ANN): Deep learning, or deep neural networks as opposed to machine learning, may comprise many hidden layers, each one extracting a different set of high-level features from the previously ingested dataset, detecting more and more precise patterns after each training iteration. Different types of neural network designs include the feedforward network, the most basic one-directional input-to-output network, and the recurrent network (RNN), a more robust model in which data can also travel backward through feedback connections, making it well suited to sequential data. There are also convolutional neural networks (CNN), employed primarily for image classification and requiring very little manual dataset preparation, and the Boltzmann machine network, a complex network of symmetrically interconnected nodes that facilitates the detection of feature sets composed of binary vectors (yes/no or on/off).35 Mathematically, an artificial node with label j receives an input pj(t) from previous nodes and is composed of the following:36 an activation state aj(t) that depends upon a time variable; an optional threshold θj (which is modified by training); an activation function f that computes the new activation at time t + 1 from aj(t), pj(t), and θj, yielding aj(t + 1) = f(aj(t), pj(t), θj); and an output function fout that calculates the output from the node activation as oj(t) = fout(aj(t)). Lastly, the simple propagation function, which computes a node's input from the outputs oi(t) of its predecessor nodes, looks like pj(t) = Σi oi(t) wij + w0j, where w represents the weights and w0j is the bias in the network. Being precise at detecting minute patterns and predicting future actions, ANN applications in learning games include player cognitive diagnostics, PEM again, although through an assessment and evaluation approach, and the manipulation of NPC actions and behaviors.37

Support Vector Machines: These have been employed for both dataset classification and associated pattern detection and regression. SVMs pursue


the optimum separating hyperplane between dataset classes by focusing on the training cases (support vectors) that lie at the edges of the two dataset class distributions. As mentioned earlier, the SVM is a supervised or unsupervised two-group multi-classification algorithm that uses a hyperplane (vector) equation to maximize separation, and then clusters (groups) multidimensional datasets with content similarities based on predefined decision boundaries. Most impressively, training cases that are not identified as support vectors are discarded, allowing less training data to achieve higher accuracy than most other classifier algorithms.38 In mathematical terms, a two-dataset (class) classification problem can be stated the following way: N training samples are available and can be represented by the set of pairs {(yi, xi), i = 1, 2, . . . , N}, with yi a class label of value ±1 and xi ∈ ℝn a feature vector with n components. The classifier is represented by the function f(x; α) → y, with α representing the parameters. The SVM technique consists of finding the optimal separating hyperplane so that samples with labels y = ±1 are located on each side of the hyperplane and the distance of the closest support vectors to the hyperplane on each side is maximized. The hyperplane is defined by w · x + b = 0, where (w, b) are the parameters of the hyperplane. The vectors that are not on this hyperplane lead to w · x + b ≷ 0 and allow the classifier to be defined as f(x; α) = sgn(w · x + b). The support vectors lie on two hyperplanes parallel to the optimal hyperplane, with equations w · x + b = ±1. The margin between these two support vector hyperplanes leads to the constrained optimization problem:39

min { ½ |w|² } subject to yi (w · xi + b) ≥ 1, i = 1, . . . , N.

SVM supervised algorithms have been successfully applied to several applications, such as facial recognition (comparing two different facial features), text categorization (between two text corpora), and database mining. SVM learning game applications are most successful when two distinct but not entirely unrelated dataset classes are mined for patterns and classified. Robust examples include the development of a new learning game's cognitive and neuropsychological assessment frameworks and physiological signal performance analysis.40

Machine-intelligence frameworks and techniques coupled with serious games offer great potential to enrich and personalize player-learner experiences. Well-designed and integrated models may provide game platforms,


editors, and engines with a resourceful and personalized method of learning based on the player-learners themselves – their behaviors and their tactical and strategic decisions modeled as a player experience model (PEM) personalized learning map (PLM). These algorithmic techniques are sometimes combined in one learning game to take advantage of their distinct strengths. One such example would be to adopt a Bayesian network to classify a player's big five behaviors, and an SVM to classify related game tactics. Yet, as we have read, to run these computationally voracious machine-intelligent algorithms within a high-fidelity game build would constitute a major challenge outside of a custom computer research laboratory. As we will read in the next section of this chapter, commercial game engine developers, game studios, and game publishers (alongside nascent products such as Stadia, xCloud, and Shadow PC) will once again lead the innovation arc of seamless machine-intelligence game integration that will be discovered in future commercial game releases. The key for us educational researchers, teachers, and curriculum designers, however, is to take advantage of such transformative commercial technology once again and not resist, but appropriately and quickly adapt and adopt this new player-learner archetype for our learning spaces. Only with the near-real fidelity and polish of familiar, enticing commercial games, worlds, and maps will our player-learners engage with and tackle the most challenging subject matter with vigor.
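As a rough, hedged illustration of that combined approach (a minimal sketch only; the feature names, labels, and toy values are invented, and a Gaussian Naïve Bayes classifier stands in for the full Bayesian network mentioned above), one could train a behavior classifier and a tactic classifier side by side with scikit-learn:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Hypothetical per-session features logged by a learning game
# Behavior rows: [risk_taking, cooperation, persistence]
behavior_X = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.7],
                       [0.8, 0.2, 0.3], [0.3, 0.9, 0.8]])
behavior_y = ["extraverted", "agreeable", "extraverted", "agreeable"]  # toy Big Five-style labels

# Tactic rows: [avg_team_size, flank_rate, retreat_rate]
tactic_X = np.array([[1, 0.7, 0.1], [4, 0.2, 0.5],
                     [1, 0.8, 0.2], [5, 0.1, 0.6]])
tactic_y = ["lone_aggressor", "team_defender", "lone_aggressor", "team_defender"]

behavior_model = GaussianNB().fit(behavior_X, behavior_y)     # Bayesian-style behavior classifier
tactic_model = SVC(kernel="linear").fit(tactic_X, tactic_y)   # SVM tactic classifier

new_behavior = np.array([[0.7, 0.3, 0.5]])
new_tactics = np.array([[2, 0.6, 0.2]])
print(behavior_model.predict(new_behavior), tactic_model.predict(new_tactics))
```

In practice, the training data would come from logged PEM datasets, and the predictions would feed the game's difficulty scaling or content adaptation modules.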

The State of AI Game Engine Integration

A game authoring engine, sometimes referred to as a physics engine, is a software editor and compiler for building computer or video games. One of the most important features and tools in a game engine is the ability to imitate Newtonian physics of 2D and 3D objects and characters (or not), and their physical interactions with each other (collision). Other features include sound and music editing, manipulation, integration, and placement; the rendering and animation of 2D and 3D graphics; and the ability to program game objects with 3D models, characters, camera angles, heads-up displays, lighting angles, particle systems, sprites, sounds, and important NPC states. Developing a game with a contemporary game engine, such as Unity (Unity Technologies), Unreal (Epic Games), or Lumberyard (Amazon), is a fairly simple process:41
• Import the game's assets, such as 2D art or 3D models (rigged), sounds, music, etc.
• Program in C# (a simpler, C++-like object-oriented language), JavaScript (or another game-specific script), or Boo to control objects, scenes, and logic.
• Test the game play in the authoring game engine.
• Test the game build on the intended final computer browser, console, or platform.
• Deploy.

Game engine C++ programmers rely on internal scripting assemblies and compilers (like Visual Studio/.NET) for the final game build. Both game engine assemblies and compilers also use internal and/or native or managed tools, plug-ins, APIs, and libraries that can be imported into game engine libraries, or link to external libraries that offer unique attributes and additional game functionality. As previously mentioned, however, modern commercial game engines are not designed to utilize significant computer hardware to compute the mathematical demands of complex machine-learning algorithms as discussed earlier. Therefore, most native and managed plug-ins and API programming language libraries built for game engines are neither powerful enough to run contemporary machine-intelligence algorithms nor compatible with the complex low-level libraries linked to an OS to access hardware system resources. General open-source mathematical libraries, like Eigen, which is used occasionally for complex game functionality, do not organically support machine-intelligence algorithms, either. Moreover, Google's Tensorflow and Facebook's PyTorch, C language-based machine-learning compiling frameworks, have their high-level features written in script languages such as Python, and are not natively supported in game editing engines such as Unity or Unreal. JavaScript, or .js, is, however, generally supported as an alternative high-level scripting language, but with rather limited libraries available to program complex game functions. Although not included extensively in machine-learning game development, updated mathematical framework libraries like those found in Accord.NET, designed for .NET environments, now offer modules such as Accord.MachineLearning and Accord.Neuro. These are specifically designed to support decision trees, Naïve Bayesian classification, support vector machine classification, and some ANN functionality. Additionally, Accord.Vision allows real-time face tracking and the detection and tracking of predefined moving objects within video streaming content, whereas Accord.Sound was created to process, filter, and mix audio signals for machine-learning applications, a most unique option for manipulating sound files in machine-learning serious games.42 The Eigen mathematical framework (standard C++ library dependency) libraries have been incorporated into Shogun, a compound machine-learning library sandbox, and into Nimble, a platform for programming Markov Models and other statistical systems.43 Furthermore, due to their power and flexibility, the Eigen mathematical framework libraries have been further refined and renamed Eigen Tensor and have been embedded


into the Google Tensorflow framework itself. Eigen libraries were chosen by Google as a core feature due to their highly optimized mathematical acceleration of matrix and vector operations that require parallel GPU calculations.44 A recent software tool developed to facilitate machine-learning applications is TensorflowSharp, a C# wrapper of Tensorflow's core C library using the .NET framework. It is an open-source wrapper that binds .NET to the TensorFlow library and is built upon the Tensorflow C language API as a strong .NET API for use with the C# commonly used in Unity. A game developer could use Python to bind to Tensorflow, or could use the well-known open-source Python Keras API (which can run on top of several of the machine-learning frameworks mentioned) to train an external model to a .pb file, before using TensorflowSharp to import the .pb model into Unity. It's not exactly native, but a C# game build would function almost flawlessly sans the latency resulting from the cloud-hosted TensorFlow and Keras framework round-trip. If enough local graphics cards are available on the local desktop, a developer may install CUDA (a type of programmable API between a CPU and GPU), the cuDNN library, and the tensorflow-gpu Python package to process machine-learning algorithms locally, as well.45

In 2017, Unity Technologies unveiled its Machine-Learning Agents Toolkit (ML-Agents) to allow game developers to train and embed 'intelligent agents' in the Unity development environment.46 Developers could train the machine-learning model, or agent, with Python, and insert the trained model output results back into Unity using TensorflowSharp. Since they were very limited in offering true machine-learning functionality beyond the addition of advanced NPC behavior features, the beta ML-Agents were mostly ignored by the commercial and serious game industry, but elicited excitement from the game research community. The latest version, released in May 2020, included four packages with better Python support, improved public API definitions, and an improved Unity Inference Engine, which now purports to support neural network functionality and to apply the network on any Unity-supported game platform. However, it does not allow continual unsupervised or reinforcement learning, or any active training of neural networks in a running game.47

Epic's Unreal Engine games are directly programmed using the C++ object-oriented language, but developers also have access to the native Blueprint Visual Scripting system. Blueprint is a visual node-based system that is similar to C#, allowing easier and quicker prototyping, and whose scripting classes are also converted into native C++ classes directly within the game engine. Similar to the Unity game development engine, Unreal supports a few limited machine-learning algorithms, primarily to improve NPC performance. Relying on another native tool called Unreal Blackboard, programmed behavior/decision tree assets available in Unreal use


predetermined Blackboard logic to execute branches and predefined Keys, or triggers. As an example, if 'Tank a' crosses terrain y = 0, then key a branch that sends NPCs to hide. Or if 'Tank a' crosses terrain x = 1, then a different keyed branch is executed, and a keyed NPC may attack. Unreal also offers a unique Environmental Query System (EQS) that can be called from behavior trees to retrieve terrain information to train NPCs to find the best line of fire to a player's character in FPS games, or the best place to hide from a player character within a predefined terrain map. Lastly, the Unreal embedded 'AI' framework contains a Perception System, similar to the graph in Figure 7.2, which uses Components to give NPCs sensing abilities: to determine where noises originate, to visually detect assets and objects in motion, and to detect and repair self-damage.48 Again, Unreal, analogous to the Unity game engine, does not natively contain any true parallelized machine-learning capabilities, and must rely on imported modules for limited algorithmic computation locally on the game development PC, and then use plug-ins or APIs to connect to more robust external frameworks, either stored on a network hard drive or in a cloud server. A contemporary example is tensorflow-ue4, a plug-in that contains Blueprints, C++, and Python scripts and camouflages Tensorflow operations as an Actor Component, but still requires machine-learning operations to be assembled/compiled remotely on a cloud server.49 Or Unreal game engine developers could use the UnrealEnginePython plug-in fork to manage inference engine APIs to directly interface with either cloud-based Tensorflow or PyTorch frameworks, and then integrate their output in an external database connected to the Unreal game engine, or, if there is room in the build, directly within a game.

Even though the Lumberyard game engine is designed and developed by Amazon Game Studios, a division of Amazon/AWS – one of the foremost developers and publishers of robust machine-learning software tools and frameworks (SageMaker) – it has limited native machine-learning support beyond enhanced NPC interactivity. Lumberyard also uses a visual scripting environment called Script Canvas, similar to Blueprint, that employs Lua to create quick and efficient event-driven logic and NPC behaviors. Script Canvas code classes again are mapped onto C/C++ native classes directly within the game engine modules. In Lumberyard, a developer could use Lua or C++ API calls if the operation has been predefined in the API engine. To date, no direct external Tensorflow, PyTorch, or Keras API support has been defined, although a developer could execute a Python Binder Editor to run Python scripts within Lumberyard, define a Cloud Gem Portal, and configure the API Gateway to run cloud-based machine-learning frameworks to execute data analysis and predictive modeling.50
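Across all three engines, the common pattern is to train a model externally in Python and then import or call it from the engine. A minimal, hypothetical sketch of the external training-and-export half (assuming TensorFlow 2.x Keras; the data, feature names, and output path are invented) might look like this:

```python
import numpy as np
import tensorflow as tf

# Toy training data standing in for logged player-learner features and labels
X = np.random.rand(200, 4).astype("float32")
y = (X[:, 0] + X[:, 2] > 1.0).astype("int32")   # hypothetical 'needs harder challenge' flag

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)

# Export a SavedModel; the directory contains a saved_model.pb graph that an
# engine-side wrapper or inference runtime could then load for prediction.
model.save("difficulty_model")   # hypothetical output path
```

The engine-side half – loading the exported graph through a wrapper such as TensorflowSharp, a plug-in, or an inference engine – depends on the engine and version, as described above.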


Game engine developers have recognized the value of machine learning to personalize aspects of commercial game play, such as PEM for customizing combat, NPC interaction for enhanced single-player game play, procedurally generated content, improved player matching, and increased monetization. However, the intent of current game engine 'AI' development support is still primarily restricted to improving NPC-to-player character interactivity. The design and development of commercial game engines, extremely complex applications sometimes built over several decades, were never intended to host robust machine-learning or neural network mathematical libraries and organic frameworks. Therefore, in order to avoid using patches, workarounds, plug-ins, and APIs to execute external machine-learning algorithmic operations, a new AI-game engine may need to be designed and built.

Although I have used the term 'game engine' or physics engine to describe a software application with an embedded editor used to develop games, as many in the game industry do, it is actually a misnomer. A commercial game engine contains multiple 'engines,' or a collection of logic-based software components. For instance, most game engines include a separate physics engine, graphics-rendering engine, audio-processing engine, and a smaller AI engine, as we have learned, to initiate NPC decision tree triggers and NPC pathfinding.51 Besides requiring an editor UI, a simplified contemporary game engine must include the following components (a minimal loop tying them together is sketched after the list):
• Graphics rendering engine supporting 2D or 3D graphics and animation frames
• Physics engine that supports collision detection
• Audio engine to load and play sounds and music files
• Game play logic module controlled by scripting or lower-level programming
• Networking framework to allow for multiplayer, downloadable content, and leaderboards
• Memory management
• Artificial intelligence module for NPC pathfinding and computer opponents
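A minimal, engine-agnostic sketch of how those components might be driven each frame (purely illustrative stubs; no real engine exposes exactly these classes or this simple an update order):

```python
import time

# Each 'engine' component is reduced to a stub class with an update step.
class PhysicsEngine:
    def update(self, dt): pass          # collision detection, object state updates

class AIEngine:
    def update(self, dt): pass          # NPC pathfinding, decision tree triggers

class AudioEngine:
    def update(self, dt): pass          # play queued sounds and music

class GameLogic:
    def update(self, dt): pass          # scripted rules, scoring, win/loss checks

class Renderer:
    def draw(self): pass                # rasterize the current scene

def run(engines, logic, renderer, target_fps=60):
    dt = 1.0 / target_fps
    for _ in range(3):                  # three frames, for demonstration only
        for engine in engines:          # physics, AI, audio in a fixed order
            engine.update(dt)
        logic.update(dt)
        renderer.draw()
        time.sleep(dt)

run([PhysicsEngine(), AIEngine(), AudioEngine()], GameLogic(), Renderer())
```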

Figure 7.4 outlines a simplified diagram of a contemporary game engine. When a player triggers an event through a game user interface (UI), such as when a player strikes an enemy NPC or fires a laser blast at an oncoming vehicle, an event handler will trigger a preprogrammed single action or logic decision tree-related response in a game. Another simple trigger may be tripped by player character forward movement within a terrain map, whereby the event handler generates a preprogrammed command for the rendering engine to display the next 3D graphic environment retrieving preassembled assets from the asset database created during the game development.52 Physics engines that reside within game engines generally use their own assigned physics libraries and are responsible for taking game engine


Figure 7.4 Simplified Game Engine Architecture

predefined properties of an object's shape, mass, material, forces, and trajectory, and then calculating potential new positions and states of the object. As an example, a game engine loads a predeveloped game level and sends a command to the physics engine that there are 25 various objects in a scene at set positions. When those objects are moved, exploded, or destroyed, the physics engine then calculates updates to their states. The graphics rendering engine is generally synchronized to the changed states made by the physics engine, so that any object changes are rendered appropriately (on a player's screen). A physics engine also may trigger an event when two objects collide (a laser blast from a player's character's weapon hitting an NPC), or when two objects separate that were previously connected in some way. The physics engine sends these commands to the scripting model in the game logic module, which in turn may increase a player's score or increase the strength or abilities of a player's character. The scripting model may also determine whether a game uses Newtonian physics or even Martian physics, potentially applying the altered gravitational forces to game objects, object collisions, or a player's character as it floats in an environment of various states of altered gravity.
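To make that event flow concrete, here is a minimal, engine-agnostic sketch (invented class and event names; real engines expose far richer collision callbacks) of a physics-raised collision event being handled by the game logic module:

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    score: int = 0
    strength: float = 1.0

@dataclass
class EventBus:
    handlers: dict = field(default_factory=dict)

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, **payload):
        for handler in self.handlers.get(event_type, []):
            handler(**payload)

def on_laser_hit_npc(player, npc_id):
    player.score += 100          # game-logic response to the physics event
    player.strength += 0.1

bus = EventBus()
player = Player()
bus.subscribe("collision:laser/npc", lambda npc_id: on_laser_hit_npc(player, npc_id))

# The physics engine would publish this when a laser blast overlaps an NPC collider
bus.publish("collision:laser/npc", npc_id=7)
print(player)   # Player(score=100, strength=1.1)
```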


A typical game-rendering engine contains a rendering manager that supervises the data transfer from a CPU to a GPU, and the overall rendering process. In order to render images today, three required graphic datasets are transferred to the GPU(s) via an internal API rendering manager called the Open Graphics Library (OpenGL (2005)), a cross-OS, cross-API tool. The three required datasets include the following:53
Uniforms – provide 2D or 3D object spatial data, including world, camera, and object space, in order to place an object in relation to a player's view
Attributes – data used to construct the geometry and lighting and to wrap an image over an object
Textures – use U-V coordinate data to map a visual texture upon an object

Once the datasets enter the GPU, the rendering manager then helps compile each dataset to assemble, like components on a conveyer belt, a game object or character in a moment in time. Although we have already described graphical processing units as an ideal chip for processing machine-learning algorithms, let us now explore the details of a dedicated graphics-rendering GPU. A typical GPU consists of four or more generic types of shaders – specific programmable code that takes the three dataset types just listed, transforms them into geometric shapes, colors, and space, and applies lighting and visual effects. The four primary shaders are:
• the Vertex shader, responsible for processing object vertices and transformations, sometimes processing lighting, and preparing data for later-stage shaders;
• real-time graphics Tessellation, the process of taking a vertex set or mesh and subdividing an object mesh into ever smaller triangles in preparation for the rendering of smooth edges and surfaces based on the distance from a game viewer camera;54
• the Geometry shader, located after the Vertex shader and Tessellation shader in the GPU conveyer belt processing, which adds or subtracts mesh triangles post tessellation, a final manipulation of geometry before the Fragment stage;55 and
• the Fragment (pixel) shader, which receives rasterized pixels of an object and applies colors and textures.

Any number of additional preprogrammed shaders or custom shader functions may also be added to the GPU pipeline to accomplish specific graphics tasks or effects found in many games and architectural CAD programs. Figure 7.5 outlines the graphics input datasets and GPU shader functionality.
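To make the role of uniforms and attributes more concrete, the short NumPy sketch below performs the kind of calculation a vertex shader carries out on the GPU: multiplying per-vertex position attributes by a uniform model-view-projection matrix. The matrix values are arbitrary placeholders, and a real shader would run this in GLSL or HLSL on thousands of vertices in parallel rather than in Python.

import numpy as np

# Uniform: one model-view-projection (MVP) matrix shared by every vertex.
# A simple translation stands in for a real camera/projection setup.
mvp = np.array([
    [1.0, 0.0, 0.0, 2.0],   # translate +2 on x
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, -5.0],  # push -5 on z (into the scene)
    [0.0, 0.0, 0.0, 1.0],
])

# Attributes: per-vertex positions of one triangle in object space
# (homogeneous coordinates, w = 1).
vertices = np.array([
    [0.0, 1.0, 0.0, 1.0],
    [-1.0, -1.0, 0.0, 1.0],
    [1.0, -1.0, 0.0, 1.0],
])

# The 'vertex shader' step: transform each vertex into clip space.
clip_space = vertices @ mvp.T
print(clip_space)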


Figure 7.5 GPU Conveyer Belt Object Assembly

Although the audio engines that reside within commercial game engines are designed slightly differently, most offer the following basic functionality:56

Audio Source – the preprocessed and premixed imported audio file, often attached to a game object
Audio Detector/Listener – another defined game object, frequently the game engine camera or an NPC’s FoS, that detects an audio source’s placement when it is played within the game environment
Audio Mixer – replicates an analog mixing board, allowing multiple sound and music sources to be mixed and grouped, and sounds to be assigned to software effects and filter modules

Although an audio designer may use native C++ to edit, manipulate, mix, and assign audio files, game audio engines also allow the same scripting languages and tools – such as JavaScript, Lua, C#, or Blueprint – to assign sound files to objects, design sound-detection schemes, or embed music in a game level. Using the audio engines residing within contemporary game engines, sound designers and composers can launch multiple mixers and dozens of sound effects and music scores in one level scene, and even assign these audio sources to multiple ‘speaker’ channels, creating up to a six-channel (5.1) audio game environment.
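The source/listener relationship can be illustrated with the kind of distance-attenuation calculation an audio engine performs every frame: the further the listener is from the source, the lower the playback gain. The inverse-distance formula, rolloff value, and positions below are generic placeholders rather than any specific engine’s attenuation model.

import math

def attenuated_gain(source_pos, listener_pos, rolloff=1.0, min_distance=1.0):
    """Scale a sound's gain down as the listener moves away from the source."""
    distance = math.dist(source_pos, listener_pos)
    if distance <= min_distance:
        return 1.0  # full volume inside the minimum radius
    # Generic inverse-distance rolloff, clamped to [0, 1].
    return max(0.0, min(1.0, min_distance / (min_distance + rolloff * (distance - min_distance))))

explosion = (10.0, 0.0, 4.0)       # world position of the audio source
player_camera = (2.0, 0.0, 1.0)    # world position of the listener
print(f"Playback gain: {attenuated_gain(explosion, player_camera):.2f}")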


Most game scores and sound effects use digital signal processing (DSP) techniques: time-domain algorithms to add reverb and echo, frequency-domain processing to change pitch and tuning, and amplitude-domain processing to manipulate loudness over space and time – all available through native and nonnative, internal and external audio plug-ins and APIs. As an example, Unity Technologies developed its own DSP plug-in SDK for the Unity game engine, which allows sound designers to add more complex sound effects and functionality to objects in a 3D game environment, such as applying the Doppler effect and localizing a sound’s origin in 3D space.57 As audio file sizes and the sheer number of audio tracks increase, many game engines allow processing calculations to be redistributed from the CPU to GPUs using the Open Audio Library (OpenAL), a cross-platform API for simulated 3D audio space and a software variation of OpenGL.58 Although early versions of OpenAL divided computational tasks between a CPU and dedicated internal audio hardware cards, newer implementations allow the API to communicate with GPUs and to reprogram and replace graphics shaders with audio-specific processing ‘shaders.’ Commonly used today are third-party middleware applications like FMOD, Wwise, and Steam Audio that interface with core game engine audio libraries through APIs, allowing much more detailed and precise management of sound behaviors, signal processing, dynamic filtering, and spatial sound/object placement during game development. Some audio middleware APIs integrate directly with GPU graphics shaders, connecting amplitude processing with object mesh datasets so that developers can block sounds from traveling through objects, like walls or mountains, or limit sound effects to a particular defined volumetric space.59 We’ve previously discussed game ‘AI’ engines that are primarily used to execute NPC actions and behaviors. We’ve also reviewed how internal or external GPUs may enable precise PEM, NPC modeling, player matching, difficulty scaling, and overall personalization of game interactivity and flow. But we haven’t addressed how real-time game/game-engine performance data may interface with machine-learning models and GPUs to accomplish our personalized game-learning goal. Just as OpenAL is an audio-specific derivative of the OpenGL graphics API used to interconnect a computer’s CPU and GPUs, the Open Computing Language (OpenCL) is an API designed to allow complex non-graphics computations on hardware platforms like GPU chips.60 Exploited in some builds of machine-learning frameworks such as Google’s TensorFlow and PyTorch, and accessible from Python through the PyOpenCL wrapper, OpenCL (Apple Inc. 2009, 2020) provides a low-level common language and programming interface for developers to create software applications that require data-parallel computations in computing environments with a host CPU and any connected OpenCL-targeted chips, software-based


simulations, or other ‘devices.’ The OpenCL programming interface offers functionality for identifying target devices such as additional CPUs/cores, GPUs/cores, and specialized accelerators, and helps manage memory allocation between host and device. OpenCL can also help compile applications and kernel functions on target devices, launch separate kernels on those devices, and even check for compilation errors. Although rarely utilized in game engines, OpenCL provides an ideal internal API for exploiting the massive number of computing cores available in internal or external GPU chips for the parallel computation required to employ machine-learning models and neural networks effectively.61 Two major GPU manufacturers, Advanced Micro Devices (AMD) and Nvidia, have released multiple advanced OpenCL software drivers and SDKs that help developers coordinate CPU and multicore GPU workloads against GPU bandwidth, provide C++ hosting to optimize parallel data processing, and manage multiple GPUs, theoretically exploiting up to millions of computing cores (distributed computing). Similar to OpenCL, but specifically designed for machine-vision applications, Open Source Computer Vision (OpenCV), first released in 2002, is another open-source API supported by a library of now over 2,500 optimized algorithms for visual object recognition, facial recognition and tracking, video tracking, and red-eye removal.62 Although OpenCL is a mature and stable internal API framework for managing heterogeneous CPU-to-GPU computing, Nvidia, the leader in GPU design, also launched its own Compute Unified Device Architecture (CUDA) in 2006–2008, a customizable parallel computing framework for its own GPU chips.63 As Nvidia has launched special-purpose GPUs with an ever increasing number of computing cores – for graphics, machine learning, data mining, crypto mining, and robotics – CUDA has improved to support the massively parallel GPU computing available in cloud server farms across the globe. Perhaps, then, with both the advent of immensely powerful GPUs and the mature OpenCL and CUDA parallel processing frameworks to interconnect and manage the parallel data flow to and from these chips, we now have the makings of a new learning game architecture – or at least all the elements to support a machine-learning-based learning game engine that may realize and underpin our goal of designing a personalized learning game engine.
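As a minimal, hedged example of the kind of data-parallel work OpenCL hands to a GPU, the PyOpenCL sketch below sums two vectors with a tiny kernel. It assumes the pyopencl and numpy packages plus at least one OpenCL-capable device and driver are installed, and the kernel is deliberately trivial; a real machine-learning workload would dispatch far larger and more complex kernels.

import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()   # pick an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# One work-item per element: the device executes these additions in parallel.
program = cl.Program(ctx, """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)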

A Personalized Learning Game (PLG) Engine

Viewed as an art, the success of education is almost impossible since the essential conditions of success are beyond our control. Our efforts may bring us within sight of the goal, but fortune must favour us if we are to reach it.64
Jean-Jacques Rousseau (1762)


Now that we have a summary understanding of the architecture of current commercial game engines, let us reimagine that design as an AI-focused learning game engine in which the AI engine sits at the center of the design, hosted entirely on a cloud server to take advantage of its ever-expanding GPU computing cores. Critical to this design is the ability for any learner using any hardware device or OS to experience the same content at any time, and for the learning game to adapt to learner-player game actions, decisions, tactics, and strategies while maintaining game-course learning goals and outcomes. Cloud game streaming makes high-end 3D game play possible on low-end ‘thin’ devices like tablets and phones without dedicated multi-core GPUs by offloading real-time graphics rendering to GPU-powered cloud servers, server containers, or ‘edge servers’ (not device edge chips but virtual machines like the Shadow Technology product edge network). A cloud server runs a game engine or game build, renders the game play graphics and sound/music generated by the game, encodes them as video with embedded audio, and streams the combined encoded data to a thin device, where the stream is decoded, displayed on the screen, and played through the local device’s audio speaker. In a cloud game streaming architecture, a thin device also parses a player’s screen touches or mouse movements and transmits them as game commands to the cloud game server, which relays the real-time datasets to the game engine’s I/O manager/event handler. On the cloud side, a player’s finger, pen, or mouse movements are applied as if they occurred locally to the resident game/engine, and the results are encoded again and transmitted back to the player’s thin-device screen and audio speakers. Figure 7.6 provides a simplified overview of this ‘thin device’ to cloud-based game video/audio streaming architecture. Clearly, another advantage of this scheme is that developers only need to create a game for one platform and one OS – the server’s OS – and do not have to convert or port a game to other platform formats. However, as we read earlier, sufficient and dependable bandwidth between the cloud game server/encoder and the ‘thin’ device decoder becomes a critical Quality of Service (QoS) and Quality of Experience (QoE) requirement. QoS may be defined generically as a set of technical attributes describing a network’s capability to meet the expectations of a user and the requirements of an application. QoE is a combination of the subjective experience expectations of a game’s designer, developers, and player, and the cognitive level of sustainable ‘flow’ a player achieves during game play.65 Other game play video-streaming schemes have been deployed to accommodate lower available bitrates or interrupted network service. Browser-based games allow all game execution – event handling, logic, memory management, NPC functionality, and the parsing of screen touches and mobile device button clicks – to occur in the cloud, while all graphics rendering occurs within a browser via onboard integrated graphics chips.66
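A rough sketch of the client-to-server half of this loop follows: the thin device packages a touch event as a small JSON game command, which the cloud-side event handler decodes and applies. The message fields, function names, and handler logic are invented for illustration and are not taken from any particular streaming service or engine.

import json
import time

def encode_touch_command(x, y, action="tap"):
    """Thin-device side: package a screen touch as a compact game command."""
    return json.dumps({
        "type": "input",
        "action": action,
        "screen_x": x,
        "screen_y": y,
        "timestamp_ms": int(time.time() * 1000),
    })

def handle_command(message, game_state):
    """Cloud side: decode the command and hand it to the engine's I/O manager."""
    command = json.loads(message)
    if command["type"] == "input" and command["action"] == "tap":
        # A real engine would map this to a mouse click or UI event at the
        # same normalized screen position in the server-side game build.
        game_state["last_click"] = (command["screen_x"], command["screen_y"])
    return game_state

state = {}
wire_message = encode_touch_command(0.42, 0.77)
print(handle_command(wire_message, state))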


Figure 7.6 Thin Device/Game Cloud Architecture

This architecture is acceptable for 2D games with limited maps and untimed, limited player choices, as a computer or mobile-device browser will eventually need to offload rendering from the CPU to a dedicated GPU chip. The technique eliminates the need for the extensive bandwidth of cloud-server encoded video game streaming but limits the design of the game. Effective schemes for splitting dynamic graphics-rendering loads have also been proposed that divide rendering between a cloud video-streaming server GPU and a thin device’s special-purpose resident GPU or ‘edge chip.’ Such a split-scheme architecture requires the uploading of game commands with a rendering-delay optimization layer, server-side adaptive video-rate streaming, and a thin-device delay-optimization model to decode the video and ensure that game play video and audio from both sources are synced correctly.67 This complex architecture may be an effective work-around for limited internet bandwidth but still requires a complete redesign of a game engine/game. Another server-cloud game play scheme that has been tested, but not widely adopted, to alleviate the need for stable and sufficiently high bandwidth is based on the human vision theory of foveation. Here, the perceived resolution of an image corresponds to the direct visual focus, or the density, of human eye photoreceptor cones focused on one location. Using an eye-tracker to detect where a player is looking, we would only need to provide high-resolution video streaming for that area of the screen and drop the resolution elsewhere for every second of game play.68 Foveated game play rendering and encoded/decoded game streaming hold promise but, then again, require calibrated eye-tracking hardware


and its software drivers for each thin device used for game play, undermining ease of use and adding expense and technical complexity. Although modern game streaming architecture alleviates the need for developers to modify games for multiple platforms, ‘thin’ devices like smartphones, tablets, and even watches create additional challenges. Desktop mouse clicks, keyboard strokes, or joystick motions still need to be converted to mobile buttons and screen touches. This is accomplished by adding scripts or calling a preprogrammed library that captures a touch position on a screen and maps it to a mouse click or keyboard press at the same location on a desktop display, or by assigning a UI touch to the same function as a click or keyboard command in the cloud desktop game build. Although there are several methods and schemes for achieving adequate video-streaming bandwidth that attempt to ensure QoE by throttling video resolution up and down while bounding latency within acceptable ranges, the most common is scalable, adaptive bitrate streaming (ABS).69 The ABS methodology allows dynamic stream adjustments, from slightly lower to higher resolution, based on available bandwidth and the quality state of the network used; data drops caused by usage demands, network damage, network data flooding, or other temporal issues are addressed in real time by the ABS throttling the stream up or down so that video continues to flow. Most video and audio software drivers on PCs and mobile ‘thin’ devices utilize the Advanced Video Coding (MPEG-4 AVC) standard, better known as H.264, to decode Standard Definition, High Definition (HD), and even Ultra-High 4K video/audio streams. Full HD requires a stable and consistent bit rate in the range of 7–10 Mbps at 60 frames per second, and Ultra-High 4K requires a minimum transfer rate of 60–80 Mbps at 60 FPS.70 If transfer rates drop because of capacity limits, usage gates, or network damage, the H.264 standard lowers the bitrate and/or frame rate to stream at a lower resolution and bumps the rates higher as bandwidth becomes available or network repairs are made. Today, 4K at 60 FPS is about the fastest, best resolution that nongovernment commercial internet service providers offer. The current mobile fifth-generation (5G) wireless specifications, rolling out across the U.S. en masse in 2020 and 2021, allow for mobile device upload speeds of 10 Gbps and download transmission rates of up to 20 Gbps. Although these data rates are certainly advantageous for encoding and decoding 3D or immersive game video streams, the signal strength of a single 5G transmission tower is inherently weak and so necessitates roughly quadrupling the number of transmitters, installed close together so that their coverage overlaps in areas of use. Lastly, regardless of whether the millimeter-wavelength, mid-band, or low-band protocol is used, 5G requires a new cell receiver/transceiver chip in every thin device.71
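The core idea of adaptive bitrate throttling can be sketched as a simple ladder lookup: measure the currently available throughput, keep a safety margin, and pick the highest rendition that fits. The rung labels and bandwidth values below are illustrative placeholders loosely based on the HD/4K ranges cited above, not the parameters of any particular streaming product.

# (resolution label, required bandwidth in Mbps) from lowest to highest rung.
BITRATE_LADDER = [
    ("480p30", 2.5),
    ("720p60", 5.0),
    ("1080p60", 9.0),
    ("4K60", 70.0),
]

def select_rendition(measured_mbps, safety_margin=0.8):
    """Pick the highest rendition whose bitrate fits the measured bandwidth."""
    usable = measured_mbps * safety_margin
    chosen = BITRATE_LADDER[0][0]  # always fall back to the lowest rung
    for label, required in BITRATE_LADDER:
        if required <= usable:
            chosen = label
    return chosen

for bandwidth in (3.0, 12.0, 95.0):
    print(f"{bandwidth:5.1f} Mbps available -> stream at {select_rendition(bandwidth)}")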


In summary, the limitations and technical requirements of employing cloud game play architecture to stream near real-time 3D games equitably to both desktop and mobile ‘thin’ devices pose challenges, but none that are insurmountable. Moreover, the advantages of deploying learning games with this scheme, over the others examined in this chapter, truly open up the possibility of low-cost scaling of a single game/game engine to any computing device across an entire state community college system or school district. Only cloud-server bandwidth instance costs and potential internet service provider fees would rise with usage, and as more companies enter the internet service provider market – through cables, satellites, and cell towers – bandwidth is being priced as a commodity, and costs will fall. Therefore, in order for all learners, regardless of the device they use, to experience personalized game learning, we now need to design a new integrated cloud-based game engine architecture with AI/machine-learning capabilities at its core. Our new PLG AI engine would need to include a dedicated event manager or event processor/classifier, a dedicated memory manager, a knowledge (data)base, and a machine-learning intelligent inference framework. It must also include a UI/editor that allows developers to design or program different types of ML classifiers, employ various training techniques, program the OpenGL, OpenCL, and/or CUDA APIs, and manage other special-purpose plug-ins, libraries, and APIs. The event processor in our AI engine is a parallel processing framework intended to manage the massive dataset streams commonly produced by games. Our event processor would also require near real-time analytics, aggregation, and data (pre)classification to help speed machine-learning processing and GPU computations, so that player adaptations are quickly reflected in a game. Examples of such open-source event-processing tools are Apache Spark and Hadoop.72 To be most effective, the machine-intelligence models that reside in our learning game’s AI engine must provide descriptive analytics (what the player is doing during game play), predictive analytics (what the player may do during game play), and prescriptive analytics (what the player should be doing during game play). An AI learning game engine may also require a game-world intelligent inference model that applies knowledge from a preexisting knowledge base to current learner-player game states and decides which internal and external actions to trigger based on a game’s learning goals. A learner-player’s game states are determined by data points fed by game-environment location sensor scripts and action triggers, similar to the NPC states and other player states stored in the AI engine’s knowledge base. Based on these game states, an intelligent inference model must detect and identify knowledge relevant to stored PEM data, the learning silhouette (PLM) of a learner-player, and the stored learning goals of a game in order to make real-time game play adjustments.
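Before any model sees the data, the event processor’s aggregation step might look something like the following sketch, which collapses a raw stream of per-frame player events into a handful of per-player features. The event fields and feature names are placeholders chosen for illustration; a production system would perform this aggregation in a streaming framework such as Spark rather than in plain Python.

from collections import defaultdict

raw_events = [
    {"player": "p1", "event": "jump", "success": True},
    {"player": "p1", "event": "quiz_answer", "success": False},
    {"player": "p1", "event": "quiz_answer", "success": True},
    {"player": "p2", "event": "jump", "success": False},
]

def aggregate(events):
    """Collapse raw game events into per-player features for the ML models."""
    features = defaultdict(lambda: {"actions": 0, "successes": 0})
    for e in events:
        stats = features[e["player"]]
        stats["actions"] += 1
        stats["successes"] += int(e["success"])
    # Derive a simple success rate each inference cycle can consume.
    for stats in features.values():
        stats["success_rate"] = stats["successes"] / stats["actions"]
    return dict(features)

print(aggregate(raw_events))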


Examples of internal actions include lowering an NPC’s power-jumping rating and/or increasing a learner-player’s jumping ability so that they avoid losing a mathematical geometry challenge for a third time. Alternately, an external example may include simplifying the learning goals and associated assessment rubric for one game level to give the learner-player the perception of more control over their learning environment, increasing motivation and satisfaction. Historically, inference algorithms in games mimic the NPC ‘idle, attack, or flee’ decision tree scripting schemes, whereas newer intelligent inference machines depend more upon a type of reasoning to analyze, predict, and/or prescribe actions or tasks. Although all inference models walk through a series of decision steps, or a decision cycle, to ‘detect’ an action, ‘select’ associated rules based on the detection, and ‘execute’ the action or task in a game, intelligent models deploy an additional chaining algorithm to infer not just existing knowledge from a knowledge base but potential new knowledge as well:73

• Backward chaining – begins with an extensive list of facts and works backwards to see whether the learner-player’s actions, once detected and matched to predefined rules, support a fact. Backward chaining highlights what actions or decisions must be true to support a hypothesis – a sort of ‘why has this happened?’ derived from ‘what already happened?’ – and deduces a fact.
• Forward chaining – a reasoning methodology that begins with available data and utilizes rules to infer new data. Forward chaining starts with known facts and uses them to create new facts through deduction and probabilistic reasoning rules.

Deploying either backward or forward chaining, depending on the learner-player’s game state, is also referred to as opportunistic reasoning: a methodology of applying either or both forms of inference whenever the opportunity surfaces to increase or improve upon the knowledge base.74 In our learning game engine, that may be when a learner-player makes an unforeseen, unplanned navigation decision that provides a better angle for a laser blaster to hit a target tied to a learning goal, or discovers an unplanned path across a river that holds the answer to a problem. Backward chaining would help infer why it happened from what happened in the past, and forward chaining would add to the knowledge base the fact that the goal was achieved and how it happened. Figure 7.7 depicts a diagram of basic forward and backward chaining.


Figure 7.7 Backward and Forward Chaining
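A bare-bones forward-chaining loop can be written in a few lines: starting from known facts, keep firing rules whose premises are satisfied until no new facts appear. The facts and rules below are toy game-state examples invented for illustration, not part of any described engine.

# Each rule maps a set of premise facts to a single new fact.
rules = [
    ({"reached_river_path", "crossed_river"}, "found_answer_location"),
    ({"found_answer_location", "solved_problem"}, "learning_goal_met"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until no new facts are inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

known = {"reached_river_path", "crossed_river", "solved_problem"}
print(forward_chain(known, rules))
# -> includes 'found_answer_location' and 'learning_goal_met'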

For some probabilistic reasoning problems such as these, the Bayesian networks described earlier are commonly utilized, as they offer structured probabilistic relationships between multiple known and random datasets, representation of conditional independencies, and generative modeling that allows seemingly arbitrary queries to be answered and solved – all of which we will require to personalize game play.75 Moreover, as previously mentioned, although Bayesian networks have historically been used to perform statistical inference for linear regression and to classify data to train machine-learning algorithms, modern practice has deployed Bayesian ‘learning’ to reinterpret traditional machine-learning models as probabilistic models that can take seemingly random variables and infer unknown, reasoned variables.76 Some advantages of using Bayesian learning over traditional machine-learning techniques in our AI engine are as follows:77



• When training traditional machine-learning models with a small number of datasets, the model will only ‘learn’ about the data structure ingested. Bayesian learning, however, may infer new data from this limited number of datasets and, through reinforcement learning, add the resulting probabilities to the next ingested training dataset and to the knowledge base.
• Traditional machine-learning algorithms often struggle to learn from big data, as the computation requires large amounts of CPU and GPU memory allocation. Bayesian learning models may be employed for chunked, incremental learning from extremely large datasets, allowing efficient data inference and memory management.
• Prediction uncertainty – a concept developed in the wake of the irregularities in predicting an output based on plausible alternative inputs – is often ignored or misinterpreted in traditional machine-learning science, even though, given its dynamic complexity, it may offer potential benefits in teaching and learning. Bayesian learning subtracts much of this uncertainty by using probability distributions to infer possible alternative inputs and includes these in the training datasets.

As we have learned, traditional machine-learning models are designed from deterministic algorithms that map input to output using variable weights calculated by a maximum-likelihood supervised or unsupervised corrective technique. Although the maximum-likelihood technique is a type of dataset density estimation that also uses probability statistics to adjust weights based on corrective ingested datasets, Bayesian learning models use Bayesian (and Naïve Bayes) probability techniques to improve the input datasets themselves, thereby improving predictive outputs. Bayesian learning incorporates historical, previously ingested datasets and superimposes these upon the variable uncertainty of new datasets to adjust model weights, offering probabilistic inference and reasoning over the new datasets to improve output accuracy and prediction. In late 2019, Google’s TensorFlow framework development team released TensorFlow Probability, a probabilistic reasoning and statistical analysis Python library built upon TensorFlow that allows probabilistic models to be integrated into deep-learning models and run together on contemporary GPU designs.78 The probabilistic machine-learning libraries also allow inference through automatic differentiation, forward and backward chaining, smaller incremental training from large ‘big data’ datasets, GPU acceleration, and distributed computing that allows massive scalability of GPU calculations throughout a cloud GPU server farm. The TensorFlow Probability tools can be programmed using Python libraries; their layers and functionalities are summarized as follows:79

• Layer 0: TensorFlow – an open-source core platform to build and deploy machine-learning models, supported by high-level and low-level tools, libraries, plug-ins, and APIs
• Layer 1: Statistical Building Blocks – a collection of probability distributions and related statistical schemes and methods
• Layer 2: Model Building – the ability to join multiple distributions and build neural-network layers for modeling aleatoric dataset uncertainty
• Layer 3: Probabilistic Inference and Reasoning – multiple stochastic methods, optimizers, kernels, and algorithms for approximating variables, computing expectations, and performing random walks
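As a minimal, hedged illustration of the Statistical Building Blocks layer, the snippet below builds a normal distribution with TensorFlow Probability, draws samples, and evaluates their log-probabilities. It assumes the tensorflow and tensorflow-probability packages are installed and uses only their core distributions API; the ‘learner skill’ interpretation of the distribution is purely illustrative.

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A simple probabilistic building block: a normal distribution over, say,
# a learner's latent skill estimate.
skill_prior = tfd.Normal(loc=0.0, scale=1.0)

samples = skill_prior.sample(5)            # draw 5 plausible skill values
log_probs = skill_prior.log_prob(samples)  # how likely each draw is under the prior

print(samples.numpy())
print(log_probs.numpy())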

Even though we will deploy Bayesian (reinforcement) learning and utilize the TensorFlow Probability framework as the machine-learning model in the AI engine of our learning game engine, we must still design the AI engine architecture to be open-ended and agnostic, supporting additional techniques and future frameworks that may prove more effective for certain learning game genres, different types of learning goals and outcomes, and various assessment matrices. Therefore, our new AI engine will contain the following components:

• a GPU Memory Manager to supervise incoming CPU system memory allocation, event-processor memory management, and GPU memory requirements
• an Event Handler/Processor/Classifier to evaluate, aggregate, and pre-classify game data streams
• machine-learning models – in this case, a Bayesian Reinforcement Learning Inference Model
• a knowledge base of stored game states, updated with inferred and reasoned game states
• customized OpenCL to interface with the Event Processor, and CUDA to interface directly with the ML GPUs
• an editor to allow programming and editing of the machine-learning models, event processor, OpenCL, and CUDA
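A highly simplified sketch of how these components might interact each cycle is shown below. Every class name, method, and threshold is invented for illustration; a real implementation would replace the placeholder logic with the event-processing and probabilistic-inference frameworks discussed above.

class SimplifiedAIEngine:
    """Toy control loop: classify events, consult the knowledge base, infer an adjustment."""

    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base  # stored game/player states

    def classify(self, raw_events):
        # Stand-in for the event processor/classifier.
        return {"failed_attempts": sum(1 for e in raw_events if e == "challenge_failed")}

    def infer_adjustment(self, features):
        # Stand-in for the ML inference model (e.g., Bayesian reinforcement learning).
        if features["failed_attempts"] >= 3:
            return {"action": "lower_difficulty"}
        return {"action": "none"}

    def step(self, player_id, raw_events):
        features = self.classify(raw_events)
        self.knowledge_base.setdefault(player_id, []).append(features)
        return self.infer_adjustment(features)


engine = SimplifiedAIEngine(knowledge_base={})
print(engine.step("learner_1", ["challenge_failed"] * 3))  # -> lower_difficulty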

Figure 7.8 provides a simplified block diagram of the new AI engine that will reside in our PLG engine. In this diagram, the GPUs and GPU memory are moved inside the AI engine structure to indicate dedicated AI GPUs rather than GPUs shared with graphics rendering. Additionally, the knowledge base could also be drawn external to the diagram; it is representative of one of many databases or cloud containers that store software dependencies, game statistics, game states, player-learner states, and a multitude of libraries, models, APIs, and plug-ins. Lastly, the AI engine editor is shown inside the AI engine architecture interfacing with the machine-intelligence inference models and the OpenCL API, but it would also interface with the CUDA API and knowledge (data)base if the figure were presented in 3D. Integrated into our PLG Engine, as seen in Figure 7.8, the AI engine resides prominently as the core that manages most game operations in our new cloud-based learning game engine architecture. The PLG Engine design includes our familiar physics engine interconnected to the rendering engine,


Figure 7.8 Simplified AI Engine for the Cloud-Based PLG Engine

and a sound engine, but adds the new AI engine interfacing with the rendering engine and sound engine to manipulate the logic output of those resources to satisfy game play adjustments and adaptations. Excluding the pre-programmed game user interface, pre-loaded game level environments, and character graphics and sounds, all other game play attributes must now take instruction sets from the AI engine based on player-learner actions. The AI engine also directly interfaces with the game asset database, which now includes a game’s learning goals and expected outcomes, along with the cloud knowledge base, to retrieve and learn from historical PEM and game play datasets and to infer new ones. Lastly, in Figure 7.8, we see the AI engine finally assigned its own dedicated, scalable GPU array. Moreover, even though the graphics engine depends upon the AI engine for rendering instructions, rendering shader computations will occur in their own dedicated series of parallel-processing GPU chips in the cloud server. The AI engine is designed as an editable, open-source, semi-stand-alone platform that can be customized to track, measure, prescribe, and predict learner-player actions as needed to match most learning game objectives. Combining the generic commercial game engine architecture and our game streaming diagram, as seen in Figure 7.4 and Figure 7.6, respectively,


Figure 7.9 Simplified Cloud-based PLG Engine Architecture

one can view the united cloud-based PLG Engine, knowledge base, and associated databases, with GPUs used both for parallel algorithmic mathematical computation and for rendering, finally encoding the resultant game video stream to a learner-player’s device. Besides the addition of the AI engine core, the PLG Engine design provides the same familiar developer UI to build or upload traditional game assets, program the familiar initial game logic, and use Python scripting either to program traditional NPC decision trees or to build custom TensorFlow Probability game inference models. A developer could also integrate a game level’s learning goals into the Game Asset data cylinder – labeled as an asset, but critical to a PLG engine game design. Lastly, because it is quite difficult to indicate all the modules and their interconnected complexity in one diagram, the cloud knowledge databases/ML framework cylinder is simply symbolic of multiple sequence and relational storage databases and containers. This book has featured citations from qualitative and quantitative, empirical and anecdotal research results indicating that serious games designed specifically for education – offering a particular pedagogy with associated learning goals for a certain learning population – may be an effective medium for teaching and learning. It has also included results from recent


research studies demonstrating that variations of AI/machine-learning applications may be a most promising medium to adapt course content, customize the learning environment, provide unsupervised interventions, tutor and teach course content, and personalize the overall learning experience for certain learners. Furthermore, the SEAD game’s ongoing case study in Chapter 5 and the DALI design study in Chapter 6 explained that learning games or separate machine-learning models may also provide unique insights into a learner’s knowledge-acquisition process, and that game or machine-learning models may also be able to detect and disclose potential external factors that negatively affect and influence a learner’s academic journey. What this book has not explored, however, is research that examines the efficacy of combining the two – demonstrating that personalizing learning through the power of inferential, predictive machine-learning algorithms may be one of the most effective and scalable teaching methodologies since the advent of the nineteenth-century Lancastrian method. It is this author’s hope that the proposed theoretical cloud-hosted PLG Engine design will effortlessly fuse the development of future learning games and AI/machine-learning capabilities to offer a scalable platform for personalized teaching and learning, and that the PLG Engine will help facilitate the next generation of educational practice and research – results that will not only provide undiscovered insights from all segments and demographics of human learning but also offer more effective technology-enhanced teaching philosophies and methodologies for the twenty-first century. In proposing a new teaching and learning game engine, this chapter has by necessity been quite technically complex. The PLG Engine attempts to assemble all of the technical subsections of this chapter to unveil a new architecture that facilitates the development of truly personalized learning. The concluding chapter will focus more on how one designs a game for such a game engine – what design approaches a professor, teacher, or curriculum designer may take to develop a game that will match or exceed certain course learning outcomes. Building on the review in Chapter 4, the reader will learn in the concluding chapter what genre, mechanics, motivators, or reward systems should be in our new PLG game, and what elements can now be added or are no longer needed. The chapter concludes with a new PLG Engine Game Design Document (GDD) template, including an upgraded sample PLG GDD for the author’s soft-skill Game 489 Pre-Internship Seminar course in Computer Game Design.

Notes
1 Bowling, M., Fürnkranz, J., Graepel, T., & Musick, R. (2006). Machine learning and games. Machine Learning, 63(3), 211–215.

Computer-Intelligent Games 239 2 www.androidcentral.com/stadia 3 Even though they have many more cores than CPUs, all running at fairly fast GHz clock speeds, GPUs can’t be deployed as CPUs because they are designed as single instruction, multiple data (SIMD). Meaning, GPUs are extremely fast at rendering high-resolution 3D graphics tasks or singularly classifying lexical semantics from a book chapter. 4 Chen, Y. H., Emer, J., & Sze, V. (2016). Eyeriss: A spatial architecture for energyefficient dataflow for convolutional neural networks. ACM SIGARCH Computer Architecture News, 44(3), 367–379. 5 Silicon Semiconductor, “Aviation depends on sensors and big data,” November 6, 2017. View in article. 6 Boyd, R. (2017). Implementing reinforcement learning in Unreal Engine 4 with Blueprint. Doctoral dissertation, University Honors College, Middle Tennessee State University. 7 Laird, J., & VanLent, M. (2001). Human-level AI’s killer application: Interactive computer games. AI Magazine, 22(2), 15. https://doi.org/10.1609/aimag.v22i2.1558 8 Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., . . . & Lillicrap, T. (2019). Mastering atari, go, chess and shogi by planning with a learned model. arXiv preprint arXiv:1911.08265. 9 Van Hoorn, N., Togelius, J., Wierstra, D., & Schmidhuber, J. (2009, May). Robust player imitation using multiobjective evolution. In 2009 IEEE congress on evolutionary computation (pp. 652–659). New York: IEEE. 10 https://news.xbox.com/en-us/2014/09/30/games-forza-horizon-2-drivatars/ 11 Isla, D. (2005). GDC 2005 Proceeding: Handling complexity in the Halo 2 AI. In Game Developers Conference. San Francisco, CA. Retrieved from https://www.gamasutra. com/view/feature/130663/gdc_2005_proceeding_handling_.php?page=2; www. pocketgamer.biz/news/68576/supercells-jarno-seppnen-on-how-clash-royale-usesmachine-learning-automate-monetization/ 12 Van Der Heijden, M., Bakkes, S., & Spronck, P. (2008, December). Dynamic formations in real-time strategy games. In 2008 IEEE symposium on computational intelligence and games (pp. 47–54). New York: IEEE. 13 Cowley, B., & Charles, D. (2016). Behavlets: A method for practical player modelling using psychology-based player traits and domain specific features. User Modeling and User-Adapted Interaction, 26(2–3), 257–306. 14 Van Lankveld, G., Spronck, P., Van den Herik, J., & Arntz, A. (2011, August). Games as personality profiling tools. In 2011 IEEE conference on computational intelligence and games (CIG’11) (pp. 197–202). New York: IEEE. 15 Digman, J. M. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41(1), 417–440. 16 Bakkes, S. C., Spronck, P. H., & van Lankveld, G. (2012). Player behavioural modelling for video games. Entertainment Computing, 3(3), 71–79. 17 Revised but based on: ibid. 18 Billings, D., Peña, L., Schaeffer, J., & Szafron, D. (2002). The challenge of poker. Artificial Intelligence, 134(1–2), 201–240, Special Issue on Games, Computers and Artificial Intelligence; Spronck, P., Ponsen, M., Sprinkhuizen-Kuyper, I., & Postma, E. (2006). Adaptive game AI with dynamic scripting. Machine Learning, 63(3), 217– 248; Kocsis, L., & Szepesvári, C. (2006). Universal parameter optimisation in games based on SPSA. Machine Learning, 63(3), 249–286. 19 Schadd, F., Bakkes, S., & Spronck, P. (2007). Opponent modeling in real-time strategy games. In GAMEON (pp. 61–70). London: GameOn Networking Ltd. 20 Zhao, R., & Szafron, D. (2009, October). 
Learning character behaviors using agent modeling in games. In AIIDE. Palo Alto, CA: AAAI Press.

240 Computer-Intelligent Games 21 Woolf, S. D., Ganapathy, M., & O’kelley, P. (2010). U.S. Patent No. 7,849,043. Washington, DC: U.S. Patent and Trademark Office. 22 Risi, S., & Togelius, J. (2019). Procedural content generation: From automatically generating game levels to increasing generality in machine learning. arXiv preprint arXiv:1911.13071. 23 Spronck, P., Sprinkhuizen-Kuyper, I., & Postma, E. (2004, November). Difficulty scaling of game AI. In Proceedings of the 5th international conference on intelligent games and simulation (GAME-on 2004) (pp. 33–37). London: GameOn Networking Ltd. 24 Prensky, M. (2001). Digital game-based learning. New York: McGraw Hill. 25 Muñoz, K., Mc Kevitt, P., Lunney, T., Noguez, J., & Neri, L. (2010, September). PlayPhysics: An emotional games learning environment for teaching physics. In International conference on knowledge science, engineering and management (pp. 400–411). Berlin, Heidelberg: Springer; Su, P. H., Wang, Y. B., Yu, T. H., & Lee, L. S. (2013, May). A dialogue game framework with personalized training using reinforcement learning for computer-assisted language learning. In 2013 IEEE international conference on acoustics, speech and signal processing (pp.  8213–8217). New York: IEEE; Raffert, A., Zaharia, M., & Griffiths, T. (2012). Optimally designing games for cognitive science research. In Proceedings of the annual meeting of the cognitive science society (Vol. 34, No. 34). Seattle, WA: Cognitive Science Society. 26 Sabourin, J. L., Shores, L. R., Mott, B. W., & Lester, J. C. (2013). Understanding and predicting student self-regulated learning strategies in game-based learning environments. International Journal of Artificial Intelligence in Education, 23(1–4), 94–114; Costa, R. M. E. M., Mendonça, I., & Souza, D. S. (2010). Exploring the intelligent agents for controlling user navigation in 3D games for cognitive stimulation. In 8th international conference on disability, virtual reality and associated technologies (Vol. 1, pp.  1–6). Reading: University of Reading; Santos, F. E., Bastos, A. P., Andrade, L. C., Revoredo, K., & Mattos, P. (2011, May). Assessment of ADHD through a computer game: An experiment with a sample of students. In 2011 third international conference on games and virtual worlds for serious applications (pp. 104–111). New York: IEEE. 27 Zhang, L., Wade, J. W., Bian, D., Swanson, A., Warren, Z., & Sarkar, N. (2014, June). Data fusion for difficulty adjustment in an adaptive virtual reality game system for autism intervention. In International conference on human-computer interaction (pp. 648–652). Cham: Springer; Puzenat, D., & Verlut, I. (2010). Behavior analysis through games using artificial neural networks. In Proceedings of 3rd international conference on advance computer-human interactions (pp. 134–138); Syufagi, M. A., Purnomo, M. H., & Hariadi, M. (2012). Tendency of players is trial and error: Case study of cognitive classification in the cognitive skill games. Journal Ilmu Komputer dan Informasi, 5(1), 31. 28 Momose, H., Kaneko, T., & Asai, T. (2020). Systems and circuits for AI chips and their trends. Japanese Journal of Applied Physics, 59(5), 050502. 29 Derbali, L., Chalfoun, P., & Frasson, C. (2011, March). A theoretical and empirical approach in assessing motivational factors: From serious games to an ITS. In Twentyfourth international FLAIRS conference. Palo Alto, CA: AAAI Press; Adnan, M. H. M., & Husain, W. (2012). 
Hybrid approaches using decision tree, naive Bayes, means and euclidean distances for childhood obesity prediction. International Journal of Software Engineering and Its Applications, 6(3), 99–106. 30 Ahn, J.-S., & Lee, W.-H. (2011). Using EEG pattern analysis for implementation of game interface. In Proceedings of IEEE 15th international symposium on consumer electronics (pp. 348–351); Berta, R., Bellotti, F., De Gloria, A., Pranantha, D., & Schatten, C. (2013). Electroencephalogram and physiological signal analysis for assessing flow in games. IEEE Transactions on Computational Intelligence and AI in Games, 5(2),

164–175; Zhang, M., Xu, M., Liu, Y., He, G., Han, L., Lv, P., & Li, Y. (2011, March). The framework and implementation of virtual network marathon. In 2011 IEEE International Symposium on VR Innovation (pp. 161–167). New York: IEEE.
31 Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994 (pp. 157–163). Burlington, MA: Morgan Kaufmann.
32 Safavian, S. R., & Landgrebe, D. (1991). A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 21(3), 660–674.
33 Bellhouse, D. R. (2004). The reverend Thomas Bayes, FRS: A biography to celebrate the tercentenary of his birth. Statistical Science, 19(1), 3–43.
34 Olabenjo, B. (2016). Applying naive Bayes classification to Google Play apps categorization. arXiv preprint arXiv:1608.08574.
35 Salakhutdinov, R., & Hinton, G. (2009, April). Deep Boltzmann machines. In Artificial intelligence and statistics (pp. 448–455). Copenhagen, Denmark: ML Research Press.
36 Dongare, A. D., Kharde, R. R., & Kachare, A. D. (2012). Introduction to artificial neural network. International Journal of Engineering and Innovative Technology (IJEIT), 2(1), 189–194; Geisler, B. (2004). Integrated machine learning for behavior modeling in video games. In Challenges in game artificial intelligence: Papers from the 2004 AAAI workshop (pp. 54–62). Menlo Park: AAAI Press.
37 Lamb, R. L., Annetta, L., Vallett, D. B., & Sadler, T. D. (2014). Cognitive diagnostic like approaches using neural-network analysis of serious educational videogames. Computers & Education, 70, 92–104; Puzenat, D., & Verlut, I. (2010, February). Behavior analysis through games using artificial neural networks. In 2010 third international conference on advances in computer-human interactions (pp. 134–138). New York: IEEE; Rebolledo-Mendez, G., de Freitas, S., & Gaona, A. R. G. (2009, March). A model of motivation based on empathy for AI-driven avatars in virtual worlds. In 2009 conference in games and virtual worlds for serious applications (pp. 5–11). New York: IEEE.
38 Mercier, G., & Lennon, M. (2003, July). Support vector machines for hyperspectral image classification with spectral-based kernels. In IGARSS 2003. Proceedings of 2003 IEEE international geoscience and remote sensing symposium (IEEE Cat. No. 03CH37477) (Vol. 1, pp. 288–290). New York: IEEE.
39 Ibid.
40 Raybourn, E. M., Fabian, N., Tucker, E., & Willis, M. (2010). Beyond game effectiveness part II: A qualitative study of multi-role experiential learning. In Proceedings of the interservice/industry training, simulation and education conference (I/ITSEC). Arlington, VA: NDIA; Berta, R., Bellotti, F., De Gloria, A., Pranantha, D., & Schatten, C. (2013). Electroencephalogram and physiological signal analysis for assessing flow in games. IEEE Transactions on Computational Intelligence and AI in Games, 5(2), 164–175.
41 https://docs.microsoft.com/en-us/archive/msdn-magazine/2014/august/unity-developing-your-first-game-with-unity-and-csharp
42 http://accord-framework.net/docs/html/R_Project_Accord_NET.htm
43 www.shogun-toolbox.org; https://r-nimble.or
44 www.tensorflow.org/api_docs/python/tf/linalg/eight
45 https://github.com/migueldeicaza/TensorFlowSharp
46 https://blogs.unity3d.com/2020/05/12/announcing-ml-agents-unity-package-v1-0/
47 https://blogs.unity3d.com/2019/03/01/unity-ml-agents-toolkit-v0-7-a-leap-towards-cross-platform-inference/
48 https://docs.unrealengine.com/en-US/Engine/ArtificialIntelligence/index.html

242 Computer-Intelligent Games 49 https://github.com/getnamo/tensorflow-ue4 50 https://docs.aws.amazon.com/lumberyard/latest/tutorials/tutorials-python.html 51 Cowan, B., & Kapralos, B. (2017). An overview of serious game engines and frameworks. In Recent advances in technologies for inclusive well-being (pp. 15–38). Cham: Springer. 52 Panourgias, N. S., Nandhakumar, J., & Scarbrough, H. (2014). Entanglements of creative agency and digital technology: A sociomaterial study of computer game development. Technological Forecasting and Social Change, 83, 111–126. 53 www.haroldserrano.com/blog/how-to-develop-a-rendering-engine-an-overview 54 http://ogldev.atspace.co.uk/www/tutorial30/tutorial30.html 55 www.geeks3d.com/20111111/simple-introduction-to-geometry-shaders-glslopengl-tutorial-part1/ 56 https://docs.unity3d.com/Manual/AudioOverview.html; Stevens, R., & Raybould, D. (2015). Game audio implementation: A practical guide using the unreal engine. Boca Raton, FL: CRC Press. 57 https://docs.unity3d.com/Manual/AudioMixerNativeAudioPlugin.html 58 https://openal.org/documentation/openal-1.1-specification.pdf 59 https://fmod.com/resources/documentation-api?version=2.0&page=core-guide. html 60 Stone, J. E., Gohara, D., & Shi, G. (2010). OpenCL: A parallel programming standard for heterogeneous computing systems. Computing in science & engineering, 12(3), 66–73. 61 Ibid. 62 https://opencv.org/anniversary/20/ 63 Kirk, D. (2007, October). NVIDIA CUDA software and GPU parallel computing architecture. In ISMM (Vol. 7, pp. 103–104). New York: Association of Computing Machinery. 64 Rousseau, J.-J. (1762). Émile (1911 ed., p. 6). London: Dent. 65 Singh, K. D., Hadjadj-Aoul, Y., & Rubino, G. (2012, January). Quality of experience estimation for adaptive HTTP/TCP video streaming using H. 264/AVC. In 2012 IEEE consumer communications and networking conference (CCNC) (pp. 127–131). New York: IEEE; Kilkki, K. (2008). Quality of experience in communications ecosystem. Journal of Universal Computer Science, 14(5), 615–624. 66 Cai, W., Zhou, C., Leung, V. C., & Chen, M. (2013, December). A cognitive platform for mobile cloud gaming. In 2013 IEEE 5th international conference on cloud computing technology and science (Vol. 1, pp. 72–79). New York: IEEE. 67 Wang, S., & Dey, S. (2010, December). Rendering adaptation to address communication and computation constraints in cloud mobile gaming. In 2010 IEEE global telecommunications conference (GLOBECOM 2010) (pp. 1–6). New York: IEEE. 68 Illahi, G. K., Gemert, T. V., Siekkinen, M., Masala, E., Oulasvirta, A., & Ylä-Jääski, A. (2020). Cloud gaming with foveated video encoding. ACM transactions on multimedia computing, communications, and applications (TOMM), 16(1), 1–24. 69 Gao, G., Zhang, H., Hu, H., Wen, Y., Cai, J., Luo, C., & Zeng, W. (2018). Optimizing quality of experience for adaptive bitrate streaming via viewer interest inference. IEEE Transactions on Multimedia, 20(12), 3399–3413. 70 Kufa, J., & Kratochvil, T. (2015, April). Comparison of H. 265 and VP9 coding efficiency for full HDTV and ultra HDTV applications. In 2015 25th international conference radioelektronika (RADIOELEKTRONIKA) (pp. 168–171). New York: IEEE. 71 Ateya, A. A., Muthanna, A., Makolkina, M., & Koucheryavy, A. (2018, November). Study of 5G services standardization: Specifications and requirements. In 2018 10th international congress on ultra-modern telecommunications and control systems and workshops (ICUMT) (pp. 1–6). New York: IEEE.

Computer-Intelligent Games 243 72 https://spark.apache.org; https://hadoop.apache.org 73 Singh, S., & Karwayun, R. (2010, April). A comparative study of inference engines. In 2010 seventh international conference on information technology: New generations (pp. 53–57). New York: IEEE. 74 Simina, M., & Kolodner, J. L. (1995, July). Opportunistic reasoning: A design perspective. In Proceedings of the seventeenth annual conference of the cognitive science society (Vol. 17, p. 78). East Sussex: Psychology Press. 75 Biedermann, A., & Taroni, F. (2006). Bayesian networks and probabilistic reasoning about scientific evidence when there is a lack of data. Forensic Science International, 157(2–3), 163–167. 76 Jihan, N. (2019). Re: How does Bayesian inference compare against other machine learning models? Retrieved from www.researchgate.net/post/How_does_Bayesian_inference_ compare_against_other_machine_learning_models/5d0a1ad8d7141b7d8643d972/ citation/download 77 Jihan, N., Jayasinghe, M., & Perera, S. (2019). Streaming stochastic variational Bayes; An improved approach for Bayesian inference with data streams. PeerJ Preprints, 7, e27790v1. Thousand Oaks, CA, USA. 78 www.tensorflow.org/probability/overview 79 Ibid.

8 Personalized Learning Game Design Pedagogy

Just as there is a dearth of research demonstrating the efficacy of personalized learning games combined with machine-learning algorithms, there is also a deficit of literature exploring the pedagogical design effectiveness of personalized AI learning games. There are, however, a great number of published pedagogical theories, research results, and studies to help guide the design of learning games and powerful personalized learning environments. Before we can design an effective and engaging personalized learning game with engrossing, fun game play that leads to successful learning outcomes, let’s first explore common pedagogical design features found in well-designed standard learning games. Now that we have offered an extensive compendium of serious game history, human cognition, the early origins of AI, and serious learning game technologies, we can examine these important learning game design features, principles, and instructional strategies. This should allow a better understanding of the feasibility of a particular theory, technique, or strategy as applied to distinct learning populations in the future. In Chapter 4, we explored which successful commercial game elements should be adopted and integrated into learning games to make them more engaging and fun, but we never deconstructed the nontechnical pedagogical strategies of past successful generic learning games to identify key approaches, design principles, or frameworks. This knowledge will be imperative to designing successful future personalized learning games that can offer the multiple instructional strategies required for many learning populations. Let’s first revisit a popular description of the many features found in any entertainment game.


By examining the commonalities and differences of past attempts by scholars to define what a game is, Jesper Juul, the renowned Danish game philosopher and ludologist, defined a ‘game’ as:

a rule-based formal system with a variable and quantifiable outcome, where different outcomes are assigned different values, the player exerts effort in order to influence the outcome, the player feels attached to the outcome, and the consequences of the activity are optional and negotiable.1

This is a fairly all-embracing definition whose pieces we can appropriate and expand upon to redefine the features that we may use in an ideal personalized learning game. Dissecting the first part of Juul’s definition, all games need rules, policies, and procedures, as has the historical practice of successful teaching and learning. The practice of structured education has served many functions within societies through the ages, and they were not necessarily altruistic ones. The origins and purpose of education within the Western Hemisphere may be examined from varying viewpoints suggesting that education served to reinforce links between religious and political universalisms. Education shaped superior workers, better-trained soldiers, and molded contributing citizens. The rules established in education shaped learners to comprehend the formal values and ritualistic rules of society, and taught learners about their relationship to authority, be it divine, political, legal, or economic.2 In a personalized learning game pedagogical framework, however, game rules may be more informal and dynamically scaled to accommodate a learner-player’s skill level in meeting game subject-matter learning goals. Therefore, even with informal structure and accommodating, variable rules, personalized learning games still require a quantifiable outcome, even if the game’s learning goals themselves may be adjusted or varied during game play. Furthermore, in order to continually motivate and engage a learner-player, personalized learning games may assign a variable value to the game’s currency, such as increasing or decreasing the points or coins earned to win challenges or quests. In education or learning games, the learning outcomes may have dual purposes, such as assigning the writing of a resume, where the learner completes the outcome but also has a document to use when applying for future employment. In a learning game, rewarding a learner-player with a high score for maneuvering their character to avoid major damage and reach a game checkpoint at the top of a crest, while simultaneously accomplishing a subject-matter assignment of measuring the arc of a circle’s circumference, would be a learning goal.


Personalized learning games also may use a more complex value matrix, awarding a learner-player one score for maneuvering their character to avoid major damage and for reaching a game checkpoint crest, but may or may not disclose a ‘grade’ score for finishing a subject-matter learning goal such as measuring a circle’s circumference. Learning outcome scores may be better determined by a learner-player’s past and current performances together in the same game, or may even include past performances from other related games in the same academic subject-matter series. Moreover, a personalized learning game may use a clock/timer, providing a certain amount of time to complete a challenge or game level for one set of learners, while the same challenge would be un-clocked for a different learner population. As we read in Chapter 1, learning requires the suspension of cognitive resistance, which is not always possible for some learners due to inherent states of anxiety, frustration, and even fear that may lead to an unconscious determination to actually fail or give up.3 Unfortunately, if games are not designed to accommodate these sometimes-common psychological states brought on by certain learning conditions, the game’s intent will also fail. Even un-clocked, simple puzzle games may cause inherent anxiety, stress, and frustration. Well-designed personalized learning game pedagogy must dynamically adjust not only the game play challenge levels and anticipated learning outcomes but also, potentially, the timeframe needed to beat a quest or level. This may be considered an instructional strategy used to personalize player effort: dynamically adjusting a game challenge or quest so that it isn’t too demanding or too easy, keeping a learner-player focused and engaged in a game-embedded learning task – manipulating the pedagogical ‘flow’ of a game in order to provide both a fun, challenging gaming experience and earned self-efficacy when a learning outcome is completed. Also derived from Juul’s definition, learning game design pedagogy must allow a player-learner to own, or become emotionally attached to, a game’s outcomes or achievements. Emotions and learning success have proven to be coupled during the knowledge-acquisition process and to sway confidence and motivation. Emotions may also define aspects of learner-player engagement, as learners may be more interested in subject-matter content they feel ‘good’ about – perhaps because they successfully completed a previous related task or surpassed their own self-assessed abilities by completing a higher-ranked academic game challenge. Likewise, a learner-player may feel emotionally and socially deflated if anxiety, fear, and nervousness become barriers to successful academic engagement.4 Lastly, Juul’s negotiable consequences refer to external factors. For example, the time required to play a game may be borrowed from that of other real-life tasks, or the game play itself may negatively or positively impact external social experiences or even the professional workplace. This feature may be reconstituted in personalized learning game design as the inclusion of larger external objectives within a personalized game’s pedagogical


framework. If a learner-player understands the role that each game's learning outcomes perform in preparation for an external professional journey, then their motivation, perception of control over the learning process, and self-satisfaction may increase, as may the appeal of new subject matter.

Akin to entertainment games, learning games also include multimedia (animation, images, sound, and text), but design pedagogy must strike a sensitive balance between these media channels to ensure that associated learning goals and anticipated outcomes are not obscured and undermined. Although the same sensitive balance must be maintained between different media channels in personalized learning games, certain game challenges tied to associated learning outcomes for special learners may require a pedagogical overlap or a density alteration of media channels. Although not designed for personalized learning games per se, or learning games in general, Richard Mayer and Roxana Moreno's Multimedia Learning (2002) design principles are generally applicable to multimedia- and multisensory-demanding learning content:5





• Multimedia Principle: Students learn better from words and pictures than from words alone.
• Spatial Contiguity Principle: Students learn better when corresponding words and pictures are presented near rather than far from each other on a screen or page.
• Temporal Contiguity Principle: Students learn better when corresponding words and pictures are presented simultaneously rather than successively.
• Coherence Principle: Students learn better when extraneous material is excluded rather than included.
• Modality Principle: Students learn better from animation and recorded narration than from animation and on-screen text.
• Redundancy Principle: Students learn better from animation and narration than from animation, narration, and on-screen text.
• Pretraining Principle: Students learn better when training precedes rather than follows a message, meaning that when students receive pretraining or advance information about a subject, they are able to build mental models of expectations, thereby filtering the cognitive load when postulating a potential solution.
• Signaling Principle: Students learn better when training is signaled rather than non-signaled, meaning that when learners are aware of what expectations are desired within a learning experience, they are better able to allocate cognitive resources appropriately and reduce cognitive resistance.
• Personalization Principle: Students learn better when words are presented in conversational style rather than formal style, meaning that when words are presented by a computer in a conversational manner, students are more likely to accept the computer as a social partner, and therefore try harder to comprehend the given messages.


Although these are good general multisensory principles to remember when creating personalized learning games, design pedagogy must also consider that a learner's cognitive capacity is fluid and influenced by fluctuating levels of motivation and emotional states potentially caused by artificial stimulants, illness, diet, or even lack of sleep. Hence, a learner may perform differently on the same learning game quest from one day to the next, and resultant learning outcomes may be quite dissimilar. These cognitive capacity oscillations may require the personalized game design to adjust knowledge transfer rates – to make greater periodic personalization modifications to match a learner's metacognitive state on any given day.6

Another organic aspect of a well-designed learning game is the 'active' learning nature of the medium itself. Games, of course, require interactivity to play: learner-players interact with game environment objects, NPCs, or other players, which stimulates multiple channels of our metacognitive processing and improves overall knowledge acquisition. Curriculum designers embarking on personalized learning should also take note that active learning reinforces the psychological concepts of embodied and situated cognition. Bodies move around and interact with their environment, and, therefore, the brain as the controlling instrument of the body must be influenced by the body's movements and interactivity within its environment – the body's situated environment. A player perceives their game character's body on screen as their own, manipulated through the 'bridge' of a game controller by neural impulses. A learner's metacognitive activity does not occur solely in the mind but is shaped by the body's relationship to the environment around it; cognitive activity, and therefore cognitive capacity, is formed through both the mind and the body's situated environment.7

Revised by this author from the National Academies of Sciences, Engineering, and Medicine's How People Learn II: Learners, Contexts, and Cultures (2018), the following framework provides general guidelines for including active learning design principles in personalized learning games:8



• Design Interactivity: for every action from a player-learner in a personalized learning game, there must be an immediate response, indicated visually or aurally or both (a brief sketch of this and the following guideline appears after the list).
• Knowledge Domain Feedback: successful personalized learning games must provide active learning performance updates – cues, signals, and status on both game states/embedded learning outcomes throughout a game.
• Content Choice: personalized learning games must provide choices of game/content paths with dynamic temporal benchmarks to maintain active learning, and to match player-learner cognitive capacity to continually motivate and engage.





• Alternative Representations: personalized learning games should offer access to different content perspectives, game play approaches, and various methods for solutions; offering more than one approach allows for a learner-player's fluctuating cognitive capacity.
• Instructor Interactivity: some personalized learning games may require communication channels for real-time learner-player-to-teacher interactivity if a non-game-embedded intervention becomes necessary.
• Adaptability: of course, personalized learning game environments need to dynamically adapt and personalize domain content to match a learner's cognitive capacity, interests, and performance and motivation levels.
• Nonlinear Content Access: personalized learning games should allow learner-players to access non-sequential or nonlinear game/domain content. This active learning feature offers the learner-player the perception of control over the learning environment and facilitates learning self-regulation and intelligent adaptivity.
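As a deliberately simplified illustration of the Design Interactivity and Knowledge Domain Feedback guidelines above, the sketch below pairs every player action with an immediate audiovisual cue and a refreshed learning-outcome status. The event names, outcome identifiers, and progress increments are hypothetical, not a prescribed implementation.

```python
# A simplified feedback loop implementing the 'Design Interactivity' and
# 'Knowledge Domain Feedback' guidelines: every player action triggers an
# immediate response, and domain-relevant actions also update the embedded
# learning-outcome status shown on the HUD. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class LearningState:
    outcomes: dict = field(default_factory=dict)    # outcome id -> progress 0.0-1.0

    def update(self, outcome_id: str, delta: float) -> None:
        current = self.outcomes.get(outcome_id, 0.0)
        self.outcomes[outcome_id] = min(1.0, current + delta)

def handle_player_action(action: str, state: LearningState) -> list:
    """Return the immediate feedback cues to render for one player action."""
    cues = ["play_sound:confirm", f"flash_ui:{action}"]          # immediate audiovisual response
    if action == "submit_measurement":                           # domain-relevant action
        state.update("measure_circumference", 0.25)
        progress = state.outcomes["measure_circumference"]
        cues.append(f"hud_status:measure_circumference={progress:.0%}")  # domain feedback
    return cues

state = LearningState()
print(handle_player_action("submit_measurement", state))
```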

Richard Mayer and Roxana Moreno's media channel management framework and the preceding active learning design guidelines represent excellent principles to follow when designing personalized learning games. Indeed, a PLG Engine, as outlined in Chapter 7, was designed to theoretically offer an authoring environment inclusive of such learning game principles, depending on the instructional strategies required and the complexity of the domain content. The more complex the knowledge domain, the greater the importance of properly applied instructional strategies for successful overall pedagogical design.

Instructional and Pedagogical Strategies for Personalized Game Design

Generally, the first step prior to planning and designing an instructional strategy for any learning experience is to define the generic course planning modules seen in Table 8.1. In a post-secondary setting, the modules may be co-developed by a professor and a course designer or curriculum designer to ensure the learning content properly integrates into an overall degree matriculation plan. Adding faculty contact information, required and optional textbooks, attendance policy, and honor code to these modules composes the standard components of a post-secondary course syllabus. When these modules have been completed, the course syllabus learning goals should then inform what instructional methodologies must be employed to ensure that learning outcomes are met. Let's first apply the preceding planning modules template to a real-world course and then explore potential instructional strategies to accomplish our learning outcomes.

Table 8.1 Generic Course Planning Modules

(1) Domain Subject: course, subtopic, exercise, quiz, etc.
(2) Learning Goals: what learners should know or be able to do at the end of a learning experience.
(3) Teaching Methodology: general instructional pedagogy, philosophy, and learning management.
(4) Delivery Medium: how the domain subject is going to be provided to a learner.
(5) Assessment Matrix: how students are to be evaluated and learning goals are to be assessed.
(6) Learning Outcomes: what the students actually learned and what they are able to do once the learning experience has concluded.

The author's current Game 489, Pre-Internship Seminar, a required undergraduate senior-level soft-skills development course taught in the Computer Game Design Programs at George Mason University, is used here as a course planning module example:

Game 489, Pre-Internship Seminar Course Description: This course will teach, demonstrate, and instill workplace professionalism, and will assist and guide students in preparing for the required internship in the Computer Game Design Program and for the job application process. The course will further teach and guide students to create a professional resume, biography, and portfolio; cultivate a professional demeanor, attitude, and knowledge of proper attire; and adopt professional communication skills.

Game 489, Pre-Internship Seminar Course Planning Modules:

a) Domain Subject: Four units of instruction will be taught, including self-reflection, personal vision and mission, and personal SWOT analysis; how to develop portfolios, resumes, and biographies; preparing for phone and in-person interviews; and the final public presentation.
b) Learning Goals: Students will use approaches and strategies learned in and out of class to build effective and professional resumes and portfolios, learn professional demeanor and appropriate dress, and practice professional phone and in-person job interview skills. Students will also learn how to successfully negotiate job offers, maintain employment, climb the industry ladder, and overall advance in their profession. Students will produce a professional resume and portfolio ready for submission to potential


intern sites and employers, and will possess the professional communication skills, attire knowledge, and teamwork approaches needed to succeed in the workplace.
c) Teaching Methodology: Currently mixed lecture, inquiry-based, peer-group, and (manual) personalized instruction.
d) Delivery Medium: Currently hybrid asynchronous and synchronous online.
e) Assessment Matrix: Based on unit assignments (25%), virtual class attendance (25%), participation (25%), and the final presentation and project (25%). To receive a grade of 'A,' a student must achieve a minimum average grade of 90% on the course work requirements; for a 'B,' a minimum average of 80%; for a 'C,' a minimum average of 70%; and for a 'D,' a minimum average of 60%. An average below 60% will result in a grade of 'F.'
f) Learning Outcomes: Students will have an online portfolio that reflects their skills, knowledge, talents, and professional interests, as well as a professional resume and biography. Students will know how to dress professionally, write and speak in a manner that is common in a professional workplace, and be prepared for future phone and in-person job interviews. Lastly, students will know how to prepare for and deliver public presentations.

Instructional strategies should be developed in relation to the planned learning environment. For instance, the delivery method of the author's course example assumes the use of one or more online platforms, so instructional strategies may be somewhat confined to the capabilities and features available on those platforms. Still, many strategies can be implemented. Case in point: instructional strategies such as gaining the attention of learners, to secure their interest and focus their energies on upcoming tasks, and informing learners of objectives after their attention is gained, to outline the purpose of instruction, could be accomplished by employing the platform's synchronous teaching option. During this synchronous learning experience, the author as instructor is required to instill and link the value and importance of the course's goals and objectives to a learner's future professional success. Furthermore, it is critical for an instructor at the onset of a new learning task to stimulate recall of relevant prior acquired knowledge to maintain interest and motivation. Again, using


our course example, this strategy may be achieved by providing linkages to previous curricula team project outcomes, past project presentation criticisms, informal and formal portfolio critiques, and previous task assessment results. A few additional instructional strategies to consider, first proposed by Robert M. Gagné and revised by the author, include the following:9








• Presentation of the Course Content: use a variety of media channels to keep content engaging, but follow Richard Mayer and Roxana Moreno's Multimedia Learning framework so as not to overload learner cognitive capacity, and include the personalized active learning guidelines.
• Provide Learner Guidance and Support: create and embed two-way communication channels within the course plan (or platform) to provide learner guidance or advice to help navigate the course content, and personalized interventions when needed.
• Offer Learner Practice Opportunities: provide learners with opportunities for alternative but relevant practice of course content tasks, perhaps even interactive tutorials prior to and during each assigned task. Embed two-way communication channels to assist with and augment learner understanding.
• Provide Temporal-Sensitive Feedback and Comments: provide useful, personalized, and immediate feedback if a learner makes an egregious error or mistake that indicates they are lost or confused about an assigned task. Provide equitable, fair, and personalized corrective guidance to correct the mistakes.
• Assess and Evaluate During the Learning Process: provide formative assessment, such as non-graded discussion board and journal assignment assessments, as well as the expected end-of-learning-task summative assessment.
• Enhance Engagement, Motivation, and Retention: by using stories, allegories, and analogies, tie past and current knowledge to future tasks and how they relate to career goals and life experiences.

The preceding media channel design management framework, active learning design guidelines, and instructional strategies are applicable to either traditional on-site or online teaching and learning, but let us now combine some of these pedagogical strategies, frameworks, and guidelines to create a learning game design process. The first step is to generically combine and redefine the discussed course planning modules and applicable instructional strategies into a single learning game design process. This process should outline how the game content is presented to the learner-player, and then the application of associated instructional strategies aligned to the course module planning scheme. Figure 8.1 offers such an outline of this process, identifying the learner-player experience and applicable instructional strategies on the left of the diagram, and the generic course planning module steps on the right.


Figure 8.1 Diagram of a Learning Game Design Process Using Traditional Course Planning Modules and Applicable Instructional Strategies
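To suggest how the course planning modules and instructional strategies might feed a design process like the one diagrammed in Figure 8.1, here is a minimal, illustrative sketch that captures them as structured data. The field names and abbreviated values are assumptions made for illustration; they are not a prescribed schema.

```python
# An illustrative data structure for the generic course planning modules in
# Table 8.1, populated with abbreviated values from the Pre-Internship Seminar
# example. A learning game design process could consume a record like this;
# the field names are assumptions, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class CoursePlan:
    domain_subject: str
    learning_goals: list
    teaching_methodology: str
    delivery_medium: str
    assessment_matrix: dict                  # assessment component -> weight
    learning_outcomes: list
    instructional_strategies: list = field(default_factory=list)

pre_internship = CoursePlan(
    domain_subject="Game 489, Pre-Internship Seminar",
    learning_goals=["Build a professional resume and portfolio",
                    "Practice phone and in-person interview skills"],
    teaching_methodology="Mixed lecture, inquiry-based, peer-group, personalized",
    delivery_medium="Hybrid asynchronous and synchronous online",
    assessment_matrix={"unit assignments": 0.25, "attendance": 0.25,
                       "participation": 0.25, "final presentation and project": 0.25},
    learning_outcomes=["Online portfolio", "Professional resume and biography"],
    instructional_strategies=["Gain attention", "Inform learners of objectives",
                              "Stimulate recall of prior knowledge"],
)
print(pre_internship.domain_subject, "->", list(pre_internship.assessment_matrix))
```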

Personalized learning game pedagogical strategies require an extra level of combined dynamic game play/learning goals integrated into the game design matrix. Because personalized learning games are essentially assumed to be played by a learner without invasive instructional oversight, aside from timely machine-intelligence model feedback and corrective interventions, design pedagogy options must rely on strategies positioned within successful autodidacticism. Well-designed learning games are ideal mediums for transforming extrinsic motivation (EM), commonly understood as behavior driven by external incentives such as monetary rewards, fame, awards, and recognition, into intrinsic motivation (IM). However, EM may be flawed as a true measurement of pure motivation, as an athlete may run just fast enough to surpass her competitors to win a race but not exert herself and run at her top speed. Similarly, a learner may invest the minimal effort to complete a homework assignment, undermining the learning process once a short-term objective is achieved.10 As stated throughout this book, games may increase motivation and engagement. More specifically, structural


elements or mechanics found in games may be great intrinsic motivators of critical human behaviors found in overall successful learning experiences, such as inquisitiveness, focus, resolve, and determination.11 IM may generate continuing pleasure, joy, fun, and overall self-satisfaction from participating in and successfully completing a task – emotional and psychological states all great instructors try to cultivate. If a learning game can sustain productive emotional and psychological IM states over time, then self-regulation, self-efficacy, and self-sufficiency may be reached.12 Consistent self-efficacy that leads to self-sufficiency may produce deeper comprehension of subject matter, which then may lead to (long-term memory) knowledge acquisition and greater confidence in tackling future learning tasks.13

Because learning games offer environments where thoughts and notions are shaped, re-emphasized, and reshaped through various learning experiences, and therefore may create deeper understanding and new knowledge, they are often considered a prime example of experiential learning.14 Parallel to the common design of games, experiential learning includes the design of learner/player-centric, hands-on active learning pedagogy that fosters an educational process encouraging the repetition that may improve aptitude and lead to IM.15 Learning games, or the gamification of learning content, may offer the ideal environment in which to apply experiential learning pedagogy, as the organic nature of the medium requires proactive interactivity from a player to engage the content. In order to personalize an experiential learning pedagogy that may lead to IM, self-efficacy, and self-sufficient learning, personalized game design would need to include a dynamic game/learning outcome content generation prediction engine, or an inference capability based on past game play, to ensure that future self-satisfying game play is maintained. Furthermore, personalized learning games – designed to accommodate the learning style, behavior traits, and influential external factors at any point in time for a single learner – must also include a dynamic self-learning pedagogy construct that can serve the appropriate methodology during a learning experience to foster positive learner behavior traits that deepen the process of knowledge acquisition. Examples of personalized learning game pedagogical methodologies that may be integrated into just one personalized learning game include the following:

• Experiential
• Extrinsic Motivational
• Intrinsic Motivational
• Self-Efficacy
• Self-Sufficient
• Self-Directed
• Problem-Solving


Figure 8.2 Four Modules of a Personalized Learning Game Planning Rubric

These pedagogical methodologies and associated instructional strategies fill several vessels of a personalized learning game design rubric, the others being a classification and machine-learning model(s), course planning module content, and eventually an actual game design document. Figure 8.2 offers a condensed diagram of the four modules of a game design planning rubric. Notice how the learning content informs the pedagogical methods, which then drive the personalized interventions, which in turn steer the game play experience. This design philosophy ensures both that learning goals are met through game play and that the game play and learning goals may be personalized for most any learner population.
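The following sketch is one hypothetical way to express the rubric's flow in code: a learner profile informs the pedagogical methodology, which selects a personalized intervention that steers game play. The selection rules and profile fields are invented for illustration and are not the book's model.

```python
# A hypothetical expression of the planning-rubric flow: the learner profile and
# learning content inform the pedagogical methodology, which drives a
# personalized intervention that steers game play. Rules and fields are invented
# for illustration only.

def choose_methodology(learner_profile: dict) -> str:
    """Pick one of the pedagogical methodologies listed above from the learner's state."""
    if learner_profile.get("self_efficacy", 0.5) < 0.3:
        return "extrinsic-motivational"     # lean on rewards until confidence builds
    if learner_profile.get("prior_mastery", 0.0) > 0.7:
        return "self-directed"              # learner can drive their own path
    return "experiential"                   # default hands-on methodology

def choose_intervention(methodology: str) -> str:
    """Map the chosen methodology to a game-side personalization intervention."""
    return {
        "extrinsic-motivational": "increase coin rewards and add encouragement cues",
        "self-directed": "unlock nonlinear access to remaining quests",
        "experiential": "offer a practice sandbox before the graded quest",
    }[methodology]

profile = {"self_efficacy": 0.2, "prior_mastery": 0.4}
method = choose_methodology(profile)
print(method, "->", choose_intervention(method))
```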

Personalized Learning Game Planning

Using an adaptation of Atsusi Hirumi and Christopher Stapleton's game design phase schema, we now transform the author's current online Pre-Internship Seminar course syllabus into a personalized learning game plan by breaking down each learning goal into game-based instructional design tasks and machine-intelligent-driven intervention strategies.16 The first phase is the pedagogical and instructional design analysis phase and initial game preproduction phase, where the learning goals are identified and integrated into the game design planning process. Pedagogical analysis includes


identifying the targeted learning population for the game: their demographics, prior academic achievement levels, prior aptitude and motivation levels, behavioral traits, and social and psychological metrics, all perhaps derived from a personal learning map. Additionally, an AI architect and software engineer would work alongside both a curriculum designer or course planner and a game designer to determine which machine-intelligent frameworks and classifiers to use, which techniques to deploy, and which algorithms to develop that may offer the greatest personalization intervention accuracy. The first phase usually includes a discussion about final delivery methods and the game play platform, but since this personalized learning game will be created using a cloud-hosted PLG Engine and delivered from a cloud server, this step may be skipped.

The second phase includes pedagogical planning to determine learning content sequences and objectives, the design of tasks and sub-task exercises, instructional strategies, and assessment methods. Phase two game preproduction includes settling on game genre, story type, navigation schemes, game mechanics, quests, levels, and the corresponding scoring/assessment method; creating game sounds and music; and storyboarding the UI, game concept art, environmental art, and the overall game flowchart. Phase two also requires the AI architect and software engineer to design the backend schema and data flowchart: essentially, how the machine-intelligence algorithms interface with game user logs, asset databases, the rendering engine, the sound engine, and APIs; the overall technical data flowchart; and how the models intervene to personalize game play and adjust learning goals. With the instructional designer, AI architect, software engineer, and game designer working together, this phase should also result in a game and integrated technical design document.

Actual personalized learning game development occurs in phase three, when generative prototypes of UIs, game levels, machine-intelligent-driven game play, learning goal interventions, sound scores, and NPC functionality are developed and tested, and eventually take the shape of an alpha build of the game. Although an alpha game build should be fully playable, it generally contains numerous bugs and issues – the art may not be completely finished, collision physics may be broken, and, depending on the quality of the data and the amount of training time, the machine-intelligent algorithm's interventions may be incorrectly applied and/or inaccurate. Furthermore, embedded instructional strategies may not be appropriately applied, and assessment outcomes may be incorrect. These are just a few issues that the game team may discover playing through an alpha build, but small group testing with the game's target population should still be undertaken in this phase. Without access to a personalized learning map (PLM) to ascertain a learner's knowledge base, pre-tests should


be administered to assess each learner's game play experience, skill level, pre-play subject-matter knowledge and achievement level, and history of needed support services. Formative evaluations, to gauge instructional effectiveness, feasibility, and efficacy, as well as game play interest, appeal, engagement level, and fun, should be conducted across multiple sessions with varying groups of target learners. Types of formative evaluation should include written solicitation with open comment sections as well as individual and group interviews. Post-play analysis should evaluate learner-player achievement scores and machine-intelligent-driven intervention effectiveness against learning outcomes. With machine-intelligent algorithms in place, we may also conduct a formative assessment by modeling alpha player experience (PEM) to gather greater insight into a learner-player's in-game behavior traits, tactics, and strategies.

The final phase comprises all the changes, repairs, and revisions made through post-testing feedback to produce a beta version of the Pre-Internship Seminar personalized learning game. This is also the phase in which the final instructional summative assessment is planned and conducted, and game post-production commences. Game assets are polished, final UI adjustments are made, navigation tweaks occur, sound effects and music scores are mixed, and network connections are optimized. The game's machine-intelligent engine and dependencies are further refined, memory management is adjusted, and GPU performance is tweaked. It is not uncommon to submit this version to field testing that still targets the identified learner populations, but without the development team's oversight of the written reviews or in-person interviews. With a larger testing population and less evaluation oversight of our personalized learning game's performance, summative analysis results may provide greater and more honest insights into the game's pedagogical effectiveness, amusement level, and efficacy. Moreover, field testing may definitively answer whether the personalized learning game performs as intended with no interaction from the game design team or from instructors.

Using the author's Pre-Internship Seminar course content as a planning template, Figure 8.3 provides a summary of the four phases just described. The Appendix to this chapter provides the author's Pre-Internship Seminar course transcribed into a personalized learning game design document. The template of the document is derived from Mark Baldwin's Game Design Document Outline (2005), but has been updated to include personalized learning strategies achieved through the machine-intelligence algorithms.17
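As an illustration of the kind of telemetry an alpha build might log to support player experience modeling (PEM) and formative evaluation, consider the minimal sketch below. The event schema is an assumption made for illustration; an actual build would follow its own logging specification and privacy protocols.

```python
# An illustrative telemetry log for an alpha build: timestamped game-play events
# that player experience modeling (PEM) and formative evaluation could later
# aggregate per learner. The event schema is an assumption for illustration.

import json
import time

def log_event(log: list, learner_id: str, event: str, **details) -> None:
    """Append one timestamped game-play event to an in-memory telemetry log."""
    log.append({
        "timestamp": time.time(),
        "learner_id": learner_id,
        "event": event,
        "details": details,
    })

telemetry = []
log_event(telemetry, "learner-042", "quest_started", quest="resume_draft")
log_event(telemetry, "learner-042", "hint_requested", quest="resume_draft", count=2)
log_event(telemetry, "learner-042", "quest_completed", quest="resume_draft",
          attempts=3, duration_seconds=640, score=0.8)

# Logs like these can be aggregated to estimate engagement, struggle points,
# and the effectiveness of machine-intelligent interventions.
print(json.dumps(telemetry[-1], indent=2))
```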


Figure 8.3 Personalized Learning Game Planning Phases

Epilogue: The Novel Education Paradigm

For the past hundred years in the United States, university curricula, degree matriculation plans, and admission standards have been created and shaped not only by faculty, department chairs, deans, and central administrations but also by external public accrediting bodies. In many cases, the deliberation over what courses and credits to include in a degree, what learning outcomes a learner should gain upon successful completion of a course, and what profession a degree may prepare a graduate to enter has been debated and scrutinized by all these stakeholders in a semi-public square. The public K–12 education curriculum approval process has been even more democratic, whereby most decisions about course content, curriculum, education standards, and policies are made by publicly elected local school boards that generally hold open meetings on such topics. Even though standards of learning, textbooks, and learner assessment models are decided at the state level, local school systems, their individual school principals, and teachers themselves all hold a certain level of autonomy to skip or add certain course content or change teaching pedagogies. Local schools also hold stakeholder open houses and public meetings to solicit critiques of their


curriculum, courses, pedagogy, or even chosen auxiliary textbooks. This regionalization of our public education systems, and to a certain extent of public and private higher education, is inherently American in nature, as the open consent process not only helps foster community values and standards but also underlines the principles of government transparency and citizen participation.18

Of course, not all public debate about university or public school administrative or faculty decisions and choices has been without controversy. Even though, at the post-secondary level, learners who dislike the views of a professor or an assigned textbook can simply withdraw early enough from non-required courses without paying a penalty, petitions and protests on university campuses are not uncommon. Besides student activism and demonstrations about social, racial, and economic inequities, there have been student protests about everything from faculty views to assigned textbook content and costs.19 In public K–12 education, the textbook wars have been raging since the Kanawha County, West Virginia, textbook controversy in 1974.20 Still, most conflicts between public stakeholders and school staff have resulted in healthy compromises that have had a prevailing positive impact on sustaining community bonds around local and regional schools.

Personalized learning platforms, especially technical ones such as those put forward in this book, are commonly antithetical to the principles of transparency preferred by the education community's public square. It is not just the apprehension about the potentially meticulous data gathered from learners through such systems, or trepidation about the privacy and security of those datasets, but also concern about the imperceptible architecture of personalized learning systems that choose what and how to teach. Additionally, as we read in Chapter 2, even the U.S. Army in the 1970s was fearful of disclosing to the public the value of personalized learning games that taught battle tactics and strategy, let alone of using such games to teach courses in a university setting or in schools. Finally, as was also discussed in Chapters 2 and 6, even though machine-learning algorithms have become ubiquitous in our lives (setting our credit ratings, interest rates, and life insurance policies, and even predicting what we may want to buy), the black box hidden layers of these self-learning algorithms and the potential biases ingested from under-scrutinized training datasets have raised alarms in educational circles.21

None of these issues, singularly or combined, will be easily overcome to smooth the adoption of personalized learning games in the academy or in school systems unless there is a wholesale shift in the relationships between education technology vendors and university clients. A change in the overall technology acquisition process is needed. Currently, most university and school system CTOs or CIOs and their information technology staffs make the decision to fund and adopt campus-wide technology, sometimes with


stakeholder input, but rarely after a formal needs assessment, pilot efficacy study, or training budget analysis is performed. Under strong recommendations from department chairs and faculty, schools and colleges also typically purchase discipline-specific support technology, occasionally with student participation, but seldom after a structured needs assessment, efficacy study, or cost analysis has been completed. Even more egregious, EdTech vendors often pitch software solutions to universities and school systems, touting the effectiveness of understudied in-the-field applications and tools, with limited or no data demonstrating positive impact on learner populations. Vendors tend to display a long list of clients on their websites rather than cite research studies involving their products. Even though most software procured by central administrative offices or subunit departments, such as learning management systems, is intended to support academic agency and does not cross over into actual instruction, most educational institutions have learned that independent analysis and study of effectiveness and efficacy should still be conducted prior to adoption.

Since most software solutions and tools procured by secondary and post-secondary systems are for purposes other than direct instruction, and since that procurement process is at least partially open for stakeholder feedback, any computer software that actually impedes, intrudes upon, augments, or replaces the act of academic instruction will require even greater transparency. In order to prepare all the constituents of a learning environment for such a potentially disruptive modification, it is critical to follow the formative small group and larger field-testing recommendations outlined in the preceding Personalized Learning Game Planning section – but first targeted toward specific learner populations.

In 2019, special learner populations reached 14% of the total public school population and 19% in post-secondary education.22 Although public universities and public school systems are mandated by state and federal law to accommodate most special learners, the funding levels available, mostly covered by the state, rarely cover the teaching specialist and support staff costs required by many special learners. Likewise, as many of these special learners access education opportunities online, academic support, teaching specialists, and support structures are seldom adequate and mostly absent for the online-only special learner.23 In order to convince and satisfy all the stakeholders that may contribute to the decision to adopt personalized learning games within on-site and online learning spaces, data from small group formative evaluations need to be collected, including written solicitation as well as the results of individual and group special learner interviews. Post-play summative analyses should evaluate learner-player achievement scores and machine-intelligent-driven intervention effectiveness by comparing current teaching pedagogy and learning outcomes to those from a personalized


learning game. Furthermore, a separate quantitative empirical field study or two should measure the current methods of instruction, manual interventions, and associated learning outcomes against those autonomously produced by a personalized learning game played by a special learner sample population with commonly identified cognitive parameters. With proof-of-efficacy data for a special learner population sample in hand, it is imperative to open the personalized learning game architecture, pedagogy choices, intervention methodologies, and learner performance metrics to academic and community stakeholders. As with the adoption of innovative textbooks or pedagogical frameworks, the greater the transparency, the less the potential resistance from immediate stakeholders. When pitching any new technology, lecture room tool, or nascent teaching method, uniform assimilation into the current instructional framework must be evident, and the new system or method must not create disruption or add to faculty labor. Training must be kept to a minimum, and the teaching and learning vernacular must be familiar to both faculty and administration. For all studies conducted, Institutional Review Board (IRB) approval must be secured from the appropriate body, and security and privacy protocols and rules for logins, learning datasets, and personal information must also align and integrate with standing institutional, state, and federal FERPA and COPPA laws and policies.

With education researchers, faculty, and administration allied with the concept of deploying personalized learning games, efficacy studies should be conducted with the largest learner population: the continuing education/adult learner population. Nearly 37 million American adults currently have some college credits but no degree or certification, and over 40% of these learners are nontraditional first-time learners who come from historically underserved populations. This number is projected to rise and even surpass that of traditional learners by 2024.24 As nontraditional first-generation learners and historically underserved populations enter post-secondary community colleges and universities in greater numbers, and increasingly through online classes, cloud-based personalized learning games may help meet this rising need to scale and personalize effective instruction, while adding additional levels of engagement and self-satisfaction to the learning process.25

Finally, personalized learning game efficacy studies should be conducted in the secondary and post-secondary 'general' learner populations through online channels. State and county financial support for school and college capital projects has all but disappeared in the past two decades, leaving only municipal bond sales and/or fundraising capital campaigns to support facility expansion to meet future rising enrollments – both uncertain endeavors in the current economic recession.26 As the numbers of general learners entering post-secondary education overall are projected to swell, slowly but more accurately reflecting the demographics of the United States, the


necessity for secondary and post-secondary education to expand their curricula through online channels will only increase.27 After several decades of haphazard experiments with perpetual technical hiccups, and despite the discovery of massive hidden fees in LMS contracts, most educational institutions have concluded that online education is a viable option for teaching and scaling their degrees and courses, and is less expensive than expanding the traditional capital construction model. One recent examination found that four of the six colleges studied had four-year cost savings of between 3% and 50% over their traditional residential/on-site teaching paradigm – with most savings coming from lower facilities energy consumption and maintenance costs.28 Even with the need to hire additional faculty and some technical staff to teach and support expanding online offerings, and despite consistent faculty concern about academic rigor, the cost savings of providing online education relative to its quality are difficult to dispute.29

However, as I have written about quite extensively in the past and touched upon in several chapters of this book, not all online education pedagogical options are equal. Legacy online learning platforms, some now more than 20 years old, were originally designed as asynchronous tools so that learners could check their grades and assignments outside of classroom hours. As colleges and school systems collectively began to employ LMSs to actually teach courses and scale their academic offerings, a historic problem that first occurred in the days of correspondence schools and cable TV broadcast courses slowly re-emerged: online learners exhibited lower academic achievement levels and lower retention rates compared to their on-site classroom peers. Although these learning platforms could scale course content to a much higher degree, they still offered a one-way, isolated learning experience for the learner.30

Live synchronous courses, however, offer the sense of psychological immediacy of the in-class learning experience, adding the potentially critical real-time interactive element between learners, and between instructor and learner, albeit from a two-dimensional instructor. Live-streaming of instructor-led educational content has also exhibited the concept of teacher social presence for the online learner. If the stream is live and in real time rather than archived, and a learner has the option to post real-time questions in a course chat channel or forum, research studies have demonstrated that these learners are more engaged and motivated and therefore perform at a greater academic achievement level.31 Moreover, a learner's attitude may be influenced by the teaching style and personality of an instructor. Even live-streamed nonverbal gestures and cues, such as eye contact, body language, facial expressions, and tone of voice through a computer camera, may affect learner focus and concentration within a learning experience.32 Though asynchronous online


learning management platforms allow learners to dictate when and where they choose to learn, live-streaming synchronous learning more closely mimics the bridging of transactional distance that occurs through verbal and nonverbal cue exchanges between a learner and instructor in an on-site classroom.33 Still, knowing the instructor is being live-streamed online is only one half of this transactional equation; facilitating learner interactivity, between learner and instructor in this case, and seeing the nonverbal cues in response to a question or from a team discussion is the other.

But for all the advantages that an interactive live-streaming learning environment may provide over an asynchronous option, there are downsides all instructors have experienced teaching during the Covid-19 pandemic. These include the amount of time wasted as learners troubleshoot technical issues, log on, and adjust their cameras and audio levels before the instructor ever begins the course. Moreover, environmental and ambient sounds in each learner's location – such as family chat, fans, air conditioners, and dogs barking – can also disrupt and delay a live-streamed discussion, forcing several outed offending learners to mute their microphones and cameras. Too often those same learners unwittingly forget to unmute their microphones when they wish to add a salient point to the overall discussion. It is also not uncommon for a few not in proper attire to block their cameras, contributing only as disembodied voices. Learners whose only visual representation is a static icon offer no nonverbal feedback to other learners or to the instructor, so their social presence likewise evaporates. Aside from the disruptions caused by new users of the technology, the advantages that synchronous online learning offers over asynchronous learning are obvious, and this technology, in use in the corporate community for a decade, has been a godsend to many schools and colleges during this pandemic crisis.

However, the current design of synchronous learning architecture will never accommodate machine-learning-driven personalized learning experiences. Current streaming architecture simply comprises a cloud-hosted codec application that streams up or down to/from a thin device or PC-based encoder/decoder interface. Users that log in to the cloud-hosted codec application can upload an encoded audio/video stream, and other users can download, decode, and view the same stream with little lag if their thin device or PC has the same interface. As we learned in Chapter 7, a robust encoding/decoding engine could indeed also stream games to players, but the simplistic architecture employed in current synchronous streaming tools could not also accommodate data-rich streaming personalized learning games. So we come full circle, back to our novel personalized learning game engine as an ideal platform for designing and building learning games that may offer a most engaging, effective, and enjoyable individualized learning experience. Although it is theoretical as I write this today, the PLG


Engine is indeed feasible, and, thus, this author has already begun assembling a research and development team to build a prototype of the game engine and the AI engine described in Chapter 7. Perhaps future updates of this text will include the results of research studies conducted using games designed and built with the PLG Engine, targeting the various underserved online learner populations mentioned in this chapter. Then, perhaps, other PLG-like engines, platforms, and solutions will be designed, built, and tested by other researchers, which will also help contribute to improving the practice of education and overall learner knowledge acquisition through games and machine-learning algorithms. I can only hope that this book served as an inspiration for such an important undertaking.

Appendix
Game 489, Pre-Internship Seminar: Personalized Learning Game Design Document

1. Title Page
   1.1. Pre-Internship Seminar Game: Time to Prepare for Your First Job!
   1.2. Copyright Information
   1.3. Version 1.0, Dr. Scott M. Martin, November 2020
2. Design History: V.1.1
3. Section I – Game Overview: This is a personalized learning game that teaches, demonstrates, and instills workplace professionalism, and assists and guides learners to prepare for the required internship in the Computer Game Design Program. This game will further teach and guide students to create a professional resume and portfolio; cultivate a professional demeanor and attitude; and, finally, learn professional communication and presentation skills.
   3.1 Game Concept: Time to Prepare for Your First Job! is a 2D exploration and adventure game where a character explores four level maps and chooses one of several minigames on each level, each representing Unit course content, to play and win/complete each assignment. NPC characters confronted on each level function as competitors, distractors, and overall arch-nemeses that try to cause the learner-player to fail. Minigames consist of a Unit 1 course content self-exploration puzzle game to learn about personal and professional traits and values, a Unit 2 narrative/story game to learn how to design a portfolio website and resume, a Unit 3 strategy game to prepare for the interview process, and a Unit 4 role-playing/simulation game to rehearse and provide a final presentation.


   3.2 Feature Set:
      a) Primary 2D exploration and adventure game with four levels: Level 1 – Home Bedroom, Level 2 – Coffee Shop, Level 3 – Corporate Office, Level 4 – Small Stage.
      b) Primary Game Modes: Exploration and Adventure. Four Minigames: Puzzle, Narrative/Story, Strategy, and Role-play/Sim.
      c) Level Objectives:
         Level 1: Learner-player navigates to their bedroom to learn about their positive and negative personal and professional traits. Varying levels of success generate 'internship' points. The next quest is to write a personal values statement; help is found in a desk drawer, and spending internship points provides the drawer key. Varying levels of success generate 'internship' points.
         Level 2: Learner-player navigates to a coffee shop to learn about and write a professional resume and to create and post a portfolio. Exploring the coffee shop provides resume examples/templates, and using internship points allows the learner-player to get a coupon from an NPC barista to learn about portfolio creation and see examples/templates. Varying levels of success generate 'internship' and 'relationship' points.
         Level 3: Learner-player navigates a corporate office waiting room. For a few 'internship' and 'relationship' points, the receptionist will put the learner-player through a mock interview before the main interview with the boss. When the learner-player passes an 'internship' points threshold, they can open the door to the boss's office for the primary internship interview.
         Level 4: Learner-player navigates to a theater stage 'greenroom' and rehearses and prepares their final presentation. For a few 'internship' and 'relationship' points, the stagehand in the greenroom will provide a few tips and examples of great presentations to try to emulate. When ready, the learner-player will try to enter the main stage to give their final presentation. However, if they arrive at the door without enough 'internship' and 'relationship' points, the stage door scanner won't recognize their biometric thumbprint, and they can't give their final presentation. In that case, they need to go back to previous levels and earn enough points.
      d) Three NPC Enemies: Bob the Backstabber, Tina the Talker, and Jose the Joker. They try to trip up the learner-player during


game play and force the learner-player to lose 'internship' points by making them resort to using weapons. Three Support NPCs: Barbara the Barista, Rick the Receptionist, and Svetlana the Stagehand.
      e) Collectables: Relationship points are earned through polite and appropriate interaction with support NPCs. Internship points are earned through successfully completing learning tasks on each game level.
      f) Store/Shop: Bedroom Desk Drawer Key, Coffee Shop Coupon, Corporate Office Elevator Code, Presentation Stage Door Scan.
      g) Weapons: Eye laser stare stream and hand blast, both used to push away enemy NPCs, and both costing relationship or internship points to use.
   3.3 Genre: 2D Exploration and Adventure, mixed quest minigames.
   3.4 Target Audience: Upper-level undergraduate learners preparing for an internship. Adult learners preparing for job interviews.
   3.5 Game Flow Summary: The learner-player character walks, runs, and can sit down in the game when navigating through a level and to and from locations. Framing is bird's-eye and first-person. The learner-player transitions between levels by entering each location's bathroom after learning tasks are completed and 'internship' and, if applicable, 'relationship' points are earned. Enemy NPCs annoy, bug, and distract the learner-player throughout the game on each level, during each learning task, at each location. Support NPCs can help achieve learning tasks.
   3.6 Look and Feel: 2D Vector Art (The Banner Saga). Algorithmic polygons, dots, lines, and color datasets.
   3.7 Project Scope: 2D top-down exploration and adventure game

      3.7.1 Number of Locations: Four
      3.7.2 Number of Levels: Four
      3.7.3 Number of NPCs: Six (three enemy and three helpful)
      3.7.4 Number of Weapons: Two
      3.7.5 Number of Sounds: Two weapon, 25 ambient
      3.7.6 Number of Soundtracks: Ten (one opener, one for each level, one for each location, one for end game)
      3.7.7 Number of Voice Tracks: Six (for NPCs) and one for the learner-player, accomplished through a text-to-voice text box
      3.7.8 Number of Machine-Intelligence Algorithms: Three classifiers (one for PED, one for NLP text-to-voice, and one for computer vision) and two inference algorithms (learning tasks and navigation/interaction)
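As a deliberately simplified stand-in for the NLP classifier listed in the project scope, the sketch below scores a submitted resume draft against a rubric of expected elements using keyword matching; a production build would use a trained language model instead. The rubric terms and sample text are illustrative assumptions, not part of the design document.

```python
# A toy stand-in for the resume-parsing NLP classifier named in the project
# scope: it checks a submitted draft for expected rubric elements by keyword
# matching. A production build would use a trained language model; the rubric
# terms and sample text below are illustrative assumptions.

RESUME_RUBRIC = {
    "contact information": ["email", "phone"],
    "education": ["university", "degree", "b.s.", "b.a."],
    "experience": ["intern", "project", "developed", "designed"],
    "skills": ["unity", "c#", "python", "git"],
}

def score_resume(text: str) -> dict:
    """Return per-element hits and an overall coverage score between 0.0 and 1.0."""
    lowered = text.lower()
    hits = {element: any(term in lowered for term in terms)
            for element, terms in RESUME_RUBRIC.items()}
    return {"hits": hits, "coverage": sum(hits.values()) / len(hits)}

sample = ("Jane Doe | Email: jane@example.edu | Phone: 555-0100. "
          "B.S. in Game Design, George Mason University. "
          "Designed a Unity project; proficient in Python and Git.")
print(score_resume(sample))
```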


4. Section II: Game Play and Mechanics
   4.1 Game Play

      4.1.2 Game Progression: Learner-player must complete Learning Tasks 1 (traits) and 2 (personal vision statement) in Level 1, location one, and earn 10 'internship' points to progress to Level 2. Learner-player must complete Tasks 3 (resume) and 4 (portfolio) in Level 2, location two, and earn 10 'internship' points and 5 'relationship' points to progress to Level 3. Learner-player must complete Tasks 5 (mock interview one) and 6 (mock interview two) in Level 3, location three, and earn 10 'internship' points and 5 'relationship' points to progress to Level 4. Learner-player must complete Task 7 (Final Presentation) in Level 4, location four, and earn 10 'internship' points and 5 'relationship' points to complete the game.

      4.1.3 Mission/Challenge Structure: By researching, completing, and submitting personal and professional traits assignments in Level 1, location one, the learner-player earns 10 'internship' points that can be used to hand blast and eye stare laser away enemy NPCs and to solicit supporting NPC help in future levels. Submitted assignments are parsed by the NLP algorithm to match learning goals. By researching, completing, and submitting resume and portfolio assignments in Level 2, location two, the learner-player earns 10 'internship' points and potentially 5 'relationship' points that can be used to hand blast and eye stare laser away enemy NPCs and to solicit supporting NPC help in future levels. Submitted assignments are parsed by the NLP algorithm to match resume learning goals, and by the computer vision algorithm for portfolio assessment. By researching, practicing, and completing a mock interview and boss interview in Level 3, location three, the learner-player earns 10 'internship' points and potentially 5 'relationship' points that can be used to hand blast and eye stare laser away enemy NPCs and to solicit supporting NPC help in Level 4. By rehearsing and preparing the final presentation in Level 4, location four, the learner-player earns


10 'internship' points and potentially 5 'relationship' points that can be used to hand blast and eye stare laser away enemy NPCs before their final presentation, and to solicit NPC Svetlana's help to practice and prepare and to open the stage door for the final presentation.
      4.1.4 Puzzle Structure: Completing each level task produces 10 'internship' points and, except for the first level's tasks, 5 'relationship' points (for cultivating professional relationships with supportive NPCs). Learner-players can earn a total of 60 'internship' points and 25 'relationship' points in the game. It requires 8 'internship' points to access location one's desk drawer. It requires 6 'internship' and 3 'relationship' points to buy the coffee shop coupon and receive advice about creating great portfolios. It requires 6 'internship' and 3 'relationship' points to access the corporate building's elevator code to go up to the learner-player interviews, and 5 'relationship' points to secure the first mock interview with the receptionist. It requires 6 'internship' and 3 'relationship' points to access the biometric scan to enter the stage to deliver the final presentation. It requires 4 'internship' points to use the eye stare laser to push away enemy NPCs. To use the open hand blaster to eliminate an enemy NPC from a level requires either 5 'internship' points and 2 'relationship' points, or 6 'relationship' points.
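The point economy in section 4.1.4 can be summarized as a small cost table. The sketch below simply encodes the values listed above, plus a hypothetical helper for checking affordability; it is illustrative and not part of the design document itself.

```python
# An illustrative encoding of the 4.1.4 point economy: unlock costs taken from
# the design text, plus a hypothetical helper that checks whether a
# learner-player can afford an action. The helper is a sketch only.

COSTS = {
    "desk_drawer": {"internship": 8, "relationship": 0},
    "coffee_coupon": {"internship": 6, "relationship": 3},
    "elevator_code": {"internship": 6, "relationship": 3},
    "stage_scan": {"internship": 6, "relationship": 3},
    "eye_laser": {"internship": 4, "relationship": 0},
    "hand_blast": {"internship": 5, "relationship": 2},    # or 6 relationship points instead
}

def can_afford(wallet: dict, action: str) -> bool:
    """Check a wallet of earned points against the cost of one unlock or weapon use."""
    if action == "hand_blast" and wallet.get("relationship", 0) >= 6:
        return True                                         # alternative all-relationship price
    return all(wallet.get(kind, 0) >= cost for kind, cost in COSTS[action].items())

wallet = {"internship": 10, "relationship": 5}              # example earned totals
print(can_afford(wallet, "coffee_coupon"))                  # True: 6 internship + 3 relationship
print(can_afford(wallet, "eye_laser"))                      # True: 4 internship
```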

      4.1.5 Game Objectives/Learning Outcomes: Learners will have an online portfolio that reflects their skills, knowledge, talents, and professional interests, and a professional resume. Learners will know how to dress professionally, write and speak in a manner that is common in a professional workplace, and be prepared for future job interviews. Lastly, learners will know how to prepare for and provide public presentations.
      4.1.6 Play Flow: Learner-player navigates by walking and running through each level (House, Town Street, Corporate Office Building, and Theater Building) and its locations (Bedroom, Coffee Shop, Boss's Office, Greenroom). 'Internship' and 'relationship' points can be earned along the way by accomplishing learning content tasks and through proper interaction with supportive NPCs, respectively.


Points can be used to provide access to helpful material to complete tasks, to progress through each level, and to avoid/eliminate enemy NPCs.
   4.2 Mechanics: The game is un-clocked, so the learner-player can complete the game over any amount of time; however, a course professor may assign level tasks to be completed within a set timeframe. If a learner-player runs out of points or doesn't earn enough points within or between levels and therefore can't progress through the game, they must repeat a past level and perform the tasks over again. The learner-player must perform each task correctly to earn the full amount of potential points, and use the points judiciously in order to progress through the game. Earning 'relationship' points is more ambiguous, and learner-players must pay attention to what they say/type, where they stand or sit, and how they interact with supportive NPCs to gauge how points are accumulated. Moreover, enemy NPCs may interact with supportive NPCs at the same time the learner-player does, creating a major distraction. Enemy NPCs must be managed prior to interacting with and soliciting help from supportive NPCs, or 'relationship' points won't be earned. On the other hand, pushing away or eliminating annoying enemy NPCs for no reason and/or too early in a map may cause the learner-player to spend too many points that may be needed later in the level or in the next.

4.2.1 Physics: Environment Newtonian, 2D soft-body learner-player character. Semi-ragdoll for enemy NPC characters.
4.2.2 Movement
4.2.2.1 General Movement: General 2D top-down 8-way and click-and-move movement for each level; the camera follows the learner-player character (stand, walk, run, sit). In locations: stand, walk, run, and sit, in third and first person.
4.2.2.2 Other Movement: NPCs stand, walk, run, and sit in level locations.
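The general movement in 4.2.2.1 amounts to converting held direction keys into a normalized velocity so that diagonal (8-way) movement is no faster than cardinal movement. The snippet below is a minimal sketch of that idea; the key names, speeds, and function signature are assumptions rather than engine code.

```python
import math

# Hypothetical 8-way, top-down movement helper; key names and speeds are
# illustrative only.
WALK_SPEED, RUN_SPEED = 2.0, 4.0   # world units per second

def movement_vector(keys_down: set, running: bool) -> tuple:
    """Convert held direction keys into a normalized (dx, dy) velocity."""
    dx = ("d" in keys_down) - ("a" in keys_down)
    dy = ("s" in keys_down) - ("w" in keys_down)
    if dx == 0 and dy == 0:
        return (0.0, 0.0)
    length = math.hypot(dx, dy)                 # normalize diagonals
    speed = RUN_SPEED if running else WALK_SPEED
    return (dx / length * speed, dy / length * speed)

print(movement_vector({"w", "d"}, running=False))  # diagonal walk, same speed as cardinal
```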

4.2.3 Objects: Bedroom Desk, Coffee House Table and Order Counter, Boss’s Reception Chair and Desk, Boss’s Office Chairs and Desk, Greenroom Chair and Desk, Theater Stage.
4.2.3.1 Picking Up Objects: Location 1 – book with Traits List examples, and paper stack with Personal Values Statement examples. Location 2 – paper


stack with resume template, and computer tablet with portfolio templates. Location 3 – paper stack with potential interview questions. Location 4 – book with final presentation hints and suggestions.
4.2.3.2 Moving Objects: Location 1 – desk drawer, desk chair, bedroom door. Location 2 – coffee house door, table chair, coupon, coffee cup, and computer tablet. Location 3 – front office building door, elevator door, office door, desk chair, boss’s office door. Location 4 – greenroom door, desk chair, stage door.

4.2.4 Actions: opening the desk drawer, pulling chairs out from desk and table, pushing open doors (no doorknob turns).
4.2.4.1 Switches and Buttons: Elevator call and floor buttons.
4.2.4.2 Picking Up, Carrying, and Dropping: Two books, two stacks of papers, coffee cup, coupon, computer tablet.
4.2.4.3 Talking: The tutorial has a prerecorded introduction narrative track and interventions triggered by navigation and mechanics mistakes. NPCs each have a prerecorded dialog tree for questions and answers, triggered by speech-to-text parsed NLP content (or text typed in a dialog box on the UI) from the learner-player character.
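Section 4.2.4.3 implies a dialog tree whose branches are chosen from parsed player input. A toy version is sketched below; the keyword matching is only a stand-in for the speech-to-text/NLP parsing, and every name, prompt, and keyword is hypothetical.

```python
import re

# A toy dialog-tree node of the kind described in 4.2.4.3.
class DialogNode:
    def __init__(self, prompt, responses):
        self.prompt = prompt          # prerecorded NPC line to play
        self.responses = responses    # list of (keywords, next DialogNode)

    def next_node(self, player_text: str):
        """Pick the child node whose keywords appear in the parsed input."""
        words = set(re.findall(r"[a-z']+", player_text.lower()))
        for keywords, node in self.responses:
            if words & set(keywords):
                return node
        return None                   # fall back to a generic re-prompt

# Example: a portfolio-advice branch for Barbara the Barista.
advice = DialogNode("A strong portfolio shows finished work, not just ideas.", [])
root = DialogNode("Hi! What can I get you?",
                  [({"portfolio", "resume", "advice"}, advice)])
print(root.next_node("Any tips for my portfolio?").prompt)
```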

4.2.5 Combat: Enemy NPCs have no weapons; they can only get in the way and continually harass the learner-player character. The eye stare laser pushes them away, or the open hand blaster eliminates them from the game level, but they may return in the next level.
4.2.6 Economy: Accumulate and spend ‘internship’ and ‘relationship’ points to progress through the game.

4.3 Screen Flow: The tutorial screen provides a demo of the second and third level screens and helps the learner-player character navigate to level screen locations. The main screen displays the learner-player’s level environment, and then switches to level locations where tasks are completed, supporter NPCs are engaged, and enemy NPCs are confronted.

4.3.1 Screen Descriptions: Tutorial screen displays a top-down Level 2 city street with stores and shops represented by icons/names on each side of the street. Learner character


navigates to the one open shop by triggering a green-highlighted door, and then proceeds to push open the door to a coffee shop.
4.3.1.1 Main Menu Screen: The main screen displays a 2D top-down view of each 2D level, in which the learner-player navigates to one door that highlights green. The main screen transitions to a door that the learner-player must push open. The screen then transitions to one of the four 2D click-and-move level locations described earlier.
4.3.1.2 Options Screen: The learner-player can click on the HUD Score mini-window to expand it and see tasks completed, tasks to be completed, and the number of ‘internship’ and ‘relationship’ points earned, spent, lost, and available to be earned.
4.4 Replaying and Saving: If the learner-player can’t open a door or progress to the next level/location task, they must replay the previous level. All learner-players can replay any level once. After each level is completed, the results of tasks completed and points earned are saved. The learner-player can use the save button in the HUD to save their status at any point in the game.
4.5 Cheats and Easter Eggs: If the learner-player clicks on a timepiece (watch, clock) in any level location, they are awarded 10 ‘internship’ points with a ‘ka-ching’ sound. If the learner-player clicks on the menu board in the coffee shop, they receive 5 extra ‘relationship’ points. If the learner-player clicks on the coffee maker in the office reception room, any enemy NPC goes away.
5. Section III Story, Setting and Character
5.1 Story and Narrative: The learner-player can select one of five potential characters, visually depicting two males, two females, and one androgynous character. All the primary characters are in their 20s and are anxious about their future job prospects.

5.1.1 Backstory: All the primary character choices have a mixed academic record, so they are insecure about their skills and abilities. They are also unsure how to improve professional preparation skills like resume writing and interviewing, and how to improve soft skills like time management, persistence, and communication.
5.1.2 Plot Elements: The enemy NPCs are analogous to real-life people who try to tempt the learner-player away from professional goals. They also function as metaphors for real-life


options that may distract the learner-player from successfully completing the tasks in the game.
5.1.3 Game Progression: The learner-player transitions from each level’s opening map by selecting the correct location, completing that location’s learning-goal tasks, accumulating points, soliciting help from supporter NPCs, and avoiding or eliminating enemy NPCs.

5.2 Game World
5.2.1 General Look and Feel of World: Technical 2D Vector Art (The Banner Saga). Algorithmic polygons, dots, lines, and color datasets. Placed in a modern-day detached home, coffee shop, corporate office, and theater stage.
5.2.2 Location #1: Bedroom in a modern-day detached house.
5.2.2.1 General Description: Small, modern single bed; desk with a modern laptop and game console on top, one drawer in the middle, and three large drawers stacked on the right. Modern desk chair, bookcase, and a two-door closet with one door open showing hanging clothes. Clothes, shoes, books, and papers are on the floor rug. Pictures of bands and actresses are on the wall along with one hanging wall clock. Walls are light blue; the rug is light pink/red. Bed blankets are striped red, yellow, and blue. 80s pop music plays at a soft level. Bathroom door.
5.2.2.2 Levels that Use Area: Level 1

5.2.3 Location #2: Coffee shop off an old town street.
5.2.3.1 General Description: Modern, sharp aluminum-cornered glass display case. Starbucks-type order counter. Four green tables, with four tan chairs around each. Flowers atop each table. Patterned black-and-white tile floor. Order board and clock on the wall behind the counter. Supporter NPC Barbara the Barista stands behind the counter. Framed pictures of city images are on the other wall, and a bathroom door is visible. The shop is small and square, with two floor-to-ceiling windows. The color scheme is soft tan, light brown, and red. The ceiling is white with two fans slowly turning. A soft level of


American classical jazz music is playing.
5.2.3.2 Levels that Use Area: Level 2

5.2.4 Location #3: Corporate reception room and boss’s office
5.2.4.1 General Description: Corporate modern chic . . . brushed aluminum, angled lines against the curved, wooden-framed front reception desk. Supporter NPC Rick the Receptionist sits in a black tech chair behind the reception desk. The floor is striped red and black carpet. The ceiling is white with track lights around the edges. A black couch and one red chair sit in the waiting area, with brushed aluminum end tables. The boss’s office has three floor-to-ceiling glass windows looking across to other high-rise buildings and down to a streetscape. One desk is at the back of the office with its chair facing the back window. The learner-player sees just the top of the boss’s head as she faces the window looking away; the learner-player never sees the boss, only hears her voice. The back wall of the boss’s office, with the entrance door, is wood grain. The desk is black with a desktop computer, notepad, pen holder, and one desk sculpture. The floor is gray and black carpet. The ceiling has LED track lights. Soft rap plays in the reception area; no music plays in the boss’s office. Next to the boss’s entrance door is a private bathroom door.
5.2.4.2 Levels that Use Area: Level 3

5.2.5 Location #4: Greenroom and theater stage
5.2.5.1 General Description: The greenroom is small, with four walls painted green and one brown desk under a huge mirror. One small chair is at the desk, and another small folding chair is in the corner where supporter NPC Svetlana the Stagehand sits. The floor carpet is dark green, and the ceiling is white with one small ceiling fan moving slowly. Framed posters of performing groups and theater companies are on the walls. One wall has a call speaker/button for announcements. Outside of the greenroom door is the stage door with a


biosensor reader near the doorknob. The theater stage floor is curved on the left, edges barely visible: a light gray floor surrounded by very bright lights. A microphone on a stand (with a cable running down the front of the stage edge) is the only object seen. The learner-player can’t see anything else except bright lights from all directions. Classical string quartet music plays softly in the background.
5.2.5.2 Levels that Use Area: Level 4
5.3 Primary Character Options: The learner-player can select one of five potential characters, visually depicting two males, two females, and one androgynous character, all in their 20s and anxious about their future job prospects. Male 1 has short hair and is dressed in a suit; Male 2 has glasses and longer hair and is dressed in a sports jacket and pants. Female 1 has short hair and is dressed in a suit; Female 2 has glasses and long hair and wears a jacket and dress. The androgynous character has glasses and short hair and wears a sports jacket and pants. All the primary character choices have a mixed academic record, so they are insecure about their skills and abilities. They are also unsure how to improve professional preparation skills like resume writing and interviewing, and how to improve soft skills like time management, persistence, and communication. Animations are a walk/run arm-and-leg cycle and a standing pose with the right hand raised for the enemy NPC blast.

5.3.1 NPC Enemy Character #1: Bob the Backstabber
5.3.1.1 Backstory: He tries to distract the learner-player character with invitations to lunch, dinner, concert tickets, etc., but is only trying to slow the game progression and prevent the primary character from finishing tasks.
5.3.1.2 Personality: Used-car-salesman type. Pushy, aggressive, and confident, with lots of false promises.
5.3.1.3 Look: White collar shirt, khaki pants, blue blazer.
5.3.1.3.1 Physical characteristics: Male, Caucasian, early 30s, slightly overweight, short.
5.3.1.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move


when talking, but not synced to word pronunciation.
5.3.1.4 Special Abilities: None
5.3.1.5 Relevance to game story: Functions as analogous to a person in a friend circle who tries to tempt the learner-player away from professional goals. Also functions as a metaphor for real-life fun activities that may distract the learner-player from successfully completing the tasks in the game.
5.3.1.6 Relationship to other characters: None
5.3.1.7 Statistics: 4 ‘internship’ points to use the eye stare laser to push away enemy NPCs. To use the open hand blaster to eliminate an enemy NPC from a level requires either both 5 ‘internship’ points and 2 ‘relationship’ points, or just 6 ‘relationship’ points.

5.3.2 NPC Enemy Character #2: Tina the Talker
5.3.2.1 Backstory: She tries to build a strong friendship with the learner-player by calling, texting, and posting on social media. Innocent, but she slows the game progression and distracts the learner-player character from finishing tasks.
5.3.2.2 Personality: Sassy, stylish, confident, but sensitive.
5.3.2.3 Look: Bright pantsuit, running shoes, stylish hat, and glasses.
5.3.2.3.1 Physical characteristics: Female, Asian, early 20s, thin, semi-tall.
5.3.2.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move when talking, but not synced to word pronunciation.
5.3.2.4 Special Abilities: None
5.3.2.5 Relevance to game story: Functions as analogous to a person in a friend circle who tries to tempt the learner-player away from professional goals. Also analogous to the real-life time some friends demand, and to how social media may distract from successfully completing the tasks in the game.


5.3.2.6 Relationship to other characters: None
5.3.2.7 Statistics: 4 ‘internship’ points to use the eye stare laser to push away enemy NPCs. To use the open hand blaster to eliminate an enemy NPC from a level requires either both 5 ‘internship’ points and 2 ‘relationship’ points, or just 6 ‘relationship’ points.

5.3.3 NPC Enemy Character #3: Jose the Joker
5.3.3.1 Backstory: He tries to build a friendship with the learner-player character through bad jokes, stand-up comedy, and silly pranks. He purposely tries to slow the progression of the game and prevent the learner-player from finishing tasks.
5.3.3.2 Personality: Loud, boisterous, confident, but insecure and sensitive.
5.3.3.3 Look: Hipster clothes, jeans, t-shirt, hoodie, untied Nike sneakers.
5.3.3.3.1 Physical characteristics: Male, Hispanic, late 20s, tall and thin.
5.3.3.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move when talking, but not synced to word pronunciation.
5.3.3.4 Special Abilities: None
5.3.3.5 Relevance to game story: Analogous to a person in a friend circle who tries too hard to be a close friend and whose time commitment distracts from professional goals.
5.3.3.6 Relationship to other characters: None
5.3.3.7 Statistics: 4 ‘internship’ points to use the eye stare laser to push away enemy NPCs. To use the open hand blaster to eliminate an enemy NPC from a level requires either both 5 ‘internship’ points and 2 ‘relationship’ points, or just 6 ‘relationship’ points.

5.3.4 Supportive NPC Character #1: Barbara the Barista
5.3.4.1 Backstory: Wise and knowledgeable about preparing materials for job applications. Had several


previous careers. Is she the owner of the coffee shop?
5.3.4.2 Personality: Mature, calm, caring; speaks slowly and carefully.
5.3.4.3 Look: Coffee shop uniform, necklace, rings.
5.3.4.3.1 Physical characteristics: Female, Kenyan, late 40s, medium height, average weight.
5.3.4.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move when talking, but not synced to word pronunciation.
5.3.4.4 Special Abilities: None
5.3.4.5 Relevance to game story: Analogous to a mentor who advises the learner-player on where to find professional resume help and provides portfolio template examples.
5.3.4.6 Relationship to other characters: None
5.3.4.7 Statistics: It costs 6 ‘internship’ and 3 ‘relationship’ points to buy the coffee shop coupon and receive advice from Barbara about creating great portfolios.

5.3.5 Supportive NPC Character #2: Rick the Receptionist
5.3.5.1 Backstory: Wise and funny, light-hearted but comforting. Has had several professional careers.
5.3.5.2 Personality: Mature and relaxed, witty and kind.
5.3.5.3 Look: Pinstriped suit and tie.
5.3.5.3.1 Physical characteristics: Male, Caucasian, late 40s, short and trim.
5.3.5.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move when talking, but not synced to word pronunciation.
5.3.5.4 Special Abilities: None
5.3.5.5 Relevance to game story: Analogous to an older friend who provides a practice ‘mock’


interview, and critiques that interview prior to the boss interview.
5.3.5.6 Relationship to other characters: None
5.3.5.7 Statistics: It costs 5 ‘relationship’ points to secure the first mock interview with the receptionist (6 ‘internship’ and 3 ‘relationship’ points to access the corporate elevator building code to go up to the interviews).

5.3.6 Supportive NPC Character #3: Svetlana the Stagehand
5.3.6.1 Backstory: Tired, nervous, anxious. Doesn’t make much eye contact. Has struggled in the theater industry, working in several countries. Not happy with her job as a stagehand.
5.3.6.2 Personality: Moves from serious and dark to bubbly and optimistic.
5.3.6.3 Look: Sweatpants, t-shirt, black sneakers, hair pulled back.
5.3.6.3.1 Physical characteristics: Female, Eastern European, early 30s, tall and thin.
5.3.6.3.2 Animations: 2D, stand and sit, and a simple arm-leg walk and run cycle. Head-neck up/down/left-right. Fixed happy facial expression. Lips move when talking, but not synced to word pronunciation.
5.3.6.4 Special Abilities: None
5.3.6.5 Relevance to game story: Functions as a close friend who provides insight and advice drawn from the learner-player’s ‘Traits’ exercise, including what to expect and how to prepare to give the best final presentation performance.
5.3.6.6 Relationship to other characters: None
5.3.6.7 Statistics: Her advice is free, but it costs 6 ‘internship’ and 3 ‘relationship’ points to access the biometric scan to enter the stage to give the final presentation.

6. Section V Interface
6.1 Visual System
6.1.1 HUD: At the top right of the UI is the HUD Score mini-window, which indicates points earned and spent.


Clicking on the HUD Score mini-window expands it to show tasks completed, tasks to be completed, and the number of ‘internship’ and ‘relationship’ points earned, spent, lost, and available to be earned. The HUD also contains the current level and location, along with Save, Pause, Start, Restart, Help, and Tutorial buttons.
6.1.2 Rendering System: Cloud-based OpenGL GPU with eight programmable vector shaders.
6.1.3 Camera: Levels are 2D top-down. Locations are isometric, third and first person.
6.1.4 Lighting Models: Real-time, mixed, and baked.

6.2 Control System: Keyboard WASD to move; arrow keys can also be used for movement. Mouse left-click for the right-hand blast, right-click for the eye stare laser. Shift to run. Ctrl to sit.
6.3 Audio: Eight different stereo music tracks. Six voice tracks matching the characters described in the NPC sections earlier.
6.4 Music:
Level 1: 80s Dance Music
Level 2: 80s Heavy Metal
Level 3: Italian Opera Arias
Level 4: Classical String Quartet
Location 1: 80s Dance Music
Location 2: Classical Jazz
Location 3: 90s Rap
Location 4: Classical String Quartet
6.5 Sound Effects: 16 different sounds – eye stare laser sound, hand blast sound, Easter egg sound, footsteps (walk and run), chair legs scraping the floor (coffee shop), elevator arrival ding, elevator door open-and-close movement, location doors opening and closing (3), and stage audience applause.
6.6 Help System: Brings up the control system, navigation hints, point collection hints, and task submission hints.
7. Section VI Artificial Intelligence
7.1 Personalized Learning Machine-Intelligence AI Engine: Cloud-based Bayesian learning forward- and backward-chaining algorithms ingest datasets from past game play and superimpose them on the variable uncertainty of new datasets produced by learner-player game play, adjusting model weights to offer probabilistic inference and reasoning over those new datasets and to improve output accuracy and prediction. The Bayesian learning inference and


reasoning algorithms are built on top of the TensorFlow Probability library, which is itself built on top of the TensorFlow framework (see Chapter 7 for more detail).
7.2 Enemy NPCs: Based on the learner-player’s task learning outcomes/score, the Bayesian algorithmic model adjusts the enemy NPC interference scale higher or lower during progress through the game and during task work sessions.
7.3 Supportive NPCs: Based on the learner-player’s task learning outcomes/score, the Bayesian algorithmic model adjusts the scale of support (guidance, advice, answers) that supporter NPCs provide higher or lower during location interaction and during task work sessions.
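The design document does not spell out the model’s internals, so the sketch below is only a stand-in: it shows how a learner-player’s task scores could drive the enemy NPC interference scale (7.2) through a simple conjugate Beta-Binomial update expressed with TensorFlow Probability distributions. The priors, the direct mapping from estimated mastery to interference, and the function name are all assumptions, not the engine’s forward- and backward-chaining model.

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative stand-in for the Bayesian adjustment in 7.2: a conjugate
# Beta-Binomial update of the learner's task-success rate, mapped onto the
# enemy NPC interference scale. Priors and the mapping are assumptions.
def npc_interference(successes: int, attempts: int,
                     prior_a: float = 2.0, prior_b: float = 2.0) -> float:
    """Return an interference level in [0, 1]; higher means harder enemies."""
    posterior = tfd.Beta(concentration1=prior_a + successes,
                         concentration0=prior_b + (attempts - successes))
    mastery = float(posterior.mean())   # expected task-success probability
    return mastery                      # stronger learners face more interference

# A learner who completed 5 of 6 tasks correctly sees interference of ~0.7.
print(round(npc_interference(successes=5, attempts=6), 2))
```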

7.4 Support AI
7.4.1 Player and Collision Detection: The cloud-based PLG Engine has Sort and Sweep and Dynamic Bounding Volume Tree collision detection built into its physics engine.
7.4.2 Pathfinding: NavMesh algorithms, steering, and string-pulling algorithms.
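For readers unfamiliar with the broad-phase technique named in 7.4.1, the following single-axis sort-and-sweep sketch shows the idea in a few lines; it is purely illustrative and does not reflect the PLG Engine’s internal implementation.

```python
# Compact single-axis sort-and-sweep broad phase; illustrative only.
def sweep_and_prune(boxes):
    """boxes: list of (id, min_x, max_x). Returns candidate overlap pairs."""
    events = sorted(boxes, key=lambda b: b[1])   # sort by interval start
    active, pairs = [], []
    for bid, bmin, bmax in events:
        # Drop intervals that ended before this one begins.
        active = [(aid, amax) for aid, amax in active if amax >= bmin]
        pairs.extend((aid, bid) for aid, _ in active)
        active.append((bid, bmax))
    return pairs                                  # narrow phase tests these

print(sweep_and_prune([("player", 0, 2), ("npc", 1, 3), ("door", 5, 6)]))
# -> [('player', 'npc')]
```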

8. Section VII Technical
8.1 Target Hardware: Cloud-based streaming to any desktop computer or ‘thin’ hardware device.
8.2 Game Engine: Proprietary cloud-based Personalized Learning Game (PLG) Engine.
8.3 Network: Full HD requires a stable, consistent bit rate in the range of 7–10 Mbps at 60 frames per second, and Ultra-High 4K requires a minimum transfer bit rate of 60–80 Mbps at 60 FPS. If transfer rates drop because of capacity limits, usage gates, or network damage, the H.264 standard drops the bit rate and/or frame rate to stream at a lower resolution, and raises them again as bandwidth becomes available or network repairs are made.
8.4 Scripting Language: C++, Python
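The rate-adaptation behavior in 8.3 reduces to picking the best streaming tier the measured bandwidth can sustain and degrading gracefully when it cannot. The sketch below assumes a hypothetical tier table and headroom factor; it is not the engine’s actual streaming logic.

```python
# Simplified rate-adaptation logic matching the behavior described in 8.3;
# the tier table, headroom factor, and function name are assumptions.
TIERS = [
    {"label": "4K60",    "mbps": 60.0},
    {"label": "1080p60", "mbps": 7.0},
    {"label": "720p60",  "mbps": 4.0},   # fallback tier, not in the spec
]

def pick_tier(measured_mbps: float, headroom: float = 0.85) -> dict:
    """Choose the highest tier the measured bandwidth can sustain."""
    usable = measured_mbps * headroom      # keep margin for jitter
    for tier in TIERS:                     # ordered best-first
        if usable >= tier["mbps"]:
            return tier
    return TIERS[-1]                       # degrade rather than stall

print(pick_tier(45.0)["label"])   # -> '1080p60'
```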


Index

Note: Page numbers in italic indicate a figure on the corresponding page. Abt, Clark C. xi, 48–49, 58–59, 67 Achilles 48–50, 50 adaptability 33–34, 44, 148, 168, 249 Adcox, Orin 132–134 Advanced Micro Devices (AMD) 227 Advance Wars 82 Ajax 50, 50 Alexa (Amazon) 207 AlphaZero 163–164 Amazon: Alexa 207; Game Studios 221; LEX 158; Lumberyard 218, 221 amphora 50, 50 Ancient Greece: rules-based ‘deep play’ 49–53 Apple: Bionic AI (edge) Chips 207; Macintosh 55–56, 58–59, 204; OpenCL 226; Siri 207 application programming interface (API) 164, 185, 209, 219–222, 224, 226–227, 234–235 Aptitude Treatment Interaction (ATI) 42–43 Aristotle 53–54, 58, 110n23 Arizona State University: Sanford Harmony 130 Artificial General Intelligence (AGI) 28, 31–34, 44, 207; applied to teaching and learning 161–166, 171, 189–190, 192, 194

artificial intelligence (AI) 42, 44, 147–152, 280–281; Artificial Super Intelligence (ASI) 190–195; Deep Academic Learning Intelligence 153–156; game engine integration 218–227, 236–237; Jill Watson 2019 157–168; next-gen machine-learning algorithms, models, and frameworks 152–153; overview of the history of 27–34; see also Personalized Recursive Online Facilitated Intelligent Teaching artificial narrow intelligence (ANI) 28, 31 artificial neural networks (ANNs) 30, 213–214, 216, 219 Artificial Super Intelligence (ASI) 190–195 art of learning: cell change and learning at the biological level 1–7; early childhood memory development and learning 7–10; a human knowledge acquisition system and memory maps 10–23; long-term memory episodic recall promoter apparatus 22–23; long-term memory module 20; procedural memory 21–22; sensory memory module 13–14; short-term memory module 17–20; working memory module 15–17

Assassin’s Creed 115, 134: Origins Discovery Tour 56–57 assessment engines 91–96 Atari 121; Pong 79 automatic teacher (Pressey) 40–41, 41 AutoML (Google) 158 Avalon Hill 62 Azure Cloud (Microsoft) 210 backward chaining 232–234, 233, 280 Baer, Ralph 122–123 Barrat, James 190 Bartle taxonomy of player types 85–86, 85, 107 Bayesian learning 232–235 behavior trees 209, 221 biology 1–7, 2, 75 Bishop, Kyle 123–124 Blackboard 220–221 Blair-Jimenez, Dawn 123 Bloom’s taxonomy 91–92, 92 Blow, Joshua 126, 128 Blueprint 220–221, 225 board games: and learning 60–67 Bostrom, Nick 147; SuperIntelligence 162, 166–167, 191 Bower, Gordon H. ix, 22–23 Brinkmann, Derek 135 Burning Man 54 Bushnell, Nolan 121 business 137–139 Call of Duty 83, 135 Candy Crush 83 Carcossone 83 case study: hospital training game 124–128; Interactive Virtual Incident Simulator (IVIS) 122–124, 123; Jill Watson 2019 157–168; Legends of Aria 134–137; social-emotional academic development game (SEAD) 128–134; UI and UX designs 86–91; see also design study cell change 1–7 Chancellorsville 62 character 272–279 Chaturanga 61 chess 28, 54, 61, 83, 163, 209 Citadel Studios 122, 135, 135 Civilization III (Meier) 134

Civil War 62 Clash Royal (Supercell) 210 Classroom Management Intelligence (CMI) 175–178, 177, 181, 181, 188 cognitive balance 73–78, 78 cognitive capacity 73–78, 78, 248–249 cognitive load 74–77, 77, 247; assessment engines and game-based learning evaluation 91–93; multiplayer games and learning 108; principles of designing successful learning games 97, 104; UI and UX design 89 cognitive process 78, 78 coherence principle 99–100, 99 Cold War 58, 135 Collaborating and Compromising Agents schema 30, 43 College of Visual and Performing Arts (CVPA) 120–121; see also George Mason University Command and Conquer 83 Communities in Schools (CIS) 130–132, 134 computer-intelligent games 203–209; personalized learning game (PLG) engine 227–238; serious games and machine learning 209–218; the state of AI game engine integration 218–227 Compute Unified Device Architecture (CUDA) 220, 227, 231, 235 course planning 249–253, 250, 253, 255 Covid-19 pandemic 44, 74, 108, 148–152, 195, 263 CPUs 167, 190, 204–205, 220, 224, 226–229, 235 Cronbach, Lee 42–43 Cubberley, Ellwood P. 38–39 CycleGAN (CG) 185–187, 186 Dead Space 89–90 Debord, Guy 54 Deep Academic Learning Intelligence (DALI) 153–158, 154–155, 195, 207; and Personalized Recursive Online Facilitated Intelligent Teaching 168, 171, 175–176, 179–180, 189 Deep Blue 163 deep learning 28, 154–155, 163, 168, 209, 216, 234

DeepMind (Google) 163, 209 deep play 49–53, 66 de Freitas, Sara 120 design study: Deep Academic Learning Intelligence 153–156; see also Personalized Recursive Online Facilitated Intelligent Teaching Dieterich, Robert 126, 128 Dolby, Thomas 121 Donkey Kong 90 Dungeons and Dragons 82, 106 Duplex (Google) 185 Durand, Maxime 56 Dynamic Adaptable Teaching Methodology Intelligence (DATMI) 181–184, 184 early childhood: memory development and learning 7–10 education 137–139; novel education paradigm 258–264 education quiz games 93–94, 94 edutainment 117–119 Eigen 219–220 Elementary and Secondary Education Act (ESEA) 42 Encierro 54 engagement 78–86, 84 entertainment game appropriation 78–86 entertainment quiz game 93, 94 entropy filtering 19–21, 19 Envision 121 episodic recall promoter apparatus 22–23 Escape from Zombie Island 121 Estep, Alex 123 Éthier, Marc-André 57 evaluation see game-based learning evaluation Everquest 83 Expression Module (EM) 185–189, 185 Facebook: PyTorch 219 Fallout 2 128–129, 135 Fefferman, Nina 108 field-of-sound (FoS) 207–208, 208, 213 field-of-view 208 finite state machine (FSM) 209 flow 64–67, 76–77, 76–77, 89, 91–92, 213–214, 226–228 FlyGuy Interactive 122, 132–133 FMOD 226 Fortnite 81, 83, 110n26

forward chaining 232–233, 233, 280 Forza Motorsport (Microsoft) 210 France 35, 61, 68n25 Friday Night at the ER (FNER) 126 Frogger 59 game-based learning evaluation 91–96 game cloud 228–229, 229 game design 72–73, 253; assessment engines and game-based learning evaluation 91–96; entertainment game appropriation 78–86; human cognitive capacity and balance 73–78; instructional and pedagogical strategies for personalized game design 249–255; multiplayer games and learning 104–109; and the novel education paradigm 258–264; personalized learning game design pedagogy 244–249, 265–281; personalized learning game planning 255–258; principles of designing successful learning games 96–104; UI and UX designs 86–91 game engine integration 218–227, 223 game planning 255–258, 255, 258 game play 268–270 games 48–49, 137–139; Achilles and Ajax 50; board games and learning 60–67; rules-based Ancient Greek ‘deep play’ 49–53; serious games pioneers 53–60; see also specific game types and specific games Garriott, Richard 82 GeForce Now (Nvidia) 204 genre 79–81, 80 George Mason University (GMU) 119–123, 126, 132, 138 Germany 61 Gettysburg 62 Gibson & Hopkins Experientiality Rubric xi, 49, 66–67 Go 54, 83, 161 Goel, Dr. Ashok 157–160 Google: AutoML 158; DeepMind 163, 209; Duplex 185; Stadia 203–204; TensorFlow 219–221, 226, 234–235, 237, 281 GPUs 167, 190, 204–207, 206, 239n3, 257; and a personalized learning game engine 228–229, 225, 231–237; serious games and machine learning 210, 214; and the state of AI game engine integration 220, 224–227

Greece see Ancient Greece Grimm, Leslie 55 Guesdon, Jean 56 Halo (Microsoft) 210 Hamilton (Miranda) 115–116, 139n2 Hearthstone 82 Helwig, C.L.: Krieg-spiel 61 Highly Adaptive Content Knowledge Distributor (HACKD) 171–181, 172, 174, 176 Homer 48–50; Iliad 48–50; Odyssey 50 Homo Ludens 49, 53, 55, 57 Hospital Rescue: Emergency Department 125–128 Hospital Training Games 122, 124–128, 138 Huizinga, Johan xi, 49, 53–55, 58, 67 IBM 58, 157, 163; see also Deep Blue; Jill Watson id software 136 IMMERSE 122 information classifiers 15–17, 16 instructional strategies 244, 249–256 Integrated Domain Knowledge (IDK) module 169–171, 170 intelligent system 32–33; Minsky’s 30, 30, 43 intelligent tutoring systems (ITS) 43–44 Interactive Virtual Incident Simulator (IVIS) 122–124, 123 interconnectivity scheme 170 interface 279–280 Isbister, Katherine 129 Jaeger, Werner 51 James, William 10–11 Japan 61 JavaScript (.js) 135, 218–219, 225 Jefferson, Thomas 35–36 Jill Watson 157–168, 158–160 Jones, Gregory V. 22–23 knowledge acquisition system (KAS) 10–13, 12; long-term memory episodic recall promoter apparatus 22–23; long-term memory module 20; procedural memory 21–22;

sensory memory module 13–14, 14; short-term memory module 17–20; working memory module 15–17 Knowledge Distribution Sub-Model (KDSM) 180, 180 Koenigspiel 58 Korean War 61 Kreigs-spiel (Reisswitz) 58, 61 Krieg-spiel (Helwig) 61 learning 48–49; and basic biological changes 2, 75; at the biological level 1–7; and board games 60–67; and early childhood development 7–10; a human knowledge acquisition system 10–23; levels of learning experience 65; long-term memory episodic recall promoter apparatus 22–23; long-term memory module 20; and multiplayer games 104–109; procedural memory 21–22; and rules-based Ancient Greek ‘deep play’ 49–53; sensory memory module 13–14; and serious games pioneers 53–60; short-term memory module 17–20; working memory module 15–17; see also artificial intelligence; computer-intelligent games; game-based learning evaluation; game design; personalized learning Learn Japanese to Survive! Kanji Combat 95 Leary, Dr. Kevin 131–134 Legends of Aria 100, 134–137, 136 LISP programming language 31, 43 Little Arms Studios 122–124, 138 Lofgren, Eric 108 long-term memory (LTM) 20; episodic recall promoter apparatus 22–23 Lotus 123 59 Lua 221, 225 ludology 48–49; board games and learning 60–67; rules-based Ancient Greek ‘deep play’ 49–53; serious games pioneers 53–60 Lumberyard (Amazon Game Studios) 218, 221 machine intelligence 188, 190 machine learning 152–153; and serious games 209–218 machine vision 28 Madden 83

Magic the Gathering 83 Malone, Dr. Thomas 116–117, 138 Markov Models 209, 213, 215, 219 Marou, Henri xi, 49 Martin, Scott M. 86, 119–120, 137 Mason Game and Technology Academy (MGTA) 121 massive multiplayer online (MMO) 85, 86, 100, 104–109, 113n71, 134–137 Massive Multiplayer Online Role-Playing Games (MMORPG) see massive multiplayer online (MMO) Mattel Interactive 56 McCormick, Ann 55 mechanics 82–83, 268–272 Meier, Sid 129; Civilization III 134 memory: early childhood memory development 7–10; long-term memory (LTM) 20, 22–23; procedural memory 21–22; sensory memory 13–14; short-term memory (STM) 17–20, 18–19; working memory 15–17, 16 memory maps 10–13; long-term memory episodic recall promoter apparatus 22–23; long-term memory module 20; procedural memory 21–22; sensory memory module 13–14; short-term memory module 17–20; working memory module 15–17 metacognition 78, 78 Microsoft: Azure Cloud 210; Forza Motorsport 210; Halo 210; Xbox 210; xCloud 204, 218 Minecraft 107 Minsky, Marvin 29–31, 30, 43; Collaborating and Compromising Agents schema 30, 43; Intelligent System theories 30, 30, 43; intelligent tutoring systems (ITS) 43–44; Society of Mind 166 Miranda, Lin-Manuel: Hamilton 115–116, 139n2 modality principle 100–101, 100 modulatory synapsis 4–6, 5, 168 Morville, Peter 88 motivation 78–86, 86 motivators 83–86, 106, 254 multiplayer games 104–109; see also massive multiplayer online multiplayer learning environments (MLEs) 105, 107–108

multiplayer modality 105–107 Multi-User Dungeons (MUDS) 85, 106 Naïve Bayes 31, 179, 214–216, 219 National Defense Education Act (NDEA) 40–42 Need for Speed 83 neural network algorithms 205–206, 205–206 neural synapsis 5, 33, 168; see also modulatory synapsis Newell, Allen 11, 29, 33 Nimble 219 Nintendo Entertainment System (NES) 90 nonplayer character (NPC) 57, 104, 207–209; and personalized learning game design pedagogy 248, 256; and a personalized learning game engine 228, 231–232, 237; serious games and machine learning 210, 208, 212–213, 215–216; and the state of AI game engine integration 218, 220–223, 225–226; and the Virginia Serious Game Institute 127, 133 Norman, Don 87, 132 Not Just for Entertainment: How Games Can Improve the Human Condition 203 Number Munchers 118, 118 Nvidia 227; GeForce Now 204 Odyssey 56–57 Onfray, Michel 27 Open Computing Library (OpenCL) 226–227, 231, 235 Oregon Trail 59, 118–119 Pac-Man 59 paideia 51, 53–54, 66 paidia 53, 66 pais/paizo xi, 50 paizein xi, 50–52 pandemics 107–109; see also Covid-19 pandemic PC architecture 205, 205–206 pedagogy see personalized learning game design pedagogy peer tutoring 37, 40, 180, 180 Perl, Teri 55 personalization principle 103–104, 103 personalized learning: historical approaches to 34–45

personalized learning game (PLG) 227–238, 236–237, 255; see also personalized learning game design pedagogy personalized learning game design pedagogy 244–249, 265–281; instructional and pedagogical strategies for personalized game design 249–255; and the novel education paradigm 258–264; personalized learning game planning 255–258 personalized learning map (PLM) 11–15, 19, 21–23, 34; and artificial intelligence applied to teaching and learning 155–156, 168, 171, 180; and personalized learning game design pedagogy 256, 263–264; teaching and learning computer-intelligent games 207, 218, 231 Personalized Recursive Online Facilitated Intelligent Teaching (PROF(it)) 168–169, 170; Dynamic Adaptable Teaching Methodology Intelligence (DATMI) 181–184; Expression Module (EM) 185–189; Highly Adaptive Content Knowledge Distributor (HACKD) 171–181; Integrated Domain Knowledge (IDK) module 169–171; master list of supersystem variables 196–198 personalized teaching 156–157 Plato xi, 49, 51–55, 58, 171 play see deep play player experience modeling (PEM) 209–218, 212, 221–222, 226, 231, 236, 257 Pong (Atari) 79 Pressey, Sidney: Automatic Teacher 40–41, 41 pretraining principle 102 PROF(it) see Personalized Recursive Online Facilitated Intelligent Teaching Python 135, 219–221, 234, 237 PyTorch 219, 221, 226 Quake 136 Quality of Experience (QoE) 228, 230 Quality of Service (QoS) 228 quiz games see education quiz games Ramos, Romel 126, 128 Reader Rabbit 55–56, 59

redundancy principle 101–102 Reisswitz, Baron von: Kreigs-spiel 61 Risk 83 risk aversion 38, 44, 148, 195 Roberts, Charles Swann xi, 49, 61–62, 67 Roblox 107 Rollercoaster Tycoon 3 134 Ross, Brian H. 22–23 SageMaker 221 Sanford Harmony 130 Schackoder Koenig-spiel, Das (Weikmann) 61 Schell, Jesse 82 Schell Games 82 schole 51, 53–54, 66 science of learning: cell change and learning at the biological level 1–7; early childhood memory development and learning 7–10; a human knowledge acquisition system and memory maps 10–23; long-term memory episodic recall promoter apparatus 22–23; long-term memory module 20; procedural memory 21–22; sensory memory module 13–14; short-term memory module 17–20; working memory module 15–17 Script Canvas 221 Second Life 107 sensory memory 13–14, 14 Serious Game Institute (SGI) at GMU 120; see also Virginia Serious Game Institute serious games 117–119; and machine learning 209–218; pioneers 53–60 Serious Play 121 setting 272–279 Shadow Technology 204, 218, 228 Shannon, Claude 29, 163 Shards Online 135 SheSoft 121 Shogi 209 Shogun 219 short-term memory (STM) 17–20, 18; entropy filter 19 Sicart, Miguel 82 signaling principle 102–103 SimCity 83 Simon, Herbert 29, 33 Sims 2, The 134 Siri (Apple) 207



Small World 83 smart AI bot 156–157 Snow, Richard 42–43 social-emotional academic development game (SEAD) 128–134, 133 social-emotional learning (SEL) 130–131 Soviet Union 40, 58, 61 Space Invaders 59 Space War 79, 110n24 spatial contiguity 96–98, 97 Spore (Maxis) 57, 134 spoude 50 Squire, Kurt 119, 134, 138 Stadia (Google) 203–204, 218 Staples, Nathaniel 134–137 Starcraft 83 Starship Technologies 138 Steam Audio 226 Stockfish 163 story 272–279 Supercell: Clash Royal 210 superintelligence see Artificial Super Intelligence Super Mario Brothers 90, 102 Super Mario Kart 83, 111n37 support vector machine (SVM) 31, 214–219 synapsis see modulatory synapsis; neural synapsis Tactics 61–62 taxonomy: Bartle taxonomy of player types 85–86, 85, 107; Bloom’s taxonomy 91–92, 92 teaching see artificial intelligence; computer-intelligent games; game design; personalized teaching Team Fortress 2 136 temporal contiguity 98–99, 98 Tennis for Two 79, 110n24 TensorFlow (Google) 219–221, 226, 234–235, 281 The Learning Company (TLC) 55–56 theoria 53 thin device 228–230, 229, 263 Top Chess Engine Champion (TCEC) 163 transfer learning 28, 163 Uberlehrer 49 Ubisoft 56

UI designs 86–91, 87 Ultima Online (UO) 104–106, 135 United States Army 122 Unity 218–221, 226 University of Coventry 120 Unreal 218–221 UX designs 86–91, 88 Valve: STEAM 204 Vietnam War 62 VIPER (Virtual Integrated Practice of Emergency Response) 122 Virginia Serious Game Institute (VSGI) 115–117; edutainment and serious games 117–119; effects of games, business, and education in a community 137–139; history, mission, and philosophy 119–122; hospital training game 124–128; Interactive Virtual Incident Simulator 122–124; Legends of Aria 134–137; social-emotional academic development game 128–134 VisiCalc 59 Wang, Pei 32–33 War Chess 58 Weikmann, Christopher: Das Schackoder Koenig-spiel 61 Weiser, Mark 86 Wheeler, Phillip 125–128 Wheel of Fortune 83 Where in the World Is Carmen Santiago 59, 119 Witcher 3 83 Women in Technology 121 working memory 15–17 World of Warcraft 82, 107 Worldstrides 121 World War I 36 World War II 36, 61 Wwise 226 Xavier, Charles 122 Xbox (Microsoft) 98, 210 xCloud (Microsoft) 204, 218 X-Com 82 X-Men 123 Yee, Nick 85–86